Julian Stodd's Learning Blog

Saturday, 1 April 2023

10 Ways To Become A [Bad] Learning Scientist

julianstodd

Apr 1

So far, in our work towards the Learning Science book, Sae, Geoff, and I have written a series of articles about complex ideas like Learning Ecosystems, Social Metacognition, and even the Nature of Knowledge itself. We've tried to provide a thoughtful, practical, and research-grounded narrative.

But… what if there were an easier way… a secret, accelerated path to success that skipped past the lengthy analysis and plodding methodologies?

In today's post, we're offering a handy guide that will help you to add sparkle to any idea, and provide the tips you need to wow clients and partners with the thinnest veneer of empiricism and credibility – without all of that boring work.

After all, on the 1st April, why strive and struggle, when there is a shortcut?

Read on for our top 10 all-powerful tips that can turn your lacklustre L&D report into a powerful, scientific research paper to impress your friends and wow your boss:

Part 1: Finding Published Research

Our first ideas involve creative ways to use published research (the best sort, right?) to build on an academic foundation and prior evidence. These tips will help you create [the appearance of] rigour and quality, and let everyone know you've done your 'due diligence'. Pretty much whatever you are looking at, you can find some relevant articles to support it… Here's how: 

  1. Use evidence from individual laboratory studies (because the real world is just like a lab). Nearly every concept in L&D has been researched… somewhere. Frequently, these studies are conducted in deliberately constructed environments with the object in question (such as a particular instructional method) carefully isolated for evaluation in a helpful (unrealistic) vacuum. Some of these studies include just a handful of participants, and about half of the time, their positive results are just a statistical fluke [1] — so you're certain to find a publication with some shiny statistics to support whatever you might claim! When you have found it, just paste the citation into every document.
  2. Be creative in how you generalise research (because 'context' is just a detail really). Closely related to the recommendation above, this next piece of advice is to apply research findings broadly and into new spaces. Don't worry about the populations involved, whether the learning conditions were realistic, or if the results are replicable. Just search online, find an article with good numbers, et voilà! If you need a good example of this, just look at the work on Growth Mindset: Empirically examined in one domain and context, and then widely generalised as if it were a universal 'thing'.
  3. Emphasise the 'statistical significance' (because like probably 99% of the time you'll be right). A lot of people aren't well-versed in parametric statistics, but most people in the L&D community have probably heard of 'alpha' or 'p-value.' Your best approach is to showcase that statistic prominently, and when you cite foundational research, make sure to emphasise its p-value (for example, "p < 0.05"). Consider adding exclamation marks! We advise liberally using the phrase 'statistically significant' or even just 'significant' when referring to research with a p-value of less than 0.05. Using the word 'significant' lends credibility to the research findings. Consider this the research equivalent of 'artisanal' (when describing cheese) or 'craft' (when discussing beer).
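Tips 1 and 3 above are easy to demonstrate for yourself. The following sketch is hypothetical and stdlib-only (it uses a normal approximation where a proper t-test belongs — fittingly sloppy): it runs 2,000 tiny two-group 'studies' in which the training being tested has exactly zero effect, and counts how many still come out 'statistically significant' at p < 0.05:

```python
import random
import statistics
from math import erf, sqrt

def two_sample_p(a, b):
    """Two-tailed p for a difference in means (normal approximation)."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(42)
runs = 2000
false_positives = 0
for _ in range(runs):
    # Both groups are drawn from the SAME distribution: the intervention
    # being 'tested' does nothing at all.
    control = [random.gauss(50, 10) for _ in range(10)]
    treated = [random.gauss(50, 10) for _ in range(10)]
    if two_sample_p(control, treated) < 0.05:
        false_positives += 1

print(f"'Significant' findings with zero real effect: {false_positives / runs:.1%}")
```

Even with no real effect whatsoever, a healthy handful of runs clear the p < 0.05 bar. Cite those, discard the rest, and never mention the haystack they came from.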

Part 2: Creating Original Research

Next, you probably need to do some 'validation' research on your own, so you can 'prove' that your specific L&D offering is effective. Here are some tips for getting the best results: 

  1. Use a pretest/posttest design (because 'better' is 'better' in every case). Let's say you've made a new training program and want to show how awesome it is. (Think: performance review coming up soon.) You need to collect some training outcomes. Here's how to make sure they look good. First, give participants a pretest, then do your training, and afterwards give them a posttest. Don't worry too much about what happens in between the tests. You'll probably get a medium effect size improvement [2] from the retest effect [3] alone – which is like free progress really. And magically, this works best if the two tests are the same, but that's not actually even a requirement. Add some test-prep and coaching to your training and you'll get even bigger results!
  2. Use Placebo and Hawthorne effects liberally (because if you can measure it, it counts). The medical community has studied [4] placebos extensively and found them to have massive impacts. Although the percentage varies depending on a study's purpose and participants, it's often around 20–30% – but can be upwards of 72% [5]. A related organisational phenomenon is the Hawthorne effect [6], which basically shows that when workers are given special attention and observed, their performance increases. So, you can easily find impressive results simply by creating an intervention that piques learners' Placebo/Hawthorne responses. Just fuel their expectations, give them some attention, and make sure they know that you're watching. This technique is particularly useful as it liberates you from actually creating effective learning.
  3. Design your experiment for success (because, why take the chance?). Once you have your pre- and posttests and Placebo/Hawthorne triggers ready to go, it's time to create the study's protocol. Some experimental designs work better than others. Specifically, you'll have bigger effect sizes [7] if you use (a) correlational or quasi-experimental designs (in other words, avoid participant randomisation and blind/double-blind assignments!), (b) proximal testing (evaluations that closely mirror the intervention and are completed close to it, like a written posttest completed shortly after training), and (c) a small population (stick to fewer than 500 people). Remember: we use veneers because they let a thin layer of valuable material cover a lot of surface, very cost-effectively. Think of your time as the veneer: the more you save, the more of The Mandalorian you can catch up on later.
  4. Count everything (because MEASUREMENT FOR THE WIN). We've already talked about collecting pre- and posttest outcomes, but you'll need more than that! Collect data on everything, so that you have a lot to play with after the experiment. Start by asking for detailed demographic data, because you might find your experiment works best for left-handed, bilingual women ages 25 to 50 – so you'll need all of those variables in hand to find that needle in the haystack. Next, collect data on anything that's countable, for example, number of hours spent in training or number of words read. You can also selectively count parts of self-response surveys, such as the number of items rated above 'satisfactory'.
  5. Use statistical tricks (because it's not cheating if it's just maths). If you've followed our prior recommendations, then you already have some impressive results, but if you're still struggling (or want to boost the results further), you can massage the data. There's a large toolkit of data-dredging hacks [8] that (bad) scientists have perfected over the years, such as p-hacking [9] (manipulating the statistics to get a suitable p-value), fishing (playing with the statistics until some superficially nice-looking result appears, whatever it might be), or simply continuing to run the experiment [10] until you get enough data to support some desired result. This is all good: after all, what's the point of putting in the effort unless you can show success? Nobody ever learnt from getting anything wrong. And that's a [statistically significant] fact!
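The 'simply continuing to run the experiment' hack in the last tip is also simple to simulate. In this hypothetical, stdlib-only sketch (again with a normal approximation standing in for a t-test), each 'study' starts with 10 participants drawn from a no-effect distribution, peeks at the p-value after every 5 new recruits, and stops the moment it dips below 0.05:

```python
import random
import statistics
from math import erf, sqrt

def p_value(sample, mu0=0.0):
    """Two-tailed one-sample p against a null mean of mu0 (normal approximation)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(1)
runs = 1000
hacked = 0  # studies declared 'significant' purely through peeking
for _ in range(runs):
    data = [random.gauss(0, 1) for _ in range(10)]
    while len(data) <= 100:
        if p_value(data) < 0.05:  # peek at the running p-value
            hacked += 1           # 'significant'! stop and publish
            break
        data.extend(random.gauss(0, 1) for _ in range(5))  # recruit 5 more

print(f"False-positive rate with optional stopping: {hacked / runs:.1%}")
```

Optional stopping turns the nominal 5% false-positive rate into something several times larger – which, for our purposes here, is exactly the point.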

Part 3: Communicating About Your Amazing, Evidence-Based Results

We've made great progress so far. Using the foundations of Western scientific methodology, we've been able to add real value at minimal effort. But there's one more step. After you've assembled a supportive literature review and conducted your own empirical testing, it's time to share your results. There are plenty of good guides to writing (bad) research articles, with excellent advice such as, "never explain the objectives of the paper in a single sentence…in particular never at the beginning" [11] and make sure to "use different terms for the same thing" [12]. In addition to that great guidance, we'll add two more suggestions:

  1. Build on personal experience (because feeling IS believing). People love personal stories, and we've all experienced education and training before – so, we're all mini-experts on the subject of learning. Work with that. Use your own experiences or, even better, reference common human experiences as naturalistic evidence. After all, we're all humans, and we all think and learn in the same ways. So, these common experiences will help people relate to your new L&D idea. Draw readers or customers in with anecdotes about personal experiences, and then generalise from those experiences to help explain and support your concept.
  2. Use snazzy terminology (because with a growth mindset, we can be neuro-informed): Like a well-tailored suit on a businessperson, certain words add polish that can make or break your L&D idea. At a minimum, make sure to use both 'Machine Learning' (ML) and 'Artificial Intelligence' [13] (AI). (Don't worry if you don't actually use AI because a lot of so-called AI startups don't either! [14]) Next, pick a few L&D terms that describe your idea or offering. Finally, include a few classic innovation words, like 'emerging' or 'cutting-edge', so that people know this is a new concept. Don't worry if this seems like hard work: Sae has put together a table to help. Start with the following prompt, and then select a word from each column to fill it in:

Our concept uses AI/ML and [column 1], [column 2] [column 3] to optimise [column 4].

Column 1      | Column 2            | Column 3              | Column 4
innovative    | personalised        | algorithms            | employee engagement
emerging      | cloud-enabled       | serious games         | bench strength
virtualized   | data-driven         | analytics             | growth mindset
agile         | mobile-first        | master classes        | cross skilling
bleeding edge | big data            | expert seminars       | design thinking
synchronous   | extended reality    | gamification          | double-loop learning
net-centric   | neuro-informed      | blended systems       | core competencies
real-time     | adaptive            | microlearning         | learning fidelity
context-aware | evidence-based      | virtual classrooms    | team workflow
right-sized   | hybrid learning     | experiential learning | your business ecosystem
digital       | neurolinguistic     | visualisations        | upskilling
higher-order  | self-paced          | instructional methods | learner empowerment
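And for the truly time-pressed, filling in the prompt can itself be automated. A hypothetical sketch (the `pitch` helper and `COLUMNS` structure are our own invention) that assembles a claim by drawing one entry from each column of the table:

```python
import random

# The four columns of the buzzword table, verbatim.
COLUMNS = [
    ["innovative", "emerging", "virtualized", "agile", "bleeding edge",
     "synchronous", "net-centric", "real-time", "context-aware",
     "right-sized", "digital", "higher-order"],
    ["personalised", "cloud-enabled", "data-driven", "mobile-first",
     "big data", "extended reality", "neuro-informed", "adaptive",
     "evidence-based", "hybrid learning", "neurolinguistic", "self-paced"],
    ["algorithms", "serious games", "analytics", "master classes",
     "expert seminars", "gamification", "blended systems", "microlearning",
     "virtual classrooms", "experiential learning", "visualisations",
     "instructional methods"],
    ["employee engagement", "bench strength", "growth mindset",
     "cross skilling", "design thinking", "double-loop learning",
     "core competencies", "learning fidelity", "team workflow",
     "your business ecosystem", "upskilling", "learner empowerment"],
]

def pitch(rng=random):
    """Fill in the prompt with one randomly chosen entry per column."""
    c1, c2, c3, c4 = (rng.choice(col) for col in COLUMNS)
    return f"Our concept uses AI/ML and {c1}, {c2} {c3} to optimise {c4}."

random.seed(2023)
print(pitch())
```

Run it as many times as you need fresh concepts; no two pitches are guaranteed to mean anything, which is rather the point.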

Conclusion

To summarise: if you follow these 10 steps you should be well positioned to become a published learning scientist with a spate of innovative, evidence-based AI/ML concepts (among other more questionable descriptors) tied to your name. 

BONUS tip!

Make beautiful data visualisations. Any data represented in an infographic is automatically more valid than a table. Ideally you should embellish your presentations with animations. To avoid confusion, eliminate distractions such as standard deviation notations or error bars, which just get in the way of a good story. Instead, opt for basic graphs wherever possible, like bar charts with just one or two items. You can, for example, make a dashboard of vanity metrics such as hours spent learning, smile-sheet scores, and change in pre- to posttest results. Basically, anything that is countable can be included (so long as the numbers look right, of course). And use orange, because it's a warm colour, and everyone loves a winner.

[1] https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/

[2] https://www.illuminateed.com/blog/2017/06/effect-size-educational-research-use/

[3] https://onlinelibrary.wiley.com/doi/full/10.1002/ets2.12300

[6] https://psycnet.apa.org/record/2000-13580-004

[7] https://evidenceforlearning.org.au/news/effect-sizes-in-education-bigger-is-better-right

[8] https://catalogofbias.org/biases/data-dredging-bias/

[9] https://files.de-1.osf.io/v1/resources/xy2dk/providers/osfstorage/623224d733d8540487f8ad21?action=download&direct&version=2

[10] https://theness.com/neurologicablog/index.php/p-hacking-and-other-statistical-sins/

[11] https://pubs.acs.org/doi/10.1021/ac2000169

[12] https://www.elsevier.com/connect/authors-update/10-tips-for-writing-a-truly-terrible-journal-article

[13] https://www.verdict.co.uk/ai-in-education-buzzwords-hyperbole/

[14] https://www.theverge.com/2019/3/5/18251326/ai-startups-europe-fake-40-percent-mmc-report
