The variation advantage: how to master tennis, learn a language, or build better AI
<p>Want to become a better tennis player? If you repeatedly practise serving to the same spot, you’ll master serving to that <em>exact</em> location, provided conditions remain similar. Serving to a variety of locations will take much longer to master, but in the end you’ll be a better tennis player, and much more capable of facing a fierce opponent.</p>
<p>The reason why is all about variability: the more we’re exposed to, the better our neural networks are able to generalise and calculate which information is important to the task, and what is not. This also helps us learn and make decisions in new contexts.</p>
<p><strong>From fox to hounds</strong></p>
<p>This generalisation principle can be applied to many things, including learning languages or recognising dog breeds. For example, an infant will have difficulty learning what a ‘dog’ is if they are only exposed to chihuahuas instead of many dog breeds (chihuahuas, beagles, bulldogs etc.), which show the real variation of <em>Canis lupus familiaris</em>. Including information about what is <em>not</em> in the dog category – for example foxes – also helps us build generalisations, which helps us to eliminate irrelevant information.</p>
<p>“Learning from less variable input is often fast, but may fail to generalise to new stimuli,” says Dr Limor Raviv, the senior investigator from the Max Planck Institute (Germany). “But these important insights have not been unified into a single theoretical framework, which has obscured the bigger picture.”</p>
<p>To better understand the patterns behind this generalisation framework, and how variability affects the learning process in both humans and computers, Raviv’s research team explored over 150 studies on variability and generalisation across the fields of computer science, linguistics, motor learning, visual perception and formal education.</p>
<p><strong>Wax on, wax off</strong></p>
<p>The researchers found that there are at least four kinds of variability, including:</p>
<ul>
<li><strong>Numerosity</strong> (set size), which is the number of different examples; such as the number of locations on the tennis court a served ball could land</li>
<li><strong>Heterogeneity</strong> (differences between examples); serving to the same spot versus serving to different spots</li>
<li><strong>Situational</strong> (context) diversity; facing the same opponent on the same court versus a different opponent on a different court</li>
<li><strong>Scheduling</strong> (interleaving, spacing); how frequently you practise, and in what order you practise components of a task</li>
</ul>
<p>“These four kinds of variability have never been directly compared – which means that we currently don’t know which is most effective for learning,” says Raviv.</p>
<p>According to the ‘Mr Miyagi principle’, inspired by the 1984 movie <em>The Karate Kid</em>, practising unrelated skills – such as waxing cars or painting fences – might actually benefit the learning of other skills: in the movie’s case, martial arts.</p>
<p><strong>Lemon or lime?</strong></p>
<p>So why does including variability in training slow things down? One theory is that there are always exceptions to the rules, which makes learning and generalising harder.</p>
<p>For example, while colour is important for distinguishing lemons from limes, it wouldn’t be helpful for telling cars and trucks apart. Then there are atypical examples – such as a chihuahua that doesn’t look like a dog, and a fox that does, but isn’t.</p>
<p>So as well as learning a rule to make neural shortcuts, we also have to learn exceptions to these rules, which makes learning slower and more complicated. This means that when training is variable, learners have to actively reconstruct memories, which takes more effort.</p>
<p><strong>Putting a face to a name</strong></p>
<p>So how do we train ourselves and computers to recognise faces? The illustration below is an example of variations of a fox for machine learning. Providing several variations – including image rotation, colour and partial masking – improves the machine’s ability to generalise (in this case, to identify a fox). This data augmentation technique is an effective way of expanding the amount of available data by providing variations of the same data point, but it slows down the speed of learning.</p>
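<p>As a minimal sketch of the data augmentation idea described above – with no machine-learning library, and with illustrative function and parameter names – the transformations (rotation, brightness shift, partial masking) can be expressed on a tiny grayscale image represented as a list of pixel rows. Real pipelines would use a library such as torchvision; this only shows how one data point becomes several training examples:</p>

```python
import random

def augment(image, seed=0):
    """Return simple augmented variants of a grayscale image
    (a list of rows of pixel intensities 0-255)."""
    rng = random.Random(seed)

    # Rotate 90 degrees clockwise: transpose, then reverse each row.
    rotated = [list(row)[::-1] for row in zip(*image)]

    # Shift brightness by a random offset, clamped to the 0-255 range.
    offset = rng.randint(-40, 40)
    shifted = [[max(0, min(255, p + offset)) for p in row] for row in image]

    # Mask one random pixel to zero (a crude partial occlusion).
    masked = [row[:] for row in image]
    r = rng.randrange(len(masked))
    c = rng.randrange(len(masked[0]))
    masked[r][c] = 0

    return [rotated, shifted, masked]

original = [[10, 20], [30, 40]]
variants = augment(original)
print(len(variants))  # 3 variants generated from a single image
```

<p>Each call turns one labelled image into several, which is exactly the trade-off the article describes: more varied examples per data point, at the cost of a longer, harder training process.</p>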
<p>Humans are the same: the more variables we’re presented with, the harder it is for us to learn – but eventually it pays off in a greater ability to generalise knowledge in new contexts.</p>
<p>“Understanding the impact of variability is important for literally every aspect of our daily life. Beyond affecting the way we learn language, motor skills, and categories, it even has an impact on our social lives,” explains Raviv. “For example, face recognition is affected by whether people grew up in a small community (fewer than 1000 people) or in a larger community (over 30,000 people). Exposure to fewer faces during childhood is associated with diminished face memory.”</p>
<p>The learning message for both humans and AI is clear: variation is key. Switch up your tennis serve, play with lots of different dogs, and practise languages with a variety of speakers. Your brain (or algorithm) will thank you for it… eventually.</p>
<div id="contributors">
<p><em><a href="https://cosmosmagazine.com/people/behaviour/the-variation-advantage-how-to-master-tennis-learn-a-language-or-build-better-ai/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/qamariya-nasrullah" target="_blank" rel="noopener">Qamariya Nasrullah</a>. Qamariya Nasrullah holds a PhD in evolutionary development from Monash University and an Honours degree in palaeontology from Flinders University.</em></p>
<p><em>Image: Getty Images</em></p>
</div>