
REVIEW: Super-intelligent, dog-detecting robot lawn mower

<p>I was recently invited to an onsite demonstration of a brand new line of lawn mowers pitched as not just lawn mowers, but furry-friend-dodging, grass-grooming marvels of modern technology.</p> <p>According to the specs, the <a href="https://au.worx.com/vision-technology/" target="_blank" rel="noopener">WORX LANDROID® Vision</a> is the world’s first advanced AI, "unbox &amp; mow" robot lawn mower. "No wire. No satellite. No beacons. No time between unboxing and mowing."</p> <p>Using a combination of an HDR camera, the latest AI smarts and a deeply trained neural network to identify grass to mow and obstacles to avoid, it features the innovative "Cut-to-Edge" function, multi-zone management and adaptive auto-scheduling. Plus there's an optional LED headlight for safe night-mowing (apparently, unlike conventional robots, Vision sees nocturnal animals and stays away from them).</p> <p>But the real test for me was always going to be: how would something like the Vision get along with my dog, Rosie? I was offered the chance to try out one of the mowers for a few weeks, and I jumped at it.</p> <p>But let's talk about Rosie for a moment. This little ball of fur thinks she's the queen of the backyard. She zooms around like a tiny tornado, and honestly I think she believes the grass is her personal chew toy. So, when I introduced the LANDROID into the mix, I was half expecting chaos and half hoping for a miracle.</p> <p>Lo and behold, this mower is not just a lawn whisperer; it's a puppy ninja. The WORX LANDROID has some sort of superpower in its sensors, allowing it to detect my pup's presence and skilfully manoeuvre around her. It was like watching a graceful dance between technology and canine curiosity.</p> <p>For the duration of the test, Rosie basically appointed herself official supervisor of lawn maintenance, proudly watching from a safe distance (and sometimes a not-so-safe one) as the LANDROID worked its magic.</p> <p>But let's not forget the real star of the show: the lawn itself. The LANDROID doesn't just dodge obstacles; it trims with precision, leaving my yard looking like a freshly coiffed celebrity. It's like having a personal stylist for my grass – one that never sleeps.</p> <p>And the best part? I get to sit back, relax and sip my lemonade while the LANDROID does all the heavy lifting (or should I say, mowing). It's like having a reliable little garden gnome, except this one runs on electricity and has impeccable dodging skills.</p> <p>So if you want a lawn mower that's not only efficient but also entertaining, look no further than the <a href="https://au.worx.com/vision-technology/" target="_blank" rel="noopener">WORX LANDROID Vision</a>. It's the perfect blend of technology, pet sensitivity and grass-grooming prowess. Plus, it's the only mower I know that can outmanoeuvre a puppy – and that is definitely something to bark about.</p> <p><em>Images: Alex Cracknell</em></p>

Home & Garden


Be careful around the home – children say Alexa has emotions and a mind of its own

<p>Is technology ticklish? Can a smart speaker get scared? And does the robot vacuum mind if you put it in the cupboard when you go on holidays?</p> <div> <p>Psychologists from Duke University in the US asked young children some pretty unusual questions to better understand how they perceive different technologies.</p> <p>The researchers interviewed 127 children aged 4–11 who were visiting a science museum with their families. They asked a series of questions seeking children’s opinions on whether technologies – including an Amazon Alexa smart speaker, a Roomba vacuum cleaner and a Nao humanoid robot – can think, feel and act on purpose, and whether it was OK to neglect, yell at or mistreat them.</p> <p>In general, the children thought Alexa was more intelligent than a Roomba, but believed neither technology should be yelled at or harmed.</p> <p>Lead author Teresa Flanagan says “even without a body, young children think the Alexa has emotions and a mind.”</p> <p>“Kids don’t seem to think a Roomba has much mental abilities like thinking or feeling,” she says. “But kids still think we should treat it well. We shouldn’t hit or yell at it even if it can’t hear us yelling.”</p> <p>Overall, children rejected the idea that technologies were ticklish or could feel pain. But they thought Alexa might get upset after someone is mean to it.</p> <p>While all children thought it was wrong to mistreat technology, the survey results suggest that the older the children were, the more likely they were to consider it slightly more acceptable to harm technology.</p> <p>Children in the study gave different justifications for why they thought it wasn’t OK to hurt technology. One 10-year-old said it was not okay to yell at the technology because “the microphone sensors might break if you yell too loudly,” whereas another 10-year-old said it was not okay because “the robot will actually feel really sad.”</p> <p>The researchers say the study’s findings offer insights into the evolving relationship between children and technology, and raise important questions about the ethical treatment of AI and machines in general. For example, should parents model good behaviour for children by thanking technologies for their help?</p> <p>The results are <a href="https://psycnet.apa.org/doiLanding?doi=10.1037/dev0001524" target="_blank" rel="noreferrer noopener">published</a> in <em>Developmental Psychology</em>.</p> </div> <div id="contributors"> <p><em>This article was originally published on <a href="https://cosmosmagazine.com/technology/be-careful-around-the-home-children-say-alexa-has-emotions-and-a-mind-of-its-own/" target="_blank" rel="noopener">cosmosmagazine.com</a> and was written by Petra Stock.</em></p> <p><em>Images: Getty</em></p> </div>

Technology


3 smart appliances to make your life easier

<p dir="ltr">It’s time to get digital but don’t worry, all you need is a set of batteries and a charging cord.</p> <p dir="ltr">From vacuuming to mowing the lawn, here are some helpful devices that will make your life easier. </p> <p dir="ltr"><strong>1. <a href="https://www.binglee.com.au/products/irobot-braava-jet-m6-robot-mop-m613200?utm_source=CommissionFactory&amp;utm_medium=referral&amp;cfclick=346864d5d0bf44a58923574774cfdf9e" target="_blank" rel="noopener">Robotic Vacuum</a></strong></p> <p dir="ltr">The concept of a robotic vacuum is not at all new. The Roomba vacuum, arguably the most iconic robot vacuum cleaner out there has been out for over a decade but if you do not own one, do yourself a favour and get one! Or something similar. </p> <p dir="ltr">A robot vacuum will make bending over a thing of the past. They are a self-propelled floor cleaner that uses a rotating brush or brushes to pick up dirt and debris. They work on their own without any human intervention, just press the button and let the little robot clean your home.</p> <p dir="ltr"><strong>2. <a href="https://www.ecovacs.com/au/winbot-window-cleaning-robot/winbot-w1-pro?cfclick=d2d2a30255d642df868b7ab3d6850b67">Robotic Window Cleaner</a></strong></p> <p dir="ltr">Cleaning windows is one of the most tiresome jobs in terms of cleaning, so rest those arms and get yourself a robotic window cleaner.</p> <p dir="ltr">This revolutionary window cleaner suctions itself onto the glass and gives your windows the gleam they deserve. Once again, no human intervention, just press the button and watch in amazement. </p> <p dir="ltr"><strong>3. <a href="https://www.amazon.com.au/WORX-LANDROID-Robotic-POWERSHARE-Battery/dp/B09V2DQGC1/?tag=homestolove-trx0000057-22">Robotic Lawn Mower</a></strong></p> <p dir="ltr">If you have a big lawn, then this is the way to go. Lawn mowers that you can ride are certainly a better option than those you hold, but the robotic lawn mower allows you to cut your grass from the comfort of your living room.  </p> <p dir="ltr">They’re capable of cutting areas of up to 1000sqm. It measures the size of your lawn, the soil composition and can identify different grass species to make sure it’s cut at the right time based on growth rate and seasonality!</p> <p dir="ltr">Work smarter, not harder. </p> <p><span id="docs-internal-guid-90ac8f63-7fff-60bf-1904-739cd411e0a9"></span></p> <p dir="ltr"><em>Image credit: Getty</em></p>

Home Hints & Tips


Tactile robot with a sense of touch can fold laundry

<p>Why can you buy a robot vacuum cleaner easily, but not one that folds laundry or irons clothes? Because fabric is actually a very difficult thing for robots to manipulate. But scientists have made a breakthrough with a robot designed to have tactile senses.</p> <p>Fabric is soft and deformable, and picking it up requires several different senses working together. This is why the fashion industry is so <a href="https://cosmosmagazine.com/people/garment-supply-chain-slavery/" target="_blank" rel="noreferrer noopener">labour-intensive</a>: it’s too hard to automate.</p> <p>“Humans look at something, we reach for it, then we use touch to make sure that we’re in the right position to grab it,” says David Held, an assistant professor in the School of Computer Science, and head of the Robots Perceiving and Doing Lab, at Carnegie Mellon University, US.</p> <p>“A lot of the tactile sensing humans do is natural to us. We don’t think that much about it, so we don’t realise how valuable it is.”</p> <p>When we’re picking up a shirt, for instance, we’re feeling the top layer, sensing lower layers of cloth, and grasping the layers below. But even with cameras and simple sensors, robots can usually only feel the top layer.</p> <p>Held and colleagues have figured out how to get a robot to do more. “Maybe what we need is tactile sensing,” says Held.</p> <p>The Carnegie Mellon researchers, along with Meta AI, have developed a robotic ‘skin’ called <a href="https://ai.facebook.com/blog/reskin-a-versatile-replaceable-low-cost-skin-for-ai-research-on-tactile-perception/" target="_blank" rel="noreferrer noopener">ReSkin</a>. It’s an elastic <a href="https://cosmosmagazine.com/science/explainer-what-is-a-polymer/" target="_blank" rel="noreferrer noopener">polymer</a>, filled with tiny magnetic sensors.</p> <p>“By reading the changes in the magnetic fields from depressions or movement of the skin, we can achieve tactile sensing,” says Thomas Weng, a PhD student in Held’s lab, and a collaborator on the project.</p> <p>“We can use this tactile sensing to determine how many layers of cloth we’ve picked up by pinching with the sensor.”</p> <p>The ReSkin-coated robot finger could successfully pick up both one and two layers of cloth from a pile, working with a range of different textures and colours.</p> <p>“The profile of this sensor is so small, we were able to do this very fine task, inserting it between cloth layers, which we can’t do with other sensors, particularly optical-based sensors,” says Weng. “We were able to put it to use to do tasks that were not achievable before.”</p> <p>The robot is not yet capable of doing your laundry: next on the researchers’ list is teaching it to smooth crumpled fabric, choose the correct number of layers to fold, and then fold in the right direction.</p> <p>“It really is an exploration of what we can do with this new sensor,” says Weng. “We’re exploring how to get robots to feel with this magnetic skin for things that are soft, and exploring simple strategies to manipulate cloth that we’ll need for robots to eventually be able to do our laundry.”</p> <p>The researchers are presenting a <a href="https://sites.google.com/view/reskin-cloth" target="_blank" rel="noreferrer noopener">paper</a> on their laundry-folding robot at the 2022 International Conference on Intelligent Robots and Systems in Kyoto, Japan.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/laundry-folding-robot/" target="_blank" rel="noopener">This article</a> was originally published on Cosmos Magazine and was written by Ellen Phiddian.</em></p> <p><em>Image: Carnegie Mellon University</em></p> </div>
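The layer-counting Weng describes – reading changes in the magnetic field and inferring how many layers of cloth are pinched – amounts to a small classification problem. Below is a minimal sketch, assuming a ReSkin-style array of five 3-axis magnetometers; the simulated signal and the k-NN model are illustrative stand-ins, not Meta's or Carnegie Mellon's actual pipeline.

```python
# A minimal sketch: classify how many cloth layers are pinched from
# ReSkin-style magnetometer readings. The sensor layout, simulated
# signal and model choice are assumptions for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fake_reading(layers: int) -> np.ndarray:
    """Simulate one 15-value reading (five 3-axis magnetometers):
    more pinched layers deform the elastic skin more, shifting the
    measured magnetic field."""
    return rng.normal(0.0, 0.02, size=15) + 0.1 * layers

# Train on labelled pinches of 0, 1 and 2 layers of cloth.
X = np.array([fake_reading(n) for n in (0, 1, 2) for _ in range(50)])
y = np.array([n for n in (0, 1, 2) for _ in range(50)])

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("estimated layers:", clf.predict([fake_reading(2)])[0])
```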

Technology


Realistic androids coming closer, as scientists teach a robot to share your laughter

<p>Do you ever laugh at an inappropriate moment?</p> <p>A team of Japanese researchers has taught a robot when to laugh in social situations, which is a major step towards creating an android that will be “like a friend.”</p> <p>“We think that one of the important functions of conversational AI is empathy,” says Dr Koji Inoue, an assistant professor at Kyoto University’s Graduate School of Informatics, and lead author on a paper describing the research, <a href="https://doi.org/10.3389/frobt.2022.933261" target="_blank" rel="noreferrer noopener">published</a> in <em>Frontiers in Robotics and AI</em>.</p> <p>“Conversation is, of course, multimodal, not just responding correctly. So we decided that one way a robot can empathize with users is to share their laughter, which you cannot do with a text-based chatbot.”</p> <p>The researchers trained an AI with data from 80 speed-dating dialogues, from a matchmaking marathon with Kyoto University students. (Imagine meeting a future partner at an exercise designed to teach a robot to laugh…)</p> <p>“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy, because as you know, most laughter is actually not shared at all,” says Inoue. “We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”</p> <p>They then added this system to a hyper-realistic android named <a href="https://robots.ieee.org/robots/erica/" target="_blank" rel="noreferrer noopener">Erica</a>, and tested the robot on 132 volunteers.</p> <p>Participants listened to one of three different types of dialogue with Erica: one where she used the shared-laughter system, one where she didn’t laugh at all, and one where she laughed whenever she heard someone else do so.</p> <p>They then scored the interactions for empathy, naturalness, similarity to humans, and understanding.</p> <p>The researchers found that the shared-laughter system scored higher than either baseline.</p> <p>While they’re pleased with this result, the researchers say their system is still quite rudimentary: they need to categorise and examine many other types of laughter before Erica can chuckle naturally.</p> <p>“There are many other laughing functions and types which need to be considered, and this is not an easy task. We haven’t even attempted to model unshared laughs even though they are the most common,” says Inoue.</p> <p>Plus, it doesn’t matter how realistic a robot’s laugh is if the rest of its conversation is unnatural.</p> <p>“Robots should actually have a distinct character, and we think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” says Inoue. “We do not think this is an easy problem at all, and it may well take more than 10 to 20 years before we can finally have a casual chat with a robot like we would with a friend.”</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/robot-laugh/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/ellen-phiddian" target="_blank" rel="noopener">Ellen Phiddian</a>. Ellen Phiddian is a science journalist at Cosmos. She has a BSc (Honours) in chemistry and science communication, and an MSc in science communication, both from the Australian National University.</em></p> <p><em>Image: Getty Images</em></p> </div>
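At its core, the shared-laughter system makes a decision: given that the user just laughed, should the robot laugh along? Here is a toy sketch of such a policy as a binary classifier. The features, training data and model are invented for illustration and are not the actual Erica system.

```python
# Toy shared-laughter policy: after detecting a user laugh, decide
# whether to laugh back. Features, data and model are hypothetical,
# not Kyoto University's Erica pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per detected laugh: [duration_sec, loudness, is_social]
X = np.array([
    [0.3, 0.2, 0],   # polite chuckle, not shared in training dialogues
    [1.2, 0.8, 1],   # hearty social laugh, usually shared
    [0.5, 0.4, 1],
    [0.2, 0.1, 0],
])
y = np.array([0, 1, 1, 0])  # 1 = respond with shared laughter

policy = LogisticRegression().fit(X, y)

def respond(features):
    """Return the robot's response to one detected user laugh."""
    return "laugh along" if policy.predict([features])[0] else "stay quiet"

print(respond([1.0, 0.7, 1]))  # -> laugh along (on this toy data)
```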

Technology


Supermarket delivery by robot better for the climate

<p>Along with their <a href="https://twitter.com/historymatt/status/1525776275939418113" target="_blank" rel="noreferrer noopener">cult following on social media</a>, autonomous delivery robots travelling on footpaths could be the most climate-friendly way to do your grocery shopping.</p> <p>Around the world, <a href="https://cosmosmagazine.com/people/will-covid-19-change-our-cities/" target="_blank" rel="noreferrer noopener">COVID-19 has seen a change</a> in the way people shop for groceries. Instead of driving to the supermarket, more people are ordering online for pick-up or home delivery – and in some places, delivery <a href="https://cosmosmagazine.com/technology/robotics/drone-delivery-groceries-canberra/" target="_blank" rel="noreferrer noopener">by drone</a> or robot.</p> <p>In the United States, supermarket home delivery services grew 54% between 2019 and 2020. In Australia, Woolworths and Coles experienced <a href="https://theconversation.com/coles-and-woolworths-are-moving-to-robot-warehouses-and-on-demand-labour-as-home-deliveries-soar-166556" target="_blank" rel="noreferrer noopener">unprecedented demand</a>.</p> <p>The rapid growth in e-commerce has seen an increased focus on the greenhouse gas emissions associated with the <a href="https://cosmosmagazine.com/earth/sustainability/to-help-the-environment-should-you-shop-in-store-or-online/" target="_blank" rel="noreferrer noopener">‘last-mile’ delivery</a>.</p> <p>A study by University of Michigan researchers and the Ford Motor Co modelled the emissions associated with the journey of a 36-item grocery basket from shop to home via a number of alternative transport options. Their study is <a href="https://pubs.acs.org/doi/pdf/10.1021/acs.est.2c02050" target="_blank" rel="noreferrer noopener">published</a> in the journal <em>Environmental Science &amp; Technology</em>.</p> <p>“This research lays the groundwork for understanding the impact of e-commerce on greenhouse gas emissions produced by the grocery supply chain,” says the study’s senior author Greg Keoleian, director of the Centre for Sustainable Systems at the University of Michigan School for Environment and Sustainability.</p> <p>The researchers modelled 72 different ways the groceries could travel from the warehouse to the customer. Across all options, the results showed ‘last-mile’ transport emissions to be the major source of <a href="https://cosmosmagazine.com/earth/food-transport-emissions-cost/">supply chain emissions</a>.</p> <p>They found the conventional option of driving to the supermarket in a petrol or diesel car to be the most polluting, creating six kilograms of carbon dioxide (CO<sub>2</sub>). All other choices had lower emissions, with footpath delivery robots the cleanest for the climate, at one kilogram of CO<sub>2</sub>.</p> <p>A customer who switched to an electric vehicle could halve their emissions. But they could achieve a similar impact by reducing their shopping frequency: without buying a new car, households that halved the frequency of supermarket trips reduced emissions by 44%.</p> <p>Keoleian says the study emphasises the “important role consumers can serve in reducing emissions through the use of trip chaining and by making carefully planned grocery orders.” Trip chaining refers to combining grocery shopping with other errands.</p> <p>All home delivery options had lower emissions than in-store shopping – in part due to the efficiencies gained in store operation and transport – with the potential to cut emissions by 22–65%.</p> <p>Footpath robots are being trialled in cities across the United States, Europe and China. These four- or six-wheeled robots carry items like supermarket shopping or retail items over short distances, and most have a delivery range of around three kilometres.</p> <p>Starship’s robots are one example. Since launching in 2014, they have completed three million autonomous home deliveries in cities across Estonia, the United Kingdom, Finland and the United States.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/earth/climate/robot-delivery-better-for-the-climate/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/petra-stock" target="_blank" rel="noopener">Petra Stock</a>. Petra Stock has a degree in environmental engineering and a Masters in Journalism from University of Melbourne. She has previously worked as a climate and energy analyst.</em></p> <p><em>Image: Getty Images</em></p> </div>
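The study's headline numbers are easy to put in household terms with some back-of-envelope arithmetic. The annual framing below (one shop per week) is our own illustration, not the study's.

```python
# Back-of-envelope comparison built on the study's per-trip figures:
# 6 kg CO2 for a petrol car, ~3 kg for an EV (half), 1 kg for a robot.
# The one-shop-per-week annualisation is an assumption for illustration.
TRIP_KG = {"petrol car": 6.0, "electric car": 3.0, "delivery robot": 1.0}

for mode, kg in TRIP_KG.items():
    print(f"{mode:>14}: {kg * 52:5.0f} kg CO2/year at one shop per week")

# Halving trip frequency cut household shopping emissions by 44% in the
# study -- comparable to switching the car for an EV.
print("half as many car trips:", round(6.0 * 52 * (1 - 0.44)), "kg CO2/year")
```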

Technology


Patch me up, Scotty! Remote surgery robot destined for ISS

<p>Strap yourself in so you don’t float away, select the required procedure, then lie back and relax as your autonomous surgery robot patches you up from whatever space ailment bothers you. Sound far-fetched?</p> <p>Not according to Professor Shane Farritor, from the University of Nebraska-Lincoln, who <a href="https://news.unl.edu/newsrooms/today/article/husker-developed-surgery-robot-to-be-tested-aboard-international-space/" target="_blank" rel="noreferrer noopener">has just received funding from NASA</a> to prepare his miniature surgical robot for a voyage to the International Space Station (ISS) in 2024.</p> <p>MIRA, which stands for “miniaturised in vivo robotic assistant”, is comparatively little for a surgery-performing machine – small enough to fit inside a microwave-sized experimental locker on the ISS. The brainchild of Farritor and colleagues at the start-up company Virtual Incision, MIRA has been under development for almost 20 years.</p> <p>The ultimate aim for MIRA is to be able to perform surgery autonomously and remotely, which has far-reaching ramifications for urgent surgery in the field – whether that’s in the depths of space, a remote location or even <a href="http://bionics.seas.ucla.edu/publications/JP_11.pdf" target="_blank" rel="noreferrer noopener">in a war-torn region</a>.</p> <p>Initially, MIRA won’t go near anyone’s body. Once on the ISS, it will autonomously perform tasks designed to mimic the movements required for surgery, such as cutting stretched rubber bands and pushing metal rings along a wire.</p> <p>Autonomy is important because the robot won’t need to access bandwidth to communicate back to Earth.</p> <p>MIRA has already successfully completed surgery-like tasks via remote operation, including a colon resection.</p> <p>Space is the next frontier. Farritor says that as people go further and deeper into space, they might need surgery. “We’re working toward that goal.”</p> <p>The stint on the ISS will not only mark the robot’s most autonomous operation so far, it will also provide insight into how such devices might function in zero gravity.</p> <p>The dream goal is for MIRA to function entirely on its own, says Farritor. Just imagine: “the astronaut flips a switch, the process starts, and the robot does its work by itself. Two hours later, the astronaut switches it off and it’s done”.</p> <p>As anyone who has seen that scene in the movie <a href="https://www.youtube.com/watch?v=Ue4PCI0NamI" target="_blank" rel="noreferrer noopener">The Martian</a> can attest, it would certainly make pulling a wayward antenna spike out of yourself from within a deserted Martian habitat station far more comfortable.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/health/remote-surgery-robot-destined-for-iss/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/clare-kenyon" target="_blank" rel="noopener">Clare Kenyon</a>. Clare Kenyon is a science writer for Cosmos. She is currently wrangling the death throes of her PhD in astrophysics, has a Masters in astronomy and another in education, and has classroom experience teaching high school science, maths and physics. Clare also has diplomas in music and criminology and a graduate certificate of leadership and learning.</em></p> <p><em>Image: Getty Images</em></p> </div>

Technology


A robot dog with a virtual spinal cord can learn to walk in just one hour

<p>We’ve all seen those adorable clips of newborn giraffes or foals first learning to walk on their shaky legs, stumbling around until they finally master the movements.</p> <p>Researchers wanted to know how animals learn to walk and learn from their stumbling, so they built a four-legged, dog-sized robot to simulate it, according to a new study <a href="https://www.nature.com/articles/s42256-022-00505-4" target="_blank" rel="noreferrer noopener">reported</a> in <em>Nature Machine Intelligence</em>.</p> <p>They found that it took their robot and its virtual spinal cord just an hour to get its walking under control.</p> <p>Getting up and going quickly is essential in the animal kingdom to avoid predators, but learning how to co-ordinate leg muscles and tendons takes time.</p> <p>Initially, baby animals rely heavily on hard-wired spinal cord reflexes to co-ordinate muscle and tendon control, while motor control reflexes help them to avoid falling and hurting themselves during their first attempts.</p> <p>More precise muscle control must be practised until the nervous system adapts to the muscles and tendons, and the young are then able to keep up with the adults.</p> <p>“As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes,” says first author Dr Felix Ruppert, a former doctoral student in the Dynamic Locomotion research group at the Max Planck Institute for Intelligent Systems (MPI-IS), Germany.</p> <p>“If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.”</p> <figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"> <div class="wp-block-embed__wrapper"> <iframe title="Learning Plastic Matching of Robot Dynamics in Closed-loop Central Pattern Generators" src="https://www.youtube.com/embed/LPL6nvs_GEc?feature=oembed" width="500" height="281" frameborder="0" allowfullscreen="allowfullscreen"></iframe> </div> </figure> <p><strong>Building a virtual spinal cord to learn how to walk</strong></p> <p>The researchers designed a <a href="https://cosmosmagazine.com/health/machine-learning-tool-brain-injury/" target="_blank" rel="noreferrer noopener">learning algorithm</a> to function as the robot’s spinal cord and work as what’s known as a Central Pattern Generator (CPG). In humans and animals, CPGs are networks of neurons in the spinal cord that, without any input from the brain, produce periodic muscle contractions.</p> <p>These are important for rhythmic tasks like breathing, blinking, digestion and walking.</p> <p>The CPG was simulated on a small, lightweight computer that controlled the motion of the robot’s legs, positioned on the robot where the head would be on a dog.</p> <p>The robot – which the researchers named Morti – was designed with sensors on its feet to measure information about its movement.</p> <p>Morti learnt to walk with no prior explicit “knowledge” of its leg design, motors or springs, by continuously comparing the expected data (modelled from the virtual spinal cord) against the sensor data as it attempted to walk.</p> <p>“Our robot is practically ‘born’ knowing nothing about its leg anatomy or how they work,” Ruppert explains. “The CPG resembles a built-in automatic walking intelligence that nature provides and that we have transferred to the robot. The computer produces signals that control the legs’ motors and the robot initially walks and stumbles.</p> <p>“Data flows back from the sensors to the virtual spinal cord where sensor and CPG data are compared. If the sensor data does not match the expected data, the learning algorithm changes the walking behaviour until the robot walks well and without stumbling.”</p> <p>Sensor data from the robot’s feet are continuously compared with the expected touch-down data predicted by the robot’s CPG. If the robot stumbles, the learning algorithm changes how far the legs swing back and forth, how fast the legs swing, and how long a leg stays on the ground.</p> <p>“Changing the CPG output while keeping reflexes active and monitoring the robot stumbling is a core part of the learning process,” Ruppert says.</p> <p>Within one hour, Morti can go from stumbling around like a newborn animal to walking, optimising its movement patterns faster than an animal and increasing its energy efficiency by 40%.</p> <p>“We can’t easily research the spinal cord of a living animal. But we can model one in the robot,” says co-author Dr Alexander Badri-Spröwitz, head of the Dynamic Locomotion research group.</p> <p>“We know that these CPGs exist in many animals. We know that reflexes are embedded; but how can we combine both so that animals learn movements with reflexes and CPGs?</p> <p>“This is fundamental research at the intersection between robotics and biology. The robotic model gives us answers to questions that biology alone can’t answer.”</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/robot-machine-learning-to-walk/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/imma-perfetto" target="_blank" rel="noopener">Imma Perfetto</a>. Imma Perfetto is a science writer at Cosmos. She has a Bachelor of Science with Honours in Science Communication from the University of Adelaide.</em></p> <p><em>Image: Dynamic Locomotion Group (YouTube)</em></p> </div>
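Ruppert's description suggests a compact learning loop: score the current gait by how often the CPG's predicted touch-down disagrees with the foot sensors (the "stumble rate"), then adjust the gait parameters to shrink it. The sketch below implements that loop with a simplified square-wave contact model and hill climbing; all names and numbers are stand-ins, not the MPI-IS controller.

```python
# Illustrative Morti-style gait learning: the stumble rate -- how often
# the CPG's predicted foot touch-down disagrees with the foot sensors --
# scores the gait, and hill climbing adjusts the parameters. The contact
# model and numbers are simplified stand-ins, not the MPI-IS code.
import random

TRUE_GAIT = {"swing_freq": 1.3, "stance_time": 0.45}   # what the legs need

def stumble_rate(params, trials=200):
    """Fraction of steps where CPG prediction and foot sensor disagree."""
    miss = 0
    for step in range(trials):
        t = step * 0.013
        expected = (t * params["swing_freq"]) % 1.0 < params["stance_time"]
        actual = (t * TRUE_GAIT["swing_freq"]) % 1.0 < TRUE_GAIT["stance_time"]
        miss += expected != actual
    return miss / trials

params = {"swing_freq": 1.0, "stance_time": 0.3}        # 'born knowing nothing'
best = stumble_rate(params)
for _ in range(2000):                                   # practice session
    trial = {k: v + random.gauss(0, 0.02) for k, v in params.items()}
    score = stumble_rate(trial)
    if score < best:                                    # stumbles less: keep it
        params, best = trial, score

print(f"stumble rate {best:.2%} with gait {params}")
```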

Technology


New “sweaty” living skin for robots might make your skin crawl

<p dir="ltr">A team of Japanese scientists have crafted the first living skin for robots that not only resembles our skin in texture, but it also repels water and has self-healing functions just like ours.</p> <p dir="ltr">To craft the skin, the team submerged a robotic finger into a cylinder filled with collagen and human dermal fibroblasts - the two main components that make up our skin’s connective tissues. The way that this mixture shrank and conformed to the finger that gave it such a realistic appearance - making for a large leap forward in terms of creating human-like appearances for robots.</p> <p><span id="docs-internal-guid-699f2960-7fff-1b2e-d849-c1bc95a796a9">“The finger looks slightly ‘sweaty’ straight out of the culture medium,” <a href="https://www.scimex.org/newsfeed/this-robots-sweaty-living-skin-that-can-heal-might-make-your-skin-crawl" target="_blank" rel="noopener">says</a> Shoji Takeuchi, a professor at the University of Tokyo and the study’s first author. “Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”</span></p> <p><img src="https://oversixtydev.blob.core.windows.net/media/2022/06/robot-finger1.jpg" alt="" width="1280" height="720" /></p> <p dir="ltr"><em>The team submerged the robotic finger into a mixture of collagen and human dermal fibroblasts to create the new skin. Image: Shoji Takeuchi</em></p> <p dir="ltr">Realism is a top priority for humanoid robots tasked with interacting with people in healthcare and the service industry, since looking human can improve communication efficiency and even make us like the robot more.</p> <p dir="ltr">Current methods of creating skin for robots use silicone, which effectively mimic human appearance but fall short in creating delicate textures, such as wrinkles, and in having skin-specific functions.</p> <p dir="ltr">Meanwhile, trying to tailor sheets of living skin - commonly used in skin grafting - is difficult when it comes to conforming to fingers, which have uneven surfaces and need to be able to move.</p> <p dir="ltr">“With that method, you have to have the hands of a skilled artisan who can cut and tailor the skin sheets,” Takeuchi says. “To efficiently cover surfaces with skin cells, we established a tissue moulding method to directly mould skin tissue around the robot, which resulted in a seamless skin coverage on a robotic finger.”</p> <p dir="ltr">Other experts have also noted that this level of realism could have the opposite effect, in a phenomenon known as the “uncanny valley” effect.</p> <p dir="ltr">“It is possible that the human-like appearance [of some robots] induces certain expectations but when they do not meet those expectations, they are found eerie or creepy,” Dr Burcu Ürgen, an assistant professor in psychology at Bilkent University, Turkey, who wasn’t involved in the study, told <em><a href="https://www.theguardian.com/science/2022/jun/09/scientists-make-slightly-sweaty-robotic-finger-with-living-skin" target="_blank" rel="noopener">The Guardian</a></em>. 
</p> <p dir="ltr">Professor Fabian Grabenhorst, a neuroscientist at the University of Oxford who studies the uncanny-valley effect, also told the publication that people might have an initial negative reaction to these kinds of robots, but that it could shift depending on their interactions with the robot.</p> <p dir="ltr">“Initially people might find it weird, but through positive experiences that might help people overcome those feelings,” he told The Guardian.</p> <p dir="ltr">“It seems like a fantastic technological innovation.”</p> <p dir="ltr">As exciting as this discovery is, Takeuchi adds that it’s “just the first step” in covering robots in living skin, with their future work looking to allow the skin to survive without constant nutrient supply and waste removal, as well as including hair follicles, nails, sweat glands and sensory neurons.</p> <p dir="ltr">“I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies,” he says.</p> <p dir="ltr">Their study was published in the journal <em><a href="https://doi.org/10.1016/j.matt.2022.05.019" target="_blank" rel="noopener">Matter</a></em>.</p> <p><span id="docs-internal-guid-062b1015-7fff-6c39-2718-c1df1e65a8cd"></span></p> <p dir="ltr"><em>Image: Shoji Takeuchi</em></p>

Technology


The variation advantage: how to master tennis, learn a language, or build better AI

<p>Want to become a better tennis player? If you repeatedly practise serving to the same spot, you’ll master serving to that <em>exact</em> location, if conditions remain similar. Practising your serve to a variety of locations will take much longer to master, but in the end you’ll be a better tennis player, and much more capable of facing a fierce opponent.</p> <p>The reason why is all about variability: the more we’re exposed to, the better our neural networks are able to generalise and calculate which information is important to the task, and which is not. This also helps us learn and make decisions in new contexts.</p> <p><strong>From fox to hounds</strong></p> <p>This generalisation principle can be applied to many things, including learning languages or recognising dog breeds. For example, an infant will have difficulty learning what a ‘dog’ is if they are only exposed to chihuahuas instead of many dog breeds (chihuahuas, beagles, bulldogs etc.), which show the real variation of <em>Canis lupus familiaris</em>. Including information about what is <em>not</em> in the dog category – for example foxes – also helps us build generalisations, which helps us to eliminate irrelevant information.</p> <p>“Learning from less variable input is often fast, but may fail to generalise to new stimuli,” says Dr Limor Raviv, the senior investigator from the Max Planck Institute (Germany). “But these important insights have not been unified into a single theoretical framework, which has obscured the bigger picture.”</p> <p>To better understand the patterns behind this generalisation framework, and how variability affects the human learning process and that of computers, Raviv’s research team explored over 150 studies on variability and generalisation across the fields of computer science, linguistics, motor learning, visual perception and formal education.</p> <p><strong>Wax on, wax off</strong></p> <p>The researchers found that there are at least four kinds of variability:</p> <ul> <li><strong>Numerosity</strong> (set size): the number of different examples, such as the number of locations on the tennis court a served ball could land</li> <li><strong>Heterogeneity</strong> (differences between examples): serving to the same spot versus serving to different spots</li> <li><strong>Situational</strong> (context) diversity: facing the same opponent on the same court or a different opponent on a different court</li> <li><strong>Scheduling</strong> (interleaving, spacing): how frequently you practise, and in what order you practise components of a task</li> </ul> <p>“These four kinds of variability have never been directly compared – which means that we currently don’t know which is most effective for learning,” says Raviv.</p> <p>According to the ‘Mr Miyagi principle’, inspired by the 1984 movie <em>The Karate Kid</em>, practising unrelated skills – such as waxing cars or painting fences – might actually benefit the learning of other skills: in the movie’s case, martial arts.</p> <p><strong>Lemon or lime?</strong></p> <p>So why does including variability in training slow things down? One theory is that there are always exceptions to the rules, which makes learning and generalising harder.</p> <p>For example, while colour is important for distinguishing lemons from limes, it wouldn’t be helpful for telling cars and trucks apart. Then there are atypical examples – such as a chihuahua that doesn’t look like a dog, and a fox that does, but isn’t.</p> <p>So as well as learning a rule to make neural shortcuts, we also have to learn exceptions to these rules, which makes learning slower and more complicated. This means that when training is variable, learners have to actively reconstruct memories, which takes more effort.</p> <p><strong>Putting a face to a name</strong></p> <p>So how do we train ourselves and computers to recognise faces? Consider the example of a machine learning to identify a fox. Providing several variations of the same image – including rotation, colour changes and partial masking – improves the machine’s ability to generalise. This data augmentation technique is an effective way of expanding the amount of available data by providing variations of the same data point, but it slows down the speed of learning.</p> <p>Humans are the same: the more variables we’re presented with, the harder it is for us to learn – but eventually it pays off in a greater ability to generalise knowledge in new contexts.</p> <p>“Understanding the impact of variability is important for literally every aspect of our daily life. Beyond affecting the way we learn language, motor skills, and categories, it even has an impact on our social lives,” explains Raviv. “For example, face recognition is affected by whether people grew up in a small community (fewer than 1,000 people) or in a larger community (over 30,000 people). Exposure to fewer faces during childhood is associated with diminished face memory.”</p> <p>The learning message for both humans and AI is clear: variation is key. Switch up your tennis serve, play with lots of different dogs, and practise language with a variety of speakers. Your brain (or algorithm) will thank you for it… eventually.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/people/behaviour/the-variation-advantage-how-to-master-tennis-learn-a-language-or-build-better-ai/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/qamariya-nasrullah" target="_blank" rel="noopener">Qamariya Nasrullah</a>. Qamariya Nasrullah holds a PhD in evolutionary development from Monash University and an Honours degree in palaeontology from Flinders University.</em></p> <p><em>Image: Getty Images</em></p> </div>
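The fox-image augmentation described under "Putting a face to a name" takes only a few lines with standard tools. A minimal sketch using torchvision follows; a blank orange image stands in for the fox photo so the snippet runs anywhere.

```python
# Minimal data-augmentation sketch: rotation, colour changes and
# partial masking expand one training image into many variants.
# The blank orange image is a stand-in for a real fox photo.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),           # rotation
    transforms.ColorJitter(brightness=0.4, hue=0.1), # colour variation
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),                 # partial masking
])

image = Image.new("RGB", (64, 64), color="orange")   # stand-in fox photo
variants = [augment(image) for _ in range(8)]        # 8 augmented tensors
print(len(variants), variants[0].shape)              # 8 torch.Size([3, 64, 64])
```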

Technology


Move over, Iron Chef, this metallic cook has just learned how to taste

<p>In an episode of <em>Futurama</em>, the robot Bender wants to be a chef but has to overcome the not inconsiderable hurdle of being incapable of taste. It was beautiful.</p> <p>Move over, Bender. A new robot has not only been programmed to taste, it has been trained to taste food at different stages of the cooking process to check for seasoning. Researchers from the University of Cambridge, UK, working with domestic appliances manufacturer Beko, hope the new robot will be useful in the development of automated food preparation.</p> <p>It’s a cliché of cooking that you must “taste as you go”. But tasting isn’t as simple as it may seem: at different stages of the chewing process, the release of saliva and digestive enzymes changes our perception of flavour.</p> <p>The robot chef had already mastered the <a href="https://www.cam.ac.uk/research/news/a-good-egg-robot-chef-trained-to-make-omelettes" target="_blank" rel="noreferrer noopener">omelette</a>, based on human tasters’ feedback. Now, results <a href="https://dx.doi.org/10.3389/frobt.2022.886074" target="_blank" rel="noreferrer noopener">published</a> in the journal <em>Frontiers in Robotics &amp; AI</em> show the robot tasting nine different variations of scrambled eggs and tomatoes at three different stages of the chewing process to produce a “taste map”.</p> <p>Using machine-learning algorithms and the “taste as you go” approach, the robot was able to quickly and accurately judge the saltiness of the simple scrambled egg dish. The new method was a significant improvement over other tasting tech based on only a single sample.</p> <p>Saltiness was measured by a conductance probe attached to the robot’s arm. The researchers prepared the dish, varying the number of tomatoes and the amount of salt. “Chewed” food was passed through a blender, then tested for saltiness again.</p> <p><em>Video: this robot ‘chef’ is learning to be a better cook by ‘tasting’ the saltiness of a simple dish of eggs and tomatoes at different stages of the cooking process, imitating a similar process in humans. Credit: Bio-Inspired Robotics Laboratory, University of Cambridge.</em></p> <p>“Most home cooks will be familiar with the concept of tasting as you go – checking a dish throughout the cooking process to check whether the balance of flavours is right,” said lead author Grzegorz Sochacki from the University of Cambridge’s Department of Engineering. “If robots are to be used for certain aspects of food preparation, it’s important that they are able to ‘taste’ what they’re cooking.”</p> <p>The new approach aims to mimic the continuous feedback provided to the human brain in the process of chewing, says Dr Arsen Abdulali, also from Cambridge’s Department of Engineering. “Current methods of electronic testing only take a single snapshot from a homogenised sample, so we wanted to replicate a more realistic process of chewing and tasting in a robotic system, which should result in a tastier end product.”</p> <p>“When a robot is learning how to cook, like any other cook, it needs indications of how well it did,” said Abdulali. “We want the robots to understand the concept of taste, which will make them better cooks. In our experiment, the robot can ‘see’ the difference in the food as it’s chewed, which improves its ability to taste.”</p> <p>“We believe that the development of robotic chefs will play a major role in busy households and assisted living homes in the future,” said senior Beko scientist Dr Muhammad W. Chugtai. “This result is a leap forward in robotic cooking, and by using machine and deep-learning algorithms, mastication will help robot chefs adjust taste for different dishes and users.”</p> <p>Next on the menu will be training robots to expand their tasting abilities to oily or sweet food, for example. Sounds pretty sweet.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/robot-machine-learning-taste/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/evrim-yazgin" target="_blank" rel="noopener">Evrim Yazgin</a>. Evrim Yazgin has a Bachelor of Science majoring in mathematical physics and a Master of Science in physics, both from the University of Melbourne.</em></p> <p><em>Image: Shutterstock</em></p> </div>
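As a rough sketch of "taste as you go", the snippet below pools conductance readings from three chewing stages into one saltiness estimate instead of judging from a single snapshot. The readings, units and linear model are invented for illustration, not the Cambridge/Beko system.

```python
# Hypothetical 'taste as you go' sketch: combine conductance-probe
# readings from three chewing stages into one saltiness estimate.
# Data and model are invented, not the Cambridge/Beko pipeline.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: [unchewed, part-chewed, fully blended] probe conductance.
X = np.array([
    [0.10, 0.15, 0.22],   # barely salted
    [0.30, 0.42, 0.55],
    [0.55, 0.70, 0.85],   # heavily salted
])
y = np.array([0.5, 1.5, 3.0])  # teaspoons of salt in each training dish

taster = LinearRegression().fit(X, y)
new_dish = [[0.40, 0.50, 0.66]]   # three-stage readings for a new dish
print(f"estimated salt: {taster.predict(new_dish)[0]:.1f} tsp")
```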

Technology


5 things to know about foundation models and the next generation of AI

<p>If you’ve seen photos of <a href="https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html" target="_blank" rel="noopener">a teapot shaped like an avocado</a> or read a well-written article that <a href="https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3" target="_blank" rel="noopener">veers off on slightly weird tangents</a>, you may have been exposed to a new trend in artificial intelligence (AI).</p> <p>Machine learning systems called <a href="https://openai.com/dall-e-2/" target="_blank" rel="noopener">DALL-E</a>, <a href="https://openai.com/blog/gpt-3-edit-insert/" target="_blank" rel="noopener">GPT</a> and <a href="https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html" target="_blank" rel="noopener">PaLM</a> are making a splash with their incredible ability to generate creative work.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">DALL·E 2 is here! It can generate images from text, like "teddy bears working on new AI research on the moon in the 1980s".</p> <p>It's so fun, and sometimes beautiful.<a href="https://t.co/XZmh6WkMAS">https://t.co/XZmh6WkMAS</a> <a href="https://t.co/3zOu30IqCZ">pic.twitter.com/3zOu30IqCZ</a></p> <p>— Sam Altman (@sama) <a href="https://twitter.com/sama/status/1511715302265942024?ref_src=twsrc%5Etfw">April 6, 2022</a></p></blockquote> <p>These systems are known as “foundation models” and are not all hype and party tricks. So how does this new approach to AI work? And will it be the end of human creativity and the start of a deep-fake nightmare?</p> <p><strong>1. What are foundation models?</strong></p> <p><a href="https://arxiv.org/abs/2108.07258" target="_blank" rel="noopener">Foundation models</a> work by training a single huge system on large amounts of general data, then adapting the system to new problems. Earlier models tended to start from scratch for each new problem.</p> <p>DALL-E 2, for example, was trained to match pictures (such as a photo of a pet cat) with the caption (“Mr. Fuzzyboots the tabby cat is relaxing in the sun”) by scanning hundreds of millions of examples. Once trained, this model knows what cats (and other things) look like in pictures.</p> <p>But the model can also be used for many other interesting AI tasks, such as generating new images from a caption alone (“Show me a koala dunking a basketball”) or editing images based on written instructions (“Make it look like this monkey is paying taxes”).</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Our newest system DALL·E 2 can create realistic images and art from a description in natural language. See it here: <a href="https://t.co/Kmjko82YO5">https://t.co/Kmjko82YO5</a> <a href="https://t.co/QEh9kWUE8A">pic.twitter.com/QEh9kWUE8A</a></p> <p>— OpenAI (@OpenAI) <a href="https://twitter.com/OpenAI/status/1511707245536428034?ref_src=twsrc%5Etfw">April 6, 2022</a></p></blockquote> <p><strong>2. How do they work?</strong></p> <p>Foundation models run on “<a href="https://theconversation.com/what-is-a-neural-network-a-computer-scientist-explains-151897" target="_blank" rel="noopener">deep neural networks</a>”, which are loosely inspired by how the brain works. 
These involve sophisticated mathematics and a huge amount of computing power, but they boil down to a very sophisticated type of pattern matching.</p> <p>For example, by looking at millions of example images, a deep neural network can associate the word “cat” with patterns of pixels that often appear in images of cats – like soft, fuzzy, hairy blobs of texture. The more examples the model sees (the more data it is shown), and the bigger the model (the more “layers” or “depth” it has), the more complex these patterns and correlations can be.</p> <p>Foundation models are, in one sense, just an extension of the “deep learning” paradigm that has dominated AI research for the past decade. However, they exhibit un-programmed or “emergent” behaviours that can be both surprising and novel.</p> <p>For example, Google’s PaLM language model seems to be able to produce explanations for complicated metaphors and jokes. This goes beyond simply <a href="https://arxiv.org/abs/2204.02311" target="_blank" rel="noopener">imitating the types of data it was originally trained to process</a>.</p> <figure class="align-center "><img src="https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=333&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=333&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=333&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=418&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=418&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=418&amp;fit=crop&amp;dpr=3 2262w" alt="A user interacting with the PaLM language model by typing questions. The AI system responds by typing back answers." /><figcaption><span class="caption">The PaLM language model can answer complicated questions.</span> <span class="attribution"><a class="source" href="https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html" target="_blank" rel="noopener">Google AI</a></span></figcaption></figure> <p><strong>3. Access is limited – for now</strong></p> <p>The sheer scale of these AI systems is difficult to think about. PaLM has <em>540 billion</em> parameters, meaning even if everyone on the planet memorised 50 numbers, we still wouldn’t have enough storage to reproduce the model.</p> <p>The models are so enormous that training them requires massive amounts of computational and other resources. 
One estimate put the cost of training OpenAI’s language model GPT-3 at <a href="https://lambdalabs.com/blog/gpt-3/" target="_blank" rel="noopener">around US$5 million</a>.</p> <p>As a result, only huge tech companies such as OpenAI, Google and Baidu can afford to build foundation models at the moment. These companies limit who can access the systems, which makes economic sense.</p> <p>Usage restrictions may give us some comfort these systems won’t be used for nefarious purposes (such as generating fake news or defamatory content) any time soon. But this also means independent researchers are unable to interrogate these systems and share the results in an open and accountable way. So we don’t yet know the full implications of their use.</p> <p><strong>4. What will these models mean for ‘creative’ industries?</strong></p> <p>More foundation models will be produced in coming years. Smaller models are already being published in <a href="https://openai.com/blog/gpt-2-1-5b-release/" target="_blank" rel="noopener">open-source forms</a>, tech companies are starting to <a href="https://openai.com/blog/openai-api/" target="_blank" rel="noopener">experiment with licensing and commercialising these tools</a> and AI researchers are working hard to make the technology more efficient and accessible.</p> <p>The remarkable creativity shown by models such as PaLM and DALL-E 2 demonstrates that creative professional jobs could be impacted by this technology sooner than initially expected.</p> <p>Traditional wisdom always said robots would displace “blue collar” jobs first. “White collar” work was meant to be relatively safe from automation – especially professional work that required creativity and training.</p> <p>Deep learning AI models already exhibit super-human accuracy in tasks like <a href="https://theconversation.com/ai-could-be-our-radiologists-of-the-future-amid-a-healthcare-staff-crisis-120631" target="_blank" rel="noopener">reviewing x-rays</a> and <a href="https://www.macularsociety.org/about/media/news/breakthrough-artificial-intelligence-ai-helps-detect-dry-amd/" target="_blank" rel="noopener">detecting the eye condition macular degeneration</a>. Foundation models may soon provide cheap, “good enough” creativity in fields such as advertising, copywriting, stock imagery or graphic design.</p> <p>The future of professional and creative work could look a little different than we expected.</p> <p><strong>5. What this means for legal evidence, news and media</strong></p> <p>Foundation models will inevitably <a href="https://www.abc.net.au/news/2021-08-01/historic-decision-allows-ai-to-be-recognised-as-an-inventor/100339264" target="_blank" rel="noopener">affect the law</a> in areas such as intellectual property and evidence, because we won’t be able to assume <a href="https://www.smithsonianmag.com/smart-news/us-copyright-office-rules-ai-art-cant-be-copyrighted-180979808/" target="_blank" rel="noopener">creative content is the result of human activity</a>.</p> <p>We will also have to confront the challenge of disinformation and misinformation generated by these systems. 
We already face enormous problems with disinformation, as we are seeing in the <a href="https://theconversation.com/fake-viral-footage-is-spreading-alongside-the-real-horror-in-ukraine-here-are-5-ways-to-spot-it-177921" target="_blank" rel="noopener">unfolding Russian invasion of Ukraine</a> and the nascent problem of <a href="https://theconversation.com/3-2-billion-images-and-720-000-hours-of-video-are-shared-online-daily-can-you-sort-real-from-fake-148630" target="_blank" rel="noopener">deep fake</a> images and video, but foundation models are poised to super-charge these challenges.</p> <p><strong>Time to prepare</strong></p> <p>As researchers who <a href="https://www.admscentre.org.au/" target="_blank" rel="noopener">study the effects of AI on society</a>, we think foundation models will bring about huge transformations. They are tightly controlled (for now), so we probably have a little time to understand their implications before they become a huge issue.</p> <p>The genie isn’t quite out of the bottle yet, but foundation models are a very big bottle – and inside there is a very clever genie.</p> <p><em><a href="https://theconversation.com/profiles/aaron-j-snoswell-1331146" target="_blank" rel="noopener">Aaron J. Snoswell</a>, Post-doctoral Research Fellow, Computational Law &amp; AI Accountability, <a href="https://theconversation.com/institutions/queensland-university-of-technology-847" target="_blank" rel="noopener">Queensland University of Technology</a> and <a href="https://theconversation.com/profiles/dan-hunter-1336925" target="_blank" rel="noopener">Dan Hunter</a>, Executive Dean of the Faculty of Law, <a href="https://theconversation.com/institutions/queensland-university-of-technology-847" target="_blank" rel="noopener">Queensland University of Technology</a></em></p> <p><em>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/robots-are-creating-images-and-telling-jokes-5-things-to-know-about-foundation-models-and-the-next-generation-of-ai-181150" target="_blank" rel="noopener">original article</a>.</em></p> <p><em>Image: OpenAI</em></p>
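<p>Point 1's "train once on general data, then adapt" recipe can be demonstrated at toy scale. The Python sketch below uses a plain logistic-regression "model" and synthetic data, all invented for illustration (real foundation models are vastly larger neural networks): a classifier warm-started from "pretrained" weights is compared with one trained from scratch on the same 20 labelled examples of a new, related task.</p>
<pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w=None, steps=500, lr=0.5):
    """Logistic regression via gradient descent; pass w to continue training."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)  # gradient step
    return w

# "Pretraining": plenty of data for a broad, general task.
true_w = np.linspace(-1.0, 1.0, 20)
X_big = rng.normal(size=(5000, 20))
y_big = (X_big @ true_w > 0).astype(float)
w_pre = train(X_big, y_big)

# "Adaptation": a related task, but only 20 labelled examples.
X_small = rng.normal(size=(20, 20))
y_small = (X_small @ true_w + 0.3 * X_small[:, 0] > 0).astype(float)
w_warm = train(X_small, y_small, w=w_pre, steps=100)  # start from pretrained
w_cold = train(X_small, y_small, steps=100)           # start from scratch

# Evaluate both on fresh data from the new task.
X_test = rng.normal(size=(2000, 20))
y_test = (X_test @ true_w + 0.3 * X_test[:, 0] > 0).astype(float)

def accuracy(w):
    return (((X_test @ w) > 0) == y_test).mean()

print(f"warm-started from 'pretrained' weights: {accuracy(w_warm):.2f}")
print(f"trained from scratch on 20 examples:    {accuracy(w_cold):.2f}")
</code></pre>
<p>Run it and the warm-started model should score noticeably higher on held-out data, which is the whole economic appeal of foundation models: the expensive general training is paid for once, and each new task is cheap.</p>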

Technology

Placeholder Content Image

Pompeii’s ancient ruins guarded by a robot “dog”

<p dir="ltr">The Archaeological Park of Pompeii has found a unique way to patrol the historical archaeological areas and structures of Pompeii in Italy. </p> <p dir="ltr">Created by Boston Dynamics, a robot “dog” named Spot is being used to identify structural and safety issues at Pompeii: the ancient Roman city that was encased in volcanic ash following the 79 C.E. eruption of Mount Vesuvius.</p> <p dir="ltr">The robot is the latest addition to a broader initiative to transform Pompeii into a “Smart Archaeological Park” with “intelligent, sustainable and inclusive management.”</p> <p dir="ltr">The movement for this “integrated technological solution” began in 2013, when UNESCO threatened to remove the site from the World Heritage List unless drastic measures were taken to improve its preservation, after structural deficiencies started to emerge. </p> <p dir="ltr">The goal, as noted in the release, is to “improve both the quality of monitoring of the existing areas, and to further our knowledge of the state of progress of the works in areas undergoing recovery or restoration, and thereby to manage the safety of the site, as well as that of workers.”</p> <p dir="ltr">“We wish to test the use of these robots in the underground tunnels that were made by illegal excavators and which we are uncovering in the area around Pompeii, as part of a memorandum of understanding with the Public Prosecutor’s Office of Torre Annunziata,” said Pompeii’s director general Gabriel Zuchtriegel in a statement.</p> <p dir="ltr">In addition to having Spot the “dog” patrol the area, a laser scanner will also fly over the 163-acre site and record data, which will be used to study and plan further interventions to preserve the ancient ruins of Pompeii. </p> <p dir="ltr"><em>Image credits: Getty Images</em></p>

Art

Placeholder Content Image

Best stroke: Microswimmers that can deliver drugs around the body

<p>Picture an artificial cell: a self-propelling mixture of chemicals, somewhere between a thousandth and a tenth of a millimetre in size, able to travel around the body delivering medicines.</p> <p>This could become a reality with microswimmers – micrometre-sized blobs of liquid that can move independently, thanks to either chemical or physical mechanisms. There are plenty of naturally occurring microswimmers, but researchers have begun to tune artificial ones to do more interesting jobs.</p> <p>Artificial microswimmers can be very simple – last year, a group of researchers published a method for microswimmers you could <a href="https://cosmosmagazine.com/science/almost-home-made-microswimmers/" target="_blank" rel="noreferrer noopener">make at home</a> (provided you have a pipette and a microscope). But more complex “microrobots” have even more potential.</p> <p>Last month, for instance, researchers at the Max Planck Institute for Intelligent Systems, Germany, announced they’d developed light-powered microswimmers that can move through biological fluids.</p> <p>The researchers’ microswimmers are made from a porous substance called poly(heptazine imide) carbon nitride. This material comprises organic (carbon-containing) molecules linked together in a flat sheet, making it a “two-dimensional” <a href="https://cosmosmagazine.com/science/explainer-what-is-a-polymer/" target="_blank" rel="noreferrer noopener">polymer</a>.</p> <p>The microswimmers can be propelled forwards by light, and can also be triggered to release chemicals they’re holding – making them prime targets for drug delivery.</p> <p>Light-powered microswimmers aren’t an entirely new concept, though it had previously been tricky to make them work in biological environments.</p> <p>“The use of light as the energy source of propulsion is very convenient when doing experiments in a petri dish or for applications directly under the skin,” says co-author Filip Podjaski.</p> <p>“There is just one problem: even tiny concentrations of salts prohibit light-controlled motion. Salts are found in all biological liquids – in blood, cellular fluids, digestive fluids etc.”</p> <p>But these microswimmers can move in even the most saline liquids. Podjaski says this is because of the porous nature of the material, as well as its light sensitivity.</p> <p>“In addition, in this material, light favours the mobility of ions, making the particle even faster,” he says.</p> <p>Currently, the microswimmers can release drugs in very acidic environments, but the researchers are still looking for other release mechanisms they can use. 
Artificial microswimmers are a long way from drug delivery or use in humans, but they’ve got plenty of exciting potential.</p> <p>“We hope to inspire many smart minds to find even better ways for controlling microrobots and designing a responsive function to the benefit of our society,” says co-author Metin Sitti.</p> <p>The findings were <a href="https://dx.doi.org/10.1126/scirobotics.abm1421" target="_blank" rel="noreferrer noopener">published</a> in <em>Science Robotics.</em></p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/materials/microswimmers-targeted-drug-delivery-light/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/ellen-phiddian" target="_blank" rel="noopener">Ellen Phiddian</a>. Ellen Phiddian is a science journalist at Cosmos. She has a BSc (Honours) in chemistry and science communication, and an MSc in science communication, both from the Australian National University.</em></p> <p><em>Image: Getty Images</em></p> </div>

Body

Placeholder Content Image

Archaeologists turn to robots to save Pompeii

<p dir="ltr">The city of Pompeii has experienced not one, but two deathly experiences - first from a volcanic eruption, then from neglect - and technology is now being used to keep it safe going into the future.</p> <p dir="ltr">Decades of neglect, mismanagement and scant maintenance of the popular ruins resulted in the 2010 collapse of a hall where gladiators once trained, nearly costing Pompeii its UNESCO World Heritage status.</p> <p dir="ltr">Despite this, Pompeii is facing a brighter future.</p> <p dir="ltr">The ruins were saved from further degradation due to the Great Pompeii Project, which saw about 105 million euros in European Union funds directed to the site, as long as it was spent promptly and effectively by 2016.</p> <p dir="ltr">Now, the Archaeological Park of Pompeii’s new director is looking to innovative technology to help restore areas of the ruins and reduce the impacts of a new threat: climate change.</p> <p dir="ltr">Archaeologist Gabriel Zuchtriegel, who was appointed director-general of the site in mid-2021, told the Associated Press that technology is essential “in this kind of battle against time”.</p> <p><span id="docs-internal-guid-95bf233a-7fff-da0a-2b03-4e06169e156c">“Some conditions are already changing and we can already measure this,” Zuchtriegel <a href="https://www.nzherald.co.nz/travel/pompeii-rebirth-of-italys-dead-city-that-nearly-died-again/XOOKT34VC3A6ZFG5BJLDC62FJI/" target="_blank" rel="noopener">said</a>.</span></p> <p><img src="https://oversixtydev.blob.core.windows.net/media/2022/02/pompeii1.jpg" alt="" width="1280" height="720" /></p> <p dir="ltr"><em>Archaeologists and scientists are joining forces to preserve and reconstruct artefacts found in Pompeii. Image: Pompeii Archeological Park (Instagram)</em></p> <p dir="ltr">So instead of relying on human eyes to detect signs of climate-caused deterioration on mosaic floors and frescoed walls across the site’s 10,000 excavated rooms, experts will rely on artificial intelligence (AI) and drones. </p> <p dir="ltr">The technology will provide experts with data and images in real-time, and will alert them to “take a closer look and eventually intervene before things happen”, Zuchtriegel said.</p> <p dir="ltr">Not only that, but AI and robots have been used to reassemble frescoes and artefacts that have crumbled into miniscule fragments that are difficult to reconstruct using human hands.</p> <p dir="ltr">“The amphorae, the frescoes, the mosaics are often brought to light fragmented, only partially intact or with many missing parts,” Zuchtriegel <a href="http://pompeiisites.org/comunicati/al-via-il-progetto-repair-la-robotica-e-la-digitalizzazione-al-servizio-dellarcheologia/" target="_blank" rel="noopener">said</a>.</p> <p dir="ltr">“When the number of fragments is very large, with thousands of pieces, manual reconstruction and recognition of the connections between the fragments is almost always impossible or in any case very laborious and slow.</p> <p><span id="docs-internal-guid-32168df9-7fff-f97f-2b16-a0c3c34e40be"></span></p> <p dir="ltr">“This means that various finds lie for a long time in archaeological deposits, without being able to be reconstructed and restored, let alone returned to the attention of the public.”</p> <p dir="ltr"><img src="https://oversixtydev.blob.core.windows.net/media/2022/02/pompeii2.jpg" alt="" width="1280" height="720" /></p> <p dir="ltr"><em>The robot uses mechanical arms and hands to position pieces in the right place. 
Image: Pompeii Archeological Park (Instagram)</em></p> <p dir="ltr">The “RePAIR” project, an acronym for Reconstructing the past: Artificial Intelligence and Robotics meet Cultural Heritage, has seen scientists from the Italian Institute of Technology create a robot to fix this problem.</p> <p dir="ltr">It involves robots scanning the fragments and recognising them through a 3D digitisation system before placing them in the right position using mechanical arms and hands equipped with sensors.</p> <p dir="ltr"><img src="https://oversixtydev.blob.core.windows.net/media/2022/02/pompeii3.jpg" alt="" width="1280" height="720" /></p> <p dir="ltr"><em>The project will focus on frescoes in the House of the Painters at Work, which were shattered during WWII. Image: Pompeii Archeological Park (Instagram)</em></p> <p dir="ltr">One goal is to reconstruct the frescoed ceiling of the House of the Painters at Work, which was shattered by Allied bombing during World War II.</p> <p dir="ltr">The fresco in the Schola Armaturarum - the gladiators’ barracks - will also be the target of robotic repairs, after the weight of excavated sections of the city, rainfall accumulation and poor drainage resulted in the structure collapsing.</p> <p dir="ltr"><em>Image: Pompeii Archeological Park (Instagram)</em></p>
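<p>The project statement doesn't describe the matching mathematics, but at the core of any "scan, recognise, place" pipeline sits rigid registration: finding the rotation and translation that carry a scanned fragment onto its place in the reconstruction. Below is a minimal Python sketch of the classic Kabsch algorithm, with a made-up "fragment" and the strong simplifying assumption that point correspondences are already known (discovering them is the hard part the RePAIR robots must solve).</p>
<pre><code class="language-python">
import numpy as np

def kabsch(source, target):
    """Best-fit rotation R and translation t mapping source onto target
    (both N x 3 arrays with rows in correspondence), via SVD."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Toy check: the "scanned fragment" is the true surface, rotated and shifted.
rng = np.random.default_rng(0)
true_pose = rng.random((30, 3))                 # where the piece belongs
a = np.deg2rad(40)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
scan = true_pose @ Rz.T + np.array([0.2, -0.1, 0.05])

R, t = kabsch(scan, true_pose)
print(np.allclose(scan @ R.T + t, true_pose))   # True: placement recovered
</code></pre>
<p>The recovered R and t are exactly the numbers a robotic arm needs to carry the piece from where it was scanned to where it belongs.</p>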

Technology

Placeholder Content Image

Artist robot Ai-Da detained in Egypt on suspicion of espionage

<p><span style="font-weight: 400;">A robot with a flair for the arts was detained at the Egyptian border for 10 days ahead of a major exhibition. </span></p> <p><span style="font-weight: 400;">Ai-Da was set to present her artworks at the foot of the pyramids of Giza: the first ever art exhibition held in the historic area. </span></p> <p><span style="font-weight: 400;">The show, titled </span><em><span style="font-weight: 400;">Forever is Now</span></em><span style="font-weight: 400;">, is an annual event organised by </span><span style="font-weight: 400;">Art D’Égypte to support the art and culture scene in Egypt. </span></p> <p><span style="font-weight: 400;">Ai-Da’s digitally created artworks, and her presence at the event, was set to be the highlight of the show. </span></p> <p><span style="font-weight: 400;">However, Egyptian officials grew concerned when she arrived as her eyes feature cameras and an internet modem. </span></p> <p><span style="font-weight: 400;">Because of Ai-Da’s technology, officials at the Egyptian border grew concerned that she had been sent to the country as part of an espionage conspiracy. </span></p> <p><span style="font-weight: 400;">According to </span><a href="https://www.theguardian.com/world/2021/oct/20/egypt-detains-artist-robot-ai-da-before-historic-pyramid-show"><span style="font-weight: 400;">The Guardian</span></a><span style="font-weight: 400;">, British officials had to work intensively to get Ai-Da out of detainment before the beginning of the art show, </span></p> <p><span style="font-weight: 400;">Egyptian officials offered to let Ai-Da free if she had some of her gadgetry removed, to which Aiden Meller, Ai-Da’s creator, refused. </span></p> <p><span style="font-weight: 400;">They offered to remove her eyes as a security measure, but Aiden insisted that she uses her eyes to create her artwork. </span></p> <p><span style="font-weight: 400;">She was eventually released, with her eyes intact, and the show went ahead as scheduled. </span></p> <p><span style="font-weight: 400;">Ai-Da is able to make unique art thanks to specially designed technology developed by researchers at Oxford and Leeds University. </span></p> <p><span style="font-weight: 400;">Ai-Da’s key algorithm converts images she captures with her camera-eyes and converts them to drawings. </span></p> <p><span style="font-weight: 400;">The robot can also paint portraits, as her creators allowed her technology to analyse colours and techniques used by successful human artists. </span></p> <p><em><span style="font-weight: 400;">Image credits: Getty Images</span></em></p>

Art

Placeholder Content Image

Beware the robot bearing gifts

<div> <div class="copy"> <p>In a future filled with robots, those that pretend to be your friend could be more manipulative than those that exert authority, suggests a new study published in <em>Science Robotics.</em></p> <p>As robots become more common in the likes of education, healthcare and security, it is essential to predict what the relationship between humans and robots will be.</p> <div style="position: relative; display: block; max-width: 100%;"> <div style="padding-top: 56.25%;"><iframe src="https://players.brightcove.net/5483960636001/HJH3i8Guf_default/index.html?videoId=6273649735001" allowfullscreen="" allow="encrypted-media" style="position: absolute; top: 0px; right: 0px; bottom: 0px; left: 0px; width: 100%; height: 100%;"></iframe></div> </div> <p class="caption">Overview of authority HRI study conditions, setup, and robot behaviors. Credit: Autonomous Systems and Biomechatronics Lab, University of Toronto.</p> <p>In the <a rel="noreferrer noopener" href="https://www.science.org/doi/10.1126/scirobotics.abd5186?_ga=2.192393706.1796540797.1632092915-1153018146.1604894082" target="_blank">study</a>, led by Shane Saunderson and Goldie Nejat of the University of Toronto, Canada, researchers programmed a robot called Pepper to influence humans completing attention and memory tasks, by acting either as a friend or an authority figure.</p> <p>They found that people were more comfortable with, and more persuaded by, friendly Pepper.</p> <p>Authoritative Pepper was described by participants as “inhuman,” “creepy,” and giving off an “uncanny valley vibe”.</p> <p>“As it stands, the public has little available education or general awareness of the persuasive potential of social robots, and yet institutions such as banks or restaurants can use them in financially charged situations, without any oversight and only minimal direction from the field,” writes James Young, a computer scientist  from the University of Manitoba, Canada, in a related <a rel="noreferrer noopener" href="http://10.1126/scirobotics.abk3479" target="_blank">Focus</a>.</p> <p>“Although the clumsy and error-prone social robots of today seem a far cry from this dystopian portrayal, Saunderson and Nejat demonstrate how easily a social robot can leverage rudimentary knowledge of human psychology to shape their persuasiveness.”</p> <p class="has-text-align-center"><strong><em>Read more: <a rel="noreferrer noopener" href="https://cosmosmagazine.com/technology/robotics/meet-the-robots-representing-australia-at-the-robot-olympics/" target="_blank">Meet the robots representing Australia at the ‘robot Olympics’</a></em></strong></p> <p>To test a robot’s powers of persuasion, Pepper assumed two personas: one was as a friend who gave rewards, and the other was as an authoritative figure who dealt out punishment.</p> <p>A group of participants were each given $10 and told that the amount of money could increase or decrease, depending on their performance in set memory tasks.</p> <p>Friendly Pepper gave money for correct responses, and authoritative Pepper docked $10 for incorrect responses.</p> <p>The participants then completed tasks in the <a rel="noreferrer noopener" href="https://www.pearsonclinical.co.uk/Psychology/AdultCognitionNeuropsychologyandLanguage/AdultAttentionExecutiveFunction/TestofEverydayAttention(TEA)/TestofEverydayAttention(TEA).aspx" target="_blank">Test of Everyday Attention</a> toolkit, a cognition test based on real-life scenarios.</p> <p>After the participant made an initial guess, Pepper offered them an 
alternative suggestion – this was always the right answer. The participant could then choose to listen to Pepper or go with his or her original answer.</p> <p>The results showed that people were more willing to switch to friendly Pepper’s suggestions than those of authoritative Pepper.</p> <p><em>Image credit: Shutterstock</em></p> <p><em>This article was originally published on <a rel="noopener" href="https://cosmosmagazine.com/technology/robotics/beware-the-robot-bearing-gifts/" target="_blank">cosmosmagazine.com</a> and was written by Deborah Devis.</em></p> </div> </div>

Technology

Placeholder Content Image

Caves in northern Greece are being showcased by a robot tour guide

<p><span style="font-weight: 400;">A new tour guide in Greece is attracting tourists from all over the world, but for a very unusual reason. </span></p> <p><span style="font-weight: 400;">Persephone has been welcoming tourists to the Alistrati Cave in northern Greece since mid-July, but not all of the visitors are coming to see the caves. </span></p> <p><span style="font-weight: 400;">Persephone is the world’s first robot tour guide inside a cave, which covers the first 150 metres of the tour that is open to the public, before a human guide takes over. </span></p> <p><span style="font-weight: 400;">The robot can give its part of the tour in 33 languages and interact with visitors at a basic level in three languages. </span></p> <p><span style="font-weight: 400;">It can also answer most questions, but only in the Greek language. </span></p> <p><span style="font-weight: 400;">The robot’s name comes from an ancient Greek myth, where it was said that in a nearby plain that Pluto — the god of the underworld who was also known as Hades — abducted Persephone, with the consent of her father Zeus, to take her as his wife.</span></p> <p><span style="font-weight: 400;">Nikos Kartalis, the scientific director for the Alistrati site, said the idea of creating a robot guide came to him when he saw one on TV guiding visitors at an art gallery.</span></p> <p><span style="font-weight: 400;">Nikos said the robot finally became a reality after getting funding, with the build of the machine costing AUD$139,000.</span></p> <p><span style="font-weight: 400;">"We already have a 70 per cent increase in visitors compared to last year since we started using" the robot, says Kartalis.</span></p> <p><span style="font-weight: 400;">"People are enthusiastic, especially the children, and people who had visited in the past are coming back to see the robot guide."</span></p> <p><span style="font-weight: 400;">"It is something unprecedented for them, to have the ability to interact with their robot by asking it questions and the robot answering them," he said.</span></p> <p><span style="font-weight: 400;">The caves have been a regular tourist spot since they opened to visitors in 1998, with people coming from all over the world to explore the three million year old site.</span></p> <p><em><span style="font-weight: 400;">Image credit: YouTube</span></em></p>

Travel Trouble

Placeholder Content Image

Tesla unveils new humanoid robot at an awkward event

<p><span style="font-weight: 400;">Tesla CEO and billionaire Elon Musk has confused people with his latest tech product launch. </span></p> <p><span style="font-weight: 400;">At Tesla’s AI Day event, Musk announced his new humanoid “Tesla bot”, which prompted one analyst to call the project a “head-scratcher that will further agitate investors.”</span></p> <p><span style="font-weight: 400;">The entrepreneur said a 172cm, 56kg prototype robot could be ready as soon as next year. </span></p> <p><span style="font-weight: 400;">Instead of waiting until a prototype was ready for the launch, Musk brought out a man in a  latex bodysuit that was created to look like the robot’s design. </span></p> <p><span style="font-weight: 400;">In a bizarre twist, when the “robot” came on stage, they broke out in a dance routine lasting one minute before Musk took to the stage. </span></p> <p><span style="font-weight: 400;">Musk didn’t give many details on the Tesla bot, but insisted it will have a “profound” impact on the economy by driving down labour costs. </span></p> <p><span style="font-weight: 400;">“But not right now because this robot doesn’t work,” Musk noted, nonetheless insisting that, “In the future, physical work will be a choice.”</span></p> <p><span style="font-weight: 400;">“Talk to it and say, ‘please pick up that bolt and attach it to a car with that wrench,’ and it should be able to do that,” Musk said. </span></p> <p><span style="font-weight: 400;">“‘Please go to the store and get me the following groceries.’ That kind of thing. I think we can do that.”</span></p> <p><span style="font-weight: 400;">Musk says that the robot’s primary purpose will be to complete tasks that are “boring, repetitive and dangerous”, giving more free time to individuals who can afford the robot.</span></p> <p><span style="font-weight: 400;">After onlookers raised concerns, Musk said the robot will be designed so that humans can easily run away from or overpower it if needed. </span></p> <p><span style="font-weight: 400;">The Tesla CEO said the robot, which has been named Optimus, will run off the same chips and sensors as Tesla’s so-called Autopilot software, which has faced intense backlash from federal regulators and politicians. </span></p> <p><span style="font-weight: 400;">Twitter users reacted to the news of the Tesla bot with an abundance of memes, saying the idea seemed to be straight out of a movie that does not end well for humankind. </span></p> <p><span style="font-weight: 400;">Check out the unusual “prototype” unveiling below:</span></p> <p><iframe width="560" height="315" src="https://www.youtube.com/embed/TsNc4nEX3c4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></p> <p><em><span style="font-weight: 400;">Image credits: Getty Images/Youtube</span></em></p>

Technology
