5 things to know about foundation models and the next generation of AI
<p>If you’ve seen photos of <a href="https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html" target="_blank" rel="noopener">a teapot shaped like an avocado</a> or read a well-written article that <a href="https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3" target="_blank" rel="noopener">veers off on slightly weird tangents</a>, you may have been exposed to a new trend in artificial intelligence (AI).</p>
<p>Machine learning systems called <a href="https://openai.com/dall-e-2/" target="_blank" rel="noopener">DALL-E</a>, <a href="https://openai.com/blog/gpt-3-edit-insert/" target="_blank" rel="noopener">GPT</a> and <a href="https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html" target="_blank" rel="noopener">PaLM</a> are making a splash with their incredible ability to generate creative work.</p>
<blockquote class="twitter-tweet">
<p dir="ltr" lang="en">DALL·E 2 is here! It can generate images from text, like "teddy bears working on new AI research on the moon in the 1980s".</p>
<p>It's so fun, and sometimes beautiful.<a href="https://t.co/XZmh6WkMAS">https://t.co/XZmh6WkMAS</a> <a href="https://t.co/3zOu30IqCZ">pic.twitter.com/3zOu30IqCZ</a></p>
<p>— Sam Altman (@sama) <a href="https://twitter.com/sama/status/1511715302265942024?ref_src=twsrc%5Etfw">April 6, 2022</a></p></blockquote>
<p>These systems are known as “foundation models” and are not all hype and party tricks. So how does this new approach to AI work? And will it be the end of human creativity and the start of a deep-fake nightmare?</p>
<p><strong>1. What are foundation models?</strong></p>
<p><a href="https://arxiv.org/abs/2108.07258" target="_blank" rel="noopener">Foundation models</a> work by training a single huge system on large amounts of general data, then adapting the system to new problems. Earlier models tended to start from scratch for each new problem.</p>
<p>DALL-E 2, for example, was trained to match pictures (such as a photo of a pet cat) with their captions (“Mr. Fuzzyboots the tabby cat is relaxing in the sun”) by scanning hundreds of millions of examples. Once trained, the model knows what cats (and many other things) look like in pictures.</p>
<p>But the model can also be used for many other interesting AI tasks, such as generating new images from a caption alone (“Show me a koala dunking a basketball”) or editing images based on written instructions (“Make it look like this monkey is paying taxes”).</p>
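<p>As a rough sketch of this “pretrain once, adapt to many tasks” idea, the snippet below loads a small, openly available language model and applies it to a brand-new prompt with no task-specific training. It uses the Hugging Face <code>transformers</code> library and GPT-2 as stand-ins, purely for illustration; DALL-E 2 and PaLM themselves are not openly available this way:</p>
<pre><code># A minimal sketch of the "pretrain once, adapt to many tasks" paradigm.
# GPT-2 stands in for the far larger proprietary models in this article.
from transformers import pipeline

# Load a model pretrained on a large, general text corpus...
generator = pipeline("text-generation", model="gpt2")

# ...and reuse it on a new problem without any further training.
result = generator("A koala dunking a basketball looks", max_new_tokens=25)
print(result[0]["generated_text"])
</code></pre>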
<blockquote class="twitter-tweet">
<p dir="ltr" lang="en">Our newest system DALL·E 2 can create realistic images and art from a description in natural language. See it here: <a href="https://t.co/Kmjko82YO5">https://t.co/Kmjko82YO5</a> <a href="https://t.co/QEh9kWUE8A">pic.twitter.com/QEh9kWUE8A</a></p>
<p>— OpenAI (@OpenAI) <a href="https://twitter.com/OpenAI/status/1511707245536428034?ref_src=twsrc%5Etfw">April 6, 2022</a></p></blockquote>
<p><strong>2. How do they work?</strong></p>
<p>Foundation models run on “<a href="https://theconversation.com/what-is-a-neural-network-a-computer-scientist-explains-151897" target="_blank" rel="noopener">deep neural networks</a>”, which are loosely inspired by how the brain works. These involve complex mathematics and a huge amount of computing power, but they boil down to a very sophisticated type of pattern matching.</p>
<p>For example, by looking at millions of example images, a deep neural network can associate the word “cat” with patterns of pixels that often appear in images of cats – like soft, fuzzy, hairy blobs of texture. The more examples the model sees (the more data it is shown), and the bigger the model (the more “layers” or “depth” it has), the more complex these patterns and correlations can be.</p>
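<p>The toy model below illustrates what “layers” and “depth” mean in code. It uses PyTorch as an assumed framework; real image models are vastly larger, and must be trained on millions of labelled examples before their outputs mean anything:</p>
<pre><code># A toy deep neural network: each layer builds more complex patterns
# out of the simpler ones detected by the layer before it.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),            # turn a 28x28 image into 784 pixel values
    nn.Linear(784, 128),     # layer 1: detect simple pixel patterns
    nn.ReLU(),
    nn.Linear(128, 64),      # layer 2: combine them into richer patterns
    nn.ReLU(),
    nn.Linear(64, 2),        # output: scores for "cat" vs "not cat"
)

fake_image = torch.rand(1, 28, 28)  # a random stand-in for a real photo
print(model(fake_image))            # untrained, so the scores are meaningless
</code></pre>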
<p>Foundation models are, in one sense, just an extension of the “deep learning” paradigm that has dominated AI research for the past decade. However, they exhibit “emergent” behaviours: surprising and novel capabilities that were never explicitly programmed in.</p>
<p>For example, Google’s PaLM language model seems to be able to produce explanations for complicated metaphors and jokes. This goes beyond simply <a href="https://arxiv.org/abs/2204.02311" target="_blank" rel="noopener">imitating the types of data it was originally trained to process</a>.</p>
<figure class="align-center "><img src="https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=333&fit=crop&dpr=1 600w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=333&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=333&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=418&fit=crop&dpr=1 754w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=418&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/457594/original/file-20220412-10836-vaj8rb.gif?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=418&fit=crop&dpr=3 2262w" alt="A user interacting with the PaLM language model by typing questions. The AI system responds by typing back answers." /><figcaption><span class="caption">The PaLM language model can answer complicated questions.</span> <span class="attribution"><a class="source" href="https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html" target="_blank" rel="noopener">Google AI</a></span></figcaption></figure>
<p><strong>3. Access is limited – for now</strong></p>
<p>The sheer scale of these AI systems is difficult to comprehend. PaLM has <em>540 billion</em> parameters, meaning that even if all of Earth’s roughly 7.9 billion people memorised 50 numbers each, we would still fall well short of storing the model.</p>
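<p>The arithmetic behind that claim is easy to check (the 2022 world population figure below is an approximation):</p>
<pre><code># Back-of-envelope check of the memorisation claim.
palm_parameters = 540_000_000_000   # 540 billion
world_population = 7_900_000_000    # roughly 7.9 billion in 2022
numbers_per_person = 50

memorised = world_population * numbers_per_person
print(memorised)                    # 395,000,000,000
print(palm_parameters - memorised)  # 145,000,000,000 parameters still unaccounted for
</code></pre>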
<p>The models are so enormous that training them requires massive amounts of computational and other resources. One estimate put the cost of training OpenAI’s language model GPT-3 at <a href="https://lambdalabs.com/blog/gpt-3/" target="_blank" rel="noopener">around US$5 million</a>.</p>
<p>As a result, only huge tech companies such as OpenAI, Google and Baidu can afford to build foundation models at the moment. These companies limit who can access the systems, which makes economic sense.</p>
<p>Usage restrictions may give us some comfort these systems won’t be used for nefarious purposes (such as generating fake news or defamatory content) any time soon. But this also means independent researchers are unable to interrogate these systems and share the results in an open and accountable way. So we don’t yet know the full implications of their use.</p>
<p><strong>4. What will these models mean for ‘creative’ industries?</strong></p>
<p>More foundation models will be produced in coming years. Smaller models are already being published in <a href="https://openai.com/blog/gpt-2-1-5b-release/" target="_blank" rel="noopener">open-source forms</a>, tech companies are starting to <a href="https://openai.com/blog/openai-api/" target="_blank" rel="noopener">experiment with licensing and commercialising these tools</a> and AI researchers are working hard to make the technology more efficient and accessible.</p>
<p>The remarkable creativity shown by models such as PaLM and DALL-E 2 suggests that creative professional jobs could be disrupted by this technology sooner than initially expected.</p>
<p>Conventional wisdom held that robots would displace “blue collar” jobs first. “White collar” work was meant to be relatively safe from automation, especially professional work that requires creativity and training.</p>
<p>Deep learning AI models already exhibit super-human accuracy in tasks like <a href="https://theconversation.com/ai-could-be-our-radiologists-of-the-future-amid-a-healthcare-staff-crisis-120631" target="_blank" rel="noopener">reviewing x-rays</a> and <a href="https://www.macularsociety.org/about/media/news/breakthrough-artificial-intelligence-ai-helps-detect-dry-amd/" target="_blank" rel="noopener">detecting the eye condition macular degeneration</a>. Foundation models may soon provide cheap, “good enough” creativity in fields such as advertising, copywriting, stock imagery or graphic design.</p>
<p>The future of professional and creative work could look a little different than we expected.</p>
<p><strong>5. What this means for legal evidence, news and media</strong></p>
<p>Foundation models will inevitably <a href="https://www.abc.net.au/news/2021-08-01/historic-decision-allows-ai-to-be-recognised-as-an-inventor/100339264" target="_blank" rel="noopener">affect the law</a> in areas such as intellectual property and evidence, because we won’t be able to assume <a href="https://www.smithsonianmag.com/smart-news/us-copyright-office-rules-ai-art-cant-be-copyrighted-180979808/" target="_blank" rel="noopener">creative content is the result of human activity</a>.</p>
<p>We will also have to confront the challenge of disinformation and misinformation generated by these systems. We already face enormous problems with disinformation, as we are seeing in the <a href="https://theconversation.com/fake-viral-footage-is-spreading-alongside-the-real-horror-in-ukraine-here-are-5-ways-to-spot-it-177921" target="_blank" rel="noopener">unfolding Russian invasion of Ukraine</a> and the nascent problem of <a href="https://theconversation.com/3-2-billion-images-and-720-000-hours-of-video-are-shared-online-daily-can-you-sort-real-from-fake-148630" target="_blank" rel="noopener">deep fake</a> images and video, but foundation models are poised to super-charge these challenges.</p>
<p><strong>Time to prepare</strong></p>
<p>As researchers who <a href="https://www.admscentre.org.au/" target="_blank" rel="noopener">study the effects of AI on society</a>, we think foundation models will bring about huge transformations. They are tightly controlled (for now), so we probably have a little time to understand their implications before they become a huge issue.</p>
<p>The genie isn’t quite out of the bottle yet, but foundation models are a very big bottle – and inside there is a very clever genie.</p>
<p><em><a href="https://theconversation.com/profiles/aaron-j-snoswell-1331146" target="_blank" rel="noopener">Aaron J. Snoswell</a>, Post-doctoral Research Fellow, Computational Law & AI Accountability, <a href="https://theconversation.com/institutions/queensland-university-of-technology-847" target="_blank" rel="noopener">Queensland University of Technology</a> and <a href="https://theconversation.com/profiles/dan-hunter-1336925" target="_blank" rel="noopener">Dan Hunter</a>, Executive Dean of the Faculty of Law, <a href="https://theconversation.com/institutions/queensland-university-of-technology-847" target="_blank" rel="noopener">Queensland University of Technology</a></em></p>
<p><em>This article is republished from <a href="https://theconversation.com" target="_blank" rel="noopener">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/robots-are-creating-images-and-telling-jokes-5-things-to-know-about-foundation-models-and-the-next-generation-of-ai-181150" target="_blank" rel="noopener">original article</a>.</em></p>
<p><em>Image: OpenAI</em></p>