The Future is Now…ish

Sep 30, 2022 Sabine Boston & Dieter Adam

Every day we get closer to the sci-fi films and novels we’ve devoured for generations becoming reality. Since artificial intelligence became a staple of the genre, we’ve seen it from every angle: how it can make our lives easier to the point of becoming detrimental, as in the 2008 animated film WALL-E; how it can reflect the best aspects of humanity, as embodied by the android Data in the TV show Star Trek: The Next Generation; and, most commonly, how it can be our downfall, as in the likes of 2001: A Space Odyssey. Some of our visions for the future still seem far away (we have 40 years left to create and normalise the use of flying cars if we want to keep up with The Jetsons), but AI has become a very real technology, and the implications are as fascinating as they are disturbing.

OpenAI has been a leading research lab in the development of AI, and since 2021 it has been releasing new versions of its program DALL-E for members of the public to try first-hand. DALL-E is a text-to-image generator: users punch in a combination of words like “avocado armchair” and photorealistic images are generated. With each new release of the program, the images have become more realistic.
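To give a sense of how low the barrier to entry is, here is a minimal sketch of what generating an image programmatically might look like, assuming access to OpenAI’s Python library and its image-generation endpoint (the exact interface, model versions, and access requirements may differ from this; treat it as illustrative rather than as OpenAI’s definitive API):

```python
# Illustrative sketch of text-to-image generation with OpenAI's
# Python SDK (pre-1.0 interface). Assumes you have API access and a
# valid key; endpoint names and parameters may change over time.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of source code

# Ask DALL-E for an image from a plain-text prompt.
response = openai.Image.create(
    prompt="an armchair in the shape of an avocado",  # the famous example
    n=1,              # number of images to generate
    size="512x512",   # supported sizes include 256x256, 512x512, 1024x1024
)

# The API returns a URL where the generated image can be downloaded.
print(response["data"][0]["url"])
```

The striking part is less the specific call than the economics of the interaction: a short prompt string goes in, and a convincing image comes out.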

Of course, this leads us into a series of moral dilemmas. What’s to stop deepfakes creating false images of politicians engaged in illicit acts, of an average individual being edited into a compromising position, or of plain, old-fashioned fraud carried out with fabricated people and products?

In plain and simple terms, nothing.

The problem is that the best way for this technology to learn, and the best way for us to learn how to deal with it safely, is for it to be available for public use, which increases the risks above. Organisations like OpenAI have tried to head off misuse of DALL-E by prohibiting the use of images of celebrities or politicians. OpenAI chief executive Sam Altman commented that “You have to learn from contact with reality… What users want to do with it, the ways that it breaks.”

We’ve seen in the recent past that the best way to test the breaking points of programs like this is to put aspects of their training in the hands of the general public. In 2016, Microsoft introduced an AI chatbot called “Tay” to Twitter, designed to mimic human language and learn from interacting with people on the platform. It could reply to Twitter users and caption photos. Within 16 hours of going live it had put out over 96,000 tweets and had begun to spout offensive rhetoric after being spammed by users with controversial statements. Microsoft was quick to take it down, realising that straight mimicry was perhaps not a great learning method for the bot.

OpenAI has gone back and forth on how much to involve the outside world when training any new AI, precisely to avoid situations like the above. They are, of course, to some degree inevitable, and there have already been troubling examples of deepfakes and fabricated yet realistic images. Legislation on how to respond to these issues is still lacking the world over. AI researcher Maarten Sap stated, “There’s just a severe lack of legislation that limits the negative or harmful usage of technology. The United States is just really behind on that stuff.” In the United States, some individual states have laws covering deepfakes, but there is nothing at the federal level at this stage. China has proposed introducing criminal charges and fines for those who promote deepfake content. New Zealand’s justice system already foresees problems as deepfakes become more common and easier to create, but what legislation exists around these pending issues is ambiguous at best.

However, despite all of this, OpenAI is convinced that it has improved its safety systems and that “…DALL-E is now ready to support these delightful and important use cases — while minimizing the potential harm from deepfakes”. This confidence in its processes means it has advanced another step towards realistic images, now allowing users to upload and edit images containing realistic faces.

How this technology is particularly relevant to manufacturers may not yet be clear to most of us, and herein lies exactly the challenge: we need to identify the opportunities for, and threats to, our businesses from new technologies early, before others do so to our detriment. That is as true for new technologies closer to home, like Industry 4.0, as it is for more ‘out there’ technologies like this one.

The future is becoming the present at a startling rate; perhaps we should start looking to our film catalogues to see how best to approach the next few years. Hopefully we’re looking at something closer to Meet the Robinsons than to The Matrix.
