Scarlett Johansson's AI row has echoes of Silicon Valley's bad old days
By Zoe Kleinman, Technology editor
"Move fast and break things" is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg.
Those five words came to symbolise Silicon Valley at its worst - a combination of ruthless ambition and a rather breathtaking arrogance - profit-driven innovation without fear of consequence.
I was reminded of that phrase this week when the actor Scarlett Johansson clashed with OpenAI. Ms Johansson said that she and her agent had both declined requests for her to be the voice of its new product for ChatGPT - and that when it was unveiled, it sounded just like her anyway. OpenAI denies that it was an intentional imitation.
It's a classic illustration of exactly what the creative industries are so worried about - being mimicked and eventually replaced by artificial intelligence.
There are echoes in all this of the macho Silicon Valley giants of old. Seeking forgiveness rather than permission as an unofficial business plan.
The tech firms of 2024 are extremely keen to distance themselves from that reputation.
And OpenAI wasn't shaped from that mould. It was originally a non-profit organisation committed to investing any extra profits back into the business.
In 2019, when it formed a profit-making arm, the company said it would be led by the non-profit side, and there would be a cap on the returns for investors.
Not everybody was happy about the shift - it was said to have been a key reason behind co-founder Elon Musk's decision to walk away. And when OpenAI CEO Sam Altman was suddenly fired by the board late last year, one of the theories was that he wanted to move further away from the original mission. We never found out for sure.
But even if OpenAI has become more profit-driven, it still has to face up to its responsibilities.
Stuff of nightmares
In the world of policy-making, almost everyone is agreed on the need for clear boundaries to keep companies like OpenAI in line before disaster strikes.
So far, the AI giants have largely played ball on paper. At the world's first AI Safety Summit six months ago, a group of tech bosses signed a voluntary pledge to create responsible, safe products that would maximise the benefits of AI technology and minimise its risks.
Those risks they spoke of were the stuff of nightmares - this was Terminator, Doomsday, AI-goes-rogue-and-destroys-humanity territory.
Last week, a draft UK government report from a group of 30 independent experts concluded that there was "…" that AI could generate a biological weapon or carry out a sophisticated cyber attack. The plausibility of humans losing control of AI was "highly contentious", it said.
And when the summit reconvened earlier this week, the word "safety" had been removed entirely from the conference title.
Some people in the field have been saying for quite a while that the more immediate threats from AI tools are that they will replace jobs or fail to recognise a range of skin colours. These are the real problems, says AI ethics expert Dr Rumman Chowdhury.
And there are further complications. That report claimed there was currently no reliable way of understanding exactly why AI tools generate the output that they do - even their developers aren't sure. And the established safety testing practice known as red teaming, in which evaluators deliberately try to get an AI tool to misbehave, has no best-practice guidelines.
And at that follow-up summit this week, hosted jointly by the UK and South Korea in Seoul, tech firms committed to shelving a product if it didn't meet certain safety thresholds - but these will not be set until the next gathering in 2025.
While the experts debate the nature of the threats posed by AI, the tech companies keep shipping products.
The past few days alone have seen the launch of GPT-4o from OpenAI, Project Astra from Google, and Copilot+ from Microsoft. The AI Safety Institute declined to say whether it had the opportunity to test these tools before their release.
OpenAI says it has a 10-point safety process, but one of its senior safety-focused engineers resigned earlier this week, saying his department had been "sailing against the wind" internally.
"Over the past years, safety culture and processes have taken a backseat to shiny products," Jan Leike posted on X.
There are, of course, other teams at OpenAI who continue to focus on safety and security. But there's no official, independent oversight of what any of these companies are actually doing.
"Voluntary agreements essentially are just a means of firms marking their own homework," says Andrew Strait, associate director of the Ada Lovelace Institute, an independent research organisation. "It's essentially no replacement for legally binding and enforceable rules which are required to incentivise responsible development of these technologies."
"We have no guarantee that these companies are sticking to their pledges," says Professor Dame Wendy Hall, one of the UK's leading computer scientists.
"How do we hold them to account on what they're saying, like we do with drugs companies or in other sectors where there is high risk?"
Tougher rules are coming. The EU has passed its AI Act, the first law of its kind, with tough penalties for non-compliance - but some argue it will fall more heavily on users, who will have to risk-assess AI tools themselves, than on those who develop the AI.
But this doesn鈥檛 necessarily mean that AI companies are off the hook.
"We need to move towards legal regulation over time but we can't rush it," says Prof Hall. "Setting up global governance principles that everyone signs up to is really hard."
"We also need to make sure it's genuinely worldwide and not just the Western world and China that we are protecting."
The overriding issue, as ever, is that regulation and policy move a lot more slowly than innovation.
Prof Hall believes the "stars are aligning" at government levels.
The question is whether the tech giants can be persuaded to wait for them.
BBC InDepth is the new home on the website and app for the best analysis and expertise from our top journalists. Under a distinctive new brand, we'll bring you fresh perspectives that challenge assumptions, and deep reporting on the biggest issues to help you make sense of a complex world. And we'll be showcasing thought-provoking content from across BBC Sounds and iPlayer too. We're starting small but thinking big, and we want to know what you think - you can send us your feedback by clicking on the button below.