Age: A couple of weeks old
TL;DR: An AI chatbot that can have conversations with humans, write code and pen Shakespearean sonnets.
Why’s someone made a chatbot to write Shakespearean sonnets? It doesn’t just pretend to be the Bard; ChatGPT ain’t no one-trick pony. It can also write university essays that would get full marks, tell jokes, guess at medical diagnoses and admit when it’s made mistakes.
What sort of mistakes does it make? It thought a kilo of beef weighed more than a kilo of compressed air and said crushed glass was a health supplement.
Well, we’re all human. Not ChatGPT — although it can understand human chitchat and generate human-like text better than any chatbot that’s come before.
How’s it so clever? The model behind it has 175bn parameters and was trained on text scraped from the internet — news articles, Wikipedia, social media and the like.
Is that a lot? It’s 570GB of data and about 300bn words.
Blimey. But its knowledge cutoff is September 2021, so it can’t answer any questions about current events.
So it doesn’t know about the tech downturn, and still thinks speedy grocery is a hot sector? It actually doesn’t know anything about the speedy grocery sector — I asked.
Humans 1 - ChatGPT 0. It’s not a competition.
But if it was, humans would win, right? Well, there are some things ChatGPT can do better than humans — like solve very complex coding challenges in a matter of seconds.
So you’re saying it won’t be long until the robots overthrow the human race? Exactly.
Really?! No, not really — but the release of ChatGPT has reignited the conversation about AI rendering jobs like programmers and journalists obsolete.
Bad news for you. Thankfully, most think we’re a little far from that becoming a reality. AI chatbots still lack the critical thinking and ethical decision-making skills that you need to be a journalist or a programmer.
So who can talk to ChatGPT? Anyone on the internet. Its developer — independent research body OpenAI — released the chatbot to the general public to test an update of its natural language processing technology GPT-3, which powers the programme.
Is that a smart idea? Haven’t there been some issues with AI chatbots when they’ve been released to the public in the past? Oh, you mean like when Meta’s chatbot wouldn’t stop bad-mouthing Mark Zuckerberg?
Yep. Or when Microsoft’s chatbot became racist after spending a day on Twitter.
Exactly, so what’s stopping ChatGPT from turning bad? OpenAI has programmed the chatbot with some failsafes. It won’t, for example, spout off about the merits of Nazi ideology.
Good chatbot. It’s not foolproof, though, and users have been able to make ChatGPT say some pretty unsavoury stuff.
Oh dear. But patching holes like that is why OpenAI is testing its chatbot on the public and it’ll look to fix those problems when it releases the next generation of its AI tech, GPT-4, possibly sometime next year.
Couldn’t they have thought of a sexier name? GPT sounds like it could be a type of dishwasher. Well, they couldn’t leave it as “generative pre-trained transformer”.