Good luck with tracking it down :), it sounds like some cyber screw has come loose.
Yes, it's interesting to understand the basis of the technology. ChatGPT sounds coherent!! Or, as a "stochastic parrot", have enough people written about the problem for it to be trained to sound coherent? :)
A friend said they found it helps debug xls macros!!
Like handling a "knife", we might need to figure out how to use the "cutting edge". Here is an article on the roots of the technology:
How does ChatGPT work and do AI-powered chatbots “think” like us?
The large language models behind the new chatbots are trained to predict which words are most likely to appear together – but “emergent abilities” suggest they might be doing more than that
25 July 2023 (New Scientist)
By Edd Gent
The current whirlwind of interest in artificial intelligence is largely down to the sudden arrival of a new generation of AI-powered chatbots capable of startlingly human-like text-based conversations. The big change came last year, when OpenAI released ChatGPT. Overnight, millions gained access to an AI producing responses that are so uncannily fluent that it has been hard not to wonder if this heralds a turning point of some sort.
There has been no shortage of hype. Microsoft researchers given early access to GPT-4, the latest version of the system behind ChatGPT, argued that it has already demonstrated "sparks" of the long-sought machine version of human intellectual ability known as artificial general intelligence (AGI). One Google engineer even went so far as to claim that one of the company's AIs, known as LaMDA, was sentient. The naysayers, meanwhile, insist that these AIs are nowhere near as impressive as they seem.
All of which can make it hard to know quite what you should make of the new AI chatbots. Thankfully, things quickly become clearer when you get to grips with how they work and, with that in mind, the extent to which they “think” like us.
At the heart of all these chatbots is a large language model (LLM) – a statistical model, or a mathematical representation of data, that is designed to make predictions about which words are likely to appear together.
LLMs are created by feeding huge amounts of text to a class of algorithms called deep neural networks, which are loosely inspired by the brain. The models learn complex linguistic patterns by playing a simple game: the algorithm takes a passage of text, randomly masks out some words and then tries to fill in the gaps. They are, in short, trained to predict the next word. And by repeating the process over and over, they can build up sophisticated models of how language works, says Mirella Lapata at the University of Edinburgh, UK.
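To make the "predict the next word" idea concrete, here is a minimal sketch of my own (not from the article) that uses plain bigram counts instead of a neural network; real LLMs learn with deep transformer networks over vast corpora, but the training objective is the same guess-the-next-token game. The corpus and word choices are made up for illustration.

```python
# A toy illustration of "predict the next word" training, using plain
# bigram counts instead of a neural network. Real LLMs learn far richer
# patterns, but the objective (guess the next token) is the same idea.
from collections import defaultdict, Counter

corpus = (
    "the logger reads the sensor and the logger stores the reading "
    "the logger sleeps and the logger wakes"
).split()

# "Training": count which word follows which.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))     # -> "logger" (its most frequent follower)
print(predict_next("logger"))  # -> whichever follower of "logger" was counted first among ties
```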
Recent breakthroughs are largely down to a new type of neural network invented in 2017 called a “transformer”, which can process data far more efficiently than previous approaches. This made it possible to train much larger models on vast tracts of text scraped from the internet. Transformer-based systems are also much better at understanding context, says Lapata. Whereas older versions could only consider a few words either side of the missing one, transformers can process much longer strings of text, meaning they can tease out more complex and subtle linguistic relationships.
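Below is a rough sketch of the self-attention step that transformers are built around, using toy token vectors and skipping the learned query/key/value projections a real transformer would have; it only illustrates how every token gets to weigh every other token in the context window.

```python
# A rough sketch of the "self-attention" step at the heart of a transformer,
# which lets every word look at every other word in the context window.
# Shapes and values here are made up for illustration.
import numpy as np

def self_attention(x):
    """x: (num_tokens, dim) token vectors. Returns context-mixed vectors."""
    d = x.shape[-1]
    # In a real transformer, queries, keys and values come from learned
    # projections of x; here we use x directly to keep the sketch short.
    scores = x @ x.T / np.sqrt(d)                    # how strongly each token attends to each other
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the context
    return weights @ x                               # each output mixes the whole context

tokens = np.random.rand(6, 8)          # 6 tokens, 8-dimensional embeddings (toy sizes)
print(self_attention(tokens).shape)    # (6, 8)
```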
What turns otherwise unwieldy statistical models into smooth-talking chatbots, meanwhile, is humans rating the output of AIs on criteria like helpfulness and fluency. This data is then used to train a separate “preference model” that filters an LLM’s output. Put this together and you get what we have today, namely a text-based, computerised conversational partner often indistinguishable from a human. The fact that this was achieved using a premise as simple as next-word prediction caught a lot of people by surprise, says Tal Linzen at New York University.
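As a rough illustration of the preference-model idea, the sketch below generates several candidate replies and keeps the one a completely made-up scoring function likes best; in a real system that score would come from a model trained on human ratings of helpfulness and fluency, not hand-written rules.

```python
# A toy sketch of how a "preference model" can steer an LLM's output:
# produce several candidate replies, score each one, and keep the best.
# The scoring function here is a hypothetical stand-in for a model
# trained on human ratings.
def toy_preference_score(reply):
    """Hypothetical stand-in: reward replies that are on-topic and complete."""
    score = 0.0
    if "sensor" in reply:
        score += 1.0                   # on-topic for our imaginary question
    if reply.endswith("."):
        score += 0.5                   # prefers complete sentences
    score -= 0.1 * reply.count("!!")   # penalise shouting
    return score

candidates = [
    "Check the sensor wiring first.",
    "No idea!!",
    "Try re-seating the sensor cable and re-uploading the sketch.",
]

best = max(candidates, key=toy_preference_score)
print(best)   # the highest-scoring candidate
```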
But it is important to remember that the way these AIs operate almost certainly isn’t the way human cognitive processes work. “They learn in such a fundamentally different way from people that it makes it very improbable [that] they ‘think’ the same way people do,” says Linzen.
Here, the mistakes chatbots make are instructive. They are prone to confidently trumpeting falsehoods as facts, something often referred to as “hallucination”, because their output is entirely statistical. “It doesn’t do fact-checking,” says Lapata. “It just generates output that is likely or plausible, but not necessarily true.”
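To see why purely statistical generation can sound fluent while being wrong, here is a continuation of the earlier bigram toy (again my own illustration, with an invented corpus): it just chains the most likely next word and happily emits a claim that nothing has fact-checked.

```python
# Continuing the toy bigram idea above: generation just chains the most
# likely next word, with no notion of whether the result is actually true.
from collections import defaultdict, Counter

corpus = "the battery powers the board the battery lasts forever".split()
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

word, sentence = "the", ["the"]
for _ in range(4):
    word = counts[word].most_common(1)[0][0]   # purely statistical choice
    sentence.append(word)

# Prints a fluent-looking string ("the battery powers the battery ...")
# that was never checked against reality.
print(" ".join(sentence))
```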
This has led some commentators to disparage chatbots as “stochastic parrots” and their output as nothing more than “a blurry JPEG of the web”. The gist of these jibes is that the new LLMs aren’t as impressive as they first appear – that what they do is merely the imperfect memorisation of training data cleverly stitched back together to give the false impression of understanding.
Emergent abilities
And yet there are some indications that LLMs might be doing more than just regurgitating training data, says Raphaël Millière at Columbia University in New York. Recent research suggests that, after training for long enough, models can develop more general rules that give them new skills. “You get this transition from memorisation to the formation of circuits inside the [neural] network that will be implementing certain algorithms or certain rules to solve the tasks,” he says.
This may help to explain why, as LLMs increase in size, they often experience sudden jumps in performance on certain problems, says Millière. This phenomenon has been referred to as “emergence” and has led to speculation about what other unexpected capabilities AI could develop.
It is important not to get carried away. “This term is very seductive,” says Millière. “It evokes things like the models suddenly becoming self-aware, or things like that. That’s not what we’re talking about.”
Even so, Millière thinks there is a “rich middle ground” between the naysayers and hype merchants. While these chatbots are far from replicating human cognition, in some narrow areas, they may not be so different from us. Digging into these similarities could not only advance AI, he says, but also sharpen our understanding of our own cognitive capabilities.