Forum Replies Created
So good to play around with it – I asked bard.google.com a question that doesn’t have a lot written about it:
“What is the quiescent power consumption of the Bristlemouth hardware?”
And it sounds coherent – however it’s “hallucinating”: Bristlemouth is not a Google project, and the 10 milliwatts it quotes is the active transmission power per MHz.
I’ve got an open question to the designers (https://www.bristlemouth.org/, https://bristlemouth.discourse.group/t/what-is-the-expected-power-profile-and-quiescent-current/65).
Bard Answer:
The quiescent power consumption of the Bristlemouth hardware is 10 milliwatts (mW). This means that the hardware consumes very little power when it is not actively processing data. This makes it ideal for applications where power consumption is a critical factor, such as battery-powered devices or embedded systems.

The Bristlemouth hardware is a neural network accelerator developed by Google AI. It is designed to be highly energy efficient, while still providing high performance. The quiescent power consumption of 10 mW is achieved through a number of design techniques, including:
- Using a low-power processor
- Using low-power memory
- Using a power-gating mechanism to turn off components when they are not in use

The Bristlemouth hardware is a significant improvement over previous generations of neural network accelerators in terms of power efficiency. This makes it a valuable tool for developing energy-efficient AI applications.
Good luck with tracking it down :) – some cyber screw has come loose.
Yes, interesting to understand the basis of the technology. ChatGPT sounds coherent!! Or, as a “stochastic parrot”, have enough people written on the problem for it to be trained to sound coherent? :)
A friend said they found it helps debug Excel macros!!
Like handling a knife, we might need to figure out how to use the “cutting edge”. Here is an article on the roots of the technology:
Technology
How does ChatGPT work and do AI-powered chatbots “think” like us?
The large language models behind the new chatbots are trained to predict which words are most likely to appear together – but “emergent abilities” suggest they might be doing more than that.
25 July 2023 (New Scientist)
By Edd Gent
[Image: ChatGPT, chatbots and AI. Credit: Illustronaut/Alamy]
The current whirlwind of interest in artificial intelligence is largely down to the sudden arrival of a new generation of AI-powered chatbots capable of startlingly human-like text-based conversations. The big change came last year, when OpenAI released ChatGPT. Overnight, millions gained access to an AI producing responses that are so uncannily fluent that it has been hard not to wonder if this heralds a turning point of some sort.
There has been no shortage of hype. Microsoft researchers given early access to GPT-4, the latest version of the system behind ChatGPT, argued that it has already demonstrated “sparks” of the long-sought machine version of human intellectual ability known as artificial general intelligence (AGI). One Google engineer even went so far as to claim that one of the company’s AIs, known as LaMDA, was sentient. The naysayers, meanwhile, insist that these AIs are nowhere near as impressive as they seem.
All of which can make it hard to know quite what you should make of the new AI chatbots. Thankfully, things quickly become clearer when you get to grips with how they work and, with that in mind, the extent to which they “think” like us.
At the heart of all these chatbots is a large language model (LLM) – a statistical model, or a mathematical representation of data, that is designed to make predictions about which words are likely to appear together.
LLMs are created by feeding huge amounts of text to a class of algorithms called deep neural networks, which are loosely inspired by the brain. The models learn complex linguistic patterns by playing a simple game: the algorithm takes a passage of text, randomly masks out some words and then tries to fill in the gaps. They are, in short, trained to predict the next word. And by repeating the process over and over, they can build up sophisticated models of how language works, says Mirella Lapata at the University of Edinburgh, UK.
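To make the “fill in the gaps” game concrete, here is a deliberately tiny caricature – a count-based next-word predictor in C++. Real LLMs use deep neural networks, not count tables, but the shape of the task is the same:

```cpp
// Toy next-word predictor: counts which word follows which in a
// training text, then "predicts" the most frequent follower.
// A deliberate caricature of the fill-in-the-gap game -- real LLMs
// learn these statistics with deep neural networks, not count tables.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    std::string corpus =
        "the cat sat on the mat the cat ate the fish";
    // follower_counts[w][next] = how often `next` follows `w`
    std::map<std::string, std::map<std::string, int>> follower_counts;

    std::istringstream in(corpus);
    std::string prev, word;
    in >> prev;
    while (in >> word) {
        ++follower_counts[prev][word];
        prev = word;
    }

    // Predict the next word after "the": pick the most frequent follower.
    std::string best;
    int best_count = 0;
    for (const auto& [next, count] : follower_counts["the"]) {
        if (count > best_count) { best = next; best_count = count; }
    }
    std::cout << "after 'the' the model predicts: " << best << '\n';
    // prints "cat" (it follows "the" twice; "mat" and "fish" once each)
}
```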
Recent breakthroughs are largely down to a new type of neural network invented in 2017 called a “transformer”, which can process data far more efficiently than previous approaches. This made it possible to train much larger models on vast tracts of text scraped from the internet. Transformer-based systems are also much better at understanding context, says Lapata. Whereas older versions could only consider a few words either side of the missing one, transformers can process much longer strings of text, meaning they can tease out more complex and subtle linguistic relationships.
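The attention mechanism at the heart of a transformer can also be sketched in a few lines – here with hand-picked 2-D word vectors. Everything below is made up for illustration; real models learn these vectors and stack many such layers:

```cpp
// Minimal sketch of dot-product attention: one word (the query) scores
// every word in the sentence (the keys), and a softmax turns the scores
// into weights saying which context words matter most.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Hand-made 2-D "embeddings" for the words of a tiny sentence.
    std::vector<std::string> words = {"the", "river", "bank"};
    std::vector<std::vector<double>> vecs = {
        {0.1, 0.0},   // "the"
        {0.9, 0.2},   // "river"
        {0.8, 0.3},   // "bank"
    };

    // Let "bank" attend over the sentence: dot-product scores...
    const auto& query = vecs[2];
    std::vector<double> weights;
    double norm = 0.0;
    for (const auto& key : vecs) {
        double score = query[0] * key[0] + query[1] * key[1];
        weights.push_back(std::exp(score));   // softmax numerator
        norm += weights.back();
    }
    // ...then the normalised weights show which words matter most.
    for (size_t i = 0; i < words.size(); ++i)
        std::cout << words[i] << ": " << weights[i] / norm << '\n';
    // "river" gets the largest weight, pulling "bank" towards its
    // river-side meaning rather than the financial one.
}
```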
What turns otherwise unwieldy statistical models into smooth-talking chatbots, meanwhile, is humans rating the output of AIs on criteria like helpfulness and fluency. This data is then used to train a separate “preference model” that filters an LLM’s output. Put this together and you get what we have today, namely a text-based, computerised conversational partner often indistinguishable from a human. The fact that this was achieved using a premise as simple as next-word prediction caught a lot of people by surprise, says Tal Linzen at New York University.
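That filtering step is easy to caricature too: generate several candidate replies, score each with a preference model, keep the best. The scorer below is a stand-in that simply favours longer replies – a real preference model is itself a trained neural network:

```cpp
// Toy "preference model" reranking: score candidate replies with a
// stand-in preference function and keep the highest-scoring one.
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for a learned preference model: here it just
// rewards longer (more "helpful"-looking) answers.
double preference_score(const std::string& reply) {
    return static_cast<double>(reply.size());
}

int main() {
    std::vector<std::string> candidates = {
        "No.",
        "I don't know.",
        "Transformers process whole passages of text in parallel.",
    };
    std::string best;
    double best_score = -1.0;
    for (const auto& c : candidates) {
        double s = preference_score(c);
        if (s > best_score) { best_score = s; best = c; }
    }
    std::cout << "chosen reply: " << best << '\n';
}
```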
But it is important to remember that the way these AIs operate almost certainly isn’t the way human cognitive processes work. “They learn in such a fundamentally different way from people that it makes it very improbable [that] they ‘think’ the same way people do,” says Linzen.
Here, the mistakes chatbots make are instructive. They are prone to confidently trumpeting falsehoods as facts, something often referred to as “hallucination”, because their output is entirely statistical. “It doesn’t do fact-checking,” says Lapata. “It just generates output that is likely or plausible, but not necessarily true.”
This has led some commentators to disparage chatbots as “stochastic parrots” and their output as nothing more than “a blurry JPEG of the web”. The gist of these jibes is that the new LLMs aren’t as impressive as they first appear – that what they do is merely the imperfect memorisation of training data cleverly stitched back together to give the false impression of understanding.
Emergent abilities
And yet there are some indications that LLMs might be doing more than just regurgitating training data, says Raphaël Millière at Columbia University in New York. Recent research suggests that, after training for long enough, models can develop more general rules that give them new skills. “You get this transition from memorisation to the formation of circuits inside the [neural] network that will be implementing certain algorithms or certain rules to solve the tasks,” he says.

This may help to explain why, as LLMs increase in size, they often experience sudden jumps in performance on certain problems, says Millière. This phenomenon has been referred to as “emergence” and has led to speculation about what other unexpected capabilities AI could develop.
It is important not to get carried away. “This term is very seductive,” says Millière. “It evokes things like the models suddenly becoming self-aware, or things like that. That’s not what we’re talking about.”
Even so, Millière thinks there is a “rich middle ground” between the naysayers and hype merchants. While these chatbots are far from replicating human cognition, in some narrow areas, they may not be so different from us. Digging into these similarities could not only advance AI, he says, but also sharpen our understanding of our own cognitive capabilities.
This story is part of a series in which we explore the most pressing questions about artificial intelligence. Read the other articles below
What generative AI really means for the economy, jobs and education | Forget human extinction – these are the real risks posed by AI today | How to use AI to make your life simpler, cheaper and more productive | The biggest scientific challenges that AI is already helping to crack | Can AI ever become conscious and how would we know if that happens?

Great – makes it easy to try it:
I cloned locally, opened the folder in VSC.
It pulled in the libs, and then I pressed the pio build button.
Advanced Memory Usage is available via “PlatformIO Home > Project Inspect”
RAM: [=== ] 34.9% (used 5724 bytes from 16384 bytes)
Flash: [======= ] 65.5% (used 85196 bytes from 130048 bytes)
Building .pio\build\mayfly\firmware.hex

I’d just finished some other testing so my Mayfly was open, and I downloaded it:
avrdude: 85196 bytes of flash written
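For what it’s worth, the same build-and-flash steps can be driven from a terminal (a sketch assuming PlatformIO Core is on your path; the repository URL is a placeholder):

```
git clone <your-repo-url>
cd <your-repo>
pio run                  # build; fetches lib_deps on the first run
pio run --target upload  # flash the attached Mayfly via avrdude
```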
Very occasionally I’ve had problems with PIO getting out of sync or something. Perhaps you want to uninstall VSC, then reinstall from the PlatformIO page:
https://platformio.org/install/ide?install=vscode
Sometimes “stuff happens” – and that’s why virtual machines were invented.
If you post your git link – and the project is set up right – I should be able to clone it, open it in VSC/PIO and see if it builds.
“Before I go digging through the code, I’m wondering why code would compile and run fine one week, then be ‘structured to not compile’ the next. Do you have any idea why this might happen?”
So, a perspective – it’s your code and it’s your environment, so I have no idea; I’m just throwing out a suggestion based on the compilation-size data. The whole computer is built on “sand” and layers of software, and many things can go wrong. So welcome to the wonderful world of the computer.
So I put my code under git/source control (or plain zip my source tree), just for when these problems occur. Basic working practice. Believe me, I’ve had the weirdest problems come up – and then I can see the change that’s been made and why things have changed. From the snapshot I can recreate the known working program in another directory – and hope it works there.
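If you haven’t set this up before, the minimal snapshot workflow is only a few commands (assuming git is installed):

```
git init                            # once, in the project folder
git add .
git commit -m "known working build"
# later, when something breaks:
git diff                            # shows exactly what has changed
```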
Eyeballing your code, I’m afraid nothing jumps out at me. It could be in the platformio.ini. The clue would seem to be that your program size reflects the nature of the program that is being downloaded.
Beats me. Perhaps duplicate your blinkme project, then copy and paste the above program into the file and see what size of program it produces.
@tahicksonstthomas-edu – as you say, that’s interesting; all investigations start with that.
Probably post the code – though it looks like it is structured to not compile. Eyeball the code flow from the first call to setup() and the subsequent calls to loop(), and check whether your section of the code is isolated.
VSC typically shows colour-coded syntax – though .ino files, as a code anomaly, require special enabling. IMHO the easy way is to change the .ino to .cpp.
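One caveat with the rename: plain .cpp files skip the pre-processing that .ino files get, so you must add the core include yourself (and forward-declare any function that is called before it is defined). A minimal sketch of a renamed blink file – the LED pin number here is an assumption, check your board’s docs:

```cpp
// blinkme.cpp -- formerly blinkme.ino
// Plain .cpp files don't get the Arduino preprocessor's help, so the
// core header must be included, and any function called before its
// definition needs a prototype.
#include <Arduino.h>

const int kLedPin = 8;  // assumed: the Mayfly's green LED -- check yours

void setup() {
    pinMode(kLedPin, OUTPUT);
}

void loop() {
    digitalWrite(kLedPin, HIGH);
    delay(500);
    digitalWrite(kLedPin, LOW);
    delay(500);
}
```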
I wonder what the size of the code is?
From the bottom of the PIO output when I build, I have:
RAM: [===== ] 52.2% (used 8551 bytes from 16384 bytes)
Flash: [========= ] 87.4% (used 113628 bytes from 130048 bytes)
@tahicksonstthomas-edu – just wondering if you tried rebooting, or tried it again.
VSC and also PlatformIO regularly do updates – and sometimes just exiting and restarting VSC seems to work for me.
More recently, for some reason, two COM ports have appeared with Bluetooth access.
You can also check what COM port the USB device enumerates as, and then set it in the platformio.ini:
[env:mayfly]
upload_port = <COM#>
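If it isn’t obvious which COM port the board enumerated as, the PlatformIO CLI can list attached devices:

```
pio device list
```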
@nick – apparently, try it without clicking the “notify” option:
https://github.com/ODM2/ODM2DataSharingPortal/issues/668
Not clear if the bug fix has gone live, or whether it will be fixed on the next pass.
@nick – I tried https://monitormywatershed.org/sites/TWItest/ and it doesn’t exist
What is valuable for your error report is the error that occurs when you submit to MMW. That is the screenshot you need.
Perhaps there is some required field that hasn’t been filled in.
I usually use a naming convention, as the name has to be unique among all the sites, and I’m likely to be entering a number of sites.
So it could be nick_test_twi01, or:
<organization or unique identifier><stream or function><local code>
So I go to https://monitormywatershed.org/ and check that I’m logged in – top right
Click “My Sites” – top left – then “Register a New Site”.
So I’m trying “nh_test_twi01”
Enter the details, select the location on the map, and then under Notes put “Test site to be deleted”.
Then press enter – and it’s created for me:
https://monitormywatershed.org/sites/nh_test_twii01/