Welcome to EnviroDIY, a community for do-it-yourself environmental science and monitoring. EnviroDIY is part of WikiWatershed, an initiative of Stroud Water Research Center designed to help people advance knowledge and stewardship of fresh water.

neilh20

Forum Replies Created

Viewing 10 posts - 31 through 40 (of 371 total)
  • in reply to: Mayfly v1.1 technical questions forum thread #18049
    neilh20
    Participant

      Hi @shicks, I just received two Mayfly 1.1 boards and noticed they are rev 1.1B – just wondering what the differences are from rev 1.1A? 🙂

      I did pull the circuit diagram from https://www.envirodiy.org/mayfly/hardware/, however the resolution is so low that it's not all readable. Just wondering if there is a PDF version. 🙂

      I also checked https://github.com/EnviroDIY/EnviroDIY_Mayfly_Logger/tree/master/hardware but couldn’t see anything.

      Many thanks

      neilh20
      Participant

        It seems to me it's a working-practices issue, repeatability and traceability, not a programming issue. From a distance it seems like you should be able to see the code in the directory that you are building. However, you do have a unique directory structure, and if you are opening HicksonDeployments.code-workspace, maybe it's mispointed. I open "folders".

        For the working directory that compiles and downloads, my suggestion was to put the UUIDs in a separate file, ms_cfg_uuids.h; that way you can have example (non-real) UUIDs that get pushed to git.

        When building your managed target system, you drop the unique ms_cfg_uuids.h into the directory and then compile it.

        That way there isn't any messing around with the actual source code, or with what's on a branch.

        I would suggest adding a "GEOGRAPHICAL_ID" in your ms_cfg_uuids.h so the UUIDs compiled in are visible, traceable and debuggable.
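        As a rough illustration only (the variable names and UUID values below are placeholders, not real MonitorMyWatershed tokens), a minimal ms_cfg_uuids.h could look something like this:

        // ms_cfg_uuids.h - per-logger configuration, kept out of git (example values only)
        #ifndef MS_CFG_UUIDS_H
        #define MS_CFG_UUIDS_H

        // Human-readable tag so the compiled-in UUIDs are traceable in the debug log
        #define GEOGRAPHICAL_ID "230723 TU-RCtest07 simRiverX Monitor"

        // Data-portal registration tokens (placeholder values, replace per logger)
        const char *REGISTRATION_TOKEN = "12345678-abcd-1234-ef00-000000000001";
        const char *SAMPLING_FEATURE   = "12345678-abcd-1234-ef00-000000000002";

        // One UUID per variable/sensor channel (placeholder values)
        const char *UUIDs[] = {
            "12345678-abcd-1234-ef00-000000000003",  // e.g. battery voltage
            "12345678-abcd-1234-ef00-000000000004",  // e.g. board temperature
        };

        #endif  // MS_CFG_UUIDS_H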

        On startup, my systems identify the parts so I can manage the code: the code source, the software name, the compile, and then the target.

        So from my debug logs (I'm currently trying to integrate the SIM7080, and was previously doing it with WiFi on an ESP32-WROOM) I can follow what works and what doesn't:

        [2023-08-12 09:52:20.083] —Boot(0) Sw Build: a\src\tu_xx02.cpp Aug 12 2023 09:50:19 b’neilh20′
        [2023-08-12 09:52:20.146] Sw Name: LT500/Modbus TO MMW LTE SIM7080
        [2023-08-12 09:52:20.146] ModularSensors version 0.34.1-abd
        [2023-08-12 09:52:20.146] TinyGSM Library version 0.11.6-aab

        [2023-08-12 09:52:20.225] Board: Assume Mayfly 1.1A

        [2023-08-12 09:52:22.360] GEOGRAPHICAL_ID:”230723 TU-RCtest07 simRiverX Monitor”

         

        Earlier I was working on:

        [2023-08-04 18:19:16.580] —Boot(0) Sw Build: a\src\tu_xx02.cpp Aug 4 2023 18:14:40 b’neilh20′
        [2023-08-04 18:19:16.632] Sw Name: LT500/Modbus TO MMW ESP32-wroom
        [2023-08-04 18:19:16.632] ModularSensors version 0.34.1-abd
        [2023-08-04 18:19:16.632] TinyGSM Library version 0.11.6-aab

        [2023-08-04 18:19:16.719] Board: Assume Mayfly 1.1A

        [2023-08-04 18:19:18.847] GEOGRAPHICAL_ID:”230723 TU-RCtest07 simRiverX Monitor”

        neilh20
        Participant

          Well, you are now into working deployment practices, beyond programming.

          So how about simplifying? (I think your process is getting complicated. IMHO the guidelines haven't identified what happens when you actually have a successful program to deploy.)

          Seems the question is: given a successful working Mayfly logger program, how do you scale it to the field?

          …. you have a very good point

          ~~ grin ~~

           

          Seems to me a really SIMPLE way of solving it is to put the UUIDs in an include file, say "ms_cfg_uuids.h", and then manage that include file separately so it doesn't go to GitHub. This works if it's you (and any similar programmers/colleagues) building and deploying to the field, and you can agree on labeling and on where to keep the ms_cfg_uuids.h.

          IMHO this is a beta development scheme. The program still has to be compiled with each separate ms_cfg_uuids.h, then downloaded to each Mayfly and tracked on a per-Mayfly basis. IMHO2, this shows a weakness in EnviroDIY working practices: how to test a program to prove it's successful.

          If you want to deploy a number (10 or more?) of loggers, all the same … what then? Deploying a number of Mayfly loggers becomes a bit more of a production line.

          What I do in my fork (and it's a big step in functionality) is put the UUIDs in a file, ms_cfg.txt, and then put that on the uSD card. This is ideal for situations where colleagues are not programmers, but can program a system from a built .hex and then uniquely configure it with ms_cfg.txt.

          The program can be tested, and with the captured .hex file, easily deployed in numbers.

          It's also easy to upgrade, as it's the uSD card that is the key to connecting it to MMW, not the program.
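          (This isn't the actual code in my fork, just a rough sketch of the idea: reading KEY=value lines from an ms_cfg.txt on the uSD card with the standard Arduino SD library. The function name and key names are made up for illustration.)

          #include <Arduino.h>
          #include <SD.h>

          // Look up one KEY=value entry in ms_cfg.txt on the uSD card,
          // e.g.  REGISTRATION_TOKEN=12345678-....
          // Assumes SD.begin(<SD card chip-select pin>) has already succeeded.
          bool readCfgValue(const char *key, char *value, size_t maxLen) {
              File cfg = SD.open("ms_cfg.txt", FILE_READ);
              if (!cfg) return false;  // no config file on the card

              bool found = false;
              while (cfg.available() && !found) {
                  String line = cfg.readStringUntil('\n');
                  line.trim();                      // drop CR/LF and stray spaces
                  int eq = line.indexOf('=');
                  if (eq > 0 && line.substring(0, eq) == key) {
                      line.substring(eq + 1).toCharArray(value, maxLen);
                      found = true;
                  }
              }
              cfg.close();
              return found;
          }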

          I wrote about it here https://www.envirodiy.org/geographical-scaling-modularsensors/

          I also write about other features that I think are missing from the mainline https://github.com/neilh10/ModularSensors

          hope that is helpful 🙂

          neilh20
          Participant

            Hey, glad to have helped!! Whew!

            It's a challenge sometimes to figure out what the foundations are and build off them. I've had to do that before: start with a known, should-be-working base, and then slowly add to it to find out what came adrift. I often use Meld Merge to compare the base directories separately from PIO. There is a GitKraken plugin for PIO to visualize the branches, however I get lost quickly.

            So your process of rebuilding from a known source is a teaching moment in the challenges of source management. The next challenge is traceability, and that's mostly the school of hard knocks. Sara is doing an amazing job of fixing the lib versions for the main release.

            You have the library version built in nicely to your platformio.ini: EnviroDIY_ModularSensors@=0.34.0
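            (For anyone following along, a rough sketch of how that pin can sit in a platformio.ini; the env name and other entries here are just placeholders, the key part is the lib_deps line.)

            [env:mayfly]
            platform = atmelavr
            board = mayfly
            framework = arduino
            ; the '=' in '@=' pins the exact release, which is what gives the traceability
            lib_deps =
                EnviroDIY_ModularSensors@=0.34.0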

            Can you imagine having your code all working, distributed and being used, and then coming back to make a modification, only to find it's lost and you can't trace what your release was? There are several professional-level stories of that happening, and hence why Microsoft bought GitHub for gazillion$.

            https://meldmerge.org/ – though I'm currently on an earlier version, 3.18.3, from https://download.gnome.org/binaries/win32/meld/ as some functionality was lost in later releases.

            neilh20
            Participant

              So, good to play around with it. I asked bard.google.com a question that doesn't have a lot written about it:

              “What is the quiescent power consumption of the Bristlemouth hardware”

              AND it sounds coherent – however it's "hallucinating", as Bristlemouth is not a Google project. The 10 milliwatts it's quoting is the active transmission power per MHz.

              I've got an open question in to the designers (https://www.bristlemouth.org/, https://bristlemouth.discourse.group/t/what-is-the-expected-power-profile-and-quiescent-current/65).

               

              Bard Answer:

              The quiescent power consumption of the Bristlemouth hardware is 10 milliwatts (mW). This means that the hardware consumes very little power when it is not actively processing data. This makes it ideal for applications where power consumption is a critical factor, such as battery-powered devices or embedded systems.

              The Bristlemouth hardware is a neural network accelerator developed by Google AI. It is designed to be highly energy efficient, while still providing high performance. The quiescent power consumption of 10 mW is achieved through a number of design techniques, including:

              • Using a low-power processor
              • Using low-power memory
              • Using a power-gating mechanism to turn off components when they are not in use

              The Bristlemouth hardware is a significant improvement over previous generations of neural network accelerators in terms of power efficiency. This makes it a valuable tool for developing energy-efficient AI applications.

              neilh20
              Participant

                Good luck with tracking it down 🙂; it sounds like some cyber screw has come loose.

                 

                Yes, it's interesting to understand the basis of the technology. ChatGPT sounds coherent!! Or, as a "stochastic parrot", have enough people written on the problem for it to be trained to sound coherent? 🙂

                A friend said they found it helps debug xls macros!!

                Like handling a "knife", we might need to figure out how to use the "cutting edge". Here is an article on the roots of the technology:

                Technology

                How does ChatGPT work and do AI-powered chatbots “think” like us?
                The large language models behind the new chatbots are trained to predict which words are most likely to appear together – but “emergent abilities” suggest they might be doing more than that

                25 July 2023 (New Scientist)

                By Edd Gent

                [Image: ChatGPT, chatbots and AI. Illustronaut/Alamy]

                The current whirlwind of interest in artificial intelligence is largely down to the sudden arrival of a new generation of AI-powered chatbots capable of startlingly human-like text-based conversations. The big change came last year, when OpenAI released ChatGPT. Overnight, millions gained access to an AI producing responses that are so uncannily fluent that it has been hard not to wonder if this heralds a turning point of some sort.

                There has been no shortage of hype. Microsoft researchers given early access to GPT4, the latest version of the system behind ChatGPT, argued that it has already demonstrated “sparks” of the long-sought machine version of human intellectual ability known as artificial general intelligence (AGI). One Google engineer even went so far as to claim that one of the company’s AIs, known as LaMDA, was sentient. The naysayers, meanwhile, insist that these AIs are nowhere near as impressive as they seem.

                All of which can make it hard to know quite what you should make of the new AI chatbots. Thankfully, things quickly become clearer when you get to grips with how they work and, with that in mind, the extent to which they “think” like us.

                At the heart of all these chatbots is a large language model (LLM) – a statistical model, or a mathematical representation of data, that is designed to make predictions about which words are likely to appear together.

                LLMs are created by feeding huge amounts of text to a class of algorithms called deep neural networks, which are loosely inspired by the brain. The models learn complex linguistic patterns by playing a simple game: the algorithm takes a passage of text, randomly masks out some words and then tries to fill in the gaps. They are, in short, trained to predict the next word. And by repeating the process over and over, they can build up sophisticated models of how language works, says Mirella Lapata at the University of Edinburgh, UK.

                Recent breakthroughs are largely down to a new type of neural network invented in 2017 called a “transformer”, which can process data far more efficiently than previous approaches. This made it possible to train much larger models on vast tracts of text scraped from the internet. Transformer-based systems are also much better at understanding context, says Lapata. Whereas older versions could only consider a few words either side of the missing one, transformers can process much longer strings of text, meaning they can tease out more complex and subtle linguistic relationships.

                 

                What turns otherwise unwieldy statistical models into smooth-talking chatbots, meanwhile, is humans rating the output of AIs on criteria like helpfulness and fluency. This data is then used to train a separate “preference model” that filters an LLM’s output. Put this together and you get what we have today, namely a text-based, computerised conversational partner often indistinguishable from a human. The fact that this was achieved using a premise as simple as next-word prediction caught a lot of people by surprise, says Tal Linzen at New York University.

                But it is important to remember that the way these AIs operate almost certainly isn’t the way human cognitive processes work. “They learn in such a fundamentally different way from people that it makes it very improbable [that] they ‘think’ the same way people do,” says Linzen.

                Here, the mistakes chatbots make are instructive. They are prone to confidently trumpeting falsehoods as facts, something often referred to as “hallucination”, because their output is entirely statistical. “It doesn’t do fact-checking,” says Lapata. “It just generates output that is likely or plausible, but not necessarily true.”

                This has led some commentators to disparage chatbots as “stochastic parrots” and their output as nothing more than “a blurry JPEG of the web”. The gist of these jibes is that the new LLMs aren’t as impressive as they first appear – that what they do is merely the imperfect memorisation of training data cleverly stitched back together to give the false impression of understanding.

                Emergent abilities
                And yet there are some indications that LLMs might be doing more than just regurgitating training data, says Raphaël Millière at Columbia University in New York. Recent research suggests that, after training for long enough, models can develop more general rules that give them new skills. “You get this transition from memorisation to the formation of circuits inside the [neural] network that will be implementing certain algorithms or certain rules to solve the tasks,” he says.

                This may help to explain why, as LLMs increase in size, they often experience sudden jumps in performance on certain problems, says Millière. This phenomenon has been referred to as “emergence” and has led to speculation about what other unexpected capabilities AI could develop.

                It is important not to get carried away. “This term is very seductive,” says Millière. “It evokes things like the models suddenly becoming self-aware, or things like that. That’s not what we’re talking about.”

                Even so, Millière thinks there is a “rich middle ground” between the naysayers and hype merchants. While these chatbots are far from replicating human cognition, in some narrow areas, they may not be so different from us. Digging into these similarities could not only advance AI, he says, but also sharpen our understanding of our own cognitive capabilities.


                 

                neilh20
                Participant

                  Great, that makes it easy to try it.

                  I cloned locally, opened the folder in VSC.

                  It pulled in the libs, and then I pressed the pio build button.

                  Advanced Memory Usage is available via “PlatformIO Home > Project Inspect”
                  RAM: [=== ] 34.9% (used 5724 bytes from 16384 bytes)
                  Flash: [======= ] 65.5% (used 85196 bytes from 130048 bytes)
                  Building .pio\build\mayfly\firmware.hex

                  I'd just finished some other testing, so my Mayfly was open and I downloaded it:

                  avrdude: 85196 bytes of flash written

                  Very occasionally I've had problems with PIO getting out of sync or something. Perhaps you want to uninstall VSC, then reinstall from the PlatformIO page:

                  https://platformio.org/install/ide?install=vscode

                   

                  neilh20
                  Participant

                    Sometimes "stuff happens", and that's why virtual machines were invented.

                    If you post your git link, and the project is set up right, I should be able to clone it, open it in VSC/PIO and see if it builds.

                    neilh20
                    Participant

                      Quote: "Before I go digging through the code, I'm wondering why code would compile and run fine one week, then be 'structured to not compile' the next. Do you have any idea why this might happen?"

                      So, a perspective: it's your code and it's your environment, so I have no idea; I'm just throwing out a suggestion based on the compilation-size data. The whole computer is built on "sand" and layers of software, and many items can go wrong. So welcome to the wonderful world of the computer.

                      So I put my code under git/source control (or plain zip my source tree), just for when these problems occur. Basic working practice. Believe me, I've had the weirdest problems come up, and then I can see the change that's been made and why things have changed. From the snapshot I can recreate the known working program in another directory, and hope it works there.

                      Eyeballing your code, I'm afraid nothing jumps out at me. It could be in the platformio.ini. It would seem the clue is that your program size reflects the nature of the program that is being downloaded.

                      Beats me. Perhaps duplicate your blinkme project, then copy and paste the above program into the file and see the size of the program it produces.

                      neilh20
                      Participant

                        @tahicksonstthomas-edu – like you say, that's interesting; all investigations start with that.

                        Probably post the code, though it looks like it is structured to not compile. Eyeball the code flow from the first call to setup() and the subsequent calls to loop(), and check whether your section of the code is isolated.

                        VSC typically shows colour-coded syntax, though .ino, as a code anomaly, requires special enabling. IMHO the easy way is to change the .ino to .cpp (a minimal sketch of that is below).
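                        (For illustration only; the file name and LED pin are assumptions. Renaming a .ino to .cpp mostly means adding the Arduino header and keeping the usual two entry points, plus forward declarations for any functions used before they are defined.)

                        // blinkme.cpp - formerly blinkme.ino; the main addition is the Arduino.h include
                        #include <Arduino.h>

                        const int8_t ledPin = 8;  // assumed: green LED on the Mayfly

                        void setup() {
                            Serial.begin(115200);      // open the debug port
                            pinMode(ledPin, OUTPUT);
                        }

                        void loop() {
                            // main(), supplied by the Arduino core, calls setup() once and then loop() repeatedly
                            digitalWrite(ledPin, HIGH);
                            delay(500);
                            digitalWrite(ledPin, LOW);
                            delay(500);
                        }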
