Modem Characterization
2023-01-13 at 8:16 PM #17541
A few people seem to have mentioned modem failures. I’m wondering if people who have experienced modem failures could share what data they have on the failures: part number, how they detected the failure, and how many months the devices were in the field.
I’m a hardware engineer, and when components start failing I pay attention to the conditions and attempt to characterize the failures.
I’ve got a number of devices (six) using the Digi modules in the field, and a few more in the works, and so far (fingers crossed) all are operating through the storms (hope it isn’t bad luck to mention it … oops).
Years ago I went to a talk, “building on sand” ~ which is literally what most semiconductor devices are built on – and they can have very specific electrical operating parameters, characterized in data sheets.
Failure conditions could be anything from some form of lightning strike, or electrostatic discharge (ESD) – which is difficult to figure out if intermittent and easy if it’s a charcoal scar across the board – to internal “non-volatile memory” configuration failures. In the early days of electronics there was a lot of discussion on how ESD could be caused by bad handling. Either way, failures are really annoying, and to some degree it’s doing a good deed to share when they happen. Part of it is to attempt to figure out a way of stopping the failures or recovering the parts.
Characterization for, say, a Digi XBee could be putting it in an XCTU programming board and seeing if it responds to commands, reads the SIM card, or maintains its configuration.
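The same kind of check can be scripted from the logger itself. A minimal sketch, assuming an XBee in transparent mode on the Mayfly’s Serial1 at 9600 baud with the default one-second guard time (the wiring and baud rate are my assumptions; “+++”, ATVR, ATAI, and ATCN are standard XBee AT commands):

```cpp
// Rough modem-characterization sketch: enter AT command mode and see
// whether the module still answers basic queries.
void printReply(const char* cmd) {
  String reply = Serial1.readStringUntil('\r');  // default 1 s timeout
  Serial.print(cmd);
  Serial.print(" -> ");
  Serial.println(reply.length() ? reply : String("no response (suspect module)"));
}

void setup() {
  Serial.begin(115200);   // debug console
  Serial1.begin(9600);    // XBee UART

  delay(1100);            // silence before "+++" (guard time)
  Serial1.print("+++");   // enter AT command mode (no carriage return!)
  delay(1100);            // silence after "+++"
  while (Serial1.available()) Serial1.read();  // discard the "OK" reply

  Serial1.print("ATVR\r");  // firmware version: a dead module won't answer
  printReply("ATVR");
  Serial1.print("ATAI\r");  // association indication: 0 means joined
  printReply("ATAI");
  Serial1.print("ATCN\r");  // exit command mode
  printReply("ATCN");
}

void loop() {}
```

A module that answers ATVR but never associates points at the network or SIM rather than the silicon; no answer at all points at the module itself.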
I had one set of failures with a mega2560 processor, where it would occasionally erase its configuration on a slowly rising power rail – however, it could be reprogrammed and recovered.
One company that I worked for had a product line of high-end irrigation controllers, and they found a high rate of failures in Florida. Because Florida is largely built on a reef, lightning couldn’t find a path to ground very easily. The solution was to install code-certified grounding rods – an expensive process of two or three rods per site – but once done it cut down the number of failures.
For critical devices, a Mean Time Between Failures (MTBF) statistic is characterized for each component and then combined across all the devices on board. This was done for the Mars rover Opportunity, which ended up exceeding its statistical design life and sent back lots of data. https://en.wikipedia.org/wiki/Mean_time_between_failures
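As a back-of-the-envelope illustration (the numbers are invented for the example, not from any datasheet): for components in series, failure rates add, so the system MTBF is the reciprocal of the summed reciprocal MTBFs.

```cpp
#include <cstdio>

int main() {
  // Invented per-component MTBFs in hours (illustrative only)
  double mtbf[] = { 100000.0, 200000.0, 500000.0 };  // e.g. logger, modem, sensor
  double failuresPerHour = 0.0;
  for (double m : mtbf) failuresPerHour += 1.0 / m;  // series system: rates add
  std::printf("System MTBF: %.0f hours\n", 1.0 / failuresPerHour);  // ~58824
}
```

Note the system MTBF is always worse than the weakest single component – which is why one flaky modem dominates a deployment’s reliability.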
2023-02-13 at 7:44 PM #17604
The other side of failure is looking at how to design for system stability.
Some components may only appear to fail – modems, for example. The software, of course, needs to be robust enough to handle varying network conditions.
A way of thinking about it is the seven-layer OSI stack, where end-to-end transmission reliability is handled at the Transport layer (layer 4) https://en.wikipedia.org/wiki/OSI_model
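In practice that means the logging code should treat a failed send as an expected network condition rather than a hardware fault. A hedged Arduino-style sketch of the retry-with-backoff idea, where sendReading() is a hypothetical stand-in for whatever transmit call the logger actually uses:

```cpp
// Sketch: retry with exponential backoff around a hypothetical
// sendReading(), which returns true when the transmission succeeded.
bool sendWithRetry(bool (*sendReading)(), int maxAttempts) {
  unsigned long backoffMs = 1000;              // start with a 1 s wait
  for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    if (sendReading()) return true;            // the network came through
    delay(backoffMs);                          // wait out a poor-signal window
    backoffMs *= 2;                            // back off: 1 s, 2 s, 4 s, ...
  }
  return false;  // a network condition, not a failure: queue for next cycle
}
```

A modem that recovers after a backoff like this was never broken – which is exactly the distinction worth capturing before declaring a part failed.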