Thread: Designing an oscilloscope software

  1. #1
    Registered User
    Join Date
    Aug 2007
    Posts
    85

    Designing an oscilloscope software

    Well, since my sender part and the protocol are complete, I'm starting to design my oscilloscope receiver/renderer - basically, where all the action and smart computing will take place.

    I made a simple sample in Visual Basic 6, but such a processing task is far beyond what it can do (or my approach is bad).

    Basically, I need something like this (this time, I'll have to develop it in C++). Data is received continuously on the serial port, at 115200 baud (the highest speed for a default COM port - not sure if the number is right). This data is decoded (it contains two values per 'frame') and split into frames. Each 'frame' goes somewhere in a buffer.
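
    Just to show roughly what I mean by the decoding (a sketch only - the real protocol has escape characters, so assume for the moment that a frame is simply two consecutive raw bytes, channel A then channel B):
    Code:
    // Sketch only - assumes a frame is just two raw bytes (A, B), no escaping.
    #include <vector>

    struct Frame { unsigned char chA, chB; };

    void decodeIncoming(const unsigned char *data, int len, std::vector<Frame> &out)
    {
        for (int i = 0; i + 1 < len; i += 2) {
            Frame f;
            f.chA = data[i];
            f.chB = data[i + 1];
            out.push_back(f);   // each decoded frame goes into a buffer
        }
    }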

    Now, that's not really hard.
    The hard part is - all this data has to be rendered. The values are simple bytes (0-255). They represent two values read by the device from an Analog-to-Digital converter. So the rendering will represent a waveform or something similar, depending on what is read.

    My problem is, what approach should I use in rendering it? I will need an adjustable timebase, so that I can see small differences if I want to (I know there will be losses, and not everything will get rendered), but I can also choose a larger timebase, so I can see what happens over a longer time (and some averaging of values might be needed here). When exactly should I start rendering from the buffer?

    I was thinking of two approaches. One would be: let the buffer (a normal one) fill, then render, then empty it. The other one is: write into a circular buffer, and have the rendering use the last X positions (following the write pointer) every Y ms - roughly like the sketch below. The problem here is that data may be overwritten while rendering. What do you think? How should this be handled?
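
    Something like this is what I have in mind for the second approach (just a sketch, names made up - drawSample() would be whatever the renderer ends up using):
    Code:
    // Sketch of the circular-buffer idea (untested).
    const int BUF_SIZE = 4096;
    unsigned char ringBuf[BUF_SIZE];
    int writePos = 0;                       // advanced by the serial reader

    void drawSample(int x, unsigned char value);   // hypothetical drawing helper

    void onByteReceived(unsigned char value)
    {
        ringBuf[writePos] = value;
        writePos = (writePos + 1) % BUF_SIZE;
    }

    // Called every Y ms: render the last x samples behind the write pointer.
    void renderLastSamples(int x)
    {
        int pos = ((writePos - x) % BUF_SIZE + BUF_SIZE) % BUF_SIZE;
        for (int i = 0; i < x; i++) {
            drawSample(i, ringBuf[pos]);
            pos = (pos + 1) % BUF_SIZE;
        }
    }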

    There's also another catch which I'm not really able to figure out. Oscilloscopes have a useful feature called "trigger and hold". Basically, if I'm looking at a sine wave, at each update the wave will start from a different phase (position, so to say). If the signal is fast, I'll see a ton of gibberish, since a lot of sine waves will overlap. In this case, the 'scope draws what it sees, and then waits until the current value reaches a threshold. When it reaches it, it starts drawing again. So the wave will always start from the same position and will look clean. Hold is pretty much what it says - it won't trigger again until the hold time has expired. I suppose this will require a different approach to calling the 'render' function, but I'm still not sure.

    Please let me know how you would do this.
    I'm planning to use wxWidgets as a graphical interface, but if you think something else is better, do let me know.

    I know how to use uCs, but my C/C++ skills aren't good at all. So I might ask about other stuff, too.

    When the software and device are done and in good shape, I'll be happy to send a few units to those who helped me most, if they want them.

    izua
    Last edited by _izua_; 09-06-2007 at 05:07 PM.

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    The way an ordinary digital scope works is, as you say, by "trigger and hold", which essentially means that you set a trigger level (voltage) and a direction (rising or falling edge). Advanced versions also have "trigger on a high that is shorter/longer than x time" (where x is a small fraction of a second, say 1 ms or 0.5 ms). The really fancy models also have things like "trigger on A being high and B being low (for x time)".

    But just seeing the voltage pass through a voltage limit in one direction (rising) would be sufficient to make a decent start.

    You should also consider a "trigger position". To begin with, you may want to stick that at a fixed distance into the "frame". If a frame is 1024 samples, make it 128 samples from the beginning of the frame.

    You probably want to use a circular buffer. When you see a trigger, keep storing samples until you have collected (frame size minus trigger position) more of them. Then send the data starting from (trigger point minus trigger position) - and you can start sending as soon as the trigger is seen, assuming that doesn't prevent the uC from getting the data in correctly.

    I hope this helps.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    Registered User
    Join Date
    Aug 2007
    Posts
    85
    The problem with buffering is that the data may come in quicker than I can process it, and then I'll have to skip parts of the buffer - so I can miss trigger events. I was thinking of storing the data in the buffer in a differential way: as it comes in, each entry is written as the difference between the previous value and the current value. This way, I can 'look ahead' for trigger conditions. Anyway...

    I think my problem now is rendering the data in 'direct' mode. When exactly should I start rendering, how should the input buffer work, and how will it react to the timebase? For example, if there is a long timebase, should it be displayed several times per update interval, showing 'old' samples as they go away? Or just wait the whole update interval (while the buffer fills) and then display the whole buffer?

  4. #4
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    I'm not sure what you mean by "data can come in faster than I can process it". Is your AD converter writing directly to memory, or is your uC reading the ADC and storing the result by software?

    If you are storing the result by software, you should definitely be able to do something like:
    Code:
    void sampleData() {
        value = readADC();   // I don't know if you NEED to read the ADC at all times, but just to be safe...
        if (sampling) {
            buffer[curpos] = value;
            curpos = (curpos + 1) & (buffersize - 1);   // wrap - assumes buffersize is a power of two
            if (!triggered && value > trigger_level) {
                triggered = 1;
                trig_pos = curpos;
                remaining = buffersize - trig_hold_off; // keep sampling this many more values after the trigger
            } else if (triggered) {
                remaining--;
                if (remaining == 0)
                    sampling = 0;                       // buffer complete - ready to send to the PC
            }
        }
    }
    Sampling should be set to 1 again when the current buffer has been sent to your PC. [You could be clever and start sampling again once you've sent enough data to not overwrite data you haven't sent yet, or have two different buffers for the sample data, one that is being sent and one that is currently being sampled into - but you probably won't gain much from that.]

    The above code obviously doesn't allow for more complex triggering. It wouldn't be too hard to trigger on falling edges as well - just add another flag and one more or-clause in the "if (!triggered)" line.
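
    Something like this, for example (sketch - trigger_rising / trigger_falling are flags I've just made up):
    Code:
    if (!triggered && ((trigger_rising  && value > trigger_level) ||
                       (trigger_falling && value < trigger_level))) {
        triggered = 1;
        trig_pos = curpos;
        remaining = buffersize - trig_hold_off;
    }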

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #5
    Registered User
    Join Date
    Aug 2007
    Posts
    85
    That's an interesting approach. Well, I'm not reading the ADC at all on the PC side. The uC is sending its values as it reads the ADC, as fast as it can (but of course, it only reaches about 90% of the serial speed).

  6. #6
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    My approach was to do that in the uC code, because I suspect that can be done faster than in the PC. If you can send the data across to the PC at 90% of the 115200 bitrate, then I suppose you could try to just decode the data in the PC with a similar approach.

    (By the way, using (curpos+1) & (bufsize-1) assumes that bufsize is a power of two, e.g. 256, 512, 1024, etc. It's a short way to do modulo. If you want a different buffer size on the PC, just use (curpos+1) % bufsize - a divide instruction is only some 10-20 times slower than an AND operation, and we don't need to care about that on a PC. But on a uC you may need to do the divide "by separate instructions", which is most likely hundreds or thousands of clock cycles. If so, use this approach instead:
    Code:
    curpos++;
    if (curpos == bufsize)
        curpos = 0;
    That will be much faster than a divide. The AND operation is most likely faster than BOTH of these tho'.

    If you don't send data across the serial port, can you read the ADC faster? [By the way, you are supposed to read the ADC at timed intervals, selectable by the user, but I'd leave that until later if you don't have a way to do that already].

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  7. #7
    Registered User
    Join Date
    Aug 2007
    Posts
    85
    So by 'reading' the ADC, you mean something like sending a byte to the uC, and then the uC sends the read value back?

    The whole thing is timed pretty well, since I'm using the hardware USART. There's a read-ADC / process (moving bits to the working register) / send loop, which is triggered by a timer at fixed intervals. The problem with timing comes when I send escape characters, as you can imagine, but that's another story.

    Anyway, I'm still confused about drawing the buffer. Your idea for triggering sounds cool, and I definitely never thought of using powers of two for the buffer, but I think that's a bit too far ahead.

    For the moment, I just want to do a simple piece of software that can display data from the device (without any triggering), and draw wavy lines when I move a pot. I've never played with a real CRO, so I can barely imagine how the lines would look there.

    So, my current question: as the buffer gets filled, when exactly should I start drawing from it? This will have something to do with the timebase, and definitely with the computer's processing power (I might, for example, have to stop the uC from sending data if the buffer gets completely rewritten during a draw cycle. I doubt this will ever be the case on a modern computer, but heck knows where I'll use this device). A circular buffer sounds OK for the triggering part, but is it good for the normal part?

    I've written above the two methods I was thinking of using, but I'm not sure if they're any good at all.

    edit: i just noticed you have 2^10 posts. cool b-)
    Last edited by _izua_; 09-07-2007 at 06:03 AM.

  8. #8
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Well, I imagined that this would be done on the uC itself, and when I say "readADC", I mean reading the hardware.

    If you are processing on the PC, then you read in data until you see the trigger, then wait for the buffer to fill. [This is exactly what a Tektronix DSO (Digital Storage Oscilloscope) does - it even says "filling buffer" if you run it very slowly with lots of samples (2M samples on the model I used, IIRC). A CRO is slightly different, as it's just drawing based on the input (but using the trigger to sync things).]

    Again, you want to see some stuff before the trigger, so you want to save all the incoming data; then, when the trigger happens, count in X more samples, where X is less than a full buffer.

    If you do the trigger detection on the PC, just use two buffers, one that you are drawing from and another that you are receiving into. Use two threads, one that does the drawing and one that does the receiving.
    Code:
    // Pseudocode.
    void drawThread() {
        for (;;) {
            buffer = getNextBuffer();   // blocks until the receive thread hands over a full buffer
            drawBuffer(buffer);
            releaseBuffer(buffer);      // let the receive thread reuse it
        }
    }

    void receiveThread() {
        for (;;) {
            value = receivePacket();
            insert value in receive_buffer;
            if (!triggered && trigger(value)) {
                set remaining;          // samples still to collect after the trigger
                set triggered;
            }
            if (triggered) {
                remaining--;
                if (!remaining) {
                    putBuffer(receive_buffer);   // hand the full buffer to the draw thread
                    triggered = 0;
                }
            }
        }
    }
    putBuffer, getNextBuffer and releaseBuffer should use some form of events to signal between threads, so that, for example, putBuffer waits for the "other" buffer to be ready before it switches to the new buffer.
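
    For example, something along these lines (just a sketch of the hand-off, using a mutex and condition variable - Win32 events or whatever wxWidgets offers would do the same job; all the names are made up):
    Code:
    // Sketch of the buffer hand-off between the two threads (untested).
    #include <condition_variable>
    #include <mutex>
    #include <vector>

    std::mutex                 bufMutex;
    std::condition_variable    bufCond;
    std::vector<unsigned char> fullBuffer;       // handed from receiver to drawer
    bool bufferAvailable = false;

    // Receive thread: called when 'remaining' hits zero.
    void putBuffer(std::vector<unsigned char> &receiveBuffer)
    {
        std::unique_lock<std::mutex> lock(bufMutex);
        bufCond.wait(lock, [] { return !bufferAvailable; });  // wait until the drawer released the last one
        fullBuffer.swap(receiveBuffer);          // receiver keeps an empty buffer to carry on with
        bufferAvailable = true;
        bufCond.notify_one();
    }

    // Draw thread: blocks until a complete, triggered buffer is ready.
    std::vector<unsigned char> getNextBuffer()
    {
        std::unique_lock<std::mutex> lock(bufMutex);
        bufCond.wait(lock, [] { return bufferAvailable; });
        std::vector<unsigned char> buf;
        buf.swap(fullBuffer);
        bufferAvailable = false;                 // this is releaseBuffer(), in effect
        bufCond.notify_one();
        return buf;
    }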

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #9
    Registered User
    Join Date
    Aug 2007
    Posts
    85
    Oh, you thought I was doing trigger detection on the device?
    Hmm, that's not a bad idea. I suppose it can be done.

    This "double-buffering" method sounds cool. I'll have to put it on paper and work through all the possible cases. I'll get back if there's something I don't get.

  10. #10
    Hardware Engineer
    Join Date
    Sep 2001
    Posts
    1,398
    Data is received continuously on the serial port, at 115200
    That sounds like a bottleneck... At 115200 baud you get roughly 11.5k bytes per second, so your sample rate tops out around 11.5 kHz. At about 10 samples per cycle - which is fairly crude for a 'scope - that makes a signal of roughly 1 kHz your upper limit. (My 100MHz digital 'scope samples at 1.25GHz, which is also in the ballpark of 10 samples per cycle at the upper frequency limit.)

    I would take a different approach. I would put all of the "brains" in the microcontroller, and use the PC as a display device. That way, you can just send the data needed to "draw" the waveform. You would be missing a lot of data. It would work more like a "waveform capture" device...

    Capture a waveform, send it to the display.
    Continue capturing to your circular buffer.
    When you are done displaying the waveform, send another waveform.

    I suspect my digital 'scope does this to some extent too... In reality, your computer's raster display is only refreshing at 75Hz or so anyway, so you can't display all of the data in real time like you can on an analog 'scope with a vector CRT display.

    I wouldn't expect the PC's CPU to be a problem, but screen-updating can be slow. I've never used wxWidgets, so I don't know if it can help with that. On a Windows system, DirectX seems to be the way to get fast graphics, but I haven't used DirectX graphics either.

    And of course, your microcontroller could be a bottleneck too.

  11. #11
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Exactly, if the sampling/triggering is done on the microcontroller, you won't have to ship data unless you have triggered.

    DSOs do quite regularly miss the second of two triggers that come close together, in my experience.

    If we have 1000 samples (or so) per trace, then it should be fairly quick to do a line-trace of that, even with regular 2D graphics on an average graphics card. Polyline should be a pretty good choice.
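
    Roughly like this, for example (sketch only, plain Win32 GDI - the scaling is made up, and wxWidgets' wxDC::DrawLines should do much the same job if you stick with that toolkit):
    Code:
    // Sketch: draw ~1000 samples as a single polyline with GDI (untested).
    #include <windows.h>
    #include <vector>

    void drawTrace(HDC hdc, const unsigned char *samples, int count,
                   int width, int height)
    {
        if (count < 2)
            return;
        std::vector<POINT> pts(count);
        for (int i = 0; i < count; i++) {
            pts[i].x = i * width / count;                       // spread across the window
            pts[i].y = height - 1 - samples[i] * height / 256;  // 0..255 -> bottom..top
        }
        Polyline(hdc, &pts[0], count);
    }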

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  12. #12
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    I can draw a 10000 x 1000 point polyline in less than 1 second on my machine, so I don't expect that to limit the capability of the oscilloscope.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  13. #13
    Registered User
    Join Date
    Aug 2007
    Posts
    85
    Hmm, the problem is that this little device (PIC 16F877) doesn't have enough memory to store all those samples (~256 bytes of RAM). I guess I could interface it with an external RAM chip, but then again, I could just buy a chip that does USB and use that for the sending..

  14. #14
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    I guess the purpose of this exercise is to build something, but it's not going to be a real product. So do the triggering on the PC, that's fine.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  15. #15
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Drawing this is very simple. If the max is 255 and the min is 0, then 128 is the center line, and you can easily draw lines to represent this.
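
    Something like this for the mapping (just a sketch, plotHeight being however tall your display area is):
    Code:
    // Map an 8-bit sample (0..255) to a screen y coordinate;
    // 0 lands on the bottom, 255 on the top, 128 near the center line.
    int sampleToY(unsigned char sample, int plotHeight)
    {
        return (plotHeight - 1) - (sample * (plotHeight - 1)) / 255;
    }
    Then each update is just a line from the previous sample's point to the current one's.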

    But you won't achieve the speeds you are talking about...even in DirectGraphics.
