I am a product developer for hydrographic software and would be interested in interfacing with this sounder.
But ideally I would like to leave the controls to the provided software and just listen in on a (local) socket; that would save me a lot of time implementing controls in our own software.
Is this possible in some way?
Let me discuss with the Eng. team and I will get back to you.
Hi Rich, do you mean you would like to use SonarView to control the S500, but have the data stream to your s/w via a socket?
Yes that is exactly what I mean!
Maybe if the packets were sent to a broadcast address, this would already be possible.
This is not possible currently. Note that SonarView can save a log file, so that if your requirement is not real time, you could just use that. Perhaps in a future release we could direct that log file output from SonarView to a socket/stream of some sort.
Thanks for your answer, too bad for me ;-).
As a suggestion, could you make an example sonar log file available for download in the support section? I have been writing some code but have not yet been able to test it, since my client has not received the actual unit. I searched the internet but really could not find any real data.
Great idea. I’ll post a sample file here shortly.
I am analyzing the file you gave me.
What I really need (not only me, but every user of hydrographic software) is the speed of sound that was used. It is needed to convert the measured depth back to travel time and re-apply a different sound velocity when required. Users tend to leave it at 1500 and then re-process afterwards.
It is not directly available for parsing, but I see there is a start and end range in millimeters, so I can re-compute the sound velocity that was used:
With this formula:
dSoundVelo = (0.002 * (pPayloadHeader->start_mm + pPayloadHeader->length_mm)) / pPayloadHeader->ping_duration_sec;
So when I parse a packet from the file:
The ping duration of 0.19 msec is not correct, since it gives me a sound speed of 100147.3 m/sec.
I would expect a two-way travel time of 2 × 10 / 1500 = 13.3 msec.
I also wonder about the sample rate of 2 MHz.
Surely 1024 samples without decimation would only cover a period of 0.512 msec?
I hope you can help me?
The speed of sound defaults to 1500 m/s. So unless you change it by sending the appropriate command to the S500, that's what it will be.
ping_duration_sec is the length of the ping pulse, so not the total “listening” time.
adc_sample_hz is 2MHz, or 4x the ping frequency. This is not the rate of the reported samples. The distance between the samples would be (length_mm - start_mm) / num_results.
Hope that helps.
Hi Larry, thanks for your answer.
I implemented it as you proposed and it works fine.
Although it would have been nice to have the speed of sound that was used, since the reported range depends heavily on it.
OK, what is called ping duration here is actually called "pulse length" or "pulse duration" in oceanography. Maybe you can update the documentation, as it is really confusing.
Good point. Documentation updated.