I have searched these forums for as much information as I could gather, and I have looked through the throughput_eval and sps source code. I am somewhat familiar with them and understand them to a degree, but they remain somewhat opaque to me... perhaps you all can humor me here.
I don't need super high throughput: just over 3k bytes/second. I have an interrupt every 2 ms that generates 6 bytes. But I have been unable to achieve this using the standard GATT notification mechanism (even if I generate an interrupt only every 4 ms, I start losing data). So my first question is: does this seem wrong to anyone? I keep a queue of 20-byte packets to send via GATT notification. I check the queue after the GATTC_CMP_EVT message and schedule the next packet if one is available. The system can't keep up and fails after about 100 packets (more if I make the queue bigger, much sooner if I clock at a 2 ms interrupt period).
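For reference, the send-on-completion scheme described above can be sketched roughly as below. This is a minimal illustration, not the Dialog SDK API: the queue, `try_send()`, and `on_cmp_evt()` names are hypothetical, and the real code would call the SDK's notification function where the comment indicates.

```c
#include <stdbool.h>
#include <string.h>

#define PKT_SIZE   20
#define QUEUE_LEN  16

/* Simple ring buffer of fixed-size notification packets. */
static unsigned char queue[QUEUE_LEN][PKT_SIZE];
static int head, tail, count;
static bool tx_in_flight;   /* at most one notification outstanding */

static bool enqueue(const unsigned char *pkt)
{
    if (count == QUEUE_LEN)
        return false;                 /* queue full: data would be lost */
    memcpy(queue[tail], pkt, PKT_SIZE);
    tail = (tail + 1) % QUEUE_LEN;
    count++;
    return true;
}

/* Called whenever data is queued and again on each completion event. */
static bool try_send(void)
{
    if (tx_in_flight || count == 0)
        return false;
    /* the real notification send (e.g. updating the database and
       sending the GATT notification) would go here */
    head = (head + 1) % QUEUE_LEN;
    count--;
    tx_in_flight = true;
    return true;
}

/* Hypothetical handler for the GATTC_CMP_EVT completion message. */
static void on_cmp_evt(void)
{
    tx_in_flight = false;
    try_send();                       /* chain the next packet, if any */
}
```

The key invariant is that only one notification is in flight at a time, and each completion event pulls exactly one packet off the queue.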
So, assuming the above won't work, I looked at the throughput_eval project. It looks like it uses multiple characteristics (the data appears to be written to all of them in parallel) and L2CAP. It's hard for me to see what the client receiving the data is doing here, as there is no example. Does the client have to interact with L2CAP as well? Unfortunately, my Windows PC doesn't do anything but GATT, so I'm not sure how to reconcile this.
SPS does something I didn't think you could do. The characteristic (SPS_SERVER_TX) has a size of 128 bytes. I didn't think you could notify an object that large (when I tried it, it didn't work). Perhaps I am missing some key aspect here, but I also can't tell what the maximum throughput should be.
Any advice from the gallery would be helpful.
Thanks,
marco
Hi marcodg,
To begin with, is sleep enabled in the system? You should be able to send data at this kind of rate; if sleep is enabled, you may lose data while interrupts are disabled. Also, do you process these data? You said that the system fails: does it go to a hard fault handler? And how do you send the notifications? Do you populate the queue in the interrupt handler and update the value in the database?
No, the client doesn't have to interact with L2CAP.
The DSPS project sends an MTU exchange command (this allows the host, if it accepts it, to receive bigger MTUs). You can try it if you like, but I think this is your problem.
Thanks MT_dialog
Thanks for the reply. Sleep is not enabled. There is no processing of the data. The interrupt fills the queue and, if the queue was empty when it put an item in, it sends an initial message to the streaming task to send the next item in the queue. The streaming task processes this message (by putting the data in the database and sending the notification) in the normal course of things. When the GATTC_NOTIFY message comes, it checks whether there are more packets to send and, if so, sends itself a message to send the next item in the queue. And so on indefinitely. It takes 3 interrupts to fill a packet (18 bytes of data + 2 bytes of status). At an interrupt period of 8 ms (125 Hz, 24 ms/packet) everything works great; in fact, the queue never holds more than 1 item. With an interrupt period of 4 ms (12 ms/packet) the queue fills up, and I get only about 100 valid packets at the client. I fully acknowledge that I may have screwed this machinery up somehow, but if I did, I don't know where... it's not that complicated.
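The cadence above works out as follows; this is just a throwaway sanity check of the arithmetic, not SDK code. Note that even the failing 4 ms case produces a packet only every 12 ms, slower than the ~7.5 ms default connection interval, so in principle one notification per connection event should be enough.

```c
/* Packet cadence from the figures in the post: 6 bytes per interrupt,
   3 interrupts per 20-byte notification (18 data + 2 status bytes). */
static int pkt_period_ms(int irq_period_ms)
{
    const int bytes_per_irq = 6;
    const int data_bytes_per_pkt = 18;
    const int irqs_per_pkt = data_bytes_per_pkt / bytes_per_irq;  /* 3 */
    return irqs_per_pkt * irq_period_ms;
}
```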
(Note: packets are removed from the queue on GATTC_NOTIFY to prevent multiple messages/packets being in circulation. I had that problem at first, causing the queue to go below empty... thankfully no humans or pets were harmed in the ensuing catastrophe.)
The connection interval is whatever the default is (7.5 or 8 ms, IIRC).
thanks,
marco
Hi marcodg,
Can you try setting just a flag from the ISR, then check the flag in app_asynch_trm(), and if the flag is set, send the message to the streaming task? Perhaps sending the message from the ISR to the streaming task causes the problem you are facing. Can you upload a Smart Snippets image while you are connected and sending data (both when it fails and when it doesn't)?
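The suggested pattern can be sketched as below. The names are illustrative assumptions: in the real SDK only the app_asynch_trm() main-loop hook exists, and the kernel message send (ke_msg_send or similar) would replace the comment.

```c
#include <stdbool.h>

/* Set from the timer ISR; consumed from the main loop. */
static volatile bool data_ready;

void timer_isr(void)
{
    data_ready = true;            /* do nothing else in the ISR */
}

/* Hypothetical helper, called from app_asynch_trm() in the main loop.
   Returns true if a message to the streaming task was triggered. */
bool poll_and_notify(void)
{
    if (!data_ready)
        return false;
    data_ready = false;
    /* send the kernel message to the streaming task here */
    return true;
}
```

Keeping the ISR down to a single volatile flag write moves all kernel interaction out of interrupt context.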
Thanks MT_dialog
Setting the flag from the ISR and sending the message as part of the main loop doesn't seem to help. I hooked up a scope and flipped a GPIO bit. When I set the interrupt rate low enough (<180 Hz, a packet every 16.7 ms), the time between when a packet is sent and when I get GATTC_NOTIFY (at which point I can send the next packet if available) is about 800 us, with occasional spikes up to 1.2 ms. It's pretty stable there. When the rate goes above 195 Hz, things go haywire on the scope, but I was able to measure that the time between when a packet was sent and GATTC_NOTIFY was in many instances >55 ms. When enough of these long intervals go by, the queue fills up. I will work on getting the Smart Snippets image (I haven't used that software yet).
(Edit: Something appears to be amiss with SmartSnippets, as it can't find ftd2xx.dll. I do have Windows 10...)
Hi marcodg,
An image from Smart Snippets would help. Are you triggering the sending of the next packet by waiting for the previous GATTC_NOTIFY completion event?
Can you try sending the messages without waiting ?
Thanks MT_dialog
Thank you for replying. I will need some coaching on Smart Snippets. I got SmartSnippets to start up (after downloading the drivers) and can download the code and get it running. But after that, I'm not really sure what to do. I see a "Data Rate Monitor" on the lower right, but pushing the buttons doesn't seem to have any effect. I should note that I am using a PAN1740 module.
In other news, I tried increasing the MTU size from 23 to 87, thinking that if I could send fewer packets it would work. While it does allow me to increase the frequency (up to about 240 Hz), it still fails in the same way: the time between when the packet is sent and GATTC_NOTIFY gets really long, about 60 ms in this case, which is longer than the required packet period.
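One detail worth keeping in mind when reasoning about the MTU numbers above: an ATT notification carries MTU - 3 bytes of payload, since the PDU spends 1 byte on the opcode and 2 bytes on the attribute handle. So the default MTU of 23 gives exactly the 20-byte notifications used earlier in the thread, and an MTU of 87 gives 84 usable bytes.

```c
/* ATT notification payload: MTU minus 3 bytes of protocol overhead
   (1 opcode byte + 2 attribute-handle bytes). */
static int notify_payload(int att_mtu)
{
    return att_mtu - 3;
}
```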
If I do not wait for GATTC_NOTIFY and instead send a packet whenever one is ready, say every 52.8 ms for the 87-byte MTU (I require a packet every 32 ms), packets get dropped at the source. I have a sequence number in each packet I send, and the received values are non-contiguous. Usually only one packet is dropped, but I have seen as many as two.
I am continuing to examine the code to make sure I'm not messing up.
Still no luck here. I tried using L2CC (like the throughput_eval project), but at higher throughput rates, around where it was failing with GATT, the device would go into reset. At higher interrupt rates, the GATTC_NOTIFY message takes too long to get sent, filling up the queue. If I don't wait for the message, packets get dropped, even when they are only coming at around 1 packet per 50 ms (87-byte MTU). I've tried varying the MTU size, but that doesn't seem to help, except that larger packets do marginally better.
I changed the code so that no real processing happens in the ISR itself. The kernel messages are sent as part of the app_asynch_trm() function.
The fact that GATTC_NOTIFY takes so long is a mystery, because at lower interrupt rates that send/wait round trip is only a couple of ms.
Hi marcodg,
Can you send us some Smart Snippets activity? Maybe we can have a look and find something out.
Thanks MT_dialog
I am unable to get Smart Snippets to work. The device is a Panasonic PAN1740. I have CFG_STREAMDATA defined, as well as METRICS. I can download code to the device (J-Link), but the data rate monitor only supports a COM port. Usually the 'start peripheral' button has no effect; the rest of the time it displays an error.
I think I found the issue. Using Wireshark, I was able to track the conversation from the client (a PC running Windows 10). I can see the PC responding with empty PDUs after every L2CAP fragment. Sometimes that response takes too long, forcing data to back up on my device. I'm not a BLE expert, so can you confirm that a response from the client (in the form of an empty PDU) needs to occur before the next packet is transmitted?
thanks
marco
Hi marcodg,
We couldn't tell much from the log you uploaded (in your other post; I assume you are examining the same case), as some packets seem to be missing. In general, the host polls the device every connection interval with either empty PDUs or data packets (if something needs to be sent). With these packets the host can acknowledge that the previous packet the device sent was received, and can also perform a kind of flow control: if a packet is not acknowledged, it has to be resent. In other words, if the host explicitly does not acknowledge a packet, the device cannot send another one until that packet is eventually acknowledged by the host. This way the host can block the device from sending more packets. In any case, though, the host always polls the device. So what you reported, that the host stalls in sending the empty poll packets, is indeed a problem, if it really happens and is not an artifact of the sniffer, which does not seem very reliable.
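The acknowledgement rule described above can be modeled very simply. Every BLE data PDU, including empty ones, carries a 1-bit sequence number (SN) and a next-expected-sequence-number bit (NESN); the transmitter treats its last PDU as acknowledged when the NESN it receives differs from that PDU's SN, and until then it must retransmit the same PDU rather than send new data. The sketch below is a toy model of that rule, not device firmware.

```c
#include <stdbool.h>

/* Toy model of BLE link-layer acknowledgement via the SN/NESN bits
   carried in every data PDU (empty or not). */

/* The last transmitted PDU is acknowledged when the peer's NESN
   differs from that PDU's SN. */
static bool pdu_acked(int last_tx_sn, int rx_nesn)
{
    return rx_nesn != last_tx_sn;
}

/* New data may only be sent once the previous PDU is acknowledged;
   otherwise the same PDU must be retransmitted. */
static bool may_send_new_data(int last_tx_sn, int rx_nesn)
{
    return pdu_acked(last_tx_sn, rx_nesn);
}
```

This is exactly the mechanism that lets a slow host back-pressure the device: by not advancing NESN, it forces retransmissions and blocks new notifications, which matches the queue backup observed earlier in the thread.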
Thanks MT_dialog