I have searched these forums for as much information as I could find, and I have looked at the throughput_eval and SPS source code. I am somewhat familiar with them and understand them to a degree, but they remain somewhat opaque to me... perhaps you all can humor me here.
I don't need super high throughput, just over 3 KB/s. I have an interrupt that fires every 2 ms and generates 6 bytes. But I have been unable to achieve this using the standard GATT notification mechanism (even if I generate an interrupt only every 4 ms, I start losing data). So my first question is, does this seem wrong to anyone? I keep a queue of 20-byte packets to send via GATT notification. I check the queue after the GATTC_CMP_EVT message and schedule the next packet if one is available. The system can't keep up and fails after about 100 packets (more if I make the queue bigger, far fewer if I clock at a 2 ms interrupt period).
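For what it's worth, the arithmetic alone suggests trouble if only one 20-byte notification goes out per connection event. That "one per event" behavior is an assumption on my part, and the numbers below are just a back-of-the-envelope model, not measured behavior:

```python
# Toy model: assumes exactly one notification is transmitted per
# connection event, which may not match what the stack actually does.

def required_rate(bytes_per_irq=6, irq_period_s=0.002):
    """Data rate the ISR generates, in bytes/second."""
    return bytes_per_irq / irq_period_s

def one_notify_per_event_rate(payload=20, conn_interval_s=0.0075):
    """Upper bound if one 20-byte notification goes out per event."""
    return payload / conn_interval_s

print(required_rate())              # 3000.0 bytes/s generated
print(one_notify_per_event_rate())  # ~2666.7 bytes/s drained -> queue grows
```

Under that assumption the queue must eventually overflow at a 2 ms interrupt period, regardless of queue size.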
So, assuming the above can't be made to work, I looked at the throughput_eval project. It appears to use multiple characteristics (data seems to be written to all of them in parallel) and L2CAP. I'm having a hard time seeing what receives the data, because there is no client example. Does the client have to interact with L2CAP? Unfortunately, my Windows PC doesn't support anything but GATT, so I'm not sure how to reconcile that.
SPS does something I didn't think you could do. The characteristic (SPS_SERVER_TX) has a size of 128 bytes. I didn't think you could notify an attribute that large (when I tried it, it didn't work). Perhaps I am missing some key aspect here, but I also can't tell what the maximum throughput should be.
Any advice from the gallery would be helpful.
Thanks,
marco
Hi marcodg,
To begin with, is sleep enabled in the system? You should be able to send data at this kind of rate; if sleep is enabled, you may lose data while interrupts are disabled. Also, do you process these data? You said the system fails: does it go to the hard fault handler? And how do you send the notifications? Do you populate the queue in the interrupt handler and update the value in the database?
No, the client doesn't have to interact with L2CAP.
The DSPS project sends an MTU exchange command (this allows the host, if it accepts the exchange, to receive bigger MTUs). You can try it if you like; I think the small default MTU is your problem.
Thanks MT_dialog
Thank you for your reply. Sleep mode is not enabled. There is no processing of the data. The interrupt fills the queue and, if the queue was empty when it put an item in, it sends an initial message to the streaming task to send the next item in the queue. The streaming task processes this message (by putting the data in the database and sending the notification) in the normal course of things. When the GATTC_NOTIFY message comes, it checks whether there are more packets to send and, if so, sends itself a message to send the next item in the queue. And so it goes indefinitely. It takes 3 interrupts to fill up a packet (18 bytes of data + 2 bytes of status). At an interrupt period of 8 ms (125 Hz, 24 ms/packet) everything works great; in fact, the queue never holds more than 1 item. With an interrupt period of 4 ms (12 ms/packet) the queue fills up, and I get about 100 valid packets received at the client. I fully acknowledge that I may have screwed this machinery up somehow, but if I did, I don't know where... it's not that complicated.
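In case it helps, the packet-assembly step I describe could be sketched like this (a simplified model; the placement of the 2 status bytes at the front is just a guess for illustration):

```python
SAMPLE_LEN = 6     # bytes produced by each interrupt
DATA_LEN = 18      # 3 samples per packet
STATUS_LEN = 2     # status bytes (placement here is illustrative)

class Streamer:
    def __init__(self):
        self.partial = bytearray()
        self.queue = []          # packets waiting to go out as notifications

    def on_irq(self, sample):
        """Called once per interrupt period with 6 new bytes."""
        assert len(sample) == SAMPLE_LEN
        self.partial += sample
        if len(self.partial) == DATA_LEN:
            self.queue.append(bytes(STATUS_LEN) + bytes(self.partial))
            self.partial.clear()

s = Streamer()
for _ in range(3):
    s.on_irq(bytes(SAMPLE_LEN))
print(len(s.queue), len(s.queue[0]))  # one 20-byte packet after 3 interrupts
```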
(Note: packets are removed from the queue on GATTC_NOTIFY to prevent multiple messages/packets being in circulation. I had that problem at first, causing the queue to go below empty... thankfully no humans or pets were harmed in the ensuing catastrophe.)
The connection interval is whatever the default is (7.5 or 8 ms, IIRC).
Thanks,
marco
Hi marcodg,
You can try using a flag from the ISR... then check the flag in app_asynch_trm and, if it is set, send the message to the streaming task from there. Perhaps sending the message to the streaming task from within the ISR causes the problem you are facing. Also, could you upload a SmartSnippets power profiler image taken while connected and sending data (both when it fails and when it doesn't)?
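A minimal sketch of that pattern, modeled in Python for illustration (in the real firmware the flag would be a volatile variable set in the ISR and polled from app_asynch_trm(), with ke_msg_send() doing the forwarding):

```python
data_ready = False   # stands in for a volatile flag set by the ISR

def isr():
    """Interrupt context: do the minimum and get out."""
    global data_ready
    data_ready = True

def app_asynch_trm():
    """Polled from the SDK main loop: forward the work to the streaming task."""
    global data_ready
    if data_ready:
        data_ready = False
        return "msg_to_streaming_task"   # ke_msg_send(...) in the real code
    return None

isr()
print(app_asynch_trm())  # message sent on the first main-loop pass
print(app_asynch_trm())  # None: the flag was consumed
```

The point is that the ISR never calls into the kernel; it only sets the flag, and all message traffic happens in the main loop.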
Thanks MT_dialog
Setting the flag from the ISR and sending the message from the main loop doesn't seem to help. I hooked up a scope and toggled a GPIO bit. When I set the interrupt rate low enough (< 180 Hz, a packet every 16.7 ms), the time between when the packet is sent and when I get the GATTC_NOTIFY (so I can send the next message if available) is about 800 µs, with occasional spikes up to 1.2 ms. It's pretty stable there. When the rate gets above 195 Hz, things go haywire on the scope, but I was able to measure that the time between when the packet was sent and the GATTC_NOTIFY was in many instances >55 ms. When enough of these long intervals go by, the queue fills up. I will work on getting the SmartSnippets image (I haven't used that software yet).
(Edit: something seems to be wrong with SmartSnippets, as it can't find ftd2xx.dll. I have Windows 10...)
Hi marcodg
An image from SmartSnippets would help. Are you triggering the sending of the next packet by waiting for the previous GATTC_NOTIFY completion event?
Can you try sending the messages without waiting?
Thanks MT_dialog
Thank you for replying. I will need some coaching on SmartSnippets. I got it to start up (after downloading the drivers) and can download the code and get it running, but after that I'm not really sure what to do. I see a "Data Rate Monitor" on the lower right, but pushing its buttons doesn't seem to have any effect. I should note that I am using a PAN1740 module.
In other news, I tried increasing the MTU size from 23 to 87, thinking that if I could send fewer packets it would work. While it does allow me to increase the frequency (up to about 240 Hz), it still fails in the same way: the time between when the packet is sent and the GATTC_NOTIFY gets really long, about 60 ms in this case, which is longer than the required packet period.
If I do not wait for the GATTC_NOTIFY and instead send a packet whenever it is ready, say, every 52.8 ms corresponding to the 87-byte MTU (I require a packet every 32 ms), packets get dropped at the source. I have a sequence number in the packets I send, and the received values are non-contiguous. Usually only one packet is dropped, but I have seen as many as two.
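For reference, the gap check on the client side is roughly the following (assuming an 8-bit sequence counter; the wrap-around handling is the only subtle part):

```python
def count_dropped(seqs, mod=256):
    """Count packets missing from a stream of received sequence numbers."""
    dropped = 0
    for prev, cur in zip(seqs, seqs[1:]):
        dropped += (cur - prev - 1) % mod
    return dropped

print(count_dropped([10, 11, 13, 14]))   # one packet (seq 12) missing
print(count_dropped([254, 255, 0, 2]))   # seq 1 lost across the wrap
```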
I am continuing to examine the code to make sure I'm not messing up.
Still no luck here. I tried using L2CC (like the throughput_eval project), but at higher throughput rates, around where it was failing with GATT, the device would go into reset. At higher interrupt rates, the GATTC_NOTIFY message takes too long to get sent, filling up the queue. If I don't wait for the message, packets get dropped, even when they are only coming at around 1 packet per 50 ms (87-byte MTU). I've tried varying the MTU size, but that doesn't seem to help, except that larger packets do marginally better.
I changed the code so that no real processing happens in the ISR itself. The kernel messages are sent as part of the app_asynch_trm() function.
The fact that the GATTC_NOTIFY takes so long is a mystery, because at lower interrupt rates that send/wait round trip is only a couple of ms.
Hi marcodg,
Could you send us some SmartSnippets captures? Maybe we can take a look and find something.
Thanks MT_dialog
I was unable to get SmartSnippets to work. The device is a Panasonic PAN1740. I have CFG_STREAMDATA defined, as well as METRICS. I can download code to the device (J-Link), but the Data Rate Monitor only supports a COM port. Usually the 'start peripheral' button has no effect; the rest of the time it displays an error.
I think I found the issue. Using Wireshark I was able to track the conversation from the client (a PC running Windows 10). I can see the PC responding with empty PDUs after every L2CAP fragment. Sometimes that response takes too long, forcing data to back up on my device. I'm not a BLE expert, so can you confirm that a response from the client (in the form of an empty PDU) needs to occur before the next packet is transmitted?
thanks
marco
Hi marcodg,
We couldn't tell much from the log you uploaded (in your other post; I assume you are examining the same case), as some packets seem to be missing. In general, the host polls the device every connection interval with either empty PDUs or data packets (if something needs to be sent). With these packets the host acknowledges that the previous packet the device sent was received, and it can also perform a kind of flow control: if a packet is not acknowledged, it has to be resent. In other words, if the host explicitly doesn't acknowledge a packet, the device cannot send another one until that packet is eventually acknowledged by the host. This way the host can block the device from sending more packets. In any case, though, the host always polls the device! Thus what you reported, that the host stalls in sending the empty POLL packets, is a problem, if it indeed happens and is not an artifact of the sniffer, which does not seem very reliable.
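That stop-and-wait behavior can be modeled very simply (a toy model with one data PDU per connection event, ignoring more-data bits and other link-layer details):

```python
def events_to_deliver(n_packets, acks):
    """acks: one boolean per connection event, True if the host's poll
    acknowledged the in-flight packet. An unacknowledged packet is resent,
    so the device cannot advance to the next packet until the host acks."""
    delivered = 0
    events = 0
    it = iter(acks)
    while delivered < n_packets:
        events += 1
        if next(it):
            delivered += 1
    return events

print(events_to_deliver(3, [True, True, True]))                 # 3: no stalls
print(events_to_deliver(3, [True, False, False, True, True]))   # 5: stalls add events
```

Every event in which the host withholds (or delays) the acknowledgement is a connection interval in which the device's queue can only grow.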
Thanks MT_dialog