Dear support,
I am currently implementing the GTL interface (over UART) in a custom embedded project. My first goal is to get the prox_reporter_ext project working in combination with our external processor (DA1458x_SDK\5.0.2.1\projects\target_apps\ble_examples folder). I have used the host_proxr_sdk project as a reference (DA1458x_SDK\5.0.2.1\projects\host_apps\windows\proximity\reporter\ folder).
The challenge I'm facing is with memory allocation. Up until now we have avoided using malloc/free in our embedded code, simply to prevent introducing disastrous side effects such as memory leaks (it might also introduce new challenges in combination with our RTOS). In ble_msg.c of the aforementioned project I see that malloc and free are used, for example via BleMsgAlloc and BleFreeMsg; the same goes for SendToMain in uart.c. Furthermore, the receive buffer in UARTProc (uart.c) is 1000 bytes in size, with MAX_PACKET_LENGTH being 350 bytes (uart.h). Another 500 bytes are allocated in UARTSend (uart.c).
From what I understand from going through the sources and reading UM-B-017 GTL interface in Integrated Processor Application.pdf and UM-B-010_DA14580_581_583 Proximity application_v1.3.pdf, the GTL interface cannot be classified as a stop-and-wait protocol. In other words, multiple event packets / messages can be sent by the DA14580 to the external processor at any given moment, while the external processor can send a command packet / message whenever the application requires.
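For reference, the packet framing I am working from (my reading of UM-B-017; all multi-byte fields are little-endian):

// GTL packet over UART:
// | 0x05 | MSG_ID (2) | DST_TASK_ID (2) | SRC_TASK_ID (2) | PAR_LEN (2) | PARAMETERS (PAR_LEN bytes) |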
I fully understand the reasons for using dynamic memory allocation; it makes sense with a variable number of packets and a variable PAR_LEN field value. However, I would like to know if static memory allocation is a viable option (and achievable given the memory requirements). In that case I would like to know the maximum value of PAR_LEN (the maximum number of bytes of Parameters that a message can contain) and how many packets / messages could potentially be sent by the DA14580 at any one time. If feasible, I could create a circular buffer of X packets, each with MAX_PAR_LEN bytes of Parameters (we have 32 kB of RAM available in total, so for example 3 packets of 350 bytes each, plus a separate 350-byte read buffer and a 350-byte write buffer for asynchronous reading/writing, would not be unrealistic).
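To make the idea concrete, here is a minimal sketch of the kind of static pool I have in mind (MAX_PAR_LEN, GTL_POOL_SIZE and all names are my own placeholders, not SDK definitions):

#include <stdint.h>
#include <stddef.h>

#define MAX_PAR_LEN   350  /* assumed worst-case PAR_LEN, to be confirmed */
#define GTL_POOL_SIZE 3    /* number of pre-allocated message slots */

struct gtl_msg
{
    uint16_t msg_id;
    uint16_t dst_task_id;
    uint16_t src_task_id;
    uint16_t par_len;
    uint8_t  par[MAX_PAR_LEN];
};

/* FIFO pool; guard head/tail/count with a critical section when an ISR
   is involved. */
static struct gtl_msg gtl_pool[GTL_POOL_SIZE];
static uint8_t gtl_head, gtl_tail, gtl_count;

/* Claim the next free slot instead of malloc; NULL when the pool is full. */
static struct gtl_msg *gtl_msg_alloc(void)
{
    if (gtl_count == GTL_POOL_SIZE)
        return NULL;
    struct gtl_msg *m = &gtl_pool[gtl_head];
    gtl_head = (uint8_t)((gtl_head + 1u) % GTL_POOL_SIZE);
    gtl_count++;
    return m;
}

/* Release the oldest slot instead of free. */
static void gtl_msg_free(void)
{
    if (gtl_count > 0)
    {
        gtl_tail = (uint8_t)((gtl_tail + 1u) % GTL_POOL_SIZE);
        gtl_count--;
    }
}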
I would love to hear your thoughts on this. If at all possible, I would rather not use malloc / free.
Kind regards,
Arjan
Edit 02-11-2015
I have added information regarding endianness and data structure padding below, perhaps other forum users might find this useful as well.
Would love to hear how others implemented this in embedded systems with limited memory and/or where malloc/free was considered bad practice.
Kind regards,
Arjan
Hi abremen,
We never considered doing something like this, but it can be done. The maximum number of packets that could be exchanged fully depends on the current application. I suppose that, in the current implementation, you can dynamically count how many packets are being allocated and sent, using some kind of counter, and from that you will be able to judge the amount of memory that should be pre-allocated.
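For example, something along these lines (a rough sketch; the prototypes are taken from the host application's ble_msg and may differ per SDK version):

/* Prototypes as found in the host application (may differ per version). */
void *BleMsgAlloc(unsigned short id, unsigned short dest_id,
                  unsigned short src_id, unsigned short param_len);
void BleFreeMsg(void *msg);

static unsigned int live_msgs = 0;  /* messages currently allocated */
static unsigned int peak_msgs = 0;  /* high-water mark */

/* Wrap the allocator so the worst case can be observed at runtime. */
void *BleMsgAllocCounted(unsigned short id, unsigned short dest_id,
                         unsigned short src_id, unsigned short param_len)
{
    void *msg = BleMsgAlloc(id, dest_id, src_id, param_len);
    if (msg != 0 && ++live_msgs > peak_msgs)
        peak_msgs = live_msgs;  /* this is the pool size to pre-allocate */
    return msg;
}

void BleFreeMsgCounted(void *msg)
{
    BleFreeMsg(msg);
    if (live_msgs > 0)
        live_msgs--;
}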
Thanks MT_dialog
You can keep the dynamic memory allocation interface but swap in an implementation backed by static buffers instead of the original one. What's more, you can add a debug function such as printf to the DA14580_SDK_3.0.4.0\host_apps\windows\proximity\monitor project to see how many packets are sent and received. In my opinion, 32kB of RAM is fully sufficient; maybe 5kB is enough.
@MT_dialog thank you for your response. I ended up using dynamic memory allocation for obvious reasons; however, it was worthwhile experimenting with static memory allocation.
@summer20100514 thank you for your response. Note that the 32kB of RAM I mentioned is for the entire system, not just the BLE related portion of it. So 5kB would still be a rather big chunk ;-).
Perhaps you can help me out with a new question: Are all data structures in the SDK explicitly aligned on the right boundaries? Or is it assumed that data structures will be packed the same way in both the DA14580 and the host code (in order to use the SDK)? I notice sizeof is used quite often to determine the size of the different message parameters, and I was wondering whether potential implicit padding was taken into account.
Thanks,
Arjan
You can print out the member addresses if you are not quite sure about the alignment.
@summer20100514 perhaps I should clarify my question.
Let's take struct proxr_enable_req for example:
// sdk\ble_stack\profiles\prox\proxr\proxr_task.h
/// Parameters of the @ref PROXR_ENABLE_REQ message
struct proxr_enable_req
{
/// Connection Handle
uint16_t conhdl;
/// Security level
uint8_t sec_lvl;
/// Saved LLS alert level to set in ATT DB
uint8_t lls_alert_lvl;
/// TX Power level
int8_t txp_lvl;
};
When creating a PROXR_ENABLE_REQ GTL packet, sizeof(struct proxr_enable_req) is used to determine the parameter length. In this case the compiled size of the struct, and thus the value for the packet parameter length, is 6; an additional byte of padding is added just after member txp_lvl. In other words: the last member is padded with the number of bytes required to make the total size of the structure a multiple of the largest alignment of any structure member.
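This is easy to verify with offsetof (a quick self-contained snippet; on a typical 32-bit little-endian target it prints 0 2 3 4 6):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct proxr_enable_req  /* repeated here from proxr_task.h */
{
    uint16_t conhdl;
    uint8_t  sec_lvl;
    uint8_t  lls_alert_lvl;
    int8_t   txp_lvl;
};

int main(void)
{
    /* Member offsets plus the total (padded) size. */
    printf("%zu %zu %zu %zu %zu\n",
           offsetof(struct proxr_enable_req, conhdl),
           offsetof(struct proxr_enable_req, sec_lvl),
           offsetof(struct proxr_enable_req, lls_alert_lvl),
           offsetof(struct proxr_enable_req, txp_lvl),
           sizeof(struct proxr_enable_req));
    return 0;
}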
When casting the incoming byte array to the message's corresponding parameter struct type, we need to be sure that sender and receiver agree on whether this implicit padding is taken into account. In the case of proxr_enable_req we wouldn't have a real issue dereferencing the individual members, as padding is merely added after the last member. However, if you take the example below, individual members might not be where you'd expect them to be (depending on whether sender and receiver assume different rules regarding struct packing).
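For illustration, a made-up struct (not from the SDK) where the padding lands between members:

struct interior_pad_example
{
    uint8_t  flags;   // offset 0
                      // 1 pad byte inserted here by the compiler
    uint16_t handle;  // offset 2, not 1
    uint8_t  level;   // offset 4
                      // 1 trailing pad byte -> sizeof == 6
};

A sender that writes these fields back-to-back on the wire would put handle at byte 1, while a receiver casting the received buffer to this struct type would read handle from byte 2.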
Long story short, I'm curious to learn from Dialog itself which design choice was made with the DA SDK. Are packet parameters sent with or without pad bytes in between fields?
I hadn't noticed this matter before, but nothing wrong seemed to happen. Maybe the padding byte is always added after the last member?
Since I have not yet come across any document that explicitly specifies the data endianness, or whether data structures are packed or pad bytes exist in between struct members, my guess is that up until now the SDK has only been used in / intended for (32-bit) platforms with a little-endian memory architecture, without structs being packed (for example, I did not find any __attribute__((packed)) in the SDK code).
As long as both systems use sizeof on the same struct types for setting the GTL packet parameter length, and cast the incoming and outgoing parameter byte arrays to those struct types, everything will work just fine (as both platforms expect the same struct member padding / alignment).
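In code, the current approach boils down to something like this (a sketch; the receive-side naming is my own, and the buffer must be suitably aligned):

#include <stdint.h>

/* 'params' points at the PAR_LEN parameter bytes of an incoming GTL
   message; struct proxr_enable_req as shown earlier in this thread. */
static void handle_proxr_enable_req(const uint8_t *params)
{
    const struct proxr_enable_req *req =
        (const struct proxr_enable_req *)(const void *)params;

    /* Only valid because both sides compile the struct with the same
       (unpacked) layout and the same byte order. */
    uint16_t conhdl = req->conhdl;
    (void)conhdl;
}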
Even though I believe this is the case with how the SDK has been used so far, I would really appreciate a response from Dialog itself regarding this matter.
- Is it correct to assume that structs are not packed (implicit padding added by the compiler)?
- Is it correct that no explicit padding is added to make sure that members align properly and struct sizes are a multiple of the size of their largest member (i.e. this is left to the compiler)?
Please note that I am absolutely fine with this approach (with our current host platform), as our host also uses a 32-bit ARM Cortex-M0+ processor. Both the DA14580 and the host processor are configured with little-endian byte order. In other words, sizeof() will return the same result on both platforms, and struct members will be aligned exactly the same (with the same implicit padding added by the compiler).
I have worked quite a lot with these types of byte-oriented, packet-based protocols in cross-platform solutions (with different memory architectures). Perhaps a good future addition to the SDK would be an extra level of abstraction: specify that no pad bytes exist in between fields, and use a write/read module that writes/reads multi-byte data types in an endianness-independent manner. This way you remove the platform-dependent aspect of it all.
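Such a module could be as small as this (a sketch with my own naming, not an SDK API):

#include <stddef.h>
#include <stdint.h>

/* Write a 16-bit value as little-endian bytes, whatever the host order. */
static inline uint8_t *write_u16_le(uint8_t *p, uint16_t v)
{
    p[0] = (uint8_t)(v & 0xFFu);
    p[1] = (uint8_t)(v >> 8);
    return p + 2;
}

/* Read a 16-bit little-endian value, whatever the host order. */
static inline uint16_t read_u16_le(const uint8_t *p)
{
    return (uint16_t)((uint16_t)p[0] | ((uint16_t)p[1] << 8));
}

/* Packing proxr_enable_req (as shown earlier) field by field: PAR_LEN
   becomes exactly 5, with no dependence on compiler padding. */
static size_t pack_proxr_enable_req(uint8_t *buf,
                                    const struct proxr_enable_req *req)
{
    uint8_t *p = buf;
    p = write_u16_le(p, req->conhdl);
    *p++ = req->sec_lvl;
    *p++ = req->lls_alert_lvl;
    *p++ = (uint8_t)req->txp_lvl;
    return (size_t)(p - buf);
}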
I'm a big fan of the SDK (especially 5.0.3, love it!), I just like to get things cleared up as much as possible :-).
Looking forward to your response Dialog.
Kind regards,
Arjan
Hi abremen,
The alignment is left to the compiler to decide; the reason for that is that Windows and ARM have the same endianness (little endian) and the same type sizes. In the case of a different architecture this approach will fail and you will have to declare the structs explicitly.
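For example (GCC / ARM compiler syntax; an illustration only, not something that is in the SDK):

#include <stdint.h>

/* Explicitly packed variant: sizeof == 5, no pad bytes anywhere. */
struct __attribute__((packed)) proxr_enable_req_packed
{
    uint16_t conhdl;
    uint8_t  sec_lvl;
    uint8_t  lls_alert_lvl;
    int8_t   txp_lvl;
};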
Thanks MT_dialog
Hi MT_dialog,
Thanks for the confirmation. This topic may now be closed.
Kind regards,
Arjan