Testing on a TMS320C6678 EVM: I started from the helloworld project in the MCSDK and slightly modified the MulticastTest() function found in the NDK installation directory to run a multicast receive test.
The code is shown below:
/*-------------------------------------------------------------------*/
/* MulticastTest()                                                   */
/* Test the Multicast socket API.                                    */
/*-------------------------------------------------------------------*/
void dtask_udp_multicast (void)
{
    SOCKET             sudp1 = INVALID_SOCKET;
    struct sockaddr_in sin1;
    char               buffer[1000];
    int                reuse = 1;
    struct ip_mreq     group;
    fd_set             msockets;
    int                iterations = 0;
    int                cnt;
    CI_IPNET           NA;

    /* Raise priority to transfer data & wait for the link to come up. */
    TaskSetPri(TaskSelf(), 1);

    /* Allocate the file environment for this task. */
    fdOpenSession( TaskSelf() );

    printf ("=== Executing Multicast Test on Interface 1 ===\n");

    /* Create our UDP multicast socket. */
    sudp1 = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if( sudp1 == INVALID_SOCKET )
    {
        printf ("Error: Unable to create socket\n");
        return;
    }

    /* Set Port = 7003, leaving IP address = Any. */
    bzero( &sin1, sizeof(struct sockaddr_in) );
    sin1.sin_family = AF_INET;
    sin1.sin_port   = htons(7003);

    /* Proceed only if an IP address is configured on interface 1. */
    if (CfgGetImmediate( 0, CFGTAG_IPNET, 1, 1, sizeof(NA), (UINT8 *)&NA) != sizeof(NA))
    {
        printf ("Error: Unable to get IP Address Information\n");
        fdClose (sudp1);
        return;
    }

    /* Set the reuse-port socket option. */
    if (setsockopt(sudp1, SOL_SOCKET, SO_REUSEPORT, (char *)&reuse, sizeof(reuse)) < 0)
    {
        printf ("Error: Unable to set the reuse port socket option\n");
        fdClose (sudp1);
        return;
    }

    /* Now bind the socket. */
    if (bind (sudp1, (PSA) &sin1, sizeof(sin1)) < 0)
    {
        printf ("Error: Unable to bind the socket.\n");
        fdClose (sudp1);
        return;
    }

    /* Now join the group on socket sudp1.
     * Group: 233.0.0.1 */
    group.imr_multiaddr.s_addr = inet_addr("233.0.0.1");
    group.imr_interface.s_addr = NA.IPAddr;
    if (setsockopt (sudp1, IPPROTO_IP, IP_ADD_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
    {
        printf ("Error: Unable to join multicast group\n");
        fdClose (sudp1);
        return;
    }

    printf ("-----------------------------------------\n");
    printf ("Socket Identifier %d has joined the following:-\n", sudp1);
    printf ("  - Group 233.0.0.1\n");
    printf ("-----------------------------------------\n");

    while (iterations < 4)
    {
        /* Initialize the FD set. */
        FD_ZERO(&msockets);
        FD_SET(sudp1, &msockets);

        /* Wait for the multicast packets to arrive. */
        cnt = fdSelect( (int)sudp1, &msockets, 0, 0, 0 );
        if(FD_ISSET(sudp1, &msockets))
        {
            cnt = (int)recv (sudp1, (void *)buffer, sizeof(buffer), 0);
            if( cnt >= 0 )
                printf ("Socket Identifier %d received %d bytes of multicast data\n", sudp1, cnt);
            else
                printf ("Error: Unable to receive data\n");

            /* Increment the iterations. */
            iterations++;
        }
    }

    /* Once the packets have been received, leave the multicast group
     * through the proper API. */
    if (setsockopt (sudp1, IPPROTO_IP, IP_DROP_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
    {
        printf ("Error: Unable to leave multicast group\n");
        fdClose (sudp1);
        return;
    }
    NtIPN2Str (group.imr_multiaddr.s_addr, &buffer[0]);
    printf ("Leaving group %s through IP_DROP_MEMBERSHIP\n", buffer);

    /* Close the socket; this internally leaves any remaining groups. */
    fdClose (sudp1);
    printf("== End Multicast Test ==\n\n");

    TaskSleep(2000);
    fdCloseSession( TaskSelf() );
    TaskDestroy( TaskSelf() );
}
The problem: once the program is running, the board responds to ping and receives unicast UDP packets, but it never receives any packets sent to the multicast group.
I cannot figure out the cause.
Adding the same code to the helloworld project on a C6455 EVM, the multicast test does receive multicast packets, so the code itself should be fine.
My question: is the C6678 receive path classifying packets from the multicast group as invalid and dropping them?
Could this be related to the Security module?
Any help would be much appreciated, thanks!
Andy Yin1:
The 6678 GbE switch contains an ALE that can be configured to filter and route packets by address; check the NDK user guide, there should also be related configuration APIs.
See these related E2E threads:
https://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/376796
https://e2e.ti.com/support/embedded/tirtos/f/355/t/202568
Wenguo Li1:
Reply to Andy Yin1:
After reading this together with the driver from an older NSP release, it looks like NIMU_eth leaves many NDK features unimplemented.
The Emacioctl functions are all empty, and E2E says NIMU_eth will not be updated further, so this problem will not be fixed there.
Is there anyone with deep EMAC driver experience who can suggest where to start modifying it?
Wenguo Li1:
Reply to Wenguo Li1:
I have been modifying the driver for several weeks now.
The ALE is configured,
and I have gone through the PA packet-classification flow,
but I still cannot find a solution; I have tried many times without success.
I don't know where the key point is.
Please help!
As mentioned above, we are willing to pay a support fee to get this solved;
working on the driver is really painful.
Allen35065:
Reply to Wenguo Li1:
The C6678 NIMU does not support multicast, and unfortunately there is no upgrade planned at the moment.
The EMAC module on the C665x, C645x and C647x is different from the C6678's; the NSP ports for those devices do support multicast, so if you need multicast your only option is to modify the driver using that code as a reference.
The NDK only uses the PA's ALE filtering feature, which you have already configured, so that part should be fine; the main issue is receive-queue management, because the C6678's queue management mechanism differs from the older receive handling.
After configuring everything, send multicast packets and then check the C6678 EMAC statistics registers to see whether packets arrived. If they were physically received, the receive-packet handling is what mainly needs to change.
Wenguo Li1:
Reply to Allen35065:
Hello Allen Yin,
Thank you very much for your reply!
1. In the EMAC statistics registers I can see that packets are indeed being received.
2. Many posts and documents say the packets are dropped inside the PA, e.g. this thread: http://www.deyisupport.com/question_answer/dsp_arm/c6000_multicore/f/53/p/12517/44658.aspx#44658
It says the PA PDK library function Pa_configExceptionRoute should be called to add the exception routes below.
Following the paunittest example, I added the exception routes with this code:
#define T8_NUM_EXCEPTION_ROUTES  5

static int t8ErouteTypes[] = {
    pa_EROUTE_L2L3_FAIL,
    pa_EROUTE_MAC_BROADCAST,
    pa_EROUTE_MAC_MULTICAST,
    pa_EROUTE_IP_BROADCAST,
    pa_EROUTE_IP_MULTICAST
};

paReturn_t   paret;
Int          cmdDest;
UInt16       cmdSize;
paCmdReply_t cmdReply = { pa_DEST_HOST,  /* Dest */
                          0,             /* Reply ID (returned in swinfo0) */
                          0,             /* Queue */
                          0 };           /* Flow ID */

/* Issue the exception route command. */
cmdReply.replyId = T8_CMD_SWINFO0_EROUTE_CFG_ID;
testCommonConfigExceptionRoute (res_mgr_get_painstance(), T8_NUM_EXCEPTION_ROUTES,
                                t8ErouteTypes, t8Eroutes, &cmdReply,
                                &cmdDest, &cmdSize, &paret);
My testCommonConfigExceptionRoute differs slightly from the one in paunittest; here is the code:
/* Provide an Exception Route configuration to the PA sub-system. */
Cppi_HostDesc *testCommonConfigExceptionRoute (Pa_Handle passHandle, int nRoute,
                                               int *routeTypes, paRouteInfo_t *eRoutes,
                                               paCmdReply_t *repInfo, Int *cmdDest,
                                               UInt16 *cmdSize, paReturn_t *paret)
{
    Cppi_HostDesc *hd;
    //Qmss_Queue   q;
    UInt32         psCmd;

    /* Get a Tx free descriptor to send a command to the PA PDSP. */
    if ((QMSS_QPOP (gTxCmdFreeQHnd, QHANDLER_QPOP_FDQ_NO_ATTACHEDBUF,
                    (Cppi_HostDesc **)&hd )) != NULL)
    {
        platform_write ("Error obtaining a Tx free descriptor \n");
        return NULL;
    }

    *cmdSize = hd->origBufferLen;

    *paret = Pa_configExceptionRoute (passHandle, nRoute, routeTypes, eRoutes,
                                      (paCmd_t) hd->buffPtr, cmdSize, repInfo, cmdDest);

    /* Return NULL on PA failure. */
    if (*paret != pa_OK)
        return (NULL);

#ifdef PA_SIM_BUG_4BYTES
    *cmdSize = (*cmdSize + 3) & ~3;
#endif

    /* Setup the return for the descriptor */
    //q.qMgr = 0;
    //q.qNum = gTxCmdReturnQHnd;
    //Cppi_setReturnQueue (Cppi_DescType_HOST, (Cppi_Desc *)hd, q);

    /* Mark the packet as a configuration packet. */
    psCmd = ((uint32_t)(4 << 5) << 24);
    Cppi_setPSData (Cppi_DescType_HOST, (Cppi_Desc *)hd, (UInt8 *)&psCmd, 4);

    hd->buffLen = *cmdSize;
    Cppi_setPacketLen (Cppi_DescType_HOST, (Cppi_Desc *)hd, *cmdSize);

    return (hd);
}
Even with these additions I still cannot receive multicast packets; I don't know whether I am adding the routes incorrectly or something else is wrong.
Allen Yin, could you provide a concrete example of adding exception routes? The example in paunittest covers multiple routes
and still differs from the NIMU implementation, so I am not sure how to configure the exception routes correctly.
Looking forward to your reply, thank you!
Allen35065:
Reply to Wenguo Li1:
I don't have an example for this, and without further NDK development there are unlikely to be more resources;
adding multicast is not easy to do and may require changing the entire receive path.
Allen35065:
Reply to Wenguo Li1:
From the WIKI explanation, the PA only filters MAC addresses. The open question is which queue (and its mapped interrupt) a multicast packet gets routed to; this may simply not be implemented in NIMU.

Q: Will the NIMU driver use the PA? Is there any release combining NIMU and PA?

A: No. The NIMU layer calls the underlying hardware driver/CSL. On Keystone devices, the NIMU code does not use the PA; it bypasses the PA and sends packets directly to QMSS Queue 648. http://processors.wiki.ti.com/index.php/BIOS_MCSDK_2.0_User_Guide#Network_Interface_Management_Unit_.28NIMU.29_Driver

TI provides both the NDK and the PA LLD. An application can use either the NDK, or the PA LLD together with a network stack provided by the customer. If you choose the NDK, your application should interface with the NDK only, i.e. invoke NDK APIs for all data traffic. If you choose the PA LLD, you need to write your own network stack to interface with the PA LLD and other low-level software stacks such as the CPPI and QMSS LLDs.

The NDK itself is device-independent; it uses the device-specific NIMU to interface with the low-level device driver. In the current implementation, the TCI6678 NIMU only uses the PASS to perform device MAC address filtering, so that only broadcast, multicast and device-specific MAC packets are delivered to the NDK. In the egress direction, TX packets are pushed to the CPSW queue (Q#648) directly. It is up to the platform team to determine how much PASS functionality NIMU will take advantage of in the future.

On Keystone devices, the NIMU layer is the interface between the NDK stack and the NETCP. It does not currently utilize the PA subsystem. There is no release combining NIMU and PA.