You might try a wired Ethernet-to-serial adapter, which would avoid the USB polling latency.
I thought the standard serial driver looked at the hardware to determine the maximum realistic baud rate. Some chips may claim 916k baud, but only if you run them at specific clock speeds, because the low clock divisors used at high rates leave no room for timing error.
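How close a 16550-style UART can get to a requested rate depends on an integer clock divisor. This is a sketch, assuming the classic 16x-oversampling scheme; the crystal frequencies below are common illustrative values, and real parts vary:

```c
#include <assert.h>

/* 16550-style UARTs generate the bit clock as input_clock / (16 * divisor),
 * where divisor is a positive integer.  Near the top of the range the
 * divisor is 1 or 2, so there is no fine adjustment left: the crystal
 * either divides evenly to the target rate or it doesn't. */
static unsigned actual_baud(unsigned clock_hz, unsigned divisor)
{
    return clock_hz / (16u * divisor);
}

/* Integer divisor closest to the requested rate (never less than 1). */
static unsigned nearest_divisor(unsigned clock_hz, unsigned target_baud)
{
    unsigned d = (clock_hz + 8u * target_baud) / (16u * target_baud);
    return d > 0 ? d : 1;
}
```

With the classic 1.8432 MHz clock the best divisor is 1, which yields only 115200 baud, nowhere near 916.2k; a 14.7456 MHz clock with divisor 1 hits 921600 exactly. That is the sense in which the chip "claims" the rate only at specific clock speeds.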
A typical serial port also has added latency from the FIFO. There may be a way to turn the FIFO off (a registry option), although this risks dropping bytes if the interrupt response isn't fast enough.
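To put a number on the FIFO latency: waiting for the receive trigger level to fill adds roughly (trigger - 1) byte times before the interrupt fires. A sketch, assuming 10-bit 8N1 frames; real parts also fire a character-timeout interrupt for partial fills:

```c
/* Extra receive latency (in microseconds) from waiting for the FIFO
 * trigger level to fill, assuming 10-bit frames (start + 8 data + stop).
 * The first byte must arrive anyway; the wait is for the remaining
 * trigger_bytes - 1 bytes. */
static double fifo_extra_latency_us(unsigned baud, unsigned trigger_bytes)
{
    return (trigger_bytes - 1) * 10.0 * 1000000.0 / baud;
}
```

Even at 921600 baud, a 14-byte trigger adds about 141 usec, which by itself blows a 60 usec budget. That is why disabling (or shrinking) the FIFO matters for this kind of latency target.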
Realize that Windows is not a hard real-time OS, and that you may not consistently get 60 usec I/O latency. Sorry to be a messenger of reality.
You say you want to send/receive 32 bits in less than 60 usec. Back-of-the-envelope calculations say you will need a serial data rate of more than 32 bits / 60 usec (about half a megabaud), which works out to roughly one byte every 15 usec. If you get a processor interrupt on every byte, that's 66,666 interrupts/sec, which is a pretty hefty interrupt rate. If you are going to both send 4 bytes and receive 4 bytes, a byte at a time, every 60 usec, that's essentially more than 125K I/Os/sec, well out of the slow-device range. Even if you read and write 4 bytes per I/O request, with a single interrupt each cycle, that's approaching 20,000 interrupts/sec. Ethernet devices frequently throttle interrupt rates to something like 5,000 interrupts/sec.
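The arithmetic above can be reproduced directly. A sketch; the 60 usec period and 4-byte payload are from the thread, and the interrupt counts are the ideal (no-overhead) case:

```c
/* Interrupts per second when an I/O cycle repeats every period_us
 * microseconds and each cycle raises ints_per_cycle interrupts. */
static unsigned ints_per_sec(unsigned period_us, unsigned ints_per_cycle)
{
    return 1000000u * ints_per_cycle / period_us;
}
```

One interrupt per byte, 4 bytes out plus 4 bytes in, every 60 usec is ints_per_sec(60, 8), about 133K interrupts/sec; even coalescing everything into a single interrupt per 60 usec cycle still leaves about 16.7K/sec, which is where the "approaching 20,000" figure comes from.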
You might consider using a $5 microcontroller to which you download/upload batches of data, and which can just sit there and talk to your FPGA. Your latency requirements are in kind of an ugly range: pretty slow for a real ($$) bus-master DMA controller, but too fast for a slow dumb controller (serial port) or for USB (slave polling latency). Perhaps one of the USB expert folks here could comment on whether USB interrupt endpoints might have less latency (Wikipedia claims < 1 usec latency, but that seems rather low).
You sound like a typical embedded hardware designer without such a good grasp of the I/O realities of general-purpose OSs. To you, 15 usec is pretty slow; to the OS, having to respond to 66K interrupts/sec is pretty fast. The OS pretty much assumes I/O will either be trivial (like 115k baud serial on a 16-byte FIFO, which is on the order of a thousand interrupts/sec), or else that there is a smart controller that handles the time-critical stuff while the OS deals with batches of data. Something like a storage or network controller can process millions of requests/sec, but only in batches and with considerably longer latency.
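For comparison, the "trivial" case works out like this. A sketch, assuming 10-bit frames and one receive interrupt per FIFO-trigger batch:

```c
/* Receive interrupts per second at a given baud rate when the UART
 * raises one interrupt per `batch` received bytes (10-bit frames
 * assumed, so bytes/sec is baud / 10). */
static unsigned rx_irq_rate(unsigned baud, unsigned batch)
{
    return baud / 10u / batch;
}
```

115200 baud with a 14-byte trigger is a bit over 800 interrupts/sec, which any OS shrugs off; the same rate byte-at-a-time is 11,520, and 916.2k byte-at-a-time is over 90,000, which is firmly in "smart controller" territory.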
So can you tell us more: does there have to be a user-mode-app-to-FPGA round trip every 60 usec?
Jan
-----Original Message-----
From: xxxxx@lists.osr.com [mailto:xxxxx@lists.osr.com] On Behalf Of xxxxx@gmail.com
Sent: Tuesday, October 09, 2012 10:02 PM
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] Serial driver increase baud limit
“All” I want is a baud rate of 916.2k. Still a “standard” baud rate, and I don’t want any additional functions (more data bits per frame, interrupts, etc.). I realize hacking the windows serial driver is far from ideal but I am convinced it is the bottleneck.
The problem with anything USB is the 1ms minimum interval that exists between data frames for synchronization. This means that in between instructions for read/write over USB there is a mandatory 1ms wait before sending/receiving more data.
This is no problem when dealing with large data sizes, but with a payload of only 4 bytes it creates a significant delay. If I were sending a large amount of data that could occupy that 1ms gap it would be fine. USB is completely out of the question.
I will reiterate that my requirements are for 32 bits to be sent/received in less than 60us.
NTDEV is sponsored by OSR
For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars
To unsubscribe, visit the List Server section of OSR Online at http://www.osronline.com/page.cfm?name=ListServer