
Re: Maximum interrupts per second

OSR_Community_User Member Posts: 110,217
[email protected] said:
> The numbers cited 6us vs 60us on identical dual boot hardware are
> silly. One has to wonder exactly what NT could be doing between the
> interrupt assertion and the isr that would account for an order of
> magnitude difference. Without some explanation I have to consider this
> to be a measuring error rather than anything real.


[email protected] said:
> Seems like there could indeed be many things that could account for a
> difference. I'm not an expert on the fine details of the Linux kernel,
> but some possibilities are:

Depending on how interrupts are handled in an operating system, there
may even be an entire context switch. Linux avoids dumping things like
FP registers unless a process is suspended, for example (no FP in the
kernel).

I can easily imagine excessive ISR dispatching "cleverness" causing an
unreasonably long code path between IRQ and the device driver's ISR.

I tried to measure the situation as carefully as I could. I used an
oscilloscope to measure the times that the INTA# signal was active, so
there was no chance of timer sampling error. I also put the register write
that cleared the IRQ at the beginning of the ISR (*not* the DPC) on both
NT and Linux. Then I set the device to generate interrupts.

The measurement was from the falling edge of INTA# (the scope trigger)
to the rising edge. The interrupts were generated continuously, and I
got a nice clean trace on the scope. The performance was very regular,
with little jitter in the position of the rising edge (clearing) of the
INTA# signal.

The difference was really quite shocking, and it was pretty obvious at
the macroscopic scale--file transfers from the board really did seem
much slower under NT. I have since redesigned the communication process to
depend much less on interrupts and to use shared memory instead. The device
has an embedded i960 and can DMA to host memory, so I redesigned the
protocol with the host driver to use huge shared buffers and generate
fewer interrupts.

You really want to avoid interrupts when designing hardware for NT.

Now I have not repeated the measurement with W2K or Linux 2.2, so the
gulf may have since closed a bit, but...

Anyhow, that's my experience. Learn from it, or laugh at the fact that
I had to learn it the hard way. :-)
--
Steve Williams "The woods are lovely, dark and deep.
[email protected] But I have promises to keep,
[email protected] and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."

Comments

  • Mark_de_Wit Member Posts: 2
    On Mon, 5 Jun 2000, Jan Bottorff wrote:

    > >The numbers cited 6us vs 60us on identical dual boot hardware are
    > >silly. One has to wonder exactly what NT could be doing between the
    > >interrupt assertion and the isr that would account for an order of magnitude
    > >difference.

    There's a good discussion of NT interrupt behaviour in Dr. Dobb's Journal,
    April 1998. DDJ doesn't provide the article on-line, though. It's an
    interesting view into NT interrupt handling, including why interrupts can
    be handled out of order or delayed (and before you decide out-of-order
    interrupts are good: they are not, if low-priority interrupts prevent your
    high-priority interrupt from being handled).

    Mark
    --
    Mark de Wit, aka [LoL]Slothboy
    University of Glasgow Computing Science Department
    [email protected] http://www.dcs.gla.ac.uk/~dew
    Office phone: +44-(0)141-339 8855 ext. 0914
    ICQ # 7179380

    "Television! Teacher...Mother...Secret Lover...Let us all bask in
    television's warm, glowing, warming glow." -H.J.Simpson
  • OSR_Community_User Member Posts: 110,217
    Sure, the APIC or MPS HAL more or less randomly assigns bus interrupts to
    APIC slots, but that in itself does not account for the 6us vs 60us
    difference. The original poster insists that he measured from interrupt
    assertion to ISR invocation, and I guess I believe him. I think he also
    claims that there was no interrupt sharing, and I hope he is also sure that
    there were no conflicting interrupts from other devices. The problem I have
    is that there really isn't a whole lot of software in NT between the IDT
    vector and the ISR. The IDT vector points at a glob of KE code in the
    interrupt object that does not do much at all: get the spinlock, run the
    interrupt object's ISR list. Getting the spinlock is certainly not
    something that Linux does. I also wonder whether Linux is even using the
    APIC (in APIC mode) on uniprocessor x86 platforms. It is possible that this
    by itself is the difference, but it raises a lot of other questions.

    On a PCI bus you generally have no control over what your slot's interrupt
    priority will be, so the fact that NT might choose to rearrange this is not
    in itself wrong or bad. If it slowed down the bus, that would be an issue,
    but I'm not exactly sure what "out of order interrupt processing" means on
    an MP platform anyway. The idea of the APIC architecture is to get as much
    concurrent interrupt processing as possible, and that by definition means
    that the order of overlapping interrupts is essentially undefined. Note
    that the APIC acks the bus interrupt as soon as it delivers the interrupt
    message to a target CPU, where that interrupt can be queued, so the bus is
    not being held up.
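    The dispatch path described above ("get the spinlock, run the interrupt object ISR list") can be sketched in user-mode C. This is purely illustrative: the struct and function names are invented, and the spin-lock calls real NT code would make are shown only as comments. The one structural point it demonstrates is chained dispatch: each registered ISR is called in turn until one returns TRUE to claim the interrupt.

    ```c
    #include <stdio.h>
    #include <stdbool.h>

    /* Invented names; a sketch of the KE stub between the IDT vector
       and a driver's ISR, not actual NT code. */
    typedef bool (*ISR)(void *context);

    struct interrupt_object {
        ISR service_routine;
        void *context;
        struct interrupt_object *next;   /* chained (shared) interrupts */
    };

    static int dispatch(struct interrupt_object *head)
    {
        int calls = 0;
        /* real code would first acquire the interrupt spin lock */
        for (struct interrupt_object *i = head; i != NULL; i = i->next) {
            calls++;
            if (i->service_routine(i->context))
                break;                   /* ISR returned TRUE: interrupt claimed */
        }
        /* real code would release the spin lock and dismiss the interrupt */
        return calls;
    }

    static bool not_mine(void *c) { (void)c; return false; }
    static bool mine(void *c)     { (void)c; return true;  }

    int main(void)
    {
        struct interrupt_object b = { mine,     NULL, NULL };
        struct interrupt_object a = { not_mine, NULL, &b  };
        printf("ISRs called: %d\n", dispatch(&a));  /* 2: first declines, second claims */
        return 0;
    }
    ```

    Even with this little work per vector, the spin-lock acquisition and list walk are costs Linux's simpler dispatch does not pay, though by themselves they are nowhere near enough to explain a 54us gap.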

  • OSR_Community_User-35 Member Posts: 154
    I'm sure it's not about connecting a huge number of devices, but about
    maximizing bandwidth to a smaller number. Fibre Channel is, after all,
    'only' 100 MB/s. Two or three adapter cards can max out a fibre
    cable and/or their PCI bus, so some of these big servers add more cards
    on more PCI buses to keep their processors and target storage busy.

    -----------------------------------------------------------------------
    Dave Cox
    Hewlett-Packard Co.
    HPSO/SSMO (Santa Barbara)
    https://ecardfile.com/id/Dave+Cox


    -----Original Message-----
    From: Paul Bunn [mailto:[email protected]]
    Sent: Thursday, June 01, 2000 8:19 AM
    To: NT Developers Interest List
    Subject: [ntdev] RE: Maximum interrupts per second


    That doesn't make any sense to me. A single FC controller can handle
    120-odd devices. Does he really need to connect to more than 2000 devices
    on a single system? Is it possible to use a Fibre Channel switch (e.g.
    http://www.vixel.com) to reduce the number of controllers required?

    Regards,

    Paul Bunn, UltraBac.com, 425-644-6000
    Microsoft MVP - WindowsNT/2000
    http://www.ultrabac.com



    > -----Original Message-----
    > From: Gary Little [mailto:[email protected]]
    > Sent: Thursday, June 01, 2000 8:14 AM
    > To: NT Developers Interest List
    > Subject: Maximum interrupts per second
    >
    > Somewhere in the course of my travels in developing NT device drivers I
    > came across one of those magic numbers that represents the maximum number
    > of interrupts per second: 10,000. Does NT begin governing interrupts once
    > interrupts exceed 10K per second, and if so, how does this scale across
    > multiple CPUs? It seems that we have a customer that wants to use 20 PCI
    > fibre channel adapters for a large storage system, and is expecting on the
    > order of 200,000 interrupts per second. Will NT or Win2K choke when it
    > only has about 5 microseconds between interrupts?
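    A quick sanity check of the arithmetic in the question (a user-mode sketch; the 10,000/s figure is the rumored ceiling under discussion, not a documented limit, and the 60 us latency is the measurement reported earlier in this thread):

    ```c
    #include <stdio.h>

    int main(void)
    {
        const long adapters = 20;
        const long ints_per_adapter = 10000;   /* the rumored 10K/s figure */
        long total = adapters * ints_per_adapter;

        double spacing_us = 1e6 / (double)total;   /* mean time between interrupts */

        printf("total interrupts/s: %ld\n", total);
        printf("spacing: %.1f us\n", spacing_us);
        /* At the ~60 us IRQ-to-ISR latency measured earlier in the thread,
           this many interrupts arrive before the first one is serviced: */
        printf("arrivals per 60 us latency window: %.0f\n", 60.0 / spacing_us);
        return 0;
    }
    ```

    With only 5 us between interrupts and a 60 us path to the ISR, a single CPU falls a dozen interrupts behind before servicing the first, which is why the thread keeps steering toward coalescing and shared-memory designs rather than raw interrupt rate.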

  • OSR_Community_User Member Posts: 110,217
    A lot of this discussion has been "He should do it ..." or "I once did ...".
    Those are all valid points and good for discussion, but they miss the
    question.

    Is there a known limit, possibly varying depending upon CPU, numbers of
    CPU's, and speed of CPU's, for the maximum number of interrupts that
    NT/Win2000 will allow? Once this limit is exceeded will interrupts then be
    governed and throttled back to a more tolerable level?

    I remember reading a posting 2 years ago that indicated that after
    interrupts exceeded 10K per second, they were throttled back and instead of
    about 60 us to the ISR, the interval jumped to about 160 us before an ISR
    would be called. Naturally, that was years ago, and I remember neither the
    author nor the exact subject of the posting.

  • OSR_Community_User Member Posts: 110,217
    ----- Original Message -----
    From: "Gary Little" <[email protected]>
    To: "NT Developers Interest List" <[email protected]>
    Sent: Thursday, June 08, 2000 2:22 PM
    Subject: [ntdev] RE: Maximum interrupts per second


    > A lot of this discussion has been "He should do it ..." or "I once did ...".
    > Those are all valid points and good for discussion, but it misses the
    > question.
    >
    > Is there a known limit, possibly varying depending upon CPU, numbers of
    > CPU's, and speed of CPU's, for the maximum number of interrupts that
    > NT/Win2000 will allow? Once this limit is exceeded will interrupts then be
    > governed and throttled back to a more tolerable level?
    >
    > I remember reading a posting 2 years ago that indicated that after
    > interrupts exceeded 10K per second, they were throttled back and instead of
    > about 60 us to the ISR, the interval jumped to about 160 us before an ISR
    > would be called. Naturally, that was years ago, and I remember neither
    > the author nor the exact subject of the posting.
    I've seen no evidence of anything like this being programmed into the
    standard HALs. Without interrupt affinity, you could easily get performance
    as described above. It's possible someone did a private HAL along the lines
    you described.
    -DH

