Linux Kernel  3.7.1
 All Data Structures Namespaces Files Functions Variables Typedefs Enumerations Enumerator Macros Groups Pages
de4x5.c
Go to the documentation of this file.
1 /* de4x5.c: A DIGITAL DC21x4x DECchip and DE425/DE434/DE435/DE450/DE500
2  ethernet driver for Linux.
3 
4  Copyright 1994, 1995 Digital Equipment Corporation.
5 
6  Testing resources for this driver have been made available
7  in part by NASA Ames Research Center ([email protected]).
8 
9  The author may be reached at [email protected].
10 
11  This program is free software; you can redistribute it and/or modify it
12  under the terms of the GNU General Public License as published by the
13  Free Software Foundation; either version 2 of the License, or (at your
14  option) any later version.
15 
16  THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
17  WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
18  MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
19  NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
20  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
21  NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
22  USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
23  ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
24  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
25  THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
26 
27  You should have received a copy of the GNU General Public License along
28  with this program; if not, write to the Free Software Foundation, Inc.,
29  675 Mass Ave, Cambridge, MA 02139, USA.
30 
31  Originally, this driver was written for the Digital Equipment
32  Corporation series of EtherWORKS ethernet cards:
33 
34  DE425 TP/COAX EISA
35  DE434 TP PCI
36  DE435 TP/COAX/AUI PCI
37  DE450 TP/COAX/AUI PCI
38  DE500 10/100 PCI Fasternet
39 
40  but it will now attempt to support all cards which conform to the
41  Digital Semiconductor SROM Specification. The driver currently
42  recognises the following chips:
43 
44  DC21040 (no SROM)
45  DC21041[A]
46  DC21140[A]
47  DC21142
48  DC21143
49 
50  So far the driver is known to work with the following cards:
51 
52  KINGSTON
53  Linksys
54  ZNYX342
55  SMC8432
56  SMC9332 (w/new SROM)
57  ZNYX31[45]
58  ZNYX346 10/100 4 port (can act as a 10/100 bridge!)
59 
60  The driver has been tested on a relatively busy network using the DE425,
61  DE434, DE435 and DE500 cards and benchmarked with 'ttcp': it transferred
62  16M of data to a DECstation 5000/200 as follows:
63 
64  TCP UDP
65  TX RX TX RX
66  DE425 1030k 997k 1170k 1128k
67  DE434 1063k 995k 1170k 1125k
68  DE435 1063k 995k 1170k 1125k
69  DE500 1063k 998k 1170k 1125k in 10Mb/s mode
70 
71  All values are typical (in kBytes/sec) from a sample of 4 for each
72  measurement. Their error is +/-20k on a quiet (private) network and also
73  depend on what load the CPU has.
74 
75  =========================================================================
76  This driver has been written substantially from scratch, although its
77  inheritance of style and stack interface from 'ewrk3.c' and in turn from
78  Donald Becker's 'lance.c' should be obvious. With the module autoload of
79  every usable DECchip board, I pinched Donald's 'next_module' field to
80  link my modules together.
81 
82  Up to 15 EISA cards can be supported under this driver, limited primarily
83  by the available IRQ lines. I have checked different configurations of
84  multiple depca, EtherWORKS 3 cards and de4x5 cards and have not found a
85  problem yet (provided you have at least depca.c v0.38) ...
86 
87  PCI support has been added to allow the driver to work with the DE434,
88  DE435, DE450 and DE500 cards. The I/O accesses are a bit of a kludge due
89  to the differences in the EISA and PCI CSR address offsets from the base
90  address.
91 
92  The ability to load this driver as a loadable module has been included
93  and used extensively during the driver development (to save those long
94  reboot sequences). Loadable module support under PCI and EISA has been
95  achieved by letting the driver autoprobe as if it were compiled into the
96  kernel. Do make sure you're not sharing interrupts with anything that
97  cannot accommodate interrupt sharing!
98 
99  To utilise this ability, you have to do 8 things:
100 
101  0) have a copy of the loadable modules code installed on your system.
102  1) copy de4x5.c from the /linux/drivers/net directory to your favourite
103  temporary directory.
104  2) for fixed autoprobes (not recommended), edit the source code near
105  line 5594 to reflect the I/O address you're using, or assign these when
106  loading by:
107 
108  insmod de4x5 io=0xghh where g = bus number
109  hh = device number
110 
111  NB: autoprobing for modules is now supported by default. You may just
112  use:
113 
114  insmod de4x5
115 
116  to load all available boards. For a specific board, still use
117  the 'io=?' above.
118  3) compile de4x5.c, but include -DMODULE in the command line to ensure
119  that the correct bits are compiled (see end of source code).
120  4) if you are wanting to add a new card, goto 5. Otherwise, recompile a
121  kernel with the de4x5 configuration turned off and reboot.
122  5) insmod de4x5 [io=0xghh]
123  6) run the net startup bits for your new eth?? interface(s) manually
124  (usually /etc/rc.inet[12] at boot time).
125  7) enjoy!
126 
127  To unload a module, turn off the associated interface(s)
128  'ifconfig eth?? down' then 'rmmod de4x5'.
129 
130  Automedia detection is included so that in principal you can disconnect
131  from, e.g. TP, reconnect to BNC and things will still work (after a
132  pause whilst the driver figures out where its media went). My tests
133  using ping showed that it appears to work....
134 
135  By default, the driver will now autodetect any DECchip based card.
136  Should you have a need to restrict the driver to DIGITAL only cards, you
137  can compile with a DEC_ONLY define, or if loading as a module, use the
138  'dec_only=1' parameter.
139 
140  I've changed the timing routines to use the kernel timer and scheduling
141  functions so that the hangs and other assorted problems that occurred
142  while autosensing the media should be gone. A bonus for the DC21040
143  auto media sense algorithm is that it can now use one that is more in
144  line with the rest (the DC21040 chip doesn't have a hardware timer).
145  The downside is the 1 'jiffies' (10ms) resolution.
146 
147  IEEE 802.3u MII interface code has been added in anticipation that some
148  products may use it in the future.
149 
150  The SMC9332 card has a non-compliant SROM which needs fixing - I have
151  patched this driver to detect it because the SROM format used complies
152  to a previous DEC-STD format.
153 
154  I have removed the buffer copies needed for receive on Intels. I cannot
155  remove them for Alphas since the Tulip hardware only does longword
156  aligned DMA transfers and the Alphas get alignment traps with non
157  longword aligned data copies (which makes them really slow). No comment.
158 
159  I have added SROM decoding routines to make this driver work with any
160  card that supports the Digital Semiconductor SROM spec. This will help
161  all cards running the dc2114x series chips in particular. Cards using
162  the dc2104x chips should run correctly with the basic driver. I'm in
163  debt to <[email protected]> for the testing and feedback that helped get
164  this feature working. So far we have tested KINGSTON, SMC8432, SMC9332
165  (with the latest SROM complying with the SROM spec V3: their first was
166  broken), ZNYX342 and LinkSys. ZYNX314 (dual 21041 MAC) and ZNYX 315
167  (quad 21041 MAC) cards also appear to work despite their incorrectly
168  wired IRQs.
169 
170  I have added a temporary fix for interrupt problems when some SCSI cards
171  share the same interrupt as the DECchip based cards. The problem occurs
172  because the SCSI card wants to grab the interrupt as a fast interrupt
173  (runs the service routine with interrupts turned off) vs. this card
174  which really needs to run the service routine with interrupts turned on.
175  This driver will now add the interrupt service routine as a fast
176  interrupt if it is bounced from the slow interrupt. THIS IS NOT A
177  RECOMMENDED WAY TO RUN THE DRIVER and has been done for a limited time
178  until people sort out their compatibility issues and the kernel
179  interrupt service code is fixed. YOU SHOULD SEPARATE OUT THE FAST
180  INTERRUPT CARDS FROM THE SLOW INTERRUPT CARDS to ensure that they do not
181  run on the same interrupt. PCMCIA/CardBus is another can of worms...
182 
183  Finally, I think I have really fixed the module loading problem with
184  more than one DECchip based card. As a side effect, I don't mess with
185  the device structure any more which means that if more than 1 card in
186  2.0.x is installed (4 in 2.1.x), the user will have to edit
187  linux/drivers/net/Space.c to make room for them. Hence, module loading
188  is the preferred way to use this driver, since it doesn't have this
189  limitation.
190 
191  Where SROM media detection is used and full duplex is specified in the
192  SROM, the feature is ignored unless lp->params.fdx is set at compile
193  time OR during a module load (insmod de4x5 args='eth??:fdx' [see
194  below]). This is because there is no way to automatically detect full
195  duplex links except through autonegotiation. When I include the
196  autonegotiation feature in the SROM autoconf code, this detection will
197  occur automatically for that case.
198 
199  Command line arguments are now allowed, similar to passing arguments
200  through LILO. This will allow a per adapter board set up of full duplex
201  and media. The only lexical constraints are: the board name (dev->name)
202  appears in the list before its parameters. The list of parameters ends
203  either at the end of the parameter list or with another board name. The
204  following parameters are allowed:
205 
206  fdx for full duplex
207  autosense to set the media/speed; with the following
208  sub-parameters:
209  TP, TP_NW, BNC, AUI, BNC_AUI, 100Mb, 10Mb, AUTO
210 
211  Case sensitivity is important for the sub-parameters. They *must* be
212  upper case. Examples:
213 
214  insmod de4x5 args='eth1:fdx autosense=BNC eth0:autosense=100Mb'.
215 
216  For a compiled in driver, at or above line 548, place e.g.
217  #define DE4X5_PARM "eth0:fdx autosense=AUI eth2:autosense=TP"
218 
219  Yes, I know full duplex isn't permissible on BNC or AUI; they're just
220  examples. By default, full duplex is turned off and AUTO is the default
221  autosense setting. In reality, I expect only the full duplex option to
222  be used. Note the use of single quotes in the two examples above and the
223  lack of commas to separate items. ALSO, you must get the requested media
224  correct in relation to what the adapter SROM says it has. There's no way
225  to determine this in advance other than by trial and error and common
226  sense, e.g. call a BNC connectored port 'BNC', not '10Mb'.
227 
228  Changed the bus probing. EISA used to be done first, followed by PCI.
229  Most people probably don't even know what a de425 is today and the EISA
230  probe has messed up some SCSI cards in the past, so now PCI is always
231  probed first followed by EISA if a) the architecture allows EISA and
232  either b) there have been no PCI cards detected or c) an EISA probe is
233  forced by the user. To force a probe include "force_eisa" in your
234  insmod "args" line; for built-in kernels either change the driver to do
235  this automatically or include #define DE4X5_FORCE_EISA on or before
236  line 1040 in the driver.
237 
238  TO DO:
239  ------
240 
241  Revision History
242  ----------------
243 
244  Version Date Description
245 
246  0.1 17-Nov-94 Initial writing. ALPHA code release.
247  0.2 13-Jan-95 Added PCI support for DE435's.
248  0.21 19-Jan-95 Added auto media detection.
249  0.22 10-Feb-95 Fix interrupt handler call <[email protected]>.
250  Fix recognition bug reported by <[email protected]>.
251  Add request/release_region code.
252  Add loadable modules support for PCI.
253  Clean up loadable modules support.
254  0.23 28-Feb-95 Added DC21041 and DC21140 support.
255  Fix missed frame counter value and initialisation.
256  Fixed EISA probe.
257  0.24 11-Apr-95 Change delay routine to use <linux/udelay>.
258  Change TX_BUFFS_AVAIL macro.
259  Change media autodetection to allow manual setting.
260  Completed DE500 (DC21140) support.
261  0.241 18-Apr-95 Interim release without DE500 Autosense Algorithm.
262  0.242 10-May-95 Minor changes.
263  0.30 12-Jun-95 Timer fix for DC21140.
264  Portability changes.
265  Add ALPHA changes from <[email protected]>.
266  Add DE500 semi automatic autosense.
267  Add Link Fail interrupt TP failure detection.
268  Add timer based link change detection.
269  Plugged a memory leak in de4x5_queue_pkt().
270  0.31 13-Jun-95 Fixed PCI stuff for 1.3.1.
271  0.32 26-Jun-95 Added verify_area() calls in de4x5_ioctl() from a
272  suggestion by <[email protected]>.
273  0.33 8-Aug-95 Add shared interrupt support (not released yet).
274  0.331 21-Aug-95 Fix de4x5_open() with fast CPUs.
275  Fix de4x5_interrupt().
276  Fix dc21140_autoconf() mess.
277  No shared interrupt support.
278  0.332 11-Sep-95 Added MII management interface routines.
279  0.40 5-Mar-96 Fix setup frame timeout <[email protected]>.
280  Add kernel timer code (h/w is too flaky).
281  Add MII based PHY autosense.
282  Add new multicasting code.
283  Add new autosense algorithms for media/mode
284  selection using kernel scheduling/timing.
285  Re-formatted.
286  Made changes suggested by <[email protected]>:
287  Change driver to detect all DECchip based cards
288  with DEC_ONLY restriction a special case.
289  Changed driver to autoprobe as a module. No irq
290  checking is done now - assume BIOS is good!
291  Added SMC9332 detection <[email protected]>
292  0.41 21-Mar-96 Don't check for get_hw_addr checksum unless DEC card
293  only <[email protected]>
294  Fix for multiple PCI cards reported by <[email protected]>
295  Duh, put the IRQF_SHARED flag into request_interrupt().
296  Fix SMC ethernet address in enet_det[].
297  Print chip name instead of "UNKNOWN" during boot.
298  0.42 26-Apr-96 Fix MII write TA bit error.
299  Fix bug in dc21040 and dc21041 autosense code.
300  Remove buffer copies on receive for Intels.
301  Change sk_buff handling during media disconnects to
302  eliminate DUP packets.
303  Add dynamic TX thresholding.
304  Change all chips to use perfect multicast filtering.
305  Fix alloc_device() bug <[email protected]>
306  0.43 21-Jun-96 Fix unconnected media TX retry bug.
307  Add Accton to the list of broken cards.
308  Fix TX under-run bug for non DC21140 chips.
309  Fix boot command probe bug in alloc_device() as
310  reported by <[email protected]> and
312  Add cache locks to prevent a race condition as
313  reported by <[email protected]> and
315  Upgraded alloc_device() code.
316  0.431 28-Jun-96 Fix potential bug in queue_pkt() from discussion
317  with <[email protected]>
318  0.44 13-Aug-96 Fix RX overflow bug in 2114[023] chips.
319  Fix EISA probe bugs reported by <[email protected]>
320  and <[email protected]>.
321  0.441 9-Sep-96 Change dc21041_autoconf() to probe quiet BNC media
322  with a loopback packet.
323  0.442 9-Sep-96 Include AUI in dc21041 media printout. Bug reported
324  by <[email protected]>
325  0.45 8-Dec-96 Include endian functions for PPC use, from work
327  0.451 28-Dec-96 Added fix to allow autoprobe for modules after
328  suggestion from <[email protected]>.
329  0.5 30-Jan-97 Added SROM decoding functions.
330  Updated debug flags.
331  Fix sleep/wakeup calls for PCI cards, bug reported
332  by <[email protected]>.
333  Added multi-MAC, one SROM feature from discussion
334  with <[email protected]>.
335  Added full module autoprobe capability.
336  Added attempt to use an SMC9332 with broken SROM.
337  Added fix for ZYNX multi-mac cards that didn't
338  get their IRQs wired correctly.
339  0.51 13-Feb-97 Added endian fixes for the SROM accesses from
341  Fix init_connection() to remove extra device reset.
342  Fix MAC/PHY reset ordering in dc21140m_autoconf().
343  Fix initialisation problem with lp->timeout in
344  typeX_infoblock() from <[email protected]>.
345  Fix MII PHY reset problem from work done by
347  0.52 26-Apr-97 Some changes may not credit the right people -
348  a disk crash meant I lost some mail.
349  Change RX interrupt routine to drop rather than
350  defer packets to avoid hang reported by
352  Fix srom_exec() to return for COMPACT and type 1
353  infoblocks.
354  Added DC21142 and DC21143 functions.
355  Added byte counters from <[email protected]>
356  Added IRQF_DISABLED temporary fix from
358  0.53 12-Nov-97 Fix the *_probe() to include 'eth??' name during
359  module load: bug reported by
361  Fix multi-MAC, one SROM, to work with 2114x chips:
362  bug reported by <[email protected]>.
363  Make above search independent of BIOS device scan
364  direction.
365  Completed DC2114[23] autosense functions.
366  0.531 21-Dec-97 Fix DE500-XA 100Mb/s bug reported by
368  Fix type1_infoblock() bug introduced in 0.53, from
369  problem reports by
370  <[email protected]> and
372  Added argument list to set up each board from either
373  a module's command line or a compiled in #define.
374  Added generic MII PHY functionality to deal with
375  newer PHY chips.
376  Fix the mess in 2.1.67.
377  0.532 5-Jan-98 Fix bug in mii_get_phy() reported by
379  Fix bug in pci_probe() for 64 bit systems reported
380  by <[email protected]>.
381  0.533 9-Jan-98 Fix more 64 bit bugs reported by <[email protected]>.
382  0.534 24-Jan-98 Fix last (?) endian bug from <[email protected]>
383  0.535 21-Feb-98 Fix Ethernet Address PROM reset bug for DC21040.
384  0.536 21-Mar-98 Change pci_probe() to use the pci_dev structure.
385  **Incompatible with 2.0.x from here.**
386  0.540 5-Jul-98 Atomicize assertion of dev->interrupt for SMP
387  from <[email protected]>
388  Add TP, AUI and BNC cases to 21140m_autoconf() for
389  case where a 21140 under SROM control uses, e.g. AUI
390  from problem report by <[email protected]>
391  Add MII parallel detection to 2114x_autoconf() for
392  case where no autonegotiation partner exists from
393  problem report by <[email protected]>.
394  Add ability to force connection type directly even
395  when using SROM control from problem report by
397  Updated the PCI interface to conform with the latest
398  version. I hope nothing is broken...
399  Add TX done interrupt modification from suggestion
400  by <[email protected]>.
401  Fix is_anc_capable() bug reported by
403  Fix type[13]_infoblock() bug: during MII search, PHY
404  lp->rst not run because lp->ibn not initialised -
405  from report & fix by <[email protected]>.
406  Fix probe bug with EISA & PCI cards present from
407  report by <[email protected]>.
408  0.541 24-Aug-98 Fix compiler problems associated with i386-string
409  ops from multiple bug reports and temporary fix
410  from <[email protected]>.
411  Fix pci_probe() to correctly emulate the old
412  pcibios_find_class() function.
413  Add an_exception() for old ZYNX346 and fix compile
414  warning on PPC & SPARC, from <[email protected]>.
415  Fix lastPCI to correctly work with compiled in
416  kernels and modules from bug report by
417  <[email protected]> et al.
418  0.542 15-Sep-98 Fix dc2114x_autoconf() to stop multiple messages
419  when media is unconnected.
420  Change dev->interrupt to lp->interrupt to ensure
421  alignment for Alpha's and avoid their unaligned
422  access traps. This flag is merely for log messages:
423  should do something more definitive though...
424  0.543 30-Dec-98 Add SMP spin locking.
425  0.544 8-May-99 Fix for buggy SROM in Motorola embedded boards using
426  a 21143 by <[email protected]>.
427  Change PCI/EISA bus probing order.
428  0.545 28-Nov-99 Further Moto SROM bug fix from
430  Remove double checking for DEBUG_RX in de4x5_dbg_rx()
431  from report by <[email protected]>
432  0.546 22-Feb-01 Fixes Alpha XP1000 oops. The srom_search function
433  was causing a page fault when initializing the
434  variable 'pb', on a non de4x5 PCI device, in this
435  case a PCI bridge (DEC chip 21152). The value of
436  'pb' is now only initialized if a de4x5 chip is
437  present.
439  0.547 08-Nov-01 Use library crc32 functions by <[email protected]>
440  0.548 30-Aug-03 Big 2.6 cleanup. Ported to PCI/EISA probing and
441  generic DMA APIs. Fixed DE425 support on Alpha.
443  =========================================================================
444 */
445 
446 #include <linux/module.h>
447 #include <linux/kernel.h>
448 #include <linux/string.h>
449 #include <linux/interrupt.h>
450 #include <linux/ptrace.h>
451 #include <linux/errno.h>
452 #include <linux/ioport.h>
453 #include <linux/pci.h>
454 #include <linux/eisa.h>
455 #include <linux/delay.h>
456 #include <linux/init.h>
457 #include <linux/spinlock.h>
458 #include <linux/crc32.h>
459 #include <linux/netdevice.h>
460 #include <linux/etherdevice.h>
461 #include <linux/skbuff.h>
462 #include <linux/time.h>
463 #include <linux/types.h>
464 #include <linux/unistd.h>
465 #include <linux/ctype.h>
466 #include <linux/dma-mapping.h>
467 #include <linux/moduleparam.h>
468 #include <linux/bitops.h>
469 #include <linux/gfp.h>
470 
471 #include <asm/io.h>
472 #include <asm/dma.h>
473 #include <asm/byteorder.h>
474 #include <asm/unaligned.h>
475 #include <asm/uaccess.h>
476 #ifdef CONFIG_PPC_PMAC
477 #include <asm/machdep.h>
478 #endif /* CONFIG_PPC_PMAC */
479 
480 #include "de4x5.h"
481 
482 static const char version[] __devinitconst =
483  KERN_INFO "de4x5.c:V0.546 2001/02/22 [email protected]\n";
484 
485 #define c_char const char
486 
487 /*
488 ** MII Information
489 */
490 struct phy_table {
491  int reset; /* Hard reset required? */
492  int id; /* IEEE OUI */
493  int ta; /* One cycle TA time - 802.3u is confusing here */
494  struct { /* Non autonegotiation (parallel) speed det. */
495  int reg;
496  int mask;
497  int value;
498  } spd;
499 };
500 
501 struct mii_phy {
502  int reset; /* Hard reset required? */
503  int id; /* IEEE OUI */
504  int ta; /* One cycle TA time */
505  struct { /* Non autonegotiation (parallel) speed det. */
506  int reg;
507  int mask;
508  int value;
509  } spd;
510  int addr; /* MII address for the PHY */
511  u_char *gep; /* Start of GEP sequence block in SROM */
512  u_char *rst; /* Start of reset sequence in SROM */
513  u_int mc; /* Media Capabilities */
514  u_int ana; /* NWay Advertisement */
515  u_int fdx; /* Full DupleX capabilities for each media */
516  u_int ttm; /* Transmit Threshold Mode for each media */
517  u_int mci; /* 21142 MII Connector Interrupt info */
518 };
519 
520 #define DE4X5_MAX_PHY 8 /* Allow up to 8 attached PHY devices per board */
521 
522 struct sia_phy {
523  u_char mc; /* Media Code */
524  u_char ext; /* csr13-15 valid when set */
525  int csr13; /* SIA Connectivity Register */
526  int csr14; /* SIA TX/RX Register */
527  int csr15; /* SIA General Register */
528  int gepc; /* SIA GEP Control Information */
529  int gep; /* SIA GEP Data */
530 };
531 
532 /*
533 ** Define the know universe of PHY devices that can be
534 ** recognised by this driver.
535 */
536 static struct phy_table phy_info[] = {
537  {0, NATIONAL_TX, 1, {0x19, 0x40, 0x00}}, /* National TX */
538  {1, BROADCOM_T4, 1, {0x10, 0x02, 0x02}}, /* Broadcom T4 */
539  {0, SEEQ_T4 , 1, {0x12, 0x10, 0x10}}, /* SEEQ T4 */
540  {0, CYPRESS_T4 , 1, {0x05, 0x20, 0x20}}, /* Cypress T4 */
541  {0, 0x7810 , 1, {0x14, 0x0800, 0x0800}} /* Level One LTX970 */
542 };
543 
544 /*
545 ** These GENERIC values assumes that the PHY devices follow 802.3u and
546 ** allow parallel detection to set the link partner ability register.
547 ** Detection of 100Base-TX [H/F Duplex] and 100Base-T4 is supported.
548 */
549 #define GENERIC_REG 0x05 /* Autoneg. Link Partner Advertisement Reg. */
550 #define GENERIC_MASK MII_ANLPA_100M /* All 100Mb/s Technologies */
551 #define GENERIC_VALUE MII_ANLPA_100M /* 100B-TX, 100B-TX FDX, 100B-T4 */
552 
553 /*
554 ** Define special SROM detection cases
555 */
556 static c_char enet_det[][ETH_ALEN] = {
557  {0x00, 0x00, 0xc0, 0x00, 0x00, 0x00},
558  {0x00, 0x00, 0xe8, 0x00, 0x00, 0x00}
559 };
560 
561 #define SMC 1
562 #define ACCTON 2
563 
564 /*
565 ** SROM Repair definitions. If a broken SROM is detected a card may
566 ** use this information to help figure out what to do. This is a
567 ** "stab in the dark" and so far for SMC9332's only.
568 */
569 static c_char srom_repair_info[][100] = {
570  {0x00,0x1e,0x00,0x00,0x00,0x08, /* SMC9332 */
571  0x1f,0x01,0x8f,0x01,0x00,0x01,0x00,0x02,
572  0x01,0x00,0x00,0x78,0xe0,0x01,0x00,0x50,
573  0x00,0x18,}
574 };
575 
576 
577 #ifdef DE4X5_DEBUG
578 static int de4x5_debug = DE4X5_DEBUG;
579 #else
580 /*static int de4x5_debug = (DEBUG_MII | DEBUG_SROM | DEBUG_PCICFG | DEBUG_MEDIA | DEBUG_VERSION);*/
581 static int de4x5_debug = (DEBUG_MEDIA | DEBUG_VERSION);
582 #endif
583 
584 /*
585 ** Allow per adapter set up. For modules this is simply a command line
586 ** parameter, e.g.:
587 ** insmod de4x5 args='eth1:fdx autosense=BNC eth0:autosense=100Mb'.
588 **
589 ** For a compiled in driver, place e.g.
590 ** #define DE4X5_PARM "eth0:fdx autosense=AUI eth2:autosense=TP"
591 ** here
592 */
593 #ifdef DE4X5_PARM
594 static char *args = DE4X5_PARM;
595 #else
596 static char *args;
597 #endif
598 
599 struct parameters {
600  bool fdx;
602 };
603 
604 #define DE4X5_AUTOSENSE_MS 250 /* msec autosense tick (DE500) */
605 
606 #define DE4X5_NDA 0xffe0 /* No Device (I/O) Address */
607 
608 /*
609 ** Ethernet PROM defines
610 */
611 #define PROBE_LENGTH 32
612 #define ETH_PROM_SIG 0xAA5500FFUL
613 
614 /*
615 ** Ethernet Info
616 */
617 #define PKT_BUF_SZ 1536 /* Buffer size for each Tx/Rx buffer */
618 #define IEEE802_3_SZ 1518 /* Packet + CRC */
619 #define MAX_PKT_SZ 1514 /* Maximum ethernet packet length */
620 #define MAX_DAT_SZ 1500 /* Maximum ethernet data length */
621 #define MIN_DAT_SZ 1 /* Minimum ethernet data length */
622 #define PKT_HDR_LEN 14 /* Addresses and data length info */
623 #define FAKE_FRAME_LEN (MAX_PKT_SZ + 1)
624 #define QUEUE_PKT_TIMEOUT (3*HZ) /* 3 second timeout */
625 
626 
627 /*
628 ** EISA bus defines
629 */
630 #define DE4X5_EISA_IO_PORTS 0x0c00 /* I/O port base address, slot 0 */
631 #define DE4X5_EISA_TOTAL_SIZE 0x100 /* I/O address extent */
632 
633 #define EISA_ALLOWED_IRQ_LIST {5, 9, 10, 11}
634 
635 #define DE4X5_SIGNATURE {"DE425","DE434","DE435","DE450","DE500"}
636 #define DE4X5_NAME_LENGTH 8
637 
638 static c_char *de4x5_signatures[] = DE4X5_SIGNATURE;
639 
640 /*
641 ** Ethernet PROM defines for DC21040
642 */
643 #define PROBE_LENGTH 32
644 #define ETH_PROM_SIG 0xAA5500FFUL
645 
646 /*
647 ** PCI Bus defines
648 */
649 #define PCI_MAX_BUS_NUM 8
650 #define DE4X5_PCI_TOTAL_SIZE 0x80 /* I/O address extent */
651 #define DE4X5_CLASS_CODE 0x00020000 /* Network controller, Ethernet */
652 
653 /*
654 ** Memory Alignment. Each descriptor is 4 longwords long. To force a
655 ** particular alignment on the TX descriptor, adjust DESC_SKIP_LEN and
656 ** DESC_ALIGN. ALIGN aligns the start address of the private memory area
657 ** and hence the RX descriptor ring's first entry.
658 */
659 #define DE4X5_ALIGN4 ((u_long)4 - 1) /* 1 longword align */
660 #define DE4X5_ALIGN8 ((u_long)8 - 1) /* 2 longword align */
661 #define DE4X5_ALIGN16 ((u_long)16 - 1) /* 4 longword align */
662 #define DE4X5_ALIGN32 ((u_long)32 - 1) /* 8 longword align */
663 #define DE4X5_ALIGN64 ((u_long)64 - 1) /* 16 longword align */
664 #define DE4X5_ALIGN128 ((u_long)128 - 1) /* 32 longword align */
665 
666 #define DE4X5_ALIGN DE4X5_ALIGN32 /* Keep the DC21040 happy... */
667 #define DE4X5_CACHE_ALIGN CAL_16LONG
668 #define DESC_SKIP_LEN DSL_0 /* Must agree with DESC_ALIGN */
669 /*#define DESC_ALIGN u32 dummy[4]; / * Must agree with DESC_SKIP_LEN */
670 #define DESC_ALIGN
671 
672 #ifndef DEC_ONLY /* See README.de4x5 for using this */
673 static int dec_only;
674 #else
675 static int dec_only = 1;
676 #endif
677 
678 /*
679 ** DE4X5 IRQ ENABLE/DISABLE
680 */
681 #define ENABLE_IRQs { \
682  imr |= lp->irq_en;\
683  outl(imr, DE4X5_IMR); /* Enable the IRQs */\
684 }
685 
686 #define DISABLE_IRQs {\
687  imr = inl(DE4X5_IMR);\
688  imr &= ~lp->irq_en;\
689  outl(imr, DE4X5_IMR); /* Disable the IRQs */\
690 }
691 
692 #define UNMASK_IRQs {\
693  imr |= lp->irq_mask;\
694  outl(imr, DE4X5_IMR); /* Unmask the IRQs */\
695 }
696 
697 #define MASK_IRQs {\
698  imr = inl(DE4X5_IMR);\
699  imr &= ~lp->irq_mask;\
700  outl(imr, DE4X5_IMR); /* Mask the IRQs */\
701 }
702 
703 /*
704 ** DE4X5 START/STOP
705 */
706 #define START_DE4X5 {\
707  omr = inl(DE4X5_OMR);\
708  omr |= OMR_ST | OMR_SR;\
709  outl(omr, DE4X5_OMR); /* Enable the TX and/or RX */\
710 }
711 
712 #define STOP_DE4X5 {\
713  omr = inl(DE4X5_OMR);\
714  omr &= ~(OMR_ST|OMR_SR);\
715  outl(omr, DE4X5_OMR); /* Disable the TX and/or RX */ \
716 }
717 
718 /*
719 ** DE4X5 SIA RESET
720 */
721 #define RESET_SIA outl(0, DE4X5_SICR); /* Reset SIA connectivity regs */
722 
723 /*
724 ** DE500 AUTOSENSE TIMER INTERVAL (MILLISECS)
725 */
726 #define DE4X5_AUTOSENSE_MS 250
727 
728 /*
729 ** SROM Structure
730 */
731 struct de4x5_srom {
732  char sub_vendor_id[2];
733  char sub_system_id[2];
734  char reserved[12];
736  char reserved2;
737  char version;
739  char ieee_addr[6];
740  char info[100];
741  short chksum;
742 };
743 #define SUB_VENDOR_ID 0x500a
744 
745 /*
746 ** DE4X5 Descriptors. Make sure that all the RX buffers are contiguous
747 ** and have sizes of both a power of 2 and a multiple of 4.
748 ** A size of 256 bytes for each buffer could be chosen because over 90% of
749 ** all packets in our network are <256 bytes long and 64 longword alignment
750 ** is possible. 1536 showed better 'ttcp' performance. Take your pick. 32 TX
751 ** descriptors are needed for machines with an ALPHA CPU.
752 */
753 #define NUM_RX_DESC 8 /* Number of RX descriptors */
754 #define NUM_TX_DESC 32 /* Number of TX descriptors */
755 #define RX_BUFF_SZ 1536 /* Power of 2 for kmalloc and */
756  /* Multiple of 4 for DC21040 */
757  /* Allows 512 byte alignment */
758 struct de4x5_desc {
759  volatile __le32 status;
763  DESC_ALIGN
764 };
765 
766 /*
767 ** The DE4X5 private structure
768 */
769 #define DE4X5_PKT_STAT_SZ 16
770 #define DE4X5_PKT_BIN_SZ 128 /* Should be >=100 unless you
771  increase DE4X5_PKT_STAT_SZ */
773 struct pkt_stats {
774  u_int bins[DE4X5_PKT_STAT_SZ]; /* Private stats counters */
785 };
788  char adapter_name[80]; /* Adapter name */
789  u_long interrupt; /* Aligned ISR flag */
790  struct de4x5_desc *rx_ring; /* RX descriptor ring */
791  struct de4x5_desc *tx_ring; /* TX descriptor ring */
792  struct sk_buff *tx_skb[NUM_TX_DESC]; /* TX skb for freeing when sent */
793  struct sk_buff *rx_skb[NUM_RX_DESC]; /* RX skb's */
794  int rx_new, rx_old; /* RX descriptor ring pointers */
795  int tx_new, tx_old; /* TX descriptor ring pointers */
796  char setup_frame[SETUP_FRAME_LEN]; /* Holds MCA and PA info. */
797  char frame[64]; /* Min sized packet for loopback*/
798  spinlock_t lock; /* Adapter specific spinlock */
799  struct net_device_stats stats; /* Public stats */
800  struct pkt_stats pktStats; /* Private stats counters */
803  int bus; /* EISA or PCI */
804  int bus_num; /* PCI Bus number */
805  int device; /* Device number on PCI bus */
806  int state; /* Adapter OPENED or CLOSED */
807  int chipset; /* DC21040, DC21041 or DC21140 */
808  s32 irq_mask; /* Interrupt Mask (Enable) bits */
809  s32 irq_en; /* Summary interrupt bits */
810  int media; /* Media (eg TP), mode (eg 100B)*/
811  int c_media; /* Remember the last media conn */
812  bool fdx; /* media full duplex flag */
813  int linkOK; /* Link is OK */
814  int autosense; /* Allow/disallow autosensing */
815  bool tx_enable; /* Enable descriptor polling */
816  int setup_f; /* Setup frame filtering type */
817  int local_state; /* State within a 'media' state */
818  struct mii_phy phy[DE4X5_MAX_PHY]; /* List of attached PHY devices */
819  struct sia_phy sia; /* SIA PHY Information */
820  int active; /* Index to active PHY device */
821  int mii_cnt; /* Number of attached PHY's */
822  int timeout; /* Scheduling counter */
823  struct timer_list timer; /* Timer info for kernel */
824  int tmp; /* Temporary global per card */
825  struct {
826  u_long lock; /* Lock the cache accesses */
827  s32 csr0; /* Saved Bus Mode Register */
828  s32 csr6; /* Saved Operating Mode Reg. */
829  s32 csr7; /* Saved IRQ Mask Register */
830  s32 gep; /* Saved General Purpose Reg. */
831  s32 gepc; /* Control info for GEP */
832  s32 csr13; /* Saved SIA Connectivity Reg. */
833  s32 csr14; /* Saved SIA TX/RX Register */
834  s32 csr15; /* Saved SIA General Register */
835  int save_cnt; /* Flag if state already saved */
836  struct sk_buff_head queue; /* Save the (re-ordered) skb's */
837  } cache;
838  struct de4x5_srom srom; /* A copy of the SROM */
839  int cfrv; /* Card CFRV copy */
840  int rx_ovf; /* Check for 'RX overflow' tag */
841  bool useSROM; /* For non-DEC card use SROM */
842  bool useMII; /* Infoblock using the MII */
843  int asBitValid; /* Autosense bits in GEP? */
844  int asPolarity; /* 0 => asserted high */
845  int asBit; /* Autosense bit number in GEP */
846  int defMedium; /* SROM default medium */
847  int tcount; /* Last infoblock number */
848  int infoblock_init; /* Initialised this infoblock? */
849  int infoleaf_offset; /* SROM infoleaf for controller */
850  s32 infoblock_csr6; /* csr6 value in SROM infoblock */
851  int infoblock_media; /* infoblock media */
852  int (*infoleaf_fn)(struct net_device *); /* Pointer to infoleaf function */
853  u_char *rst; /* Pointer to Type 5 reset info */
854  u_char ibn; /* Infoblock number */
855  struct parameters params; /* Command line/ #defined params */
856  struct device *gendev; /* Generic device */
857  dma_addr_t dma_rings; /* DMA handle for rings */
858  int dma_size; /* Size of the DMA area */
859  char *rx_bufs; /* rx bufs on alpha, sparc, ... */
860 };
861 
862 /*
863 ** To get around certain poxy cards that don't provide an SROM
864 ** for the second and more DECchip, I have to key off the first
865 ** chip's address. I'll assume there's not a bad SROM iff:
866 **
867 ** o the chipset is the same
868 ** o the bus number is the same and > 0
869 ** o the sum of all the returned hw address bytes is 0 or 0x5fa
870 **
871 ** Also have to save the irq for those cards whose hardware designers
872 ** can't follow the PCI to PCI Bridge Architecture spec.
873 */
874 static struct {
875  int chipset;
876  int bus;
877  int irq;
879 } last = {0,};
880 
881 /*
882 ** The transmit ring full condition is described by the tx_old and tx_new
883 ** pointers by:
884 ** tx_old = tx_new Empty ring
885 ** tx_old = tx_new+1 Full ring
886 ** tx_old+txRingSize = tx_new+1 Full ring (wrapped condition)
887 */
888 #define TX_BUFFS_AVAIL ((lp->tx_old<=lp->tx_new)?\
889  lp->tx_old+lp->txRingSize-lp->tx_new-1:\
890  lp->tx_old -lp->tx_new-1)
892 #define TX_PKT_PENDING (lp->tx_old != lp->tx_new)
893 
894 /*
895 ** Public Functions
896 */
897 static int de4x5_open(struct net_device *dev);
898 static netdev_tx_t de4x5_queue_pkt(struct sk_buff *skb,
899  struct net_device *dev);
900 static irqreturn_t de4x5_interrupt(int irq, void *dev_id);
901 static int de4x5_close(struct net_device *dev);
902 static struct net_device_stats *de4x5_get_stats(struct net_device *dev);
903 static void de4x5_local_stats(struct net_device *dev, char *buf, int pkt_len);
904 static void set_multicast_list(struct net_device *dev);
905 static int de4x5_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
906 
907 /*
908 ** Private functions
909 */
910 static int de4x5_hw_init(struct net_device *dev, u_long iobase, struct device *gendev);
911 static int de4x5_init(struct net_device *dev);
912 static int de4x5_sw_reset(struct net_device *dev);
913 static int de4x5_rx(struct net_device *dev);
914 static int de4x5_tx(struct net_device *dev);
915 static void de4x5_ast(struct net_device *dev);
916 static int de4x5_txur(struct net_device *dev);
917 static int de4x5_rx_ovfc(struct net_device *dev);
918 
919 static int autoconf_media(struct net_device *dev);
920 static void create_packet(struct net_device *dev, char *frame, int len);
921 static void load_packet(struct net_device *dev, char *buf, u32 flags, struct sk_buff *skb);
922 static int dc21040_autoconf(struct net_device *dev);
923 static int dc21041_autoconf(struct net_device *dev);
924 static int dc21140m_autoconf(struct net_device *dev);
925 static int dc2114x_autoconf(struct net_device *dev);
926 static int srom_autoconf(struct net_device *dev);
927 static int de4x5_suspect_state(struct net_device *dev, int timeout, int prev_state, int (*fn)(struct net_device *, int), int (*asfn)(struct net_device *));
928 static int dc21040_state(struct net_device *dev, int csr13, int csr14, int csr15, int timeout, int next_state, int suspect_state, int (*fn)(struct net_device *, int));
929 static int test_media(struct net_device *dev, s32 irqs, s32 irq_mask, s32 csr13, s32 csr14, s32 csr15, s32 msec);
930 static int test_for_100Mb(struct net_device *dev, int msec);
931 static int wait_for_link(struct net_device *dev);
932 static int test_mii_reg(struct net_device *dev, int reg, int mask, bool pol, long msec);
933 static int is_spd_100(struct net_device *dev);
934 static int is_100_up(struct net_device *dev);
935 static int is_10_up(struct net_device *dev);
936 static int is_anc_capable(struct net_device *dev);
937 static int ping_media(struct net_device *dev, int msec);
938 static struct sk_buff *de4x5_alloc_rx_buff(struct net_device *dev, int index, int len);
939 static void de4x5_free_rx_buffs(struct net_device *dev);
940 static void de4x5_free_tx_buffs(struct net_device *dev);
941 static void de4x5_save_skbs(struct net_device *dev);
942 static void de4x5_rst_desc_ring(struct net_device *dev);
943 static void de4x5_cache_state(struct net_device *dev, int flag);
944 static void de4x5_put_cache(struct net_device *dev, struct sk_buff *skb);
945 static void de4x5_putb_cache(struct net_device *dev, struct sk_buff *skb);
946 static struct sk_buff *de4x5_get_cache(struct net_device *dev);
947 static void de4x5_setup_intr(struct net_device *dev);
948 static void de4x5_init_connection(struct net_device *dev);
949 static int de4x5_reset_phy(struct net_device *dev);
950 static void reset_init_sia(struct net_device *dev, s32 sicr, s32 strr, s32 sigr);
951 static int test_ans(struct net_device *dev, s32 irqs, s32 irq_mask, s32 msec);
952 static int test_tp(struct net_device *dev, s32 msec);
953 static int EISA_signature(char *name, struct device *device);
954 static int PCI_signature(char *name, struct de4x5_private *lp);
955 static void DevicePresent(struct net_device *dev, u_long iobase);
956 static void enet_addr_rst(u_long aprom_addr);
957 static int de4x5_bad_srom(struct de4x5_private *lp);
958 static short srom_rd(u_long address, u_char offset);
959 static void srom_latch(u_int command, u_long address);
960 static void srom_command(u_int command, u_long address);
961 static void srom_address(u_int command, u_long address, u_char offset);
962 static short srom_data(u_int command, u_long address);
963 /*static void srom_busy(u_int command, u_long address);*/
964 static void sendto_srom(u_int command, u_long addr);
965 static int getfrom_srom(u_long addr);
966 static int srom_map_media(struct net_device *dev);
967 static int srom_infoleaf_info(struct net_device *dev);
968 static void srom_init(struct net_device *dev);
969 static void srom_exec(struct net_device *dev, u_char *p);
970 static int mii_rd(u_char phyreg, u_char phyaddr, u_long ioaddr);
971 static void mii_wr(int data, u_char phyreg, u_char phyaddr, u_long ioaddr);
972 static int mii_rdata(u_long ioaddr);
973 static void mii_wdata(int data, int len, u_long ioaddr);
974 static void mii_ta(u_long rw, u_long ioaddr);
975 static int mii_swap(int data, int len);
976 static void mii_address(u_char addr, u_long ioaddr);
977 static void sendto_mii(u32 command, int data, u_long ioaddr);
978 static int getfrom_mii(u32 command, u_long ioaddr);
979 static int mii_get_oui(u_char phyaddr, u_long ioaddr);
980 static int mii_get_phy(struct net_device *dev);
981 static void SetMulticastFilter(struct net_device *dev);
982 static int get_hw_addr(struct net_device *dev);
983 static void srom_repair(struct net_device *dev, int card);
984 static int test_bad_enet(struct net_device *dev, int status);
985 static int an_exception(struct de4x5_private *lp);
986 static char *build_setup_frame(struct net_device *dev, int mode);
987 static void disable_ast(struct net_device *dev);
988 static long de4x5_switch_mac_port(struct net_device *dev);
989 static int gep_rd(struct net_device *dev);
990 static void gep_wr(s32 data, struct net_device *dev);
991 static void yawn(struct net_device *dev, int state);
992 static void de4x5_parse_params(struct net_device *dev);
993 static void de4x5_dbg_open(struct net_device *dev);
994 static void de4x5_dbg_mii(struct net_device *dev, int k);
995 static void de4x5_dbg_media(struct net_device *dev);
996 static void de4x5_dbg_srom(struct de4x5_srom *p);
997 static void de4x5_dbg_rx(struct sk_buff *skb, int len);
998 static int de4x5_strncmp(char *a, char *b, int n);
999 static int dc21041_infoleaf(struct net_device *dev);
1000 static int dc21140_infoleaf(struct net_device *dev);
1001 static int dc21142_infoleaf(struct net_device *dev);
1002 static int dc21143_infoleaf(struct net_device *dev);
1003 static int type0_infoblock(struct net_device *dev, u_char count, u_char *p);
1004 static int type1_infoblock(struct net_device *dev, u_char count, u_char *p);
1005 static int type2_infoblock(struct net_device *dev, u_char count, u_char *p);
1006 static int type3_infoblock(struct net_device *dev, u_char count, u_char *p);
1007 static int type4_infoblock(struct net_device *dev, u_char count, u_char *p);
1008 static int type5_infoblock(struct net_device *dev, u_char count, u_char *p);
1009 static int compact_infoblock(struct net_device *dev, u_char count, u_char *p);
1010 
1011 /*
1012 ** Note now that module autoprobing is allowed under EISA and PCI. The
1013 ** IRQ lines will not be auto-detected; instead I'll rely on the BIOSes
1014 ** to "do the right thing".
1015 */
1016 
1017 static int io=0x0;/* EDIT THIS LINE FOR YOUR CONFIGURATION IF NEEDED */
1018 
1019 module_param(io, int, 0);
1020 module_param(de4x5_debug, int, 0);
1021 module_param(dec_only, int, 0);
1022 module_param(args, charp, 0);
1023 
1024 MODULE_PARM_DESC(io, "de4x5 I/O base address");
1025 MODULE_PARM_DESC(de4x5_debug, "de4x5 debug mask");
1026 MODULE_PARM_DESC(dec_only, "de4x5 probe only for Digital boards (0-1)");
1027 MODULE_PARM_DESC(args, "de4x5 full duplex and media type settings; see de4x5.c for details");
1028 MODULE_LICENSE("GPL");
1029 
1030 /*
1031 ** List the SROM infoleaf functions and chipsets
1032 */
1033 struct InfoLeaf {
1034  int chipset;
1035  int (*fn)(struct net_device *);
1036 };
1037 static struct InfoLeaf infoleaf_array[] = {
1038  {DC21041, dc21041_infoleaf},
1039  {DC21140, dc21140_infoleaf},
1040  {DC21142, dc21142_infoleaf},
1041  {DC21143, dc21143_infoleaf}
1042 };
1043 #define INFOLEAF_SIZE ARRAY_SIZE(infoleaf_array)
1044 
1045 /*
1046 ** List the SROM info block functions
1047 */
1048 static int (*dc_infoblock[])(struct net_device *dev, u_char, u_char *) = {
1049  type0_infoblock,
1050  type1_infoblock,
1051  type2_infoblock,
1052  type3_infoblock,
1053  type4_infoblock,
1054  type5_infoblock,
1055  compact_infoblock
1056 };
1058 #define COMPACT (ARRAY_SIZE(dc_infoblock) - 1)
1059 
1060 /*
1061 ** Miscellaneous defines...
1062 */
1063 #define RESET_DE4X5 {\
1064  int i;\
1065  i=inl(DE4X5_BMR);\
1066  mdelay(1);\
1067  outl(i | BMR_SWR, DE4X5_BMR);\
1068  mdelay(1);\
1069  outl(i, DE4X5_BMR);\
1070  mdelay(1);\
1071  for (i=0;i<5;i++) {inl(DE4X5_BMR); mdelay(1);}\
1072  mdelay(1);\
1073 }
1075 #define PHY_HARD_RESET {\
1076  outl(GEP_HRST, DE4X5_GEP); /* Hard RESET the PHY dev. */\
1077  mdelay(1); /* Assert for 1ms */\
1078  outl(0x00, DE4X5_GEP);\
1079  mdelay(2); /* Wait for 2ms */\
1080 }
1081 
1082 static const struct net_device_ops de4x5_netdev_ops = {
1083  .ndo_open = de4x5_open,
1084  .ndo_stop = de4x5_close,
1085  .ndo_start_xmit = de4x5_queue_pkt,
1086  .ndo_get_stats = de4x5_get_stats,
1087  .ndo_set_rx_mode = set_multicast_list,
1088  .ndo_do_ioctl = de4x5_ioctl,
1089  .ndo_change_mtu = eth_change_mtu,
1090  .ndo_set_mac_address= eth_mac_addr,
1091  .ndo_validate_addr = eth_validate_addr,
1092 };
1093 
1094 
1095 static int __devinit
1096 de4x5_hw_init(struct net_device *dev, u_long iobase, struct device *gendev)
1097 {
1098  char name[DE4X5_NAME_LENGTH + 1];
1099  struct de4x5_private *lp = netdev_priv(dev);
1100  struct pci_dev *pdev = NULL;
1101  int i, status=0;
1102 
1103  dev_set_drvdata(gendev, dev);
1104 
1105  /* Ensure we're not sleeping */
1106  if (lp->bus == EISA) {
1107  outb(WAKEUP, PCI_CFPM);
1108  } else {
1109  pdev = to_pci_dev (gendev);
1110  pci_write_config_byte(pdev, PCI_CFDA_PSM, WAKEUP);
1111  }
1112  mdelay(10);
1113 
1114  RESET_DE4X5;
1115 
1116  if ((inl(DE4X5_STS) & (STS_TS | STS_RS)) != 0) {
1117  return -ENXIO; /* Hardware could not reset */
1118  }
1119 
1120  /*
1121  ** Now find out what kind of DC21040/DC21041/DC21140 board we have.
1122  */
1123  lp->useSROM = false;
1124  if (lp->bus == PCI) {
1125  PCI_signature(name, lp);
1126  } else {
1127  EISA_signature(name, gendev);
1128  }
1129 
1130  if (*name == '\0') { /* Not found a board signature */
1131  return -ENXIO;
1132  }
1133 
1134  dev->base_addr = iobase;
1135  printk ("%s: %s at 0x%04lx", dev_name(gendev), name, iobase);
1136 
1137  status = get_hw_addr(dev);
1138  printk(", h/w address %pM\n", dev->dev_addr);
1139 
1140  if (status != 0) {
1141  printk(" which has an Ethernet PROM CRC error.\n");
1142  return -ENXIO;
1143  } else {
1144  skb_queue_head_init(&lp->cache.queue);
1145  lp->cache.gepc = GEP_INIT;
1146  lp->asBit = GEP_SLNK;
1147  lp->asPolarity = GEP_SLNK;
1148  lp->asBitValid = ~0;
1149  lp->timeout = -1;
1150  lp->gendev = gendev;
1151  spin_lock_init(&lp->lock);
1152  init_timer(&lp->timer);
1153  lp->timer.function = (void (*)(unsigned long))de4x5_ast;
1154  lp->timer.data = (unsigned long)dev;
1155  de4x5_parse_params(dev);
1156 
1157  /*
1158  ** Choose correct autosensing in case someone messed up
1159  */
1160  lp->autosense = lp->params.autosense;
1161  if (lp->chipset != DC21140) {
1162  if ((lp->chipset==DC21040) && (lp->params.autosense&TP_NW)) {
1163  lp->params.autosense = TP;
1164  }
1165  if ((lp->chipset==DC21041) && (lp->params.autosense&BNC_AUI)) {
1166  lp->params.autosense = BNC;
1167  }
1168  }
1169  lp->fdx = lp->params.fdx;
1170  sprintf(lp->adapter_name,"%s (%s)", name, dev_name(gendev));
1171 
1172  lp->dma_size = (NUM_RX_DESC + NUM_TX_DESC) * sizeof(struct de4x5_desc);
1173 #if defined(__alpha__) || defined(__powerpc__) || defined(CONFIG_SPARC) || defined(DE4X5_DO_MEMCPY)
1175 #endif
1176  lp->rx_ring = dma_alloc_coherent(gendev, lp->dma_size,
1177  &lp->dma_rings, GFP_ATOMIC);
1178  if (lp->rx_ring == NULL) {
1179  return -ENOMEM;
1180  }
1181 
1182  lp->tx_ring = lp->rx_ring + NUM_RX_DESC;
1183 
1184  /*
1185  ** Set up the RX descriptor ring (Intels)
1186  ** Allocate contiguous receive buffers, long word aligned (Alphas)
1187  */
1188 #if !defined(__alpha__) && !defined(__powerpc__) && !defined(CONFIG_SPARC) && !defined(DE4X5_DO_MEMCPY)
1189  for (i=0; i<NUM_RX_DESC; i++) {
1190  lp->rx_ring[i].status = 0;
1191  lp->rx_ring[i].des1 = cpu_to_le32(RX_BUFF_SZ);
1192  lp->rx_ring[i].buf = 0;
1193  lp->rx_ring[i].next = 0;
1194  lp->rx_skb[i] = (struct sk_buff *) 1; /* Dummy entry */
1195  }
1196 
1197 #else
1198  {
1199  dma_addr_t dma_rx_bufs;
1200 
1201  dma_rx_bufs = lp->dma_rings + (NUM_RX_DESC + NUM_TX_DESC)
1202  * sizeof(struct de4x5_desc);
1203  dma_rx_bufs = (dma_rx_bufs + DE4X5_ALIGN) & ~DE4X5_ALIGN;
1204  lp->rx_bufs = (char *)(((long)(lp->rx_ring + NUM_RX_DESC
1205  + NUM_TX_DESC) + DE4X5_ALIGN) & ~DE4X5_ALIGN);
1206  for (i=0; i<NUM_RX_DESC; i++) {
1207  lp->rx_ring[i].status = 0;
1208  lp->rx_ring[i].des1 = cpu_to_le32(RX_BUFF_SZ);
1209  lp->rx_ring[i].buf =
1210  cpu_to_le32(dma_rx_bufs+i*RX_BUFF_SZ);
1211  lp->rx_ring[i].next = 0;
1212  lp->rx_skb[i] = (struct sk_buff *) 1; /* Dummy entry */
1213  }
1214 
1215  }
1216 #endif
1217 
1218  barrier();
1219 
1220  lp->rxRingSize = NUM_RX_DESC;
1221  lp->txRingSize = NUM_TX_DESC;
1222 
1223  /* Write the end of list marker to the descriptor lists */
1224  lp->rx_ring[lp->rxRingSize - 1].des1 |= cpu_to_le32(RD_RER);
1225  lp->tx_ring[lp->txRingSize - 1].des1 |= cpu_to_le32(TD_TER);
1226 
1227  /* Tell the adapter where the TX/RX rings are located. */
1228  outl(lp->dma_rings, DE4X5_RRBA);
1229  outl(lp->dma_rings + NUM_RX_DESC * sizeof(struct de4x5_desc),
1230  DE4X5_TRBA);
1231 
1232  /* Initialise the IRQ mask and Enable/Disable */
1233  lp->irq_mask = IMR_RIM | IMR_TIM | IMR_TUM | IMR_UNM;
1234  lp->irq_en = IMR_NIM | IMR_AIM;
1235 
1236  /* Create a loopback packet frame for later media probing */
1237  create_packet(dev, lp->frame, sizeof(lp->frame));
1238 
1239  /* Check if the RX overflow bug needs testing for */
1240  i = lp->cfrv & 0x000000fe;
1241  if ((lp->chipset == DC21140) && (i == 0x20)) {
1242  lp->rx_ovf = 1;
1243  }
1244 
1245  /* Initialise the SROM pointers if possible */
1246  if (lp->useSROM) {
1247  lp->state = INITIALISED;
1248  if (srom_infoleaf_info(dev)) {
1249  dma_free_coherent (gendev, lp->dma_size,
1250  lp->rx_ring, lp->dma_rings);
1251  return -ENXIO;
1252  }
1253  srom_init(dev);
1254  }
1255 
1256  lp->state = CLOSED;
1257 
1258  /*
1259  ** Check for an MII interface
1260  */
1261  if ((lp->chipset != DC21040) && (lp->chipset != DC21041)) {
1262  mii_get_phy(dev);
1263  }
1264 
1265  printk(" and requires IRQ%d (provided by %s).\n", dev->irq,
1266  ((lp->bus == PCI) ? "PCI BIOS" : "EISA CNFG"));
1267  }
1268 
1269  if (de4x5_debug & DEBUG_VERSION) {
1270  printk(version);
1271  }
1272 
1273  /* The DE4X5-specific entries in the device structure. */
1274  SET_NETDEV_DEV(dev, gendev);
1275  dev->netdev_ops = &de4x5_netdev_ops;
1276  dev->mem_start = 0;
1277 
1278  /* Fill in the generic fields of the device structure. */
1279  if ((status = register_netdev (dev))) {
1280  dma_free_coherent (gendev, lp->dma_size,
1281  lp->rx_ring, lp->dma_rings);
1282  return status;
1283  }
1284 
1285  /* Let the adapter sleep to save power */
1286  yawn(dev, SLEEP);
1287 
1288  return status;
1289 }
1290 
1291 
1292 static int
1293 de4x5_open(struct net_device *dev)
1294 {
1295  struct de4x5_private *lp = netdev_priv(dev);
1296  u_long iobase = dev->base_addr;
1297  int i, status = 0;
1298  s32 omr;
1299 
1300  /* Allocate the RX buffers */
1301  for (i=0; i<lp->rxRingSize; i++) {
1302  if (de4x5_alloc_rx_buff(dev, i, 0) == NULL) {
1303  de4x5_free_rx_buffs(dev);
1304  return -EAGAIN;
1305  }
1306  }
1307 
1308  /*
1309  ** Wake up the adapter
1310  */
1311  yawn(dev, WAKEUP);
1312 
1313  /*
1314  ** Re-initialize the DE4X5...
1315  */
1316  status = de4x5_init(dev);
1317  spin_lock_init(&lp->lock);
1318  lp->state = OPEN;
1319  de4x5_dbg_open(dev);
1320 
1321  if (request_irq(dev->irq, de4x5_interrupt, IRQF_SHARED,
1322  lp->adapter_name, dev)) {
1323  printk("de4x5_open(): Requested IRQ%d is busy - attemping FAST/SHARE...", dev->irq);
1324  if (request_irq(dev->irq, de4x5_interrupt, IRQF_DISABLED | IRQF_SHARED,
1325  lp->adapter_name, dev)) {
1326  printk("\n Cannot get IRQ- reconfigure your hardware.\n");
1327  disable_ast(dev);
1328  de4x5_free_rx_buffs(dev);
1329  de4x5_free_tx_buffs(dev);
1330  yawn(dev, SLEEP);
1331  lp->state = CLOSED;
1332  return -EAGAIN;
1333  } else {
1334  printk("\n Succeeded, but you should reconfigure your hardware to avoid this.\n");
1335  printk("WARNING: there may be IRQ related problems in heavily loaded systems.\n");
1336  }
1337  }
1338 
1340  dev->trans_start = jiffies; /* prevent tx timeout */
1341 
1342  START_DE4X5;
1343 
1344  de4x5_setup_intr(dev);
1345 
1346  if (de4x5_debug & DEBUG_OPEN) {
1347  printk("\tsts: 0x%08x\n", inl(DE4X5_STS));
1348  printk("\tbmr: 0x%08x\n", inl(DE4X5_BMR));
1349  printk("\timr: 0x%08x\n", inl(DE4X5_IMR));
1350  printk("\tomr: 0x%08x\n", inl(DE4X5_OMR));
1351  printk("\tsisr: 0x%08x\n", inl(DE4X5_SISR));
1352  printk("\tsicr: 0x%08x\n", inl(DE4X5_SICR));
1353  printk("\tstrr: 0x%08x\n", inl(DE4X5_STRR));
1354  printk("\tsigr: 0x%08x\n", inl(DE4X5_SIGR));
1355  }
1356 
1357  return status;
1358 }
1359 
1360 /*
1361 ** Initialize the DE4X5 operating conditions. NB: a chip problem with the
1362 ** DC21140 requires using perfect filtering mode for that chip. Since I can't
1363 ** see why I'd want > 14 multicast addresses, I have changed all chips to use
1364 ** the perfect filtering mode. Keep the DMA burst length at 8: there seems
1365 ** to be data corruption problems if it is larger (UDP errors seen from a
1366 ** ttcp source).
1367 */
1368 static int
1369 de4x5_init(struct net_device *dev)
1370 {
1371  /* Lock out other processes whilst setting up the hardware */
1372  netif_stop_queue(dev);
1373 
1374  de4x5_sw_reset(dev);
1375 
1376  /* Autoconfigure the connected port */
1377  autoconf_media(dev);
1378 
1379  return 0;
1380 }
1381 
1382 static int
1383 de4x5_sw_reset(struct net_device *dev)
1384 {
1385  struct de4x5_private *lp = netdev_priv(dev);
1386  u_long iobase = dev->base_addr;
1387  int i, j, status = 0;
1388  s32 bmr, omr;
1389 
1390  /* Select the MII or SRL port now and RESET the MAC */
1391  if (!lp->useSROM) {
1392  if (lp->phy[lp->active].id != 0) {
1394  } else {
1395  lp->infoblock_csr6 = OMR_SDP | OMR_TTM;
1396  }
1397  de4x5_switch_mac_port(dev);
1398  }
1399 
1400  /*
1401  ** Set the programmable burst length to 8 longwords for all the DC21140
1402  ** Fasternet chips and 4 longwords for all others: DMA errors result
1403  ** without these values. Cache align 16 long.
1404  */
1406  bmr |= ((lp->chipset & ~0x00ff)==DC2114x ? BMR_RML : 0);
1407  outl(bmr, DE4X5_BMR);
1408 
1409  omr = inl(DE4X5_OMR) & ~OMR_PR; /* Turn off promiscuous mode */
1410  if (lp->chipset == DC21140) {
1411  omr |= (OMR_SDP | OMR_SB);
1412  }
1413  lp->setup_f = PERFECT;
1414  outl(lp->dma_rings, DE4X5_RRBA);
1415  outl(lp->dma_rings + NUM_RX_DESC * sizeof(struct de4x5_desc),
1416  DE4X5_TRBA);
1417 
1418  lp->rx_new = lp->rx_old = 0;
1419  lp->tx_new = lp->tx_old = 0;
1420 
1421  for (i = 0; i < lp->rxRingSize; i++) {
1422  lp->rx_ring[i].status = cpu_to_le32(R_OWN);
1423  }
1424 
1425  for (i = 0; i < lp->txRingSize; i++) {
1426  lp->tx_ring[i].status = cpu_to_le32(0);
1427  }
1428 
1429  barrier();
1430 
1431  /* Build the setup frame depending on filtering mode */
1432  SetMulticastFilter(dev);
1433 
1434  load_packet(dev, lp->setup_frame, PERFECT_F|TD_SET|SETUP_FRAME_LEN, (struct sk_buff *)1);
1435  outl(omr|OMR_ST, DE4X5_OMR);
1436 
1437  /* Poll for setup frame completion (adapter interrupts are disabled now) */
1438 
1439  for (j=0, i=0;(i<500) && (j==0);i++) { /* Up to 500ms delay */
1440  mdelay(1);
1441  if ((s32)le32_to_cpu(lp->tx_ring[lp->tx_new].status) >= 0) j=1;
1442  }
1443  outl(omr, DE4X5_OMR); /* Stop everything! */
1444 
1445  if (j == 0) {
1446  printk("%s: Setup frame timed out, status %08x\n", dev->name,
1447  inl(DE4X5_STS));
1448  status = -EIO;
1449  }
1450 
1451  lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
1452  lp->tx_old = lp->tx_new;
1453 
1454  return status;
1455 }
1456 
1457 /*
1458 ** Writes a socket buffer address to the next available transmit descriptor.
1459 */
1460 static netdev_tx_t
1461 de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
1462 {
1463  struct de4x5_private *lp = netdev_priv(dev);
1464  u_long iobase = dev->base_addr;
1465  u_long flags = 0;
1466 
1467  netif_stop_queue(dev);
1468  if (!lp->tx_enable) /* Cannot send for now */
1469  return NETDEV_TX_LOCKED;
1470 
1471  /*
1472  ** Clean out the TX ring asynchronously to interrupts - sometimes the
1473  ** interrupts are lost by delayed descriptor status updates relative to
1474  ** the irq assertion, especially with a busy PCI bus.
1475  */
1476  spin_lock_irqsave(&lp->lock, flags);
1477  de4x5_tx(dev);
1478  spin_unlock_irqrestore(&lp->lock, flags);
1479 
1480  /* Test if cache is already locked - requeue skb if so */
1481  if (test_and_set_bit(0, (void *)&lp->cache.lock) && !lp->interrupt)
1482  return NETDEV_TX_LOCKED;
1483 
1484  /* Transmit descriptor ring full or stale skb */
1485  if (netif_queue_stopped(dev) || (u_long) lp->tx_skb[lp->tx_new] > 1) {
1486  if (lp->interrupt) {
1487  de4x5_putb_cache(dev, skb); /* Requeue the buffer */
1488  } else {
1489  de4x5_put_cache(dev, skb);
1490  }
1491  if (de4x5_debug & DEBUG_TX) {
1492  printk("%s: transmit busy, lost media or stale skb found:\n STS:%08x\n tbusy:%d\n IMR:%08x\n OMR:%08x\n Stale skb: %s\n",dev->name, inl(DE4X5_STS), netif_queue_stopped(dev), inl(DE4X5_IMR), inl(DE4X5_OMR), ((u_long) lp->tx_skb[lp->tx_new] > 1) ? "YES" : "NO");
1493  }
1494  } else if (skb->len > 0) {
1495  /* If we already have stuff queued locally, use that first */
1496  if (!skb_queue_empty(&lp->cache.queue) && !lp->interrupt) {
1497  de4x5_put_cache(dev, skb);
1498  skb = de4x5_get_cache(dev);
1499  }
1500 
1501  while (skb && !netif_queue_stopped(dev) &&
1502  (u_long) lp->tx_skb[lp->tx_new] <= 1) {
1503  spin_lock_irqsave(&lp->lock, flags);
1504  netif_stop_queue(dev);
1505  load_packet(dev, skb->data, TD_IC | TD_LS | TD_FS | skb->len, skb);
1506  lp->stats.tx_bytes += skb->len;
1507  outl(POLL_DEMAND, DE4X5_TPD);/* Start the TX */
1508 
1509  lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
1510 
1511  if (TX_BUFFS_AVAIL) {
1512  netif_start_queue(dev); /* Another pkt may be queued */
1513  }
1514  skb = de4x5_get_cache(dev);
1515  spin_unlock_irqrestore(&lp->lock, flags);
1516  }
1517  if (skb) de4x5_putb_cache(dev, skb);
1518  }
1519 
1520  lp->cache.lock = 0;
1521 
1522  return NETDEV_TX_OK;
1523 }
1524 
1525 /*
1526 ** The DE4X5 interrupt handler.
1527 **
1528 ** I/O Read/Writes through intermediate PCI bridges are never 'posted',
1529 ** so that the asserted interrupt always has some real data to work with -
1530 ** if these I/O accesses are ever changed to memory accesses, ensure the
1531 ** STS write is read immediately to complete the transaction if the adapter
1532 ** is not on bus 0. Lost interrupts can still occur when the PCI bus load
1533 ** is high and descriptor status bits cannot be set before the associated
1534 ** interrupt is asserted and this routine entered.
1535 */
1536 static irqreturn_t
1537 de4x5_interrupt(int irq, void *dev_id)
1538 {
1539  struct net_device *dev = dev_id;
1540  struct de4x5_private *lp;
1541  s32 imr, omr, sts, limit;
1542  u_long iobase;
1543  unsigned int handled = 0;
1544 
1545  lp = netdev_priv(dev);
1546  spin_lock(&lp->lock);
1547  iobase = dev->base_addr;
1548 
1549  DISABLE_IRQs; /* Ensure non re-entrancy */
1550 
1551  if (test_and_set_bit(MASK_INTERRUPTS, (void*) &lp->interrupt))
1552  printk("%s: Re-entering the interrupt handler.\n", dev->name);
1553 
1554  synchronize_irq(dev->irq);
1555 
1556  for (limit=0; limit<8; limit++) {
1557  sts = inl(DE4X5_STS); /* Read IRQ status */
1558  outl(sts, DE4X5_STS); /* Reset the board interrupts */
1559 
1560  if (!(sts & lp->irq_mask)) break;/* All done */
1561  handled = 1;
1562 
1563  if (sts & (STS_RI | STS_RU)) /* Rx interrupt (packet[s] arrived) */
1564  de4x5_rx(dev);
1565 
1566  if (sts & (STS_TI | STS_TU)) /* Tx interrupt (packet sent) */
1567  de4x5_tx(dev);
1568 
1569  if (sts & STS_LNF) { /* TP Link has failed */
1570  lp->irq_mask &= ~IMR_LFM;
1571  }
1572 
1573  if (sts & STS_UNF) { /* Transmit underrun */
1574  de4x5_txur(dev);
1575  }
1576 
1577  if (sts & STS_SE) { /* Bus Error */
1578  STOP_DE4X5;
1579  printk("%s: Fatal bus error occurred, sts=%#8x, device stopped.\n",
1580  dev->name, sts);
1581  spin_unlock(&lp->lock);
1582  return IRQ_HANDLED;
1583  }
1584  }
1585 
1586  /* Load the TX ring with any locally stored packets */
1587  if (!test_and_set_bit(0, (void *)&lp->cache.lock)) {
1588  while (!skb_queue_empty(&lp->cache.queue) && !netif_queue_stopped(dev) && lp->tx_enable) {
1589  de4x5_queue_pkt(de4x5_get_cache(dev), dev);
1590  }
1591  lp->cache.lock = 0;
1592  }
1593 
1595  ENABLE_IRQs;
1596  spin_unlock(&lp->lock);
1597 
1598  return IRQ_RETVAL(handled);
1599 }
1600 
1601 static int
1602 de4x5_rx(struct net_device *dev)
1603 {
1604  struct de4x5_private *lp = netdev_priv(dev);
1605  u_long iobase = dev->base_addr;
1606  int entry;
1607  s32 status;
1608 
1609  for (entry=lp->rx_new; (s32)le32_to_cpu(lp->rx_ring[entry].status)>=0;
1610  entry=lp->rx_new) {
1611  status = (s32)le32_to_cpu(lp->rx_ring[entry].status);
1612 
1613  if (lp->rx_ovf) {
1614  if (inl(DE4X5_MFC) & MFC_FOCM) {
1615  de4x5_rx_ovfc(dev);
1616  break;
1617  }
1618  }
1619 
1620  if (status & RD_FS) { /* Remember the start of frame */
1621  lp->rx_old = entry;
1622  }
1623 
1624  if (status & RD_LS) { /* Valid frame status */
1625  if (lp->tx_enable) lp->linkOK++;
1626  if (status & RD_ES) { /* There was an error. */
1627  lp->stats.rx_errors++; /* Update the error stats. */
1628  if (status & (RD_RF | RD_TL)) lp->stats.rx_frame_errors++;
1629  if (status & RD_CE) lp->stats.rx_crc_errors++;
1630  if (status & RD_OF) lp->stats.rx_fifo_errors++;
1631  if (status & RD_TL) lp->stats.rx_length_errors++;
1632  if (status & RD_RF) lp->pktStats.rx_runt_frames++;
1633  if (status & RD_CS) lp->pktStats.rx_collision++;
1634  if (status & RD_DB) lp->pktStats.rx_dribble++;
1635  if (status & RD_OF) lp->pktStats.rx_overflow++;
1636  } else { /* A valid frame received */
1637  struct sk_buff *skb;
1638  short pkt_len = (short)(le32_to_cpu(lp->rx_ring[entry].status)
1639  >> 16) - 4;
1640 
1641  if ((skb = de4x5_alloc_rx_buff(dev, entry, pkt_len)) == NULL) {
1642  printk("%s: Insufficient memory; nuking packet.\n",
1643  dev->name);
1644  lp->stats.rx_dropped++;
1645  } else {
1646  de4x5_dbg_rx(skb, pkt_len);
1647 
1648  /* Push up the protocol stack */
1649  skb->protocol=eth_type_trans(skb,dev);
1650  de4x5_local_stats(dev, skb->data, pkt_len);
1651  netif_rx(skb);
1652 
1653  /* Update stats */
1654  lp->stats.rx_packets++;
1655  lp->stats.rx_bytes += pkt_len;
1656  }
1657  }
1658 
1659  /* Change buffer ownership for this frame, back to the adapter */
1660  for (;lp->rx_old!=entry;lp->rx_old=(lp->rx_old + 1)%lp->rxRingSize) {
1661  lp->rx_ring[lp->rx_old].status = cpu_to_le32(R_OWN);
1662  barrier();
1663  }
1664  lp->rx_ring[entry].status = cpu_to_le32(R_OWN);
1665  barrier();
1666  }
1667 
1668  /*
1669  ** Update entry information
1670  */
1671  lp->rx_new = (lp->rx_new + 1) % lp->rxRingSize;
1672  }
1673 
1674  return 0;
1675 }
1676 
1677 static inline void
1678 de4x5_free_tx_buff(struct de4x5_private *lp, int entry)
1679 {
1680  dma_unmap_single(lp->gendev, le32_to_cpu(lp->tx_ring[entry].buf),
1681  le32_to_cpu(lp->tx_ring[entry].des1) & TD_TBS1,
1682  DMA_TO_DEVICE);
1683  if ((u_long) lp->tx_skb[entry] > 1)
1684  dev_kfree_skb_irq(lp->tx_skb[entry]);
1685  lp->tx_skb[entry] = NULL;
1686 }
1687 
1688 /*
1689 ** Buffer sent - check for TX buffer errors.
1690 */
1691 static int
1692 de4x5_tx(struct net_device *dev)
1693 {
1694  struct de4x5_private *lp = netdev_priv(dev);
1695  u_long iobase = dev->base_addr;
1696  int entry;
1697  s32 status;
1698 
1699  for (entry = lp->tx_old; entry != lp->tx_new; entry = lp->tx_old) {
1700  status = (s32)le32_to_cpu(lp->tx_ring[entry].status);
1701  if (status < 0) { /* Buffer not sent yet */
1702  break;
1703  } else if (status != 0x7fffffff) { /* Not setup frame */
1704  if (status & TD_ES) { /* An error happened */
1705  lp->stats.tx_errors++;
1706  if (status & TD_NC) lp->stats.tx_carrier_errors++;
1707  if (status & TD_LC) lp->stats.tx_window_errors++;
1708  if (status & TD_UF) lp->stats.tx_fifo_errors++;
1709  if (status & TD_EC) lp->pktStats.excessive_collisions++;
1710  if (status & TD_DE) lp->stats.tx_aborted_errors++;
1711 
1712  if (TX_PKT_PENDING) {
1713  outl(POLL_DEMAND, DE4X5_TPD);/* Restart a stalled TX */
1714  }
1715  } else { /* Packet sent */
1716  lp->stats.tx_packets++;
1717  if (lp->tx_enable) lp->linkOK++;
1718  }
1719  /* Update the collision counter */
1720  lp->stats.collisions += ((status & TD_EC) ? 16 :
1721  ((status & TD_CC) >> 3));
1722 
1723  /* Free the buffer. */
1724  if (lp->tx_skb[entry] != NULL)
1725  de4x5_free_tx_buff(lp, entry);
1726  }
1727 
1728  /* Update all the pointers */
1729  lp->tx_old = (lp->tx_old + 1) % lp->txRingSize;
1730  }
1731 
1732  /* Any resources available? */
1733  if (TX_BUFFS_AVAIL && netif_queue_stopped(dev)) {
1734  if (lp->interrupt)
1735  netif_wake_queue(dev);
1736  else
1737  netif_start_queue(dev);
1738  }
1739 
1740  return 0;
1741 }
1742 
1743 static void
1744 de4x5_ast(struct net_device *dev)
1745 {
1746  struct de4x5_private *lp = netdev_priv(dev);
1747  int next_tick = DE4X5_AUTOSENSE_MS;
1748  int dt;
1749 
1750  if (lp->useSROM)
1751  next_tick = srom_autoconf(dev);
1752  else if (lp->chipset == DC21140)
1753  next_tick = dc21140m_autoconf(dev);
1754  else if (lp->chipset == DC21041)
1755  next_tick = dc21041_autoconf(dev);
1756  else if (lp->chipset == DC21040)
1757  next_tick = dc21040_autoconf(dev);
1758  lp->linkOK = 0;
1759 
1760  dt = (next_tick * HZ) / 1000;
1761 
1762  if (!dt)
1763  dt = 1;
1764 
1765  mod_timer(&lp->timer, jiffies + dt);
1766 }
1767 
1768 static int
1769 de4x5_txur(struct net_device *dev)
1770 {
1771  struct de4x5_private *lp = netdev_priv(dev);
1772  u_long iobase = dev->base_addr;
1773  int omr;
1774 
1775  omr = inl(DE4X5_OMR);
1776  if (!(omr & OMR_SF) || (lp->chipset==DC21041) || (lp->chipset==DC21040)) {
1777  omr &= ~(OMR_ST|OMR_SR);
1778  outl(omr, DE4X5_OMR);
1779  while (inl(DE4X5_STS) & STS_TS);
1780  if ((omr & OMR_TR) < OMR_TR) {
1781  omr += 0x4000;
1782  } else {
1783  omr |= OMR_SF;
1784  }
1785  outl(omr | OMR_ST | OMR_SR, DE4X5_OMR);
1786  }
1787 
1788  return 0;
1789 }
1790 
1791 static int
1792 de4x5_rx_ovfc(struct net_device *dev)
1793 {
1794  struct de4x5_private *lp = netdev_priv(dev);
1795  u_long iobase = dev->base_addr;
1796  int omr;
1797 
1798  omr = inl(DE4X5_OMR);
1799  outl(omr & ~OMR_SR, DE4X5_OMR);
1800  while (inl(DE4X5_STS) & STS_RS);
1801 
1802  for (; (s32)le32_to_cpu(lp->rx_ring[lp->rx_new].status)>=0;) {
1803  lp->rx_ring[lp->rx_new].status = cpu_to_le32(R_OWN);
1804  lp->rx_new = (lp->rx_new + 1) % lp->rxRingSize;
1805  }
1806 
1807  outl(omr, DE4X5_OMR);
1808 
1809  return 0;
1810 }
1811 
1812 static int
1813 de4x5_close(struct net_device *dev)
1814 {
1815  struct de4x5_private *lp = netdev_priv(dev);
1816  u_long iobase = dev->base_addr;
1817  s32 imr, omr;
1818 
1819  disable_ast(dev);
1820 
1821  netif_stop_queue(dev);
1822 
1823  if (de4x5_debug & DEBUG_CLOSE) {
1824  printk("%s: Shutting down ethercard, status was %8.8x.\n",
1825  dev->name, inl(DE4X5_STS));
1826  }
1827 
1828  /*
1829  ** We stop the DE4X5 here... mask interrupts and stop TX & RX
1830  */
1831  DISABLE_IRQs;
1832  STOP_DE4X5;
1833 
1834  /* Free the associated irq */
1835  free_irq(dev->irq, dev);
1836  lp->state = CLOSED;
1837 
1838  /* Free any socket buffers */
1839  de4x5_free_rx_buffs(dev);
1840  de4x5_free_tx_buffs(dev);
1841 
1842  /* Put the adapter to sleep to save power */
1843  yawn(dev, SLEEP);
1844 
1845  return 0;
1846 }
1847 
1848 static struct net_device_stats *
1849 de4x5_get_stats(struct net_device *dev)
1850 {
1851  struct de4x5_private *lp = netdev_priv(dev);
1852  u_long iobase = dev->base_addr;
1853 
1854  lp->stats.rx_missed_errors = (int)(inl(DE4X5_MFC) & (MFC_OVFL | MFC_CNTR));
1855 
1856  return &lp->stats;
1857 }
1858 
1859 static void
1860 de4x5_local_stats(struct net_device *dev, char *buf, int pkt_len)
1861 {
1862  struct de4x5_private *lp = netdev_priv(dev);
1863  int i;
1864 
1865  for (i=1; i<DE4X5_PKT_STAT_SZ-1; i++) {
1866  if (pkt_len < (i*DE4X5_PKT_BIN_SZ)) {
1867  lp->pktStats.bins[i]++;
1868  i = DE4X5_PKT_STAT_SZ;
1869  }
1870  }
1871  if (is_multicast_ether_addr(buf)) {
1872  if (is_broadcast_ether_addr(buf)) {
1873  lp->pktStats.broadcast++;
1874  } else {
1875  lp->pktStats.multicast++;
1876  }
1877  } else if (ether_addr_equal(buf, dev->dev_addr)) {
1878  lp->pktStats.unicast++;
1879  }
1880 
1881  lp->pktStats.bins[0]++; /* Duplicates stats.rx_packets */
1882  if (lp->pktStats.bins[0] == 0) { /* Reset counters */
1883  memset((char *)&lp->pktStats, 0, sizeof(lp->pktStats));
1884  }
1885 }
1886 
1887 /*
1888 ** Removes the TD_IC flag from previous descriptor to improve TX performance.
1889 ** If the flag is changed on a descriptor that is being read by the hardware,
1890 ** I assume PCI transaction ordering will mean you are either successful or
1891 ** just miss asserting the change to the hardware. Anyway you're messing with
1892 ** a descriptor you don't own, but this shouldn't kill the chip provided
1893 ** the descriptor register is read only to the hardware.
1894 */
1895 static void
1896 load_packet(struct net_device *dev, char *buf, u32 flags, struct sk_buff *skb)
1897 {
1898  struct de4x5_private *lp = netdev_priv(dev);
1899  int entry = (lp->tx_new ? lp->tx_new-1 : lp->txRingSize-1);
1900  dma_addr_t buf_dma = dma_map_single(lp->gendev, buf, flags & TD_TBS1, DMA_TO_DEVICE);
1901 
1902  lp->tx_ring[lp->tx_new].buf = cpu_to_le32(buf_dma);
1903  lp->tx_ring[lp->tx_new].des1 &= cpu_to_le32(TD_TER);
1904  lp->tx_ring[lp->tx_new].des1 |= cpu_to_le32(flags);
1905  lp->tx_skb[lp->tx_new] = skb;
1906  lp->tx_ring[entry].des1 &= cpu_to_le32(~TD_IC);
1907  barrier();
1908 
1909  lp->tx_ring[lp->tx_new].status = cpu_to_le32(T_OWN);
1910  barrier();
1911 }
1912 
1913 /*
1914 ** Set or clear the multicast filter for this adaptor.
1915 */
1916 static void
1917 set_multicast_list(struct net_device *dev)
1918 {
1919  struct de4x5_private *lp = netdev_priv(dev);
1920  u_long iobase = dev->base_addr;
1921 
1922  /* First, double check that the adapter is open */
1923  if (lp->state == OPEN) {
1924  if (dev->flags & IFF_PROMISC) { /* set promiscuous mode */
1925  u32 omr;
1926  omr = inl(DE4X5_OMR);
1927  omr |= OMR_PR;
1928  outl(omr, DE4X5_OMR);
1929  } else {
1930  SetMulticastFilter(dev);
1931  load_packet(dev, lp->setup_frame, TD_IC | PERFECT_F | TD_SET |
1932  SETUP_FRAME_LEN, (struct sk_buff *)1);
1933 
1934  lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
1935  outl(POLL_DEMAND, DE4X5_TPD); /* Start the TX */
1936  dev->trans_start = jiffies; /* prevent tx timeout */
1937  }
1938  }
1939 }
1940 
1941 /*
1942 ** Calculate the hash code and update the logical address filter
1943 ** from a list of ethernet multicast addresses.
1944 ** Little endian crc one liner from Matt Thomas, DEC.
1945 */
1946 static void
1947 SetMulticastFilter(struct net_device *dev)
1948 {
1949  struct de4x5_private *lp = netdev_priv(dev);
1950  struct netdev_hw_addr *ha;
1951  u_long iobase = dev->base_addr;
1952  int i, bit, byte;
1953  u16 hashcode;
1954  u32 omr, crc;
1955  char *pa;
1956  unsigned char *addrs;
1957 
1958  omr = inl(DE4X5_OMR);
1959  omr &= ~(OMR_PR | OMR_PM);
1960  pa = build_setup_frame(dev, ALL); /* Build the basic frame */
1961 
1962  if ((dev->flags & IFF_ALLMULTI) || (netdev_mc_count(dev) > 14)) {
1963  omr |= OMR_PM; /* Pass all multicasts */
1964  } else if (lp->setup_f == HASH_PERF) { /* Hash Filtering */
1965  netdev_for_each_mc_addr(ha, dev) {
1966  crc = ether_crc_le(ETH_ALEN, ha->addr);
1967  hashcode = crc & HASH_BITS; /* hashcode is 9 LSb of CRC */
1968 
1969  byte = hashcode >> 3; /* bit[3-8] -> byte in filter */
1970  bit = 1 << (hashcode & 0x07);/* bit[0-2] -> bit in byte */
1971 
1972  byte <<= 1; /* calc offset into setup frame */
1973  if (byte & 0x02) {
1974  byte -= 1;
1975  }
1976  lp->setup_frame[byte] |= bit;
1977  }
1978  } else { /* Perfect filtering */
1979  netdev_for_each_mc_addr(ha, dev) {
1980  addrs = ha->addr;
1981  for (i=0; i<ETH_ALEN; i++) {
1982  *(pa + (i&1)) = *addrs++;
1983  if (i & 0x01) pa += 4;
1984  }
1985  }
1986  }
1987  outl(omr, DE4X5_OMR);
1988 }
1989 
1990 #ifdef CONFIG_EISA
1991 
1992 static u_char de4x5_irq[] = EISA_ALLOWED_IRQ_LIST;
1993 
1994 static int __init de4x5_eisa_probe (struct device *gendev)
1995 {
1996  struct eisa_device *edev;
1997  u_long iobase;
1998  u_char irq, regval;
1999  u_short vendor;
2000  u32 cfid;
2001  int status, device;
2002  struct net_device *dev;
2003  struct de4x5_private *lp;
2004 
2005  edev = to_eisa_device (gendev);
2006  iobase = edev->base_addr;
2007 
2008  if (!request_region (iobase, DE4X5_EISA_TOTAL_SIZE, "de4x5"))
2009  return -EBUSY;
2010 
2011  if (!request_region (iobase + DE4X5_EISA_IO_PORTS,
2012  DE4X5_EISA_TOTAL_SIZE, "de4x5")) {
2013  status = -EBUSY;
2014  goto release_reg_1;
2015  }
2016 
2017  if (!(dev = alloc_etherdev (sizeof (struct de4x5_private)))) {
2018  status = -ENOMEM;
2019  goto release_reg_2;
2020  }
2021  lp = netdev_priv(dev);
2022 
2023  cfid = (u32) inl(PCI_CFID);
2024  lp->cfrv = (u_short) inl(PCI_CFRV);
2025  device = (cfid >> 8) & 0x00ffff00;
2026  vendor = (u_short) cfid;
2027 
2028  /* Read the EISA Configuration Registers */
2029  regval = inb(EISA_REG0) & (ER0_INTL | ER0_INTT);
2030 #ifdef CONFIG_ALPHA
2031  /* Looks like the Jensen firmware (rev 2.2) doesn't really
2032  * care about the EISA configuration, and thus doesn't
2033  * configure the PLX bridge properly. Oh well... Simply mimic
2034  * the EISA config file to sort it out. */
2035 
2036  /* EISA REG1: Assert DecChip 21040 HW Reset */
2037  outb (ER1_IAM | 1, EISA_REG1);
2038  mdelay (1);
2039 
2040  /* EISA REG1: Deassert DecChip 21040 HW Reset */
2041  outb (ER1_IAM, EISA_REG1);
2042  mdelay (1);
2043 
2044  /* EISA REG3: R/W Burst Transfer Enable */
2046 
2047  /* 32_bit slave/master, Preempt Time=23 bclks, Unlatched Interrupt */
2048  outb (ER0_BSW | ER0_BMW | ER0_EPT | regval, EISA_REG0);
2049 #endif
2050  irq = de4x5_irq[(regval >> 1) & 0x03];
2051 
2052  if (is_DC2114x) {
2053  device = ((lp->cfrv & CFRV_RN) < DC2114x_BRK ? DC21142 : DC21143);
2054  }
2055  lp->chipset = device;
2056  lp->bus = EISA;
2057 
2058  /* Write the PCI Configuration Registers */
2060  outl(0x00006000, PCI_CFLT);
2061  outl(iobase, PCI_CBIO);
2062 
2063  DevicePresent(dev, EISA_APROM);
2064 
2065  dev->irq = irq;
2066 
2067  if (!(status = de4x5_hw_init (dev, iobase, gendev))) {
2068  return 0;
2069  }
2070 
2071  free_netdev (dev);
2072  release_reg_2:
2074  release_reg_1:
2076 
2077  return status;
2078 }
2079 
2080 static int __devexit de4x5_eisa_remove (struct device *device)
2081 {
2082  struct net_device *dev;
2083  u_long iobase;
2084 
2085  dev = dev_get_drvdata(device);
2086  iobase = dev->base_addr;
2087 
2088  unregister_netdev (dev);
2089  free_netdev (dev);
2092 
2093  return 0;
2094 }
2095 
2096 static struct eisa_device_id de4x5_eisa_ids[] = {
2097  { "DEC4250", 0 }, /* 0 is the board name index... */
2098  { "" }
2099 };
2100 MODULE_DEVICE_TABLE(eisa, de4x5_eisa_ids);
2101 
2102 static struct eisa_driver de4x5_eisa_driver = {
2103  .id_table = de4x5_eisa_ids,
2104  .driver = {
2105  .name = "de4x5",
2106  .probe = de4x5_eisa_probe,
2107  .remove = __devexit_p (de4x5_eisa_remove),
2108  }
2109 };
2110 MODULE_DEVICE_TABLE(eisa, de4x5_eisa_ids);
2111 #endif
2112 
2113 #ifdef CONFIG_PCI
2114 
2115 /*
2116 ** This function searches the current bus (which is >0) for a DECchip with an
2117 ** SROM, so that in multiport cards that have one SROM shared between multiple
2118 ** DECchips, we can find the base SROM irrespective of the BIOS scan direction.
2119 ** For single port cards this is a time waster...
2120 */
2121 static void __devinit
2122 srom_search(struct net_device *dev, struct pci_dev *pdev)
2123 {
2124  u_char pb;
2126  u_int irq = 0, device;
2127  u_long iobase = 0; /* Clear upper 32 bits in Alphas */
2128  int i, j;
2129  struct de4x5_private *lp = netdev_priv(dev);
2130  struct pci_dev *this_dev;
2131 
2132  list_for_each_entry(this_dev, &pdev->bus->devices, bus_list) {
2133  vendor = this_dev->vendor;
2134  device = this_dev->device << 8;
2135  if (!(is_DC21040 || is_DC21041 || is_DC21140 || is_DC2114x)) continue;
2136 
2137  /* Get the chip configuration revision register */
2138  pb = this_dev->bus->number;
2139 
2140  /* Set the device number information */
2141  lp->device = PCI_SLOT(this_dev->devfn);
2142  lp->bus_num = pb;
2143 
2144  /* Set the chipset information */
2145  if (is_DC2114x) {
2146  device = ((this_dev->revision & CFRV_RN) < DC2114x_BRK
2147  ? DC21142 : DC21143);
2148  }
2149  lp->chipset = device;
2150 
2151  /* Get the board I/O address (64 bits on sparc64) */
2152  iobase = pci_resource_start(this_dev, 0);
2153 
2154  /* Fetch the IRQ to be used */
2155  irq = this_dev->irq;
2156  if ((irq == 0) || (irq == 0xff) || ((int)irq == -1)) continue;
2157 
2158  /* Check if I/O accesses are enabled */
2159  pci_read_config_word(this_dev, PCI_COMMAND, &status);
2160  if (!(status & PCI_COMMAND_IO)) continue;
2161 
2162  /* Search for a valid SROM attached to this DECchip */
2163  DevicePresent(dev, DE4X5_APROM);
2164  for (j=0, i=0; i<ETH_ALEN; i++) {
2165  j += (u_char) *((u_char *)&lp->srom + SROM_HWADD + i);
2166  }
2167  if (j != 0 && j != 6 * 0xff) {
2168  last.chipset = device;
2169  last.bus = pb;
2170  last.irq = irq;
2171  for (i=0; i<ETH_ALEN; i++) {
2172  last.addr[i] = (u_char)*((u_char *)&lp->srom + SROM_HWADD + i);
2173  }
2174  return;
2175  }
2176  }
2177 }
2178 
2179 /*
2180 ** PCI bus I/O device probe
2181 ** NB: PCI I/O accesses and Bus Mastering are enabled by the PCI BIOS, not
2182 ** the driver. Some PCI BIOS's, pre V2.1, need the slot + features to be
2183 ** enabled by the user first in the set up utility. Hence we just check for
2184 ** enabled features and silently ignore the card if they're not.
2185 **
2186 ** STOP PRESS: Some BIOS's __require__ the driver to enable the bus mastering
2187 ** bit. Here, check for I/O accesses and then set BM. If you put the card in
2188 ** a non BM slot, you're on your own (and complain to the PC vendor that your
2189 ** PC doesn't conform to the PCI standard)!
2190 **
2191 ** This function is only compatible with the *latest* 2.1.x kernels. For 2.0.x
2192 ** kernels use the V0.535[n] drivers.
2193 */
2194 
2195 static int __devinit de4x5_pci_probe (struct pci_dev *pdev,
2196  const struct pci_device_id *ent)
2197 {
2198  u_char pb, pbus = 0, dev_num, dnum = 0, timer;
2200  u_int irq = 0, device;
2201  u_long iobase = 0; /* Clear upper 32 bits in Alphas */
2202  int error;
2203  struct net_device *dev;
2204  struct de4x5_private *lp;
2205 
2206  dev_num = PCI_SLOT(pdev->devfn);
2207  pb = pdev->bus->number;
2208 
2209  if (io) { /* probe a single PCI device */
2210  pbus = (u_short)(io >> 8);
2211  dnum = (u_short)(io & 0xff);
2212  if ((pbus != pb) || (dnum != dev_num))
2213  return -ENODEV;
2214  }
2215 
2216  vendor = pdev->vendor;
2217  device = pdev->device << 8;
2218  if (!(is_DC21040 || is_DC21041 || is_DC21140 || is_DC2114x))
2219  return -ENODEV;
2220 
2221  /* Ok, the device seems to be for us. */
2222  if ((error = pci_enable_device (pdev)))
2223  return error;
2224 
2225  if (!(dev = alloc_etherdev (sizeof (struct de4x5_private)))) {
2226  error = -ENOMEM;
2227  goto disable_dev;
2228  }
2229 
2230  lp = netdev_priv(dev);
2231  lp->bus = PCI;
2232  lp->bus_num = 0;
2233 
2234  /* Search for an SROM on this bus */
2235  if (lp->bus_num != pb) {
2236  lp->bus_num = pb;
2237  srom_search(dev, pdev);
2238  }
2239 
2240  /* Get the chip configuration revision register */
2241  lp->cfrv = pdev->revision;
2242 
2243  /* Set the device number information */
2244  lp->device = dev_num;
2245  lp->bus_num = pb;
2246 
2247  /* Set the chipset information */
2248  if (is_DC2114x) {
2249  device = ((lp->cfrv & CFRV_RN) < DC2114x_BRK ? DC21142 : DC21143);
2250  }
2251  lp->chipset = device;
2252 
2253  /* Get the board I/O address (64 bits on sparc64) */
2254  iobase = pci_resource_start(pdev, 0);
2255 
2256  /* Fetch the IRQ to be used */
2257  irq = pdev->irq;
2258  if ((irq == 0) || (irq == 0xff) || ((int)irq == -1)) {
2259  error = -ENODEV;
2260  goto free_dev;
2261  }
2262 
2263  /* Check if I/O accesses and Bus Mastering are enabled */
2264  pci_read_config_word(pdev, PCI_COMMAND, &status);
2265 #ifdef __powerpc__
2266  if (!(status & PCI_COMMAND_IO)) {
2267  status |= PCI_COMMAND_IO;
2268  pci_write_config_word(pdev, PCI_COMMAND, status);
2269  pci_read_config_word(pdev, PCI_COMMAND, &status);
2270  }
2271 #endif /* __powerpc__ */
2272  if (!(status & PCI_COMMAND_IO)) {
2273  error = -ENODEV;
2274  goto free_dev;
2275  }
2276 
2277  if (!(status & PCI_COMMAND_MASTER)) {
2278  status |= PCI_COMMAND_MASTER;
2279  pci_write_config_word(pdev, PCI_COMMAND, status);
2280  pci_read_config_word(pdev, PCI_COMMAND, &status);
2281  }
2282  if (!(status & PCI_COMMAND_MASTER)) {
2283  error = -ENODEV;
2284  goto free_dev;
2285  }
2286 
2287  /* Check the latency timer for values >= 0x60 */
2288  pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &timer);
2289  if (timer < 0x60) {
2290  pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 0x60);
2291  }
2292 
2293  DevicePresent(dev, DE4X5_APROM);
2294 
2295  if (!request_region (iobase, DE4X5_PCI_TOTAL_SIZE, "de4x5")) {
2296  error = -EBUSY;
2297  goto free_dev;
2298  }
2299 
2300  dev->irq = irq;
2301 
2302  if ((error = de4x5_hw_init(dev, iobase, &pdev->dev))) {
2303  goto release;
2304  }
2305 
2306  return 0;
2307 
2308  release:
2310  free_dev:
2311  free_netdev (dev);
2312  disable_dev:
2313  pci_disable_device (pdev);
2314  return error;
2315 }
2316 
2317 static void __devexit de4x5_pci_remove (struct pci_dev *pdev)
2318 {
2319  struct net_device *dev;
2320  u_long iobase;
2321 
2322  dev = dev_get_drvdata(&pdev->dev);
2323  iobase = dev->base_addr;
2324 
2325  unregister_netdev (dev);
2326  free_netdev (dev);
2328  pci_disable_device (pdev);
2329 }
2330 
2331 static struct pci_device_id de4x5_pci_tbl[] = {
2333  PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
2335  PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1 },
2337  PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2 },
2339  PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3 },
2340  { },
2341 };
2342 
2343 static struct pci_driver de4x5_pci_driver = {
2344  .name = "de4x5",
2345  .id_table = de4x5_pci_tbl,
2346  .probe = de4x5_pci_probe,
2347  .remove = __devexit_p (de4x5_pci_remove),
2348 };
2349 
2350 #endif
2351 
2352 /*
2353 ** Auto configure the media here rather than setting the port at compile
2354 ** time. This routine is called by de4x5_init() and when a loss of media is
2355 ** detected (excessive collisions, loss of carrier, no carrier or link fail
2356 ** [TP] or no recent receive activity) to check whether the user has been
2357 ** sneaky and changed the port on us.
2358 */
2359 static int
2360 autoconf_media(struct net_device *dev)
2361 {
2362  struct de4x5_private *lp = netdev_priv(dev);
2363  u_long iobase = dev->base_addr;
2364 
2365  disable_ast(dev);
2366 
2367  lp->c_media = AUTO; /* Bogus last media */
2368  inl(DE4X5_MFC); /* Zero the lost frames counter */
2369  lp->media = INIT;
2370  lp->tcount = 0;
2371 
2372  de4x5_ast(dev);
2373 
2374  return lp->media;
2375 }
2376 
2377 /*
2378 ** Autoconfigure the media when using the DC21040. AUI cannot be distinguished
2379 ** from BNC as the port has a jumper to set thick or thin wire. When set for
2380 ** BNC, the BNC port will indicate activity if it's not terminated correctly.
2381 ** The only way to test for that is to place a loopback packet onto the
2382 ** network and watch for errors. Since we're messing with the interrupt mask
2383 ** register, disable the board interrupts and do not allow any more packets to
2384 ** be queued to the hardware. Re-enable everything only when the media is
2385 ** found.
2386 ** I may have to "age out" locally queued packets so that the higher layer
2387 ** timeouts don't effectively duplicate packets on the network.
2388 */
2389 static int
2390 dc21040_autoconf(struct net_device *dev)
2391 {
2392  struct de4x5_private *lp = netdev_priv(dev);
2393  u_long iobase = dev->base_addr;
2394  int next_tick = DE4X5_AUTOSENSE_MS;
2395  s32 imr;
2396 
2397  switch (lp->media) {
2398  case INIT:
2399  DISABLE_IRQs;
2400  lp->tx_enable = false;
2401  lp->timeout = -1;
2402  de4x5_save_skbs(dev);
2403  if ((lp->autosense == AUTO) || (lp->autosense == TP)) {
2404  lp->media = TP;
2405  } else if ((lp->autosense == BNC) || (lp->autosense == AUI) || (lp->autosense == BNC_AUI)) {
2406  lp->media = BNC_AUI;
2407  } else if (lp->autosense == EXT_SIA) {
2408  lp->media = EXT_SIA;
2409  } else {
2410  lp->media = NC;
2411  }
2412  lp->local_state = 0;
2413  next_tick = dc21040_autoconf(dev);
2414  break;
2415 
2416  case TP:
2417  next_tick = dc21040_state(dev, 0x8f01, 0xffff, 0x0000, 3000, BNC_AUI,
2418  TP_SUSPECT, test_tp);
2419  break;
2420 
2421  case TP_SUSPECT:
2422  next_tick = de4x5_suspect_state(dev, 1000, TP, test_tp, dc21040_autoconf);
2423  break;
2424 
2425  case BNC:
2426  case AUI:
2427  case BNC_AUI:
2428  next_tick = dc21040_state(dev, 0x8f09, 0x0705, 0x0006, 3000, EXT_SIA,
2429  BNC_AUI_SUSPECT, ping_media);
2430  break;
2431 
2432  case BNC_AUI_SUSPECT:
2433  next_tick = de4x5_suspect_state(dev, 1000, BNC_AUI, ping_media, dc21040_autoconf);
2434  break;
2435 
2436  case EXT_SIA:
2437  next_tick = dc21040_state(dev, 0x3041, 0x0000, 0x0006, 3000,
2438  NC, EXT_SIA_SUSPECT, ping_media);
2439  break;
2440 
2441  case EXT_SIA_SUSPECT:
2442  next_tick = de4x5_suspect_state(dev, 1000, EXT_SIA, ping_media, dc21040_autoconf);
2443  break;
2444 
2445  case NC:
2446  /* default to TP for all */
2447  reset_init_sia(dev, 0x8f01, 0xffff, 0x0000);
2448  if (lp->media != lp->c_media) {
2449  de4x5_dbg_media(dev);
2450  lp->c_media = lp->media;
2451  }
2452  lp->media = INIT;
2453  lp->tx_enable = false;
2454  break;
2455  }
2456 
2457  return next_tick;
2458 }
2459 
2460 static int
2461 dc21040_state(struct net_device *dev, int csr13, int csr14, int csr15, int timeout,
2462  int next_state, int suspect_state,
2463  int (*fn)(struct net_device *, int))
2464 {
2465  struct de4x5_private *lp = netdev_priv(dev);
2466  int next_tick = DE4X5_AUTOSENSE_MS;
2467  int linkBad;
2468 
2469  switch (lp->local_state) {
2470  case 0:
2471  reset_init_sia(dev, csr13, csr14, csr15);
2472  lp->local_state++;
2473  next_tick = 500;
2474  break;
2475 
2476  case 1:
2477  if (!lp->tx_enable) {
2478  linkBad = fn(dev, timeout);
2479  if (linkBad < 0) {
2480  next_tick = linkBad & ~TIMER_CB;
2481  } else {
2482  if (linkBad && (lp->autosense == AUTO)) {
2483  lp->local_state = 0;
2484  lp->media = next_state;
2485  } else {
2486  de4x5_init_connection(dev);
2487  }
2488  }
2489  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
2490  lp->media = suspect_state;
2491  next_tick = 3000;
2492  }
2493  break;
2494  }
2495 
2496  return next_tick;
2497 }
2498 
2499 static int
2500 de4x5_suspect_state(struct net_device *dev, int timeout, int prev_state,
2501  int (*fn)(struct net_device *, int),
2502  int (*asfn)(struct net_device *))
2503 {
2504  struct de4x5_private *lp = netdev_priv(dev);
2505  int next_tick = DE4X5_AUTOSENSE_MS;
2506  int linkBad;
2507 
2508  switch (lp->local_state) {
2509  case 1:
2510  if (lp->linkOK) {
2511  lp->media = prev_state;
2512  } else {
2513  lp->local_state++;
2514  next_tick = asfn(dev);
2515  }
2516  break;
2517 
2518  case 2:
2519  linkBad = fn(dev, timeout);
2520  if (linkBad < 0) {
2521  next_tick = linkBad & ~TIMER_CB;
2522  } else if (!linkBad) {
2523  lp->local_state--;
2524  lp->media = prev_state;
2525  } else {
2526  lp->media = INIT;
2527  lp->tcount++;
2528  }
2529  }
2530 
2531  return next_tick;
2532 }
2533 
2534 /*
2535 ** Autoconfigure the media when using the DC21041. AUI needs to be tested
2536 ** before BNC, because the BNC port will indicate activity if it's not
2537 ** terminated correctly. The only way to test for that is to place a loopback
2538 ** packet onto the network and watch for errors. Since we're messing with
2539 ** the interrupt mask register, disable the board interrupts and do not allow
2540 ** any more packets to be queued to the hardware. Re-enable everything only
2541 ** when the media is found.
2542 */
2543 static int
2544 dc21041_autoconf(struct net_device *dev)
2545 {
2546  struct de4x5_private *lp = netdev_priv(dev);
2547  u_long iobase = dev->base_addr;
2548  s32 sts, irqs, irq_mask, imr, omr;
2549  int next_tick = DE4X5_AUTOSENSE_MS;
2550 
2551  switch (lp->media) {
2552  case INIT:
2553  DISABLE_IRQs;
2554  lp->tx_enable = false;
2555  lp->timeout = -1;
2556  de4x5_save_skbs(dev); /* Save non transmitted skb's */
2557  if ((lp->autosense == AUTO) || (lp->autosense == TP_NW)) {
2558  lp->media = TP; /* On chip auto negotiation is broken */
2559  } else if (lp->autosense == TP) {
2560  lp->media = TP;
2561  } else if (lp->autosense == BNC) {
2562  lp->media = BNC;
2563  } else if (lp->autosense == AUI) {
2564  lp->media = AUI;
2565  } else {
2566  lp->media = NC;
2567  }
2568  lp->local_state = 0;
2569  next_tick = dc21041_autoconf(dev);
2570  break;
2571 
2572  case TP_NW:
2573  if (lp->timeout < 0) {
2574  omr = inl(DE4X5_OMR);/* Set up full duplex for the autonegotiate */
2575  outl(omr | OMR_FDX, DE4X5_OMR);
2576  }
2577  irqs = STS_LNF | STS_LNP;
2578  irq_mask = IMR_LFM | IMR_LPM;
2579  sts = test_media(dev, irqs, irq_mask, 0xef01, 0xffff, 0x0008, 2400);
2580  if (sts < 0) {
2581  next_tick = sts & ~TIMER_CB;
2582  } else {
2583  if (sts & STS_LNP) {
2584  lp->media = ANS;
2585  } else {
2586  lp->media = AUI;
2587  }
2588  next_tick = dc21041_autoconf(dev);
2589  }
2590  break;
2591 
2592  case ANS:
2593  if (!lp->tx_enable) {
2594  irqs = STS_LNP;
2595  irq_mask = IMR_LPM;
2596  sts = test_ans(dev, irqs, irq_mask, 3000);
2597  if (sts < 0) {
2598  next_tick = sts & ~TIMER_CB;
2599  } else {
2600  if (!(sts & STS_LNP) && (lp->autosense == AUTO)) {
2601  lp->media = TP;
2602  next_tick = dc21041_autoconf(dev);
2603  } else {
2604  lp->local_state = 1;
2605  de4x5_init_connection(dev);
2606  }
2607  }
2608  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
2609  lp->media = ANS_SUSPECT;
2610  next_tick = 3000;
2611  }
2612  break;
2613 
2614  case ANS_SUSPECT:
2615  next_tick = de4x5_suspect_state(dev, 1000, ANS, test_tp, dc21041_autoconf);
2616  break;
2617 
2618  case TP:
2619  if (!lp->tx_enable) {
2620  if (lp->timeout < 0) {
2621  omr = inl(DE4X5_OMR); /* Set up half duplex for TP */
2622  outl(omr & ~OMR_FDX, DE4X5_OMR);
2623  }
2624  irqs = STS_LNF | STS_LNP;
2625  irq_mask = IMR_LFM | IMR_LPM;
2626  sts = test_media(dev,irqs, irq_mask, 0xef01, 0xff3f, 0x0008, 2400);
2627  if (sts < 0) {
2628  next_tick = sts & ~TIMER_CB;
2629  } else {
2630  if (!(sts & STS_LNP) && (lp->autosense == AUTO)) {
2631  if (inl(DE4X5_SISR) & SISR_NRA) {
2632  lp->media = AUI; /* Non selected port activity */
2633  } else {
2634  lp->media = BNC;
2635  }
2636  next_tick = dc21041_autoconf(dev);
2637  } else {
2638  lp->local_state = 1;
2639  de4x5_init_connection(dev);
2640  }
2641  }
2642  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
2643  lp->media = TP_SUSPECT;
2644  next_tick = 3000;
2645  }
2646  break;
2647 
2648  case TP_SUSPECT:
2649  next_tick = de4x5_suspect_state(dev, 1000, TP, test_tp, dc21041_autoconf);
2650  break;
2651 
2652  case AUI:
2653  if (!lp->tx_enable) {
2654  if (lp->timeout < 0) {
2655  omr = inl(DE4X5_OMR); /* Set up half duplex for AUI */
2656  outl(omr & ~OMR_FDX, DE4X5_OMR);
2657  }
2658  irqs = 0;
2659  irq_mask = 0;
2660  sts = test_media(dev,irqs, irq_mask, 0xef09, 0xf73d, 0x000e, 1000);
2661  if (sts < 0) {
2662  next_tick = sts & ~TIMER_CB;
2663  } else {
2664  if (!(inl(DE4X5_SISR) & SISR_SRA) && (lp->autosense == AUTO)) {
2665  lp->media = BNC;
2666  next_tick = dc21041_autoconf(dev);
2667  } else {
2668  lp->local_state = 1;
2669  de4x5_init_connection(dev);
2670  }
2671  }
2672  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
2673  lp->media = AUI_SUSPECT;
2674  next_tick = 3000;
2675  }
2676  break;
2677 
2678  case AUI_SUSPECT:
2679  next_tick = de4x5_suspect_state(dev, 1000, AUI, ping_media, dc21041_autoconf);
2680  break;
2681 
2682  case BNC:
2683  switch (lp->local_state) {
2684  case 0:
2685  if (lp->timeout < 0) {
2686  omr = inl(DE4X5_OMR); /* Set up half duplex for BNC */
2687  outl(omr & ~OMR_FDX, DE4X5_OMR);
2688  }
2689  irqs = 0;
2690  irq_mask = 0;
2691  sts = test_media(dev,irqs, irq_mask, 0xef09, 0xf73d, 0x0006, 1000);
2692  if (sts < 0) {
2693  next_tick = sts & ~TIMER_CB;
2694  } else {
2695  lp->local_state++; /* Ensure media connected */
2696  next_tick = dc21041_autoconf(dev);
2697  }
2698  break;
2699 
2700  case 1:
2701  if (!lp->tx_enable) {
2702  if ((sts = ping_media(dev, 3000)) < 0) {
2703  next_tick = sts & ~TIMER_CB;
2704  } else {
2705  if (sts) {
2706  lp->local_state = 0;
2707  lp->media = NC;
2708  } else {
2709  de4x5_init_connection(dev);
2710  }
2711  }
2712  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
2713  lp->media = BNC_SUSPECT;
2714  next_tick = 3000;
2715  }
2716  break;
2717  }
2718  break;
2719 
2720  case BNC_SUSPECT:
2721  next_tick = de4x5_suspect_state(dev, 1000, BNC, ping_media, dc21041_autoconf);
2722  break;
2723 
2724  case NC:
2725  omr = inl(DE4X5_OMR); /* Set up full duplex for the autonegotiate */
2726  outl(omr | OMR_FDX, DE4X5_OMR);
2727  reset_init_sia(dev, 0xef01, 0xffff, 0x0008);/* Initialise the SIA */
2728  if (lp->media != lp->c_media) {
2729  de4x5_dbg_media(dev);
2730  lp->c_media = lp->media;
2731  }
2732  lp->media = INIT;
2733  lp->tx_enable = false;
2734  break;
2735  }
2736 
2737  return next_tick;
2738 }
2739 
2740 /*
2741 ** Some autonegotiation chips are broken in that they do not return the
2742 ** acknowledge bit (anlpa & MII_ANLPA_ACK) in the link partner advertisement
2743 ** register, except at the first power up negotiation.
2744 */
2745 static int
2746 dc21140m_autoconf(struct net_device *dev)
2747 {
2748  struct de4x5_private *lp = netdev_priv(dev);
2749  int ana, anlpa, cap, cr, slnk, sr;
2750  int next_tick = DE4X5_AUTOSENSE_MS;
2751  u_long imr, omr, iobase = dev->base_addr;
2752 
2753  switch(lp->media) {
2754  case INIT:
2755  if (lp->timeout < 0) {
2756  DISABLE_IRQs;
2757  lp->tx_enable = false;
2758  lp->linkOK = 0;
2759  de4x5_save_skbs(dev); /* Save non transmitted skb's */
2760  }
2761  if ((next_tick = de4x5_reset_phy(dev)) < 0) {
2762  next_tick &= ~TIMER_CB;
2763  } else {
2764  if (lp->useSROM) {
2765  if (srom_map_media(dev) < 0) {
2766  lp->tcount++;
2767  return next_tick;
2768  }
2769  srom_exec(dev, lp->phy[lp->active].gep);
2770  if (lp->infoblock_media == ANS) {
2771  ana = lp->phy[lp->active].ana | MII_ANA_CSMA;
2772  mii_wr(ana, MII_ANA, lp->phy[lp->active].addr, DE4X5_MII);
2773  }
2774  } else {
2775  lp->tmp = MII_SR_ASSC; /* Fake out the MII speed set */
2776  SET_10Mb;
2777  if (lp->autosense == _100Mb) {
2778  lp->media = _100Mb;
2779  } else if (lp->autosense == _10Mb) {
2780  lp->media = _10Mb;
2781  } else if ((lp->autosense == AUTO) &&
2782  ((sr=is_anc_capable(dev)) & MII_SR_ANC)) {
2783  ana = (((sr >> 6) & MII_ANA_TAF) | MII_ANA_CSMA);
2784  ana &= (lp->fdx ? ~0 : ~MII_ANA_FDAM);
2785  mii_wr(ana, MII_ANA, lp->phy[lp->active].addr, DE4X5_MII);
2786  lp->media = ANS;
2787  } else if (lp->autosense == AUTO) {
2788  lp->media = SPD_DET;
2789  } else if (is_spd_100(dev) && is_100_up(dev)) {
2790  lp->media = _100Mb;
2791  } else {
2792  lp->media = NC;
2793  }
2794  }
2795  lp->local_state = 0;
2796  next_tick = dc21140m_autoconf(dev);
2797  }
2798  break;
2799 
2800  case ANS:
2801  switch (lp->local_state) {
2802  case 0:
2803  if (lp->timeout < 0) {
2804  mii_wr(MII_CR_ASSE | MII_CR_RAN, MII_CR, lp->phy[lp->active].addr, DE4X5_MII);
2805  }
2806  cr = test_mii_reg(dev, MII_CR, MII_CR_RAN, false, 500);
2807  if (cr < 0) {
2808  next_tick = cr & ~TIMER_CB;
2809  } else {
2810  if (cr) {
2811  lp->local_state = 0;
2812  lp->media = SPD_DET;
2813  } else {
2814  lp->local_state++;
2815  }
2816  next_tick = dc21140m_autoconf(dev);
2817  }
2818  break;
2819 
2820  case 1:
2821  if ((sr=test_mii_reg(dev, MII_SR, MII_SR_ASSC, true, 2000)) < 0) {
2822  next_tick = sr & ~TIMER_CB;
2823  } else {
2824  lp->media = SPD_DET;
2825  lp->local_state = 0;
2826  if (sr) { /* Success! */
2827  lp->tmp = MII_SR_ASSC;
2828  anlpa = mii_rd(MII_ANLPA, lp->phy[lp->active].addr, DE4X5_MII);
2829  ana = mii_rd(MII_ANA, lp->phy[lp->active].addr, DE4X5_MII);
2830  if (!(anlpa & MII_ANLPA_RF) &&
2831  (cap = anlpa & MII_ANLPA_TAF & ana)) {
2832  if (cap & MII_ANA_100M) {
2833  lp->fdx = (ana & anlpa & MII_ANA_FDAM & MII_ANA_100M) != 0;
2834  lp->media = _100Mb;
2835  } else if (cap & MII_ANA_10M) {
2836  lp->fdx = (ana & anlpa & MII_ANA_FDAM & MII_ANA_10M) != 0;
2837 
2838  lp->media = _10Mb;
2839  }
2840  }
2841  } /* Auto Negotiation failed to finish */
2842  next_tick = dc21140m_autoconf(dev);
2843  } /* Auto Negotiation failed to start */
2844  break;
2845  }
2846  break;
2847 
2848  case SPD_DET: /* Choose 10Mb/s or 100Mb/s */
2849  if (lp->timeout < 0) {
2850  lp->tmp = (lp->phy[lp->active].id ? MII_SR_LKS :
2851  (~gep_rd(dev) & GEP_LNP));
2853  }
2854  if ((slnk = test_for_100Mb(dev, 6500)) < 0) {
2855  next_tick = slnk & ~TIMER_CB;
2856  } else {
2857  if (is_spd_100(dev) && is_100_up(dev)) {
2858  lp->media = _100Mb;
2859  } else if ((!is_spd_100(dev) && (is_10_up(dev) & lp->tmp))) {
2860  lp->media = _10Mb;
2861  } else {
2862  lp->media = NC;
2863  }
2864  next_tick = dc21140m_autoconf(dev);
2865  }
2866  break;
2867 
2868  case _100Mb: /* Set 100Mb/s */
2869  next_tick = 3000;
2870  if (!lp->tx_enable) {
2871  SET_100Mb;
2872  de4x5_init_connection(dev);
2873  } else {
2874  if (!lp->linkOK && (lp->autosense == AUTO)) {
2875  if (!is_100_up(dev) || (!lp->useSROM && !is_spd_100(dev))) {
2876  lp->media = INIT;
2877  lp->tcount++;
2878  next_tick = DE4X5_AUTOSENSE_MS;
2879  }
2880  }
2881  }
2882  break;
2883 
2884  case BNC:
2885  case AUI:
2886  case _10Mb: /* Set 10Mb/s */
2887  next_tick = 3000;
2888  if (!lp->tx_enable) {
2889  SET_10Mb;
2890  de4x5_init_connection(dev);
2891  } else {
2892  if (!lp->linkOK && (lp->autosense == AUTO)) {
2893  if (!is_10_up(dev) || (!lp->useSROM && is_spd_100(dev))) {
2894  lp->media = INIT;
2895  lp->tcount++;
2896  next_tick = DE4X5_AUTOSENSE_MS;
2897  }
2898  }
2899  }
2900  break;
2901 
2902  case NC:
2903  if (lp->media != lp->c_media) {
2904  de4x5_dbg_media(dev);
2905  lp->c_media = lp->media;
2906  }
2907  lp->media = INIT;
2908  lp->tx_enable = false;
2909  break;
2910  }
2911 
2912  return next_tick;
2913 }
2914 
2915 /*
2916 ** This routine may be merged into dc21140m_autoconf() sometime as I'm
2917 ** changing how I figure out the media - but trying to keep it backwards
2918 ** compatible with the de500-xa and de500-aa.
2919 ** Whether it's BNC, AUI, SYM or MII is sorted out in the infoblock
2920 ** functions and set during de4x5_mac_port() and/or de4x5_reset_phy().
2921 ** This routine just has to figure out whether 10Mb/s or 100Mb/s is
2922 ** active.
2923 ** When autonegotiation is working, the ANS part searches the SROM for
2924 ** the highest common speed (TP) link that both can run and if that can
2925 ** be full duplex. That infoblock is executed and then the link speed set.
2926 **
2927 ** Only _10Mb and _100Mb are tested here.
2928 */
2929 static int
2930 dc2114x_autoconf(struct net_device *dev)
2931 {
2932  struct de4x5_private *lp = netdev_priv(dev);
2933  u_long iobase = dev->base_addr;
2934  s32 cr, anlpa, ana, cap, irqs, irq_mask, imr, omr, slnk, sr, sts;
2935  int next_tick = DE4X5_AUTOSENSE_MS;
2936 
2937  switch (lp->media) {
2938  case INIT:
2939  if (lp->timeout < 0) {
2940  DISABLE_IRQs;
2941  lp->tx_enable = false;
2942  lp->linkOK = 0;
2943  lp->timeout = -1;
2944  de4x5_save_skbs(dev); /* Save non transmitted skb's */
2945  if (lp->params.autosense & ~AUTO) {
2946  srom_map_media(dev); /* Fixed media requested */
2947  if (lp->media != lp->params.autosense) {
2948  lp->tcount++;
2949  lp->media = INIT;
2950  return next_tick;
2951  }
2952  lp->media = INIT;
2953  }
2954  }
2955  if ((next_tick = de4x5_reset_phy(dev)) < 0) {
2956  next_tick &= ~TIMER_CB;
2957  } else {
2958  if (lp->autosense == _100Mb) {
2959  lp->media = _100Mb;
2960  } else if (lp->autosense == _10Mb) {
2961  lp->media = _10Mb;
2962  } else if (lp->autosense == TP) {
2963  lp->media = TP;
2964  } else if (lp->autosense == BNC) {
2965  lp->media = BNC;
2966  } else if (lp->autosense == AUI) {
2967  lp->media = AUI;
2968  } else {
2969  lp->media = SPD_DET;
2970  if ((lp->infoblock_media == ANS) &&
2971  ((sr=is_anc_capable(dev)) & MII_SR_ANC)) {
2972  ana = (((sr >> 6) & MII_ANA_TAF) | MII_ANA_CSMA);
2973  ana &= (lp->fdx ? ~0 : ~MII_ANA_FDAM);
2974  mii_wr(ana, MII_ANA, lp->phy[lp->active].addr, DE4X5_MII);
2975  lp->media = ANS;
2976  }
2977  }
2978  lp->local_state = 0;
2979  next_tick = dc2114x_autoconf(dev);
2980  }
2981  break;
2982 
2983  case ANS:
2984  switch (lp->local_state) {
2985  case 0:
2986  if (lp->timeout < 0) {
2987  mii_wr(MII_CR_ASSE | MII_CR_RAN, MII_CR, lp->phy[lp->active].addr, DE4X5_MII);
2988  }
2989  cr = test_mii_reg(dev, MII_CR, MII_CR_RAN, false, 500);
2990  if (cr < 0) {
2991  next_tick = cr & ~TIMER_CB;
2992  } else {
2993  if (cr) {
2994  lp->local_state = 0;
2995  lp->media = SPD_DET;
2996  } else {
2997  lp->local_state++;
2998  }
2999  next_tick = dc2114x_autoconf(dev);
3000  }
3001  break;
3002 
3003  case 1:
3004  sr = test_mii_reg(dev, MII_SR, MII_SR_ASSC, true, 2000);
3005  if (sr < 0) {
3006  next_tick = sr & ~TIMER_CB;
3007  } else {
3008  lp->media = SPD_DET;
3009  lp->local_state = 0;
3010  if (sr) { /* Success! */
3011  lp->tmp = MII_SR_ASSC;
3012  anlpa = mii_rd(MII_ANLPA, lp->phy[lp->active].addr, DE4X5_MII);
3013  ana = mii_rd(MII_ANA, lp->phy[lp->active].addr, DE4X5_MII);
3014  if (!(anlpa & MII_ANLPA_RF) &&
3015  (cap = anlpa & MII_ANLPA_TAF & ana)) {
3016  if (cap & MII_ANA_100M) {
3017  lp->fdx = (ana & anlpa & MII_ANA_FDAM & MII_ANA_100M) != 0;
3018  lp->media = _100Mb;
3019  } else if (cap & MII_ANA_10M) {
3020  lp->fdx = (ana & anlpa & MII_ANA_FDAM & MII_ANA_10M) != 0;
3021  lp->media = _10Mb;
3022  }
3023  }
3024  } /* Auto Negotiation failed to finish */
3025  next_tick = dc2114x_autoconf(dev);
3026  } /* Auto Negotiation failed to start */
3027  break;
3028  }
3029  break;
3030 
3031  case AUI:
3032  if (!lp->tx_enable) {
3033  if (lp->timeout < 0) {
3034  omr = inl(DE4X5_OMR); /* Set up half duplex for AUI */
3035  outl(omr & ~OMR_FDX, DE4X5_OMR);
3036  }
3037  irqs = 0;
3038  irq_mask = 0;
3039  sts = test_media(dev,irqs, irq_mask, 0, 0, 0, 1000);
3040  if (sts < 0) {
3041  next_tick = sts & ~TIMER_CB;
3042  } else {
3043  if (!(inl(DE4X5_SISR) & SISR_SRA) && (lp->autosense == AUTO)) {
3044  lp->media = BNC;
3045  next_tick = dc2114x_autoconf(dev);
3046  } else {
3047  lp->local_state = 1;
3048  de4x5_init_connection(dev);
3049  }
3050  }
3051  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
3052  lp->media = AUI_SUSPECT;
3053  next_tick = 3000;
3054  }
3055  break;
3056 
3057  case AUI_SUSPECT:
3058  next_tick = de4x5_suspect_state(dev, 1000, AUI, ping_media, dc2114x_autoconf);
3059  break;
3060 
3061  case BNC:
3062  switch (lp->local_state) {
3063  case 0:
3064  if (lp->timeout < 0) {
3065  omr = inl(DE4X5_OMR); /* Set up half duplex for BNC */
3066  outl(omr & ~OMR_FDX, DE4X5_OMR);
3067  }
3068  irqs = 0;
3069  irq_mask = 0;
3070  sts = test_media(dev,irqs, irq_mask, 0, 0, 0, 1000);
3071  if (sts < 0) {
3072  next_tick = sts & ~TIMER_CB;
3073  } else {
3074  lp->local_state++; /* Ensure media connected */
3075  next_tick = dc2114x_autoconf(dev);
3076  }
3077  break;
3078 
3079  case 1:
3080  if (!lp->tx_enable) {
3081  if ((sts = ping_media(dev, 3000)) < 0) {
3082  next_tick = sts & ~TIMER_CB;
3083  } else {
3084  if (sts) {
3085  lp->local_state = 0;
3086  lp->tcount++;
3087  lp->media = INIT;
3088  } else {
3089  de4x5_init_connection(dev);
3090  }
3091  }
3092  } else if (!lp->linkOK && (lp->autosense == AUTO)) {
3093  lp->media = BNC_SUSPECT;
3094  next_tick = 3000;
3095  }
3096  break;
3097  }
3098  break;
3099 
3100  case BNC_SUSPECT:
3101  next_tick = de4x5_suspect_state(dev, 1000, BNC, ping_media, dc2114x_autoconf);
3102  break;
3103 
3104  case SPD_DET: /* Choose 10Mb/s or 100Mb/s */
3105  if (srom_map_media(dev) < 0) {
3106  lp->tcount++;
3107  lp->media = INIT;
3108  return next_tick;
3109  }
3110  if (lp->media == _100Mb) {
3111  if ((slnk = test_for_100Mb(dev, 6500)) < 0) {
3112  lp->media = SPD_DET;
3113  return slnk & ~TIMER_CB;
3114  }
3115  } else {
3116  if (wait_for_link(dev) < 0) {
3117  lp->media = SPD_DET;
3118  return PDET_LINK_WAIT;
3119  }
3120  }
3121  if (lp->media == ANS) { /* Do MII parallel detection */
3122  if (is_spd_100(dev)) {
3123  lp->media = _100Mb;
3124  } else {
3125  lp->media = _10Mb;
3126  }
3127  next_tick = dc2114x_autoconf(dev);
3128  } else if (((lp->media == _100Mb) && is_100_up(dev)) ||
3129  (((lp->media == _10Mb) || (lp->media == TP) ||
3130  (lp->media == BNC) || (lp->media == AUI)) &&
3131  is_10_up(dev))) {
3132  next_tick = dc2114x_autoconf(dev);
3133  } else {
3134  lp->tcount++;
3135  lp->media = INIT;
3136  }
3137  break;
3138 
3139  case _10Mb:
3140  next_tick = 3000;
3141  if (!lp->tx_enable) {
3142  SET_10Mb;
3143  de4x5_init_connection(dev);
3144  } else {
3145  if (!lp->linkOK && (lp->autosense == AUTO)) {
3146  if (!is_10_up(dev) || (!lp->useSROM && is_spd_100(dev))) {
3147  lp->media = INIT;
3148  lp->tcount++;
3149  next_tick = DE4X5_AUTOSENSE_MS;
3150  }
3151  }
3152  }
3153  break;
3154 
3155  case _100Mb:
3156  next_tick = 3000;
3157  if (!lp->tx_enable) {
3158  SET_100Mb;
3159  de4x5_init_connection(dev);
3160  } else {
3161  if (!lp->linkOK && (lp->autosense == AUTO)) {
3162  if (!is_100_up(dev) || (!lp->useSROM && !is_spd_100(dev))) {
3163  lp->media = INIT;
3164  lp->tcount++;
3165  next_tick = DE4X5_AUTOSENSE_MS;
3166  }
3167  }
3168  }
3169  break;
3170 
3171  default:
3172  lp->tcount++;
3173 printk("Huh?: media:%02x\n", lp->media);
3174  lp->media = INIT;
3175  break;
3176  }
3177 
3178  return next_tick;
3179 }
3180 
3181 static int
3182 srom_autoconf(struct net_device *dev)
3183 {
3184  struct de4x5_private *lp = netdev_priv(dev);
3185 
3186  return lp->infoleaf_fn(dev);
3187 }
3188 
3189 /*
3190 ** This mapping keeps the original media codes and FDX flag unchanged.
3191 ** While it isn't strictly necessary, it helps me for the moment...
3192 ** The early return avoids a media state / SROM media space clash.
3193 */
3194 static int
3195 srom_map_media(struct net_device *dev)
3196 {
3197  struct de4x5_private *lp = netdev_priv(dev);
3198 
3199  lp->fdx = false;
3200  if (lp->infoblock_media == lp->media)
3201  return 0;
3202 
3203  switch(lp->infoblock_media) {
3204  case SROM_10BASETF:
3205  if (!lp->params.fdx) return -1;
3206  lp->fdx = true;
3207  case SROM_10BASET:
3208  if (lp->params.fdx && !lp->fdx) return -1;
3209  if ((lp->chipset == DC21140) || ((lp->chipset & ~0x00ff) == DC2114x)) {
3210  lp->media = _10Mb;
3211  } else {
3212  lp->media = TP;
3213  }
3214  break;
3215 
3216  case SROM_10BASE2:
3217  lp->media = BNC;
3218  break;
3219 
3220  case SROM_10BASE5:
3221  lp->media = AUI;
3222  break;
3223 
3224  case SROM_100BASETF:
3225  if (!lp->params.fdx) return -1;
3226  lp->fdx = true;
3227  case SROM_100BASET:
3228  if (lp->params.fdx && !lp->fdx) return -1;
3229  lp->media = _100Mb;
3230  break;
3231 
3232  case SROM_100BASET4:
3233  lp->media = _100Mb;
3234  break;
3235 
3236  case SROM_100BASEFF:
3237  if (!lp->params.fdx) return -1;
3238  lp->fdx = true;
3239  case SROM_100BASEF:
3240  if (lp->params.fdx && !lp->fdx) return -1;
3241  lp->media = _100Mb;
3242  break;
3243 
3244  case ANS:
3245  lp->media = ANS;
3246  lp->fdx = lp->params.fdx;
3247  break;
3248 
3249  default:
3250  printk("%s: Bad media code [%d] detected in SROM!\n", dev->name,
3251  lp->infoblock_media);
3252  return -1;
3253  break;
3254  }
3255 
3256  return 0;
3257 }
3258 
3259 static void
3260 de4x5_init_connection(struct net_device *dev)
3261 {
3262  struct de4x5_private *lp = netdev_priv(dev);
3263  u_long iobase = dev->base_addr;
3264  u_long flags = 0;
3265 
3266  if (lp->media != lp->c_media) {
3267  de4x5_dbg_media(dev);
3268  lp->c_media = lp->media; /* Stop scrolling media messages */
3269  }
3270 
3271  spin_lock_irqsave(&lp->lock, flags);
3272  de4x5_rst_desc_ring(dev);
3273  de4x5_setup_intr(dev);
3274  lp->tx_enable = true;
3275  spin_unlock_irqrestore(&lp->lock, flags);
3277 
3278  netif_wake_queue(dev);
3279 }
3280 
3281 /*
3282 ** General PHY reset function. Some MII devices don't reset correctly
3283 ** since their MII address pins can float at voltages that are dependent
3284 ** on the signal pin use. Do a double reset to ensure a reset.
3285 */
3286 static int
3287 de4x5_reset_phy(struct net_device *dev)
3288 {
3289  struct de4x5_private *lp = netdev_priv(dev);
3290  u_long iobase = dev->base_addr;
3291  int next_tick = 0;
3292 
3293  if ((lp->useSROM) || (lp->phy[lp->active].id)) {
3294  if (lp->timeout < 0) {
3295  if (lp->useSROM) {
3296  if (lp->phy[lp->active].rst) {
3297  srom_exec(dev, lp->phy[lp->active].rst);
3298  srom_exec(dev, lp->phy[lp->active].rst);
3299  } else if (lp->rst) { /* Type 5 infoblock reset */
3300  srom_exec(dev, lp->rst);
3301  srom_exec(dev, lp->rst);
3302  }
3303  } else {
3305  }
3306  if (lp->useMII) {
3307  mii_wr(MII_CR_RST, MII_CR, lp->phy[lp->active].addr, DE4X5_MII);
3308  }
3309  }
3310  if (lp->useMII) {
3311  next_tick = test_mii_reg(dev, MII_CR, MII_CR_RST, false, 500);
3312  }
3313  } else if (lp->chipset == DC21140) {
3315  }
3316 
3317  return next_tick;
3318 }
3319 
3320 static int
3321 test_media(struct net_device *dev, s32 irqs, s32 irq_mask, s32 csr13, s32 csr14, s32 csr15, s32 msec)
3322 {
3323  struct de4x5_private *lp = netdev_priv(dev);
3324  u_long iobase = dev->base_addr;
3325  s32 sts, csr12;
3326 
3327  if (lp->timeout < 0) {
3328  lp->timeout = msec/100;
3329  if (!lp->useSROM) { /* Already done if by SROM, else dc2104[01] */
3330  reset_init_sia(dev, csr13, csr14, csr15);
3331  }
3332 
3333  /* set up the interrupt mask */
3334  outl(irq_mask, DE4X5_IMR);
3335 
3336  /* clear all pending interrupts */
3337  sts = inl(DE4X5_STS);
3338  outl(sts, DE4X5_STS);
3339 
3340  /* clear csr12 NRA and SRA bits */
3341  if ((lp->chipset == DC21041) || lp->useSROM) {
3342  csr12 = inl(DE4X5_SISR);
3343  outl(csr12, DE4X5_SISR);
3344  }
3345  }
3346 
3347  sts = inl(DE4X5_STS) & ~TIMER_CB;
3348 
3349  if (!(sts & irqs) && --lp->timeout) {
3350  sts = 100 | TIMER_CB;
3351  } else {
3352  lp->timeout = -1;
3353  }
3354 
3355  return sts;
3356 }
3357 
3358 static int
3359 test_tp(struct net_device *dev, s32 msec)
3360 {
3361  struct de4x5_private *lp = netdev_priv(dev);
3362  u_long iobase = dev->base_addr;
3363  int sisr;
3364 
3365  if (lp->timeout < 0) {
3366  lp->timeout = msec/100;
3367  }
3368 
3369  sisr = (inl(DE4X5_SISR) & ~TIMER_CB) & (SISR_LKF | SISR_NCR);
3370 
3371  if (sisr && --lp->timeout) {
3372  sisr = 100 | TIMER_CB;
3373  } else {
3374  lp->timeout = -1;
3375  }
3376 
3377  return sisr;
3378 }
3379 
3380 /*
3381 ** Samples the 100Mb Link State Signal. The sample interval is important
3382 ** because too fast a rate can give erroneous results and confuse the
3383 ** speed sense algorithm.
3384 */
3385 #define SAMPLE_INTERVAL 500 /* ms */
3386 #define SAMPLE_DELAY 2000 /* ms */
3387 static int
3388 test_for_100Mb(struct net_device *dev, int msec)
3389 {
3390  struct de4x5_private *lp = netdev_priv(dev);
3391  int gep = 0, ret = ((lp->chipset & ~0x00ff)==DC2114x? -1 :GEP_SLNK);
3392 
3393  if (lp->timeout < 0) {
3394  if ((msec/SAMPLE_INTERVAL) <= 0) return 0;
3395  if (msec > SAMPLE_DELAY) {
3396  lp->timeout = (msec - SAMPLE_DELAY)/SAMPLE_INTERVAL;
3397  gep = SAMPLE_DELAY | TIMER_CB;
3398  return gep;
3399  } else {
3400  lp->timeout = msec/SAMPLE_INTERVAL;
3401  }
3402  }
3403 
3404  if (lp->phy[lp->active].id || lp->useSROM) {
3405  gep = is_100_up(dev) | is_spd_100(dev);
3406  } else {
3407  gep = (~gep_rd(dev) & (GEP_SLNK | GEP_LNP));
3408  }
3409  if (!(gep & ret) && --lp->timeout) {
3410  gep = SAMPLE_INTERVAL | TIMER_CB;
3411  } else {
3412  lp->timeout = -1;
3413  }
3414 
3415  return gep;
3416 }
3417 
3418 static int
3419 wait_for_link(struct net_device *dev)
3420 {
3421  struct de4x5_private *lp = netdev_priv(dev);
3422 
3423  if (lp->timeout < 0) {
3424  lp->timeout = 1;
3425  }
3426 
3427  if (lp->timeout--) {
3428  return TIMER_CB;
3429  } else {
3430  lp->timeout = -1;
3431  }
3432 
3433  return 0;
3434 }
3435 
3436 /*
3437 **
3438 **
3439 */
3440 static int
3441 test_mii_reg(struct net_device *dev, int reg, int mask, bool pol, long msec)
3442 {
3443  struct de4x5_private *lp = netdev_priv(dev);
3444  int test;
3445  u_long iobase = dev->base_addr;
3446 
3447  if (lp->timeout < 0) {
3448  lp->timeout = msec/100;
3449  }
3450 
3451  reg = mii_rd((u_char)reg, lp->phy[lp->active].addr, DE4X5_MII) & mask;
3452  test = (reg ^ (pol ? ~0 : 0)) & mask;
3453 
3454  if (test && --lp->timeout) {
3455  reg = 100 | TIMER_CB;
3456  } else {
3457  lp->timeout = -1;
3458  }
3459 
3460  return reg;
3461 }
3462 
3463 static int
3464 is_spd_100(struct net_device *dev)
3465 {
3466  struct de4x5_private *lp = netdev_priv(dev);
3467  u_long iobase = dev->base_addr;
3468  int spd;
3469 
3470  if (lp->useMII) {
3471  spd = mii_rd(lp->phy[lp->active].spd.reg, lp->phy[lp->active].addr, DE4X5_MII);
3472  spd = ~(spd ^ lp->phy[lp->active].spd.value);
3473  spd &= lp->phy[lp->active].spd.mask;
3474  } else if (!lp->useSROM) { /* de500-xa */
3475  spd = ((~gep_rd(dev)) & GEP_SLNK);
3476  } else {
3477  if ((lp->ibn == 2) || !lp->asBitValid)
3478  return (lp->chipset == DC21143) ? (~inl(DE4X5_SISR)&SISR_LS100) : 0;
3479 
3480  spd = (lp->asBitValid & (lp->asPolarity ^ (gep_rd(dev) & lp->asBit))) |
3481  (lp->linkOK & ~lp->asBitValid);
3482  }
3483 
3484  return spd;
3485 }
3486 
3487 static int
3488 is_100_up(struct net_device *dev)
3489 {
3490  struct de4x5_private *lp = netdev_priv(dev);
3491  u_long iobase = dev->base_addr;
3492 
3493  if (lp->useMII) {
3494  /* Double read for sticky bits & temporary drops */
3495  mii_rd(MII_SR, lp->phy[lp->active].addr, DE4X5_MII);
3496  return mii_rd(MII_SR, lp->phy[lp->active].addr, DE4X5_MII) & MII_SR_LKS;
3497  } else if (!lp->useSROM) { /* de500-xa */
3498  return (~gep_rd(dev)) & GEP_SLNK;
3499  } else {
3500  if ((lp->ibn == 2) || !lp->asBitValid)
3501  return (lp->chipset == DC21143) ? (~inl(DE4X5_SISR)&SISR_LS100) : 0;
3502 
3503  return (lp->asBitValid&(lp->asPolarity^(gep_rd(dev)&lp->asBit))) |
3504  (lp->linkOK & ~lp->asBitValid);
3505  }
3506 }
3507 
3508 static int
3509 is_10_up(struct net_device *dev)
3510 {
3511  struct de4x5_private *lp = netdev_priv(dev);
3512  u_long iobase = dev->base_addr;
3513 
3514  if (lp->useMII) {
3515  /* Double read for sticky bits & temporary drops */
3516  mii_rd(MII_SR, lp->phy[lp->active].addr, DE4X5_MII);
3517  return mii_rd(MII_SR, lp->phy[lp->active].addr, DE4X5_MII) & MII_SR_LKS;
3518  } else if (!lp->useSROM) { /* de500-xa */
3519  return (~gep_rd(dev)) & GEP_LNP;
3520  } else {
3521  if ((lp->ibn == 2) || !lp->asBitValid)
3522  return ((lp->chipset & ~0x00ff) == DC2114x) ?
3523  (~inl(DE4X5_SISR)&SISR_LS10):
3524  0;
3525 
3526  return (lp->asBitValid&(lp->asPolarity^(gep_rd(dev)&lp->asBit))) |
3527  (lp->linkOK & ~lp->asBitValid);
3528  }
3529 }
3530 
3531 static int
3532 is_anc_capable(struct net_device *dev)
3533 {
3534  struct de4x5_private *lp = netdev_priv(dev);
3535  u_long iobase = dev->base_addr;
3536 
3537  if (lp->phy[lp->active].id && (!lp->useSROM || lp->useMII)) {
3538  return mii_rd(MII_SR, lp->phy[lp->active].addr, DE4X5_MII);
3539  } else if ((lp->chipset & ~0x00ff) == DC2114x) {
3540  return (inl(DE4X5_SISR) & SISR_LPN) >> 12;
3541  } else {
3542  return 0;
3543  }
3544 }
3545 
3546 /*
3547 ** Send a packet onto the media and watch for send errors that indicate the
3548 ** media is bad or unconnected.
3549 */
3550 static int
3551 ping_media(struct net_device *dev, int msec)
3552 {
3553  struct de4x5_private *lp = netdev_priv(dev);
3554  u_long iobase = dev->base_addr;
3555  int sisr;
3556 
3557  if (lp->timeout < 0) {
3558  lp->timeout = msec/100;
3559 
3560  lp->tmp = lp->tx_new; /* Remember the ring position */
3561  load_packet(dev, lp->frame, TD_LS | TD_FS | sizeof(lp->frame), (struct sk_buff *)1);
3562  lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
3564  }
3565 
3566  sisr = inl(DE4X5_SISR);
3567 
3568  if ((!(sisr & SISR_NCR)) &&
3569  ((s32)le32_to_cpu(lp->tx_ring[lp->tmp].status) < 0) &&
3570  (--lp->timeout)) {
3571  sisr = 100 | TIMER_CB;
3572  } else {
3573  if ((!(sisr & SISR_NCR)) &&
3574  !(le32_to_cpu(lp->tx_ring[lp->tmp].status) & (T_OWN | TD_ES)) &&
3575  lp->timeout) {
3576  sisr = 0;
3577  } else {
3578  sisr = 1;
3579  }
3580  lp->timeout = -1;
3581  }
3582 
3583  return sisr;
3584 }
3585 
3586 /*
3587 ** This function does 2 things: on Intels it kmalloc's another buffer to
3588 ** replace the one about to be passed up. On Alpha's it kmallocs a buffer
3589 ** into which the packet is copied.
3590 */
3591 static struct sk_buff *
3592 de4x5_alloc_rx_buff(struct net_device *dev, int index, int len)
3593 {
3594  struct de4x5_private *lp = netdev_priv(dev);
3595  struct sk_buff *p;
3596 
3597 #if !defined(__alpha__) && !defined(__powerpc__) && !defined(CONFIG_SPARC) && !defined(DE4X5_DO_MEMCPY)
3598  struct sk_buff *ret;
3599  u_long i=0, tmp;
3600 
3601  p = netdev_alloc_skb(dev, IEEE802_3_SZ + DE4X5_ALIGN + 2);
3602  if (!p) return NULL;
3603 
3604  tmp = virt_to_bus(p->data);
3605  i = ((tmp + DE4X5_ALIGN) & ~DE4X5_ALIGN) - tmp;
3606  skb_reserve(p, i);
3607  lp->rx_ring[index].buf = cpu_to_le32(tmp + i);
3608 
3609  ret = lp->rx_skb[index];
3610  lp->rx_skb[index] = p;
3611 
3612  if ((u_long) ret > 1) {
3613  skb_put(ret, len);
3614  }
3615 
3616  return ret;
3617 
3618 #else
3619  if (lp->state != OPEN) return (struct sk_buff *)1; /* Fake out the open */
3620 
3621  p = netdev_alloc_skb(dev, len + 2);
3622  if (!p) return NULL;
3623 
3624  skb_reserve(p, 2); /* Align */
3625  if (index < lp->rx_old) { /* Wrapped buffer */
3626  short tlen = (lp->rxRingSize - lp->rx_old) * RX_BUFF_SZ;
3627  memcpy(skb_put(p,tlen),lp->rx_bufs + lp->rx_old * RX_BUFF_SZ,tlen);
3628  memcpy(skb_put(p,len-tlen),lp->rx_bufs,len-tlen);
3629  } else { /* Linear buffer */
3630  memcpy(skb_put(p,len),lp->rx_bufs + lp->rx_old * RX_BUFF_SZ,len);
3631  }
3632 
3633  return p;
3634 #endif
3635 }
3636 
3637 static void
3638 de4x5_free_rx_buffs(struct net_device *dev)
3639 {
3640  struct de4x5_private *lp = netdev_priv(dev);
3641  int i;
3642 
3643  for (i=0; i<lp->rxRingSize; i++) {
3644  if ((u_long) lp->rx_skb[i] > 1) {
3645  dev_kfree_skb(lp->rx_skb[i]);
3646  }
3647  lp->rx_ring[i].status = 0;
3648  lp->rx_skb[i] = (struct sk_buff *)1; /* Dummy entry */
3649  }
3650 }
3651 
3652 static void
3653 de4x5_free_tx_buffs(struct net_device *dev)
3654 {
3655  struct de4x5_private *lp = netdev_priv(dev);
3656  int i;
3657 
3658  for (i=0; i<lp->txRingSize; i++) {
3659  if (lp->tx_skb[i])
3660  de4x5_free_tx_buff(lp, i);
3661  lp->tx_ring[i].status = 0;
3662  }
3663 
3664  /* Unload the locally queued packets */
3665  __skb_queue_purge(&lp->cache.queue);
3666 }
3667 
3668 /*
3669 ** When a user pulls a connection, the DECchip can end up in a
3670 ** 'running - waiting for end of transmission' state. This means that we
3671 ** have to perform a chip soft reset to ensure that we can synchronize
3672 ** the hardware and software and make any media probes using a loopback
3673 ** packet meaningful.
3674 */
3675 static void
3676 de4x5_save_skbs(struct net_device *dev)
3677 {
3678  struct de4x5_private *lp = netdev_priv(dev);
3679  u_long iobase = dev->base_addr;
3680  s32 omr;
3681 
3682  if (!lp->cache.save_cnt) {
3683  STOP_DE4X5;
3684  de4x5_tx(dev); /* Flush any sent skb's */
3685  de4x5_free_tx_buffs(dev);
3686  de4x5_cache_state(dev, DE4X5_SAVE_STATE);
3687  de4x5_sw_reset(dev);
3688  de4x5_cache_state(dev, DE4X5_RESTORE_STATE);
3689  lp->cache.save_cnt++;
3690  START_DE4X5;
3691  }
3692 }
3693 
3694 static void
3695 de4x5_rst_desc_ring(struct net_device *dev)
3696 {
3697  struct de4x5_private *lp = netdev_priv(dev);
3698  u_long iobase = dev->base_addr;
3699  int i;
3700  s32 omr;
3701 
3702  if (lp->cache.save_cnt) {
3703  STOP_DE4X5;
3704  outl(lp->dma_rings, DE4X5_RRBA);
3705  outl(lp->dma_rings + NUM_RX_DESC * sizeof(struct de4x5_desc),
3706  DE4X5_TRBA);
3707 
3708  lp->rx_new = lp->rx_old = 0;
3709  lp->tx_new = lp->tx_old = 0;
3710 
3711  for (i = 0; i < lp->rxRingSize; i++) {
3712  lp->rx_ring[i].status = cpu_to_le32(R_OWN);
3713  }
3714 
3715  for (i = 0; i < lp->txRingSize; i++) {
3716  lp->tx_ring[i].status = cpu_to_le32(0);
3717  }
3718 
3719  barrier();
3720  lp->cache.save_cnt--;
3721  START_DE4X5;
3722  }
3723 }
3724 
3725 static void
3726 de4x5_cache_state(struct net_device *dev, int flag)
3727 {
3728  struct de4x5_private *lp = netdev_priv(dev);
3729  u_long iobase = dev->base_addr;
3730 
3731  switch(flag) {
3732  case DE4X5_SAVE_STATE:
3733  lp->cache.csr0 = inl(DE4X5_BMR);
3734  lp->cache.csr6 = (inl(DE4X5_OMR) & ~(OMR_ST | OMR_SR));
3735  lp->cache.csr7 = inl(DE4X5_IMR);
3736  break;
3737 
3738  case DE4X5_RESTORE_STATE:
3739  outl(lp->cache.csr0, DE4X5_BMR);
3740  outl(lp->cache.csr6, DE4X5_OMR);
3741  outl(lp->cache.csr7, DE4X5_IMR);
3742  if (lp->chipset == DC21140) {
3743  gep_wr(lp->cache.gepc, dev);
3744  gep_wr(lp->cache.gep, dev);
3745  } else {
3746  reset_init_sia(dev, lp->cache.csr13, lp->cache.csr14,
3747  lp->cache.csr15);
3748  }
3749  break;
3750  }
3751 }
3752 
3753 static void
3754 de4x5_put_cache(struct net_device *dev, struct sk_buff *skb)
3755 {
3756  struct de4x5_private *lp = netdev_priv(dev);
3757 
3758  __skb_queue_tail(&lp->cache.queue, skb);
3759 }
3760 
3761 static void
3762 de4x5_putb_cache(struct net_device *dev, struct sk_buff *skb)
3763 {
3764  struct de4x5_private *lp = netdev_priv(dev);
3765 
3766  __skb_queue_head(&lp->cache.queue, skb);
3767 }
3768 
3769 static struct sk_buff *
3770 de4x5_get_cache(struct net_device *dev)
3771 {
3772  struct de4x5_private *lp = netdev_priv(dev);
3773 
3774  return __skb_dequeue(&lp->cache.queue);
3775 }
3776 
3777 /*
3778 ** Check the Auto Negotiation State. Return OK when a link pass interrupt
3779 ** is received and the auto-negotiation status is NWAY OK.
3780 */
3781 static int
3782 test_ans(struct net_device *dev, s32 irqs, s32 irq_mask, s32 msec)
3783 {
3784  struct de4x5_private *lp = netdev_priv(dev);
3785  u_long iobase = dev->base_addr;
3786  s32 sts, ans;
3787 
3788  if (lp->timeout < 0) {
3789  lp->timeout = msec/100;
3790  outl(irq_mask, DE4X5_IMR);
3791 
3792  /* clear all pending interrupts */
3793  sts = inl(DE4X5_STS);
3794  outl(sts, DE4X5_STS);
3795  }
3796 
3797  ans = inl(DE4X5_SISR) & SISR_ANS;
3798  sts = inl(DE4X5_STS) & ~TIMER_CB;
3799 
3800  if (!(sts & irqs) && (ans ^ ANS_NWOK) && --lp->timeout) {
3801  sts = 100 | TIMER_CB;
3802  } else {
3803  lp->timeout = -1;
3804  }
3805 
3806  return sts;
3807 }
3808 
3809 static void
3810 de4x5_setup_intr(struct net_device *dev)
3811 {
3812  struct de4x5_private *lp = netdev_priv(dev);
3813  u_long iobase = dev->base_addr;
3814  s32 imr, sts;
3815 
3816  if (inl(DE4X5_OMR) & OMR_SR) { /* Only unmask if TX/RX is enabled */
3817  imr = 0;
3818  UNMASK_IRQs;
3819  sts = inl(DE4X5_STS); /* Reset any pending (stale) interrupts */
3820  outl(sts, DE4X5_STS);
3821  ENABLE_IRQs;
3822  }
3823 }
3824 
3825 /*
3826 **
3827 */
3828 static void
3829 reset_init_sia(struct net_device *dev, s32 csr13, s32 csr14, s32 csr15)
3830 {
3831  struct de4x5_private *lp = netdev_priv(dev);
3832  u_long iobase = dev->base_addr;
3833 
3834  RESET_SIA;
3835  if (lp->useSROM) {
3836  if (lp->ibn == 3) {
3837  srom_exec(dev, lp->phy[lp->active].rst);
3838  srom_exec(dev, lp->phy[lp->active].gep);
3839  outl(1, DE4X5_SICR);
3840  return;
3841  } else {
3842  csr15 = lp->cache.csr15;
3843  csr14 = lp->cache.csr14;
3844  csr13 = lp->cache.csr13;
3845  outl(csr15 | lp->cache.gepc, DE4X5_SIGR);
3846  outl(csr15 | lp->cache.gep, DE4X5_SIGR);
3847  }
3848  } else {
3849  outl(csr15, DE4X5_SIGR);
3850  }
3851  outl(csr14, DE4X5_STRR);
3852  outl(csr13, DE4X5_SICR);
3853 
3854  mdelay(10);
3855 }
3856 
3857 /*
3858 ** Create a loopback ethernet packet
3859 */
3860 static void
3861 create_packet(struct net_device *dev, char *frame, int len)
3862 {
3863  int i;
3864  char *buf = frame;
3865 
3866  for (i=0; i<ETH_ALEN; i++) { /* Use this source address */
3867  *buf++ = dev->dev_addr[i];
3868  }
3869  for (i=0; i<ETH_ALEN; i++) { /* Use this destination address */
3870  *buf++ = dev->dev_addr[i];
3871  }
3872 
3873  *buf++ = 0; /* Packet length (2 bytes) */
3874  *buf++ = 1;
3875 }
3876 
3877 /*
3878 ** Look for a particular board name in the EISA configuration space
3879 */
3880 static int
3881 EISA_signature(char *name, struct device *device)
3882 {
3883  int i, status = 0, siglen = ARRAY_SIZE(de4x5_signatures);
3884  struct eisa_device *edev;
3885 
3886  *name = '\0';
3887  edev = to_eisa_device (device);
3888  i = edev->id.driver_data;
3889 
3890  if (i >= 0 && i < siglen) {
3891  strcpy (name, de4x5_signatures[i]);
3892  status = 1;
3893  }
3894 
3895  return status; /* return the device name string */
3896 }
3897 
3898 /*
3899 ** Look for a particular board name in the PCI configuration space
3900 */
3901 static int
3902 PCI_signature(char *name, struct de4x5_private *lp)
3903 {
3904  int i, status = 0, siglen = ARRAY_SIZE(de4x5_signatures);
3905 
3906  if (lp->chipset == DC21040) {
3907  strcpy(name, "DE434/5");
3908  return status;
3909  } else { /* Search for a DEC name in the SROM */
3910  int tmp = *((char *)&lp->srom + 19) * 3;
3911  strncpy(name, (char *)&lp->srom + 26 + tmp, 8);
3912  }
3913  name[8] = '\0';
3914  for (i=0; i<siglen; i++) {
3915  if (strstr(name,de4x5_signatures[i])!=NULL) break;
3916  }
3917  if (i == siglen) {
3918  if (dec_only) {
3919  *name = '\0';
3920  } else { /* Use chip name to avoid confusion */
3921  strcpy(name, (((lp->chipset == DC21040) ? "DC21040" :
3922  ((lp->chipset == DC21041) ? "DC21041" :
3923  ((lp->chipset == DC21140) ? "DC21140" :
3924  ((lp->chipset == DC21142) ? "DC21142" :
3925  ((lp->chipset == DC21143) ? "DC21143" : "UNKNOWN"
3926  )))))));
3927  }
3928  if (lp->chipset != DC21041) {
3929  lp->useSROM = true; /* card is not recognisably DEC */
3930  }
3931  } else if ((lp->chipset & ~0x00ff) == DC2114x) {
3932  lp->useSROM = true;
3933  }
3934 
3935  return status;
3936 }
3937 
3938 /*
3939 ** Set up the Ethernet PROM counter to the start of the Ethernet address on
3940 ** the DC21040, else read the SROM for the other chips.
3941 ** The SROM may not be present in a multi-MAC card, so first read the
3942 ** MAC address and check for a bad address. If there is a bad one then exit
3943 ** immediately with the prior srom contents intact (the h/w address will
3944 ** be fixed up later).
3945 */
3946 static void
3947 DevicePresent(struct net_device *dev, u_long aprom_addr)
3948 {
3949  int i, j=0;
3950  struct de4x5_private *lp = netdev_priv(dev);
3951 
3952  if (lp->chipset == DC21040) {
3953  if (lp->bus == EISA) {
3954  enet_addr_rst(aprom_addr); /* Reset Ethernet Address ROM Pointer */
3955  } else {
3956  outl(0, aprom_addr); /* Reset Ethernet Address ROM Pointer */
3957  }
3958  } else { /* Read new srom */
3959  u_short tmp;
3960  __le16 *p = (__le16 *)((char *)&lp->srom + SROM_HWADD);
3961  for (i=0; i<(ETH_ALEN>>1); i++) {
3962  tmp = srom_rd(aprom_addr, (SROM_HWADD>>1) + i);
3963  j += tmp; /* for check for 0:0:0:0:0:0 or ff:ff:ff:ff:ff:ff */
3964  *p = cpu_to_le16(tmp);
3965  }
3966  if (j == 0 || j == 3 * 0xffff) {
3967  /* could get 0 only from all-0 and 3 * 0xffff only from all-1 */
3968  return;
3969  }
3970 
3971  p = (__le16 *)&lp->srom;
3972  for (i=0; i<(sizeof(struct de4x5_srom)>>1); i++) {
3973  tmp = srom_rd(aprom_addr, i);
3974  *p++ = cpu_to_le16(tmp);
3975  }
3976  de4x5_dbg_srom(&lp->srom);
3977  }
3978 }
3979 
3980 /*
3981 ** Since the write on the Enet PROM register doesn't seem to reset the PROM
3982 ** pointer correctly (at least on my DE425 EISA card), this routine should do
3983 ** it...from depca.c.
3984 */
3985 static void
3986 enet_addr_rst(u_long aprom_addr)
3987 {
3988  union {
3989  struct {
3990  u32 a;
3991  u32 b;
3992  } llsig;
3993  char Sig[sizeof(u32) << 1];
3994  } dev;
3995  short sigLength=0;
3996  s8 data;
3997  int i, j;
3998 
3999  dev.llsig.a = ETH_PROM_SIG;
4000  dev.llsig.b = ETH_PROM_SIG;
4001  sigLength = sizeof(u32) << 1;
4002 
4003  for (i=0,j=0;j<sigLength && i<PROBE_LENGTH+sigLength-1;i++) {
4004  data = inb(aprom_addr);
4005  if (dev.Sig[j] == data) { /* track signature */
4006  j++;
4007  } else { /* lost signature; begin search again */
4008  if (data == dev.Sig[0]) { /* rare case.... */
4009  j=1;
4010  } else {
4011  j=0;
4012  }
4013  }
4014  }
4015 }
4016 
4017 /*
4018 ** For the bad status case and no SROM, then add one to the previous
4019 ** address. However, need to add one backwards in case we have 0xff
4020 ** as one or more of the bytes. Only the last 3 bytes should be checked
4021 ** as the first three are invariant - assigned to an organisation.
4022 */
4023 static int
4024 get_hw_addr(struct net_device *dev)
4025 {
4026  u_long iobase = dev->base_addr;
4027  int broken, i, k, tmp, status = 0;
4028  u_short j,chksum;
4029  struct de4x5_private *lp = netdev_priv(dev);
4030 
4031  broken = de4x5_bad_srom(lp);
4032 
4033  for (i=0,k=0,j=0;j<3;j++) {
4034  k <<= 1;
4035  if (k > 0xffff) k-=0xffff;
4036 
4037  if (lp->bus == PCI) {
4038  if (lp->chipset == DC21040) {
4039  while ((tmp = inl(DE4X5_APROM)) < 0);
4040  k += (u_char) tmp;
4041  dev->dev_addr[i++] = (u_char) tmp;
4042  while ((tmp = inl(DE4X5_APROM)) < 0);
4043  k += (u_short) (tmp << 8);
4044  dev->dev_addr[i++] = (u_char) tmp;
4045  } else if (!broken) {
4046  dev->dev_addr[i] = (u_char) lp->srom.ieee_addr[i]; i++;
4047  dev->dev_addr[i] = (u_char) lp->srom.ieee_addr[i]; i++;
4048  } else if ((broken == SMC) || (broken == ACCTON)) {
4049  dev->dev_addr[i] = *((u_char *)&lp->srom + i); i++;
4050  dev->dev_addr[i] = *((u_char *)&lp->srom + i); i++;
4051  }
4052  } else {
4053  k += (u_char) (tmp = inb(EISA_APROM));
4054  dev->dev_addr[i++] = (u_char) tmp;
4055  k += (u_short) ((tmp = inb(EISA_APROM)) << 8);
4056  dev->dev_addr[i++] = (u_char) tmp;
4057  }
4058 
4059  if (k > 0xffff) k-=0xffff;
4060  }
4061  if (k == 0xffff) k=0;
4062 
4063  if (lp->bus == PCI) {
4064  if (lp->chipset == DC21040) {
4065  while ((tmp = inl(DE4X5_APROM)) < 0);
4066  chksum = (u_char) tmp;
4067  while ((tmp = inl(DE4X5_APROM)) < 0);
4068  chksum |= (u_short) (tmp << 8);
4069  if ((k != chksum) && (dec_only)) status = -1;
4070  }
4071  } else {
4072  chksum = (u_char) inb(EISA_APROM);
4073  chksum |= (u_short) (inb(EISA_APROM) << 8);
4074  if ((k != chksum) && (dec_only)) status = -1;
4075  }
4076 
4077  /* If possible, try to fix a broken card - SMC only so far */
4078  srom_repair(dev, broken);
4079 
4080 #ifdef CONFIG_PPC_PMAC
4081  /*
4082  ** If the address starts with 00 a0, we have to bit-reverse
4083  ** each byte of the address.
4084  */
4085  if ( machine_is(powermac) &&
4086  (dev->dev_addr[0] == 0) &&
4087  (dev->dev_addr[1] == 0xa0) )
4088  {
4089  for (i = 0; i < ETH_ALEN; ++i)
4090  {
4091  int x = dev->dev_addr[i];
4092  x = ((x & 0xf) << 4) + ((x & 0xf0) >> 4);
4093  x = ((x & 0x33) << 2) + ((x & 0xcc) >> 2);
4094  dev->dev_addr[i] = ((x & 0x55) << 1) + ((x & 0xaa) >> 1);
4095  }
4096  }
4097 #endif /* CONFIG_PPC_PMAC */
4098 
4099  /* Test for a bad enet address */
4100  status = test_bad_enet(dev, status);
4101 
4102  return status;
4103 }
4104 
4105 /*
4106 ** Test for enet addresses in the first 32 bytes. The built-in strncmp
4107 ** didn't seem to work here...?
4108 */
4109 static int
4110 de4x5_bad_srom(struct de4x5_private *lp)
4111 {
4112  int i, status = 0;
4113 
4114  for (i = 0; i < ARRAY_SIZE(enet_det); i++) {
4115  if (!de4x5_strncmp((char *)&lp->srom, (char *)&enet_det[i], 3) &&
4116  !de4x5_strncmp((char *)&lp->srom+0x10, (char *)&enet_det[i], 3)) {
4117  if (i == 0) {
4118  status = SMC;
4119  } else if (i == 1) {
4120  status = ACCTON;
4121  }
4122  break;
4123  }
4124  }
4125 
4126  return status;
4127 }
4128 
4129 static int
4130 de4x5_strncmp(char *a, char *b, int n)
4131 {
4132  int ret=0;
4133 
4134  for (;n && !ret; n--) {
4135  ret = *a++ - *b++;
4136  }
4137 
4138  return ret;
4139 }
4140 
4141 static void
4142 srom_repair(struct net_device *dev, int card)
4143 {
4144  struct de4x5_private *lp = netdev_priv(dev);
4145 
4146  switch(card) {
4147  case SMC:
4148  memset((char *)&lp->srom, 0, sizeof(struct de4x5_srom));
4149  memcpy(lp->srom.ieee_addr, (char *)dev->dev_addr, ETH_ALEN);
4150  memcpy(lp->srom.info, (char *)&srom_repair_info[SMC-1], 100);
4151  lp->useSROM = true;
4152  break;
4153  }
4154 }
4155 
4156 /*
4157 ** Assume that the irq's do not follow the PCI spec - this is seems
4158 ** to be true so far (2 for 2).
4159 */
4160 static int
4161 test_bad_enet(struct net_device *dev, int status)
4162 {
4163  struct de4x5_private *lp = netdev_priv(dev);
4164  int i, tmp;
4165 
4166  for (tmp=0,i=0; i<ETH_ALEN; i++) tmp += (u_char)dev->dev_addr[i];
4167  if ((tmp == 0) || (tmp == 0x5fa)) {
4168  if ((lp->chipset == last.chipset) &&
4169  (lp->bus_num == last.bus) && (lp->bus_num > 0)) {
4170  for (i=0; i<ETH_ALEN; i++) dev->dev_addr[i] = last.addr[i];
4171  for (i=ETH_ALEN-1; i>2; --i) {
4172  dev->dev_addr[i] += 1;
4173  if (dev->dev_addr[i] != 0) break;
4174  }
4175  for (i=0; i<ETH_ALEN; i++) last.addr[i] = dev->dev_addr[i];
4176  if (!an_exception(lp)) {
4177  dev->irq = last.irq;
4178  }
4179 
4180  status = 0;
4181  }
4182  } else if (!status) {
4183  last.chipset = lp->chipset;
4184  last.bus = lp->bus_num;
4185  last.irq = dev->irq;
4186  for (i=0; i<ETH_ALEN; i++) last.addr[i] = dev->dev_addr[i];
4187  }
4188 
4189  return status;
4190 }
4191 
4192 /*
4193 ** List of board exceptions with correctly wired IRQs
4194 */
4195 static int
4196 an_exception(struct de4x5_private *lp)
4197 {
4198  if ((*(u_short *)lp->srom.sub_vendor_id == 0x00c0) &&
4199  (*(u_short *)lp->srom.sub_system_id == 0x95e0)) {
4200  return -1;
4201  }
4202 
4203  return 0;
4204 }
4205 
4206 /*
4207 ** SROM Read
4208 */
4209 static short
4210 srom_rd(u_long addr, u_char offset)
4211 {
4212  sendto_srom(SROM_RD | SROM_SR, addr);
4213 
4214  srom_latch(SROM_RD | SROM_SR | DT_CS, addr);
4215  srom_command(SROM_RD | SROM_SR | DT_IN | DT_CS, addr);
4216  srom_address(SROM_RD | SROM_SR | DT_CS, addr, offset);
4217 
4218  return srom_data(SROM_RD | SROM_SR | DT_CS, addr);
4219 }
4220 
4221 static void
4222 srom_latch(u_int command, u_long addr)
4223 {
4224  sendto_srom(command, addr);
4225  sendto_srom(command | DT_CLK, addr);
4226  sendto_srom(command, addr);
4227 }
4228 
4229 static void
4230 srom_command(u_int command, u_long addr)
4231 {
4232  srom_latch(command, addr);
4233  srom_latch(command, addr);
4234  srom_latch((command & 0x0000ff00) | DT_CS, addr);
4235 }
4236 
4237 static void
4238 srom_address(u_int command, u_long addr, u_char offset)
4239 {
4240  int i, a;
4241 
4242  a = offset << 2;
4243  for (i=0; i<6; i++, a <<= 1) {
4244  srom_latch(command | ((a & 0x80) ? DT_IN : 0), addr);
4245  }
4246  udelay(1);
4247 
4248  i = (getfrom_srom(addr) >> 3) & 0x01;
4249 }
4250 
4251 static short
4252 srom_data(u_int command, u_long addr)
4253 {
4254  int i;
4255  short word = 0;
4256  s32 tmp;
4257 
4258  for (i=0; i<16; i++) {
4259  sendto_srom(command | DT_CLK, addr);
4260  tmp = getfrom_srom(addr);
4261  sendto_srom(command, addr);
4262 
4263  word = (word << 1) | ((tmp >> 3) & 0x01);
4264  }
4265 
4266  sendto_srom(command & 0x0000ff00, addr);
4267 
4268  return word;
4269 }
4270 
4271 /*
4272 static void
4273 srom_busy(u_int command, u_long addr)
4274 {
4275  sendto_srom((command & 0x0000ff00) | DT_CS, addr);
4276 
4277  while (!((getfrom_srom(addr) >> 3) & 0x01)) {
4278  mdelay(1);
4279  }
4280 
4281  sendto_srom(command & 0x0000ff00, addr);
4282 }
4283 */
4284 
4285 static void
4286 sendto_srom(u_int command, u_long addr)
4287 {
4288  outl(command, addr);
4289  udelay(1);
4290 }
4291 
4292 static int
4293 getfrom_srom(u_long addr)
4294 {
4295  s32 tmp;
4296 
4297  tmp = inl(addr);
4298  udelay(1);
4299 
4300  return tmp;
4301 }
4302 
4303 static int
4304 srom_infoleaf_info(struct net_device *dev)
4305 {
4306  struct de4x5_private *lp = netdev_priv(dev);
4307  int i, count;
4308  u_char *p;
4309 
4310  /* Find the infoleaf decoder function that matches this chipset */
4311  for (i=0; i<INFOLEAF_SIZE; i++) {
4312  if (lp->chipset == infoleaf_array[i].chipset) break;
4313  }
4314  if (i == INFOLEAF_SIZE) {
4315  lp->useSROM = false;
4316  printk("%s: Cannot find correct chipset for SROM decoding!\n",
4317  dev->name);
4318  return -ENXIO;
4319  }
4320 
4321  lp->infoleaf_fn = infoleaf_array[i].fn;
4322 
4323  /* Find the information offset that this function should use */
4324  count = *((u_char *)&lp->srom + 19);
4325  p = (u_char *)&lp->srom + 26;
4326 
4327  if (count > 1) {
4328  for (i=count; i; --i, p+=3) {
4329  if (lp->device == *p) break;
4330  }
4331  if (i == 0) {
4332  lp->useSROM = false;
4333  printk("%s: Cannot find correct PCI device [%d] for SROM decoding!\n",
4334  dev->name, lp->device);
4335  return -ENXIO;
4336  }
4337  }
4338 
4339  lp->infoleaf_offset = get_unaligned_le16(p + 1);
4340 
4341  return 0;
4342 }
4343 
4344 /*
4345 ** This routine loads any type 1 or 3 MII info into the mii device
4346 ** struct and executes any type 5 code to reset PHY devices for this
4347 ** controller.
4348 ** The info for the MII devices will be valid since the index used
4349 ** will follow the discovery process from MII address 1-31 then 0.
4350 */
4351 static void
4352 srom_init(struct net_device *dev)
4353 {
4354  struct de4x5_private *lp = netdev_priv(dev);
4355  u_char *p = (u_char *)&lp->srom + lp->infoleaf_offset;
4356  u_char count;
4357 
4358  p+=2;
4359  if (lp->chipset == DC21140) {
4360  lp->cache.gepc = (*p++ | GEP_CTRL);
4361  gep_wr(lp->cache.gepc, dev);
4362  }
4363 
4364  /* Block count */
4365  count = *p++;
4366 
4367  /* Jump the infoblocks to find types */
4368  for (;count; --count) {
4369  if (*p < 128) {
4370  p += COMPACT_LEN;
4371  } else if (*(p+1) == 5) {
4372  type5_infoblock(dev, 1, p);
4373  p += ((*p & BLOCK_LEN) + 1);
4374  } else if (*(p+1) == 4) {
4375  p += ((*p & BLOCK_LEN) + 1);
4376  } else if (*(p+1) == 3) {
4377  type3_infoblock(dev, 1, p);
4378  p += ((*p & BLOCK_LEN) + 1);
4379  } else if (*(p+1) == 2) {
4380  p += ((*p & BLOCK_LEN) + 1);
4381  } else if (*(p+1) == 1) {
4382  type1_infoblock(dev, 1, p);
4383  p += ((*p & BLOCK_LEN) + 1);
4384  } else {
4385  p += ((*p & BLOCK_LEN) + 1);
4386  }
4387  }
4388 }
4389 
4390 /*
4391 ** A generic routine that writes GEP control, data and reset information
4392 ** to the GEP register (21140) or csr15 GEP portion (2114[23]).
4393 */
4394 static void
4395 srom_exec(struct net_device *dev, u_char *p)
4396 {
4397  struct de4x5_private *lp = netdev_priv(dev);
4398  u_long iobase = dev->base_addr;
4399  u_char count = (p ? *p++ : 0);
4400  u_short *w = (u_short *)p;
4401 
4402  if (((lp->ibn != 1) && (lp->ibn != 3) && (lp->ibn != 5)) || !count) return;
4403 
4404  if (lp->chipset != DC21140) RESET_SIA;
4405 
4406  while (count--) {
4407  gep_wr(((lp->chipset==DC21140) && (lp->ibn!=5) ?
4408  *p++ : get_unaligned_le16(w++)), dev);
4409  mdelay(2); /* 2ms per action */
4410  }
4411 
4412  if (lp->chipset != DC21140) {
4413  outl(lp->cache.csr14, DE4X5_STRR);
4414  outl(lp->cache.csr13, DE4X5_SICR);
4415  }
4416 }
4417 
4418 /*
4419 ** Basically this function is a NOP since it will never be called,
4420 ** unless I implement the DC21041 SROM functions. There's no need
4421 ** since the existing code will be satisfactory for all boards.
4422 */
4423 static int
4424 dc21041_infoleaf(struct net_device *dev)
4425 {
4426  return DE4X5_AUTOSENSE_MS;
4427 }
4428 
4429 static int
4430 dc21140_infoleaf(struct net_device *dev)
4431 {
4432  struct de4x5_private *lp = netdev_priv(dev);
4433  u_char count = 0;
4434  u_char *p = (u_char *)&lp->srom + lp->infoleaf_offset;
4435  int next_tick = DE4X5_AUTOSENSE_MS;
4436 
4437  /* Read the connection type */
4438  p+=2;
4439 
4440  /* GEP control */
4441  lp->cache.gepc = (*p++ | GEP_CTRL);
4442 
4443  /* Block count */
4444  count = *p++;
4445 
4446  /* Recursively figure out the info blocks */
4447  if (*p < 128) {
4448  next_tick = dc_infoblock[COMPACT](dev, count, p);
4449  } else {
4450  next_tick = dc_infoblock[*(p+1)](dev, count, p);
4451  }
4452 
4453  if (lp->tcount == count) {
4454  lp->media = NC;
4455  if (lp->media != lp->c_media) {
4456  de4x5_dbg_media(dev);
4457  lp->c_media = lp->media;
4458  }
4459  lp->media = INIT;
4460  lp->tcount = 0;
4461  lp->tx_enable = false;
4462  }
4463 
4464  return next_tick & ~TIMER_CB;
4465 }
4466 
4467 static int
4468 dc21142_infoleaf(struct net_device *dev)
4469 {
4470  struct de4x5_private *lp = netdev_priv(dev);
4471  u_char count = 0;
4472  u_char *p = (u_char *)&lp->srom + lp->infoleaf_offset;
4473  int next_tick = DE4X5_AUTOSENSE_MS;
4474 
4475  /* Read the connection type */
4476  p+=2;
4477 
4478  /* Block count */
4479  count = *p++;
4480 
4481  /* Recursively figure out the info blocks */
4482  if (*p < 128) {
4483  next_tick = dc_infoblock[COMPACT](dev, count, p);
4484  } else {
4485  next_tick = dc_infoblock[*(p+1)](dev, count, p);
4486  }
4487 
4488  if (lp->tcount == count) {
4489  lp->media = NC;
4490  if (lp->media != lp->c_media) {
4491  de4x5_dbg_media(dev);
4492  lp->c_media = lp->media;
4493  }
4494  lp->media = INIT;
4495  lp->tcount = 0;
4496  lp->tx_enable = false;
4497  }
4498 
4499  return next_tick & ~TIMER_CB;
4500 }
4501 
4502 static int
4503 dc21143_infoleaf(struct net_device *dev)
4504 {
4505  struct de4x5_private *lp = netdev_priv(dev);
4506  u_char count = 0;
4507  u_char *p = (u_char *)&lp->srom + lp->infoleaf_offset;
4508  int next_tick = DE4X5_AUTOSENSE_MS;
4509 
4510  /* Read the connection type */
4511  p+=2;
4512 
4513  /* Block count */
4514  count = *p++;
4515 
4516  /* Recursively figure out the info blocks */
4517  if (*p < 128) {
4518  next_tick = dc_infoblock[COMPACT](dev, count, p);
4519  } else {
4520  next_tick = dc_infoblock[*(p+1)](dev, count, p);
4521  }
4522  if (lp->tcount == count) {
4523  lp->media = NC;
4524  if (lp->media != lp->c_media) {
4525  de4x5_dbg_media(dev);
4526  lp->c_media = lp->media;
4527  }
4528  lp->media = INIT;
4529  lp->tcount = 0;
4530  lp->tx_enable = false;
4531  }
4532 
4533  return next_tick & ~TIMER_CB;
4534 }
4535 
4536 /*
4537 ** The compact infoblock is only designed for DC21140[A] chips, so
4538 ** we'll reuse the dc21140m_autoconf function. Non MII media only.
4539 */
4540 static int
4541 compact_infoblock(struct net_device *dev, u_char count, u_char *p)
4542 {
4543  struct de4x5_private *lp = netdev_priv(dev);
4544  u_char flags, csr6;
4545 
4546  /* Recursively figure out the info blocks */
4547  if (--count > lp->tcount) {
4548  if (*(p+COMPACT_LEN) < 128) {
4549  return dc_infoblock[COMPACT](dev, count, p+COMPACT_LEN);
4550  } else {
4551  return dc_infoblock[*(p+COMPACT_LEN+1)](dev, count, p+COMPACT_LEN);
4552  }
4553  }
4554 
4555  if ((lp->media == INIT) && (lp->timeout < 0)) {
4556  lp->ibn = COMPACT;
4557  lp->active = 0;
4558  gep_wr(lp->cache.gepc, dev);
4559  lp->infoblock_media = (*p++) & COMPACT_MC;
4560  lp->cache.gep = *p++;
4561  csr6 = *p++;
4562  flags = *p++;
4563 
4564  lp->asBitValid = (flags & 0x80) ? 0 : -1;
4565  lp->defMedium = (flags & 0x40) ? -1 : 0;
4566  lp->asBit = 1 << ((csr6 >> 1) & 0x07);
4567  lp->asPolarity = ((csr6 & 0x80) ? -1 : 0) & lp->asBit;
4568  lp->infoblock_csr6 = OMR_DEF | ((csr6 & 0x71) << 18);
4569  lp->useMII = false;
4570 
4571  de4x5_switch_mac_port(dev);
4572  }
4573 
4574  return dc21140m_autoconf(dev);
4575 }
4576 
4577 /*
4578 ** This block describes non MII media for the DC21140[A] only.
4579 */
4580 static int
4581 type0_infoblock(struct net_device *dev, u_char count, u_char *p)
4582 {
4583  struct de4x5_private *lp = netdev_priv(dev);
4584  u_char flags, csr6, len = (*p & BLOCK_LEN)+1;
4585 
4586  /* Recursively figure out the info blocks */
4587  if (--count > lp->tcount) {
4588  if (*(p+len) < 128) {
4589  return dc_infoblock[COMPACT](dev, count, p+len);
4590  } else {
4591  return dc_infoblock[*(p+len+1)](dev, count, p+len);
4592  }
4593  }
4594 
4595  if ((lp->media == INIT) && (lp->timeout < 0)) {
4596  lp->ibn = 0;
4597  lp->active = 0;
4598  gep_wr(lp->cache.gepc, dev);
4599  p+=2;
4600  lp->infoblock_media = (*p++) & BLOCK0_MC;
4601  lp->cache.gep = *p++;
4602  csr6 = *p++;
4603  flags = *p++;
4604 
4605  lp->asBitValid = (flags & 0x80) ? 0 : -1;
4606  lp->defMedium = (flags & 0x40) ? -1 : 0;
4607  lp->asBit = 1 << ((csr6 >> 1) & 0x07);
4608  lp->asPolarity = ((csr6 & 0x80) ? -1 : 0) & lp->asBit;
4609  lp->infoblock_csr6 = OMR_DEF | ((csr6 & 0x71) << 18);
4610  lp->useMII = false;
4611 
4612  de4x5_switch_mac_port(dev);
4613  }
4614 
4615  return dc21140m_autoconf(dev);
4616 }
4617 
4618 /* These functions are under construction! */
4619 
4620 static int
4621 type1_infoblock(struct net_device *dev, u_char count, u_char *p)
4622 {
4623  struct de4x5_private *lp = netdev_priv(dev);
4624  u_char len = (*p & BLOCK_LEN)+1;
4625 
4626  /* Recursively figure out the info blocks */
4627  if (--count > lp->tcount) {
4628  if (*(p+len) < 128) {
4629  return dc_infoblock[COMPACT](dev, count, p+len);
4630  } else {
4631  return dc_infoblock[*(p+len+1)](dev, count, p+len);
4632  }
4633  }
4634 
4635  p += 2;
4636  if (lp->state == INITIALISED) {
4637  lp->ibn = 1;
4638  lp->active = *p++;
4639  lp->phy[lp->active].gep = (*p ? p : NULL); p += (*p + 1);
4640  lp->phy[lp->active].rst = (*p ? p : NULL); p += (*p + 1);
4641  lp->phy[lp->active].mc = get_unaligned_le16(p); p += 2;
4642  lp->phy[lp->active].ana = get_unaligned_le16(p); p += 2;
4643  lp->phy[lp->active].fdx = get_unaligned_le16(p); p += 2;
4644  lp->phy[lp->active].ttm = get_unaligned_le16(p);
4645  return 0;
4646  } else if ((lp->media == INIT) && (lp->timeout < 0)) {
4647  lp->ibn = 1;
4648  lp->active = *p;
4650  lp->useMII = true;
4651  lp->infoblock_media = ANS;
4652 
4653  de4x5_switch_mac_port(dev);
4654  }
4655 
4656  return dc21140m_autoconf(dev);
4657 }
4658 
4659 static int
4660 type2_infoblock(struct net_device *dev, u_char count, u_char *p)
4661 {
4662  struct de4x5_private *lp = netdev_priv(dev);
4663  u_char len = (*p & BLOCK_LEN)+1;
4664 
4665  /* Recursively figure out the info blocks */
4666  if (--count > lp->tcount) {
4667  if (*(p+len) < 128) {
4668  return dc_infoblock[COMPACT](dev, count, p+len);
4669  } else {
4670  return dc_infoblock[*(p+len+1)](dev, count, p+len);
4671  }
4672  }
4673 
4674  if ((lp->media == INIT) && (lp->timeout < 0)) {
4675  lp->ibn = 2;
4676  lp->active = 0;
4677  p += 2;
4678  lp->infoblock_media = (*p) & MEDIA_CODE;
4679 
4680  if ((*p++) & EXT_FIELD) {
4681  lp->cache.csr13 = get_unaligned_le16(p); p += 2;
4682  lp->cache.csr14 = get_unaligned_le16(p); p += 2;
4683  lp->cache.csr15 = get_unaligned_le16(p); p += 2;
4684  } else {
4685  lp->cache.csr13 = CSR13;
4686  lp->cache.csr14 = CSR14;
4687  lp->cache.csr15 = CSR15;
4688  }
4689  lp->cache.gepc = ((s32)(get_unaligned_le16(p)) << 16); p += 2;
4690  lp->cache.gep = ((s32)(get_unaligned_le16(p)) << 16);
4691  lp->infoblock_csr6 = OMR_SIA;
4692  lp->useMII = false;
4693 
4694  de4x5_switch_mac_port(dev);
4695  }
4696 
4697  return dc2114x_autoconf(dev);
4698 }
4699 
4700 static int
4701 type3_infoblock(struct net_device *dev, u_char count, u_char *p)
4702 {
4703  struct de4x5_private *lp = netdev_priv(dev);
4704  u_char len = (*p & BLOCK_LEN)+1;
4705 
4706  /* Recursively figure out the info blocks */
4707  if (--count > lp->tcount) {
4708  if (*(p+len) < 128) {
4709  return dc_infoblock[COMPACT](dev, count, p+len);
4710  } else {
4711  return dc_infoblock[*(p+len+1)](dev, count, p+len);
4712  }
4713  }
4714 
4715  p += 2;
4716  if (lp->state == INITIALISED) {
4717  lp->ibn = 3;
4718  lp->active = *p++;
4719  if (MOTO_SROM_BUG) lp->active = 0;
4720  lp->phy[lp->active].gep = (*p ? p : NULL); p += (2 * (*p) + 1);
4721  lp->phy[lp->active].rst = (*p ? p : NULL); p += (2 * (*p) + 1);
4722  lp->phy[lp->active].mc = get_unaligned_le16(p); p += 2;
4723  lp->phy[lp->active].ana = get_unaligned_le16(p); p += 2;
4724  lp->phy[lp->active].fdx = get_unaligned_le16(p); p += 2;
4725  lp->phy[lp->active].ttm = get_unaligned_le16(p); p += 2;
4726  lp->phy[lp->active].mci = *p;
4727  return 0;
4728  } else if ((lp->media == INIT) && (lp->timeout < 0)) {
4729  lp->ibn = 3;
4730  lp->active = *p;
4731  if (MOTO_SROM_BUG) lp->active = 0;
4733  lp->useMII = true;
4734  lp->infoblock_media = ANS;
4735 
4736  de4x5_switch_mac_port(dev);
4737  }
4738 
4739  return dc2114x_autoconf(dev);
4740 }
4741 
4742 static int
4743 type4_infoblock(struct net_device *dev, u_char count, u_char *p)
4744 {
4745  struct de4x5_private *lp = netdev_priv(dev);
4746  u_char flags, csr6, len = (*p & BLOCK_LEN)+1;
4747 
4748  /* Recursively figure out the info blocks */
4749  if (--count > lp->tcount) {
4750  if (*(p+len) < 128) {
4751  return dc_infoblock[COMPACT](dev, count, p+len);
4752  } else {
4753  return dc_infoblock[*(p+len+1)](dev, count, p+len);
4754  }
4755  }
4756 
4757  if ((lp->media == INIT) && (lp->timeout < 0)) {
4758  lp->ibn = 4;
4759  lp->active = 0;
4760  p+=2;
4761  lp->infoblock_media = (*p++) & MEDIA_CODE;
4762  lp->cache.csr13 = CSR13; /* Hard coded defaults */
4763  lp->cache.csr14 = CSR14;
4764  lp->cache.csr15 = CSR15;
4765  lp->cache.gepc = ((s32)(get_unaligned_le16(p)) << 16); p += 2;
4766  lp->cache.gep = ((s32)(get_unaligned_le16(p)) << 16); p += 2;
4767  csr6 = *p++;
4768  flags = *p++;
4769 
4770  lp->asBitValid = (flags & 0x80) ? 0 : -1;
4771  lp->defMedium = (flags & 0x40) ? -1 : 0;
4772  lp->asBit = 1 << ((csr6 >> 1) & 0x07);
4773  lp->asPolarity = ((csr6 & 0x80) ? -1 : 0) & lp->asBit;
4774  lp->infoblock_csr6 = OMR_DEF | ((csr6 & 0x71) << 18);
4775  lp->useMII = false;
4776 
4777  de4x5_switch_mac_port(dev);
4778  }
4779 
4780  return dc2114x_autoconf(dev);
4781 }
4782 
4783 /*
4784 ** This block type provides information for resetting external devices
4785 ** (chips) through the General Purpose Register.
4786 */
4787 static int
4788 type5_infoblock(struct net_device *dev, u_char count, u_char *p)
4789 {
4790  struct de4x5_private *lp = netdev_priv(dev);
4791  u_char len = (*p & BLOCK_LEN)+1;
4792 
4793  /* Recursively figure out the info blocks */
4794  if (--count > lp->tcount) {
4795  if (*(p+len) < 128) {
4796  return dc_infoblock[COMPACT](dev, count, p+len);
4797  } else {
4798  return dc_infoblock[*(p+len+1)](dev, count, p+len);
4799  }
4800  }
4801 
4802  /* Must be initializing to run this code */
4803  if ((lp->state == INITIALISED) || (lp->media == INIT)) {
4804  p+=2;
4805  lp->rst = p;
4806  srom_exec(dev, lp->rst);
4807  }
4808 
4809  return DE4X5_AUTOSENSE_MS;
4810 }
4811 
4812 /*
4813 ** MII Read/Write
4814 */
4815 
4816 static int
4817 mii_rd(u_char phyreg, u_char phyaddr, u_long ioaddr)
4818 {
4819  mii_wdata(MII_PREAMBLE, 2, ioaddr); /* Start of 34 bit preamble... */
4820  mii_wdata(MII_PREAMBLE, 32, ioaddr); /* ...continued */
4821  mii_wdata(MII_STRD, 4, ioaddr); /* SFD and Read operation */
4822  mii_address(phyaddr, ioaddr); /* PHY address to be accessed */
4823  mii_address(phyreg, ioaddr); /* PHY Register to read */
4824  mii_ta(MII_STRD, ioaddr); /* Turn around time - 2 MDC */
4825 
4826  return mii_rdata(ioaddr); /* Read data */
4827 }
4828 
4829 static void
4830 mii_wr(int data, u_char phyreg, u_char phyaddr, u_long ioaddr)
4831 {
4832  mii_wdata(MII_PREAMBLE, 2, ioaddr); /* Start of 34 bit preamble... */
4833  mii_wdata(MII_PREAMBLE, 32, ioaddr); /* ...continued */
4834  mii_wdata(MII_STWR, 4, ioaddr); /* SFD and Write operation */
4835  mii_address(phyaddr, ioaddr); /* PHY address to be accessed */
4836  mii_address(phyreg, ioaddr); /* PHY Register to write */
4837  mii_ta(MII_STWR, ioaddr); /* Turn around time - 2 MDC */
4838  data = mii_swap(data, 16); /* Swap data bit ordering */
4839  mii_wdata(data, 16, ioaddr); /* Write data */
4840 }
4841 
4842 static int
4843 mii_rdata(u_long ioaddr)
4844 {
4845  int i;
4846  s32 tmp = 0;
4847 
4848  for (i=0; i<16; i++) {
4849  tmp <<= 1;
4850  tmp |= getfrom_mii(MII_MRD | MII_RD, ioaddr);
4851  }
4852 
4853  return tmp;
4854 }
4855 
4856 static void
4857 mii_wdata(int data, int len, u_long ioaddr)
4858 {
4859  int i;
4860 
4861  for (i=0; i<len; i++) {
4862  sendto_mii(MII_MWR | MII_WR, data, ioaddr);
4863  data >>= 1;
4864  }
4865 }
4866 
4867 static void
4868 mii_address(u_char addr, u_long ioaddr)
4869 {
4870  int i;
4871 
4872  addr = mii_swap(addr, 5);
4873  for (i=0; i<5; i++) {
4874  sendto_mii(MII_MWR | MII_WR, addr, ioaddr);
4875  addr >>= 1;
4876  }
4877 }
4878 
4879 static void
4880 mii_ta(u_long rw, u_long ioaddr)
4881 {
4882  if (rw == MII_STWR) {
4883  sendto_mii(MII_MWR | MII_WR, 1, ioaddr);
4884  sendto_mii(MII_MWR | MII_WR, 0, ioaddr);
4885  } else {
4886  getfrom_mii(MII_MRD | MII_RD, ioaddr); /* Tri-state MDIO */
4887  }
4888 }
4889 
4890 static int
4891 mii_swap(int data, int len)
4892 {
4893  int i, tmp = 0;
4894 
4895  for (i=0; i<len; i++) {
4896  tmp <<= 1;
4897  tmp |= (data & 1);
4898  data >>= 1;
4899  }
4900 
4901  return tmp;
4902 }
4903 
4904 static void
4905 sendto_mii(u32 command, int data, u_long ioaddr)
4906 {
4907  u32 j;
4908 
4909  j = (data & 1) << 17;
4910  outl(command | j, ioaddr);
4911  udelay(1);
4912  outl(command | MII_MDC | j, ioaddr);
4913  udelay(1);
4914 }
4915 
4916 static int
4917 getfrom_mii(u32 command, u_long ioaddr)
4918 {
4919  outl(command, ioaddr);
4920  udelay(1);
4921  outl(command | MII_MDC, ioaddr);
4922  udelay(1);
4923 
4924  return (inl(ioaddr) >> 19) & 1;
4925 }
4926 
4927 /*
4928 ** Here's 3 ways to calculate the OUI from the ID registers.
4929 */
4930 static int
4931 mii_get_oui(u_char phyaddr, u_long ioaddr)
4932 {
4933 /*
4934  union {
4935  u_short reg;
4936  u_char breg[2];
4937  } a;
4938  int i, r2, r3, ret=0;*/
4939  int r2, r3;
4940 
4941  /* Read r2 and r3 */
4942  r2 = mii_rd(MII_ID0, phyaddr, ioaddr);
4943  r3 = mii_rd(MII_ID1, phyaddr, ioaddr);
4944  /* SEEQ and Cypress way * /
4945  / * Shuffle r2 and r3 * /
4946  a.reg=0;
4947  r3 = ((r3>>10)|(r2<<6))&0x0ff;
4948  r2 = ((r2>>2)&0x3fff);
4949 
4950  / * Bit reverse r3 * /
4951  for (i=0;i<8;i++) {
4952  ret<<=1;
4953  ret |= (r3&1);
4954  r3>>=1;
4955  }
4956 
4957  / * Bit reverse r2 * /
4958  for (i=0;i<16;i++) {
4959  a.reg<<=1;
4960  a.reg |= (r2&1);
4961  r2>>=1;
4962  }
4963 
4964  / * Swap r2 bytes * /
4965  i=a.breg[0];
4966  a.breg[0]=a.breg[1];
4967  a.breg[1]=i;
4968 
4969  return (a.reg<<8)|ret; */ /* SEEQ and Cypress way */
4970 /* return (r2<<6)|(u_int)(r3>>10); */ /* NATIONAL and BROADCOM way */
4971  return r2; /* (I did it) My way */
4972 }
4973 
4974 /*
4975 ** The SROM spec forces us to search addresses [1-31 0]. Bummer.
4976 */
4977 static int
4978 mii_get_phy(struct net_device *dev)
4979 {
4980  struct de4x5_private *lp = netdev_priv(dev);
4981  u_long iobase = dev->base_addr;
4982  int i, j, k, n, limit=ARRAY_SIZE(phy_info);
4983  int id;
4984 
4985  lp->active = 0;
4986  lp->useMII = true;
4987 
4988  /* Search the MII address space for possible PHY devices */
4989  for (n=0, lp->mii_cnt=0, i=1; !((i==1) && (n==1)); i=(i+1)%DE4X5_MAX_MII) {
4990  lp->phy[lp->active].addr = i;
4991  if (i==0) n++; /* Count cycles */
4992  while (de4x5_reset_phy(dev)<0) udelay(100);/* Wait for reset */
4993  id = mii_get_oui(i, DE4X5_MII);
4994  if ((id == 0) || (id == 65535)) continue; /* Valid ID? */
4995  for (j=0; j<limit; j++) { /* Search PHY table */
4996  if (id != phy_info[j].id) continue; /* ID match? */
4997  for (k=0; k < DE4X5_MAX_PHY && lp->phy[k].id; k++);
4998  if (k < DE4X5_MAX_PHY) {
4999  memcpy((char *)&lp->phy[k],
5000  (char *)&phy_info[j], sizeof(struct phy_table));
5001  lp->phy[k].addr = i;
5002  lp->mii_cnt++;
5003  lp->active++;
5004  } else {
5005  goto purgatory; /* Stop the search */
5006  }
5007  break;
5008  }
5009  if ((j == limit) && (i < DE4X5_MAX_MII)) {
5010  for (k=0; k < DE4X5_MAX_PHY && lp->phy[k].id; k++);
5011  lp->phy[k].addr = i;
5012  lp->phy[k].id = id;
5013  lp->phy[k].spd.reg = GENERIC_REG; /* ANLPA register */
5014  lp->phy[k].spd.mask = GENERIC_MASK; /* 100Mb/s technologies */
5015  lp->phy[k].spd.value = GENERIC_VALUE; /* TX & T4, H/F Duplex */
5016  lp->mii_cnt++;
5017  lp->active++;
5018  printk("%s: Using generic MII device control. If the board doesn't operate,\nplease mail the following dump to the author:\n", dev->name);
5019  j = de4x5_debug;
5020  de4x5_debug |= DEBUG_MII;
5021  de4x5_dbg_mii(dev, k);
5022  de4x5_debug = j;
5023  printk("\n");
5024  }
5025  }
5026  purgatory:
5027  lp->active = 0;
5028  if (lp->phy[0].id) { /* Reset the PHY devices */
5029  for (k=0; k < DE4X5_MAX_PHY && lp->phy[k].id; k++) { /*For each PHY*/
5030  mii_wr(MII_CR_RST, MII_CR, lp->phy[k].addr, DE4X5_MII);
5031  while (mii_rd(MII_CR, lp->phy[k].addr, DE4X5_MII) & MII_CR_RST);
5032 
5033  de4x5_dbg_mii(dev, k);
5034  }
5035  }
5036  if (!lp->mii_cnt) lp->useMII = false;
5037 
5038  return lp->mii_cnt;
5039 }
5040 
5041 static char *
5042 build_setup_frame(struct net_device *dev, int mode)
5043 {
5044  struct de4x5_private *lp = netdev_priv(dev);
5045  int i;
5046  char *pa = lp->setup_frame;
5047 
5048  /* Initialise the setup frame */
5049  if (mode == ALL) {
5051  }
5052 
5053  if (lp->setup_f == HASH_PERF) {
5054  for (pa=lp->setup_frame+IMPERF_PA_OFFSET, i=0; i<ETH_ALEN; i++) {
5055  *(pa + i) = dev->dev_addr[i]; /* Host address */
5056  if (i & 0x01) pa += 2;
5057  }
5058  *(lp->setup_frame + (HASH_TABLE_LEN >> 3) - 3) = 0x80;
5059  } else {
5060  for (i=0; i<ETH_ALEN; i++) { /* Host address */
5061  *(pa + (i&1)) = dev->dev_addr[i];
5062  if (i & 0x01) pa += 4;
5063  }
5064  for (i=0; i<ETH_ALEN; i++) { /* Broadcast address */
5065  *(pa + (i&1)) = (char) 0xff;
5066  if (i & 0x01) pa += 4;
5067  }
5068  }
5069 
5070  return pa; /* Points to the next entry */
5071 }
5072 
5073 static void
5074 disable_ast(struct net_device *dev)
5075 {
5076  struct de4x5_private *lp = netdev_priv(dev);
5077  del_timer_sync(&lp->timer);
5078 }
5079 
5080 static long
5081 de4x5_switch_mac_port(struct net_device *dev)
5082 {
5083  struct de4x5_private *lp = netdev_priv(dev);
5084  u_long iobase = dev->base_addr;
5085  s32 omr;
5086 
5087  STOP_DE4X5;
5088 
5089  /* Assert the OMR_PS bit in CSR6 */
5090  omr = (inl(DE4X5_OMR) & ~(OMR_PS | OMR_HBD | OMR_TTM | OMR_PCS | OMR_SCR |
5091  OMR_FDX));
5092  omr |= lp->infoblock_csr6;
5093  if (omr & OMR_PS) omr |= OMR_HBD;
5094  outl(omr, DE4X5_OMR);
5095 
5096  /* Soft Reset */
5097  RESET_DE4X5;
5098 
5099  /* Restore the GEP - especially for COMPACT and Type 0 Infoblocks */
5100  if (lp->chipset == DC21140) {
5101  gep_wr(lp->cache.gepc, dev);
5102  gep_wr(lp->cache.gep, dev);
5103  } else if ((lp->chipset & ~0x0ff) == DC2114x) {
5104  reset_init_sia(dev, lp->cache.csr13, lp->cache.csr14, lp->cache.csr15);
5105  }
5106 
5107  /* Restore CSR6 */
5108  outl(omr, DE4X5_OMR);
5109 
5110  /* Reset CSR8 */
5111  inl(DE4X5_MFC);
5112 
5113  return omr;
5114 }
5115 
5116 static void
5117 gep_wr(s32 data, struct net_device *dev)
5118 {
5119  struct de4x5_private *lp = netdev_priv(dev);
5120  u_long iobase = dev->base_addr;
5121 
5122  if (lp->chipset == DC21140) {
5123  outl(data, DE4X5_GEP);
5124  } else if ((lp->chipset & ~0x00ff) == DC2114x) {
5125  outl((data<<16) | lp->cache.csr15, DE4X5_SIGR);
5126  }
5127 }
5128 
5129 static int
5130 gep_rd(struct net_device *dev)
5131 {
5132  struct de4x5_private *lp = netdev_priv(dev);
5133  u_long iobase = dev->base_addr;
5134 
5135  if (lp->chipset == DC21140) {
5136  return inl(DE4X5_GEP);
5137  } else if ((lp->chipset & ~0x00ff) == DC2114x) {
5138  return inl(DE4X5_SIGR) & 0x000fffff;
5139  }
5140 
5141  return 0;
5142 }
5143 
5144 static void
5145 yawn(struct net_device *dev, int state)
5146 {
5147  struct de4x5_private *lp = netdev_priv(dev);
5148  u_long iobase = dev->base_addr;
5149 
5150  if ((lp->chipset == DC21040) || (lp->chipset == DC21140)) return;
5151 
5152  if(lp->bus == EISA) {
5153  switch(state) {
5154  case WAKEUP:
5155  outb(WAKEUP, PCI_CFPM);
5156  mdelay(10);
5157  break;
5158 
5159  case SNOOZE:
5160  outb(SNOOZE, PCI_CFPM);
5161  break;
5162 
5163  case SLEEP:
5164  outl(0, DE4X5_SICR);
5165  outb(SLEEP, PCI_CFPM);
5166  break;
5167  }
5168  } else {
5169  struct pci_dev *pdev = to_pci_dev (lp->gendev);
5170  switch(state) {
5171  case WAKEUP:
5172  pci_write_config_byte(pdev, PCI_CFDA_PSM, WAKEUP);
5173  mdelay(10);
5174  break;
5175 
5176  case SNOOZE:
5177  pci_write_config_byte(pdev, PCI_CFDA_PSM, SNOOZE);
5178  break;
5179 
5180  case SLEEP:
5181  outl(0, DE4X5_SICR);
5182  pci_write_config_byte(pdev, PCI_CFDA_PSM, SLEEP);
5183  break;
5184  }
5185  }
5186 }
5187 
5188 static void
5189 de4x5_parse_params(struct net_device *dev)
5190 {
5191  struct de4x5_private *lp = netdev_priv(dev);
5192  char *p, *q, t;
5193 
5194  lp->params.fdx = false;
5195  lp->params.autosense = AUTO;
5196 
5197  if (args == NULL) return;
5198 
5199  if ((p = strstr(args, dev->name))) {
5200  if (!(q = strstr(p+strlen(dev->name), "eth"))) q = p + strlen(p);
5201  t = *q;
5202  *q = '\0';
5203 
5204  if (strstr(p, "fdx") || strstr(p, "FDX")) lp->params.fdx = true;
5205 
5206  if (strstr(p, "autosense") || strstr(p, "AUTOSENSE")) {
5207  if (strstr(p, "TP")) {
5208  lp->params.autosense = TP;
5209  } else if (strstr(p, "TP_NW")) {
5210  lp->params.autosense = TP_NW;
5211  } else if (strstr(p, "BNC")) {
5212  lp->params.autosense = BNC;
5213  } else if (strstr(p, "AUI")) {
5214  lp->params.autosense = AUI;
5215  } else if (strstr(p, "BNC_AUI")) {
5216  lp->params.autosense = BNC;
5217  } else if (strstr(p, "10Mb")) {
5218  lp->params.autosense = _10Mb;
5219  } else if (strstr(p, "100Mb")) {
5220  lp->params.autosense = _100Mb;
5221  } else if (strstr(p, "AUTO")) {
5222  lp->params.autosense = AUTO;
5223  }
5224  }
5225  *q = t;
5226  }
5227 }
5228 
5229 static void
5230 de4x5_dbg_open(struct net_device *dev)
5231 {
5232  struct de4x5_private *lp = netdev_priv(dev);
5233  int i;
5234 
5235  if (de4x5_debug & DEBUG_OPEN) {
5236  printk("%s: de4x5 opening with irq %d\n",dev->name,dev->irq);
5237  printk("\tphysical address: %pM\n", dev->dev_addr);
5238  printk("Descriptor head addresses:\n");
5239  printk("\t0x%8.8lx 0x%8.8lx\n",(u_long)lp->rx_ring,(u_long)lp->tx_ring);
5240  printk("Descriptor addresses:\nRX: ");
5241  for (i=0;i<lp->rxRingSize-1;i++){
5242  if (i < 3) {
5243  printk("0x%8.8lx ",(u_long)&lp->rx_ring[i].status);
5244  }
5245  }
5246  printk("...0x%8.8lx\n",(u_long)&lp->rx_ring[i].status);
5247  printk("TX: ");
5248  for (i=0;i<lp->txRingSize-1;i++){
5249  if (i < 3) {
5250  printk("0x%8.8lx ", (u_long)&lp->tx_ring[i].status);
5251  }
5252  }
5253  printk("...0x%8.8lx\n", (u_long)&lp->tx_ring[i].status);
5254  printk("Descriptor buffers:\nRX: ");
5255  for (i=0;i<lp->rxRingSize-1;i++){
5256  if (i < 3) {
5257  printk("0x%8.8x ",le32_to_cpu(lp->rx_ring[i].buf));
5258  }
5259  }
5260  printk("...0x%8.8x\n",le32_to_cpu(lp->rx_ring[i].buf));
5261  printk("TX: ");
5262  for (i=0;i<lp->txRingSize-1;i++){
5263  if (i < 3) {
5264  printk("0x%8.8x ", le32_to_cpu(lp->tx_ring[i].buf));
5265  }
5266  }
5267  printk("...0x%8.8x\n", le32_to_cpu(lp->tx_ring[i].buf));
5268  printk("Ring size:\nRX: %d\nTX: %d\n",
5269  (short)lp->rxRingSize,
5270  (short)lp->txRingSize);
5271  }
5272 }
5273 
5274 static void
5275 de4x5_dbg_mii(struct net_device *dev, int k)
5276 {
5277  struct de4x5_private *lp = netdev_priv(dev);
5278  u_long iobase = dev->base_addr;
5279 
5280  if (de4x5_debug & DEBUG_MII) {
5281  printk("\nMII device address: %d\n", lp->phy[k].addr);
5282  printk("MII CR: %x\n",mii_rd(MII_CR,lp->phy[k].addr,DE4X5_MII));
5283  printk("MII SR: %x\n",mii_rd(MII_SR,lp->phy[k].addr,DE4X5_MII));
5284  printk("MII ID0: %x\n",mii_rd(MII_ID0,lp->phy[k].addr,DE4X5_MII));
5285  printk("MII ID1: %x\n",mii_rd(MII_ID1,lp->phy[k].addr,DE4X5_MII));
5286  if (lp->phy[k].id != BROADCOM_T4) {
5287  printk("MII ANA: %x\n",mii_rd(0x04,lp->phy[k].addr,DE4X5_MII));
5288  printk("MII ANC: %x\n",mii_rd(0x05,lp->phy[k].addr,DE4X5_MII));
5289  }
5290  printk("MII 16: %x\n",mii_rd(0x10,lp->phy[k].addr,DE4X5_MII));
5291  if (lp->phy[k].id != BROADCOM_T4) {
5292  printk("MII 17: %x\n",mii_rd(0x11,lp->phy[k].addr,DE4X5_MII));
5293  printk("MII 18: %x\n",mii_rd(0x12,lp->phy[k].addr,DE4X5_MII));
5294  } else {
5295  printk("MII 20: %x\n",mii_rd(0x14,lp->phy[k].addr,DE4X5_MII));
5296  }
5297  }
5298 }
5299 
5300 static void
5301 de4x5_dbg_media(struct net_device *dev)
5302 {
5303  struct de4x5_private *lp = netdev_priv(dev);
5304 
5305  if (lp->media != lp->c_media) {
5306  if (de4x5_debug & DEBUG_MEDIA) {
5307  printk("%s: media is %s%s\n", dev->name,
5308  (lp->media == NC ? "unconnected, link down or incompatible connection" :
5309  (lp->media == TP ? "TP" :
5310  (lp->media == ANS ? "TP/Nway" :
5311  (lp->media == BNC ? "BNC" :
5312  (lp->media == AUI ? "AUI" :
5313  (lp->media == BNC_AUI ? "BNC/AUI" :
5314  (lp->media == EXT_SIA ? "EXT SIA" :
5315  (lp->media == _100Mb ? "100Mb/s" :
5316  (lp->media == _10Mb ? "10Mb/s" :
5317  "???"
5318  ))))))))), (lp->fdx?" full duplex.":"."));
5319  }
5320  lp->c_media = lp->media;
5321  }
5322 }
5323 
5324 static void
5325 de4x5_dbg_srom(struct de4x5_srom *p)
5326 {
5327  int i;
5328 
5329  if (de4x5_debug & DEBUG_SROM) {
5330  printk("Sub-system Vendor ID: %04x\n", *((u_short *)p->sub_vendor_id));
5331  printk("Sub-system ID: %04x\n", *((u_short *)p->sub_system_id));
5332  printk("ID Block CRC: %02x\n", (u_char)(p->id_block_crc));
5333  printk("SROM version: %02x\n", (u_char)(p->version));
5334  printk("# controllers: %02x\n", (u_char)(p->num_controllers));
5335 
5336  printk("Hardware Address: %pM\n", p->ieee_addr);
5337  printk("CRC checksum: %04x\n", (u_short)(p->chksum));
5338  for (i=0; i<64; i++) {
5339  printk("%3d %04x\n", i<<1, (u_short)*((u_short *)p+i));
5340  }
5341  }
5342 }
5343 
5344 static void
5345 de4x5_dbg_rx(struct sk_buff *skb, int len)
5346 {
5347  int i, j;
5348 
5349  if (de4x5_debug & DEBUG_RX) {
5350  printk("R: %pM <- %pM len/SAP:%02x%02x [%d]\n",
5351  skb->data, &skb->data[6],
5352  (u_char)skb->data[12],
5353  (u_char)skb->data[13],
5354  len);
5355  for (j=0; len>0;j+=16, len-=16) {
5356  printk(" %03x: ",j);
5357  for (i=0; i<16 && i<len; i++) {
5358  printk("%02x ",(u_char)skb->data[i+j]);
5359  }
5360  printk("\n");
5361  }
5362  }
5363 }
5364 
5365 /*
5366 ** Perform IOCTL call functions here. Some are privileged operations and the
5367 ** effective uid is checked in those cases. In the normal course of events
5368 ** this function is only used for my testing.
5369 */
5370 static int
5371 de4x5_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
5372 {
5373  struct de4x5_private *lp = netdev_priv(dev);
5374  struct de4x5_ioctl *ioc = (struct de4x5_ioctl *) &rq->ifr_ifru;
5375  u_long iobase = dev->base_addr;
5376  int i, j, status = 0;
5377  s32 omr;
5378  union {
5379  u8 addr[144];
5380  u16 sval[72];
5381  u32 lval[36];
5382  } tmp;
5383  u_long flags = 0;
5384 
5385  switch(ioc->cmd) {
5386  case DE4X5_GET_HWADDR: /* Get the hardware address */
5387  ioc->len = ETH_ALEN;
5388  for (i=0; i<ETH_ALEN; i++) {
5389  tmp.addr[i] = dev->dev_addr[i];
5390  }
5391  if (copy_to_user(ioc->data, tmp.addr, ioc->len)) return -EFAULT;
5392  break;
5393 
5394  case DE4X5_SET_HWADDR: /* Set the hardware address */
5395  if (!capable(CAP_NET_ADMIN)) return -EPERM;
5396  if (copy_from_user(tmp.addr, ioc->data, ETH_ALEN)) return -EFAULT;
5397  if (netif_queue_stopped(dev))
5398  return -EBUSY;
5399  netif_stop_queue(dev);
5400  for (i=0; i<ETH_ALEN; i++) {
5401  dev->dev_addr[i] = tmp.addr[i];
5402  }
5403  build_setup_frame(dev, PHYS_ADDR_ONLY);
5404  /* Set up the descriptor and give ownership to the card */
5405  load_packet(dev, lp->setup_frame, TD_IC | PERFECT_F | TD_SET |
5406  SETUP_FRAME_LEN, (struct sk_buff *)1);
5407  lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
5408  outl(POLL_DEMAND, DE4X5_TPD); /* Start the TX */
5409  netif_wake_queue(dev); /* Unlock the TX ring */
5410  break;
5411 
5412  case DE4X5_SAY_BOO: /* Say "Boo!" to the kernel log file */
5413  if (!capable(CAP_NET_ADMIN)) return -EPERM;
5414  printk("%s: Boo!\n", dev->name);
5415  break;
5416 
5417  case DE4X5_MCA_EN: /* Enable pass all multicast addressing */
5418  if (!capable(CAP_NET_ADMIN)) return -EPERM;
5419  omr = inl(DE4X5_OMR);
5420  omr |= OMR_PM;
5421  outl(omr, DE4X5_OMR);
5422  break;
5423 
5424  case DE4X5_GET_STATS: /* Get the driver statistics */
5425  {
5426  struct pkt_stats statbuf;
5427  ioc->len = sizeof(statbuf);
5428  spin_lock_irqsave(&lp->lock, flags);
5429  memcpy(&statbuf, &lp->pktStats, ioc->len);
5430  spin_unlock_irqrestore(&lp->lock, flags);
5431  if (copy_to_user(ioc->data, &statbuf, ioc->len))
5432  return -EFAULT;
5433  break;
5434  }
5435  case DE4X5_CLR_STATS: /* Zero out the driver statistics */
5436  if (!capable(CAP_NET_ADMIN)) return -EPERM;
5437  spin_lock_irqsave(&lp->lock, flags);
5438  memset(&lp->pktStats, 0, sizeof(lp->pktStats));
5439  spin_unlock_irqrestore(&lp->lock, flags);
5440  break;
5441 
5442  case DE4X5_GET_OMR: /* Get the OMR Register contents */
5443  tmp.addr[0] = inl(DE4X5_OMR);
5444  if (copy_to_user(ioc->data, tmp.addr, 1)) return -EFAULT;
5445  break;
5446 
5447  case DE4X5_SET_OMR: /* Set the OMR Register contents */
5448  if (!capable(CAP_NET_ADMIN)) return -EPERM;
5449  if (copy_from_user(tmp.addr, ioc->data, 1)) return -EFAULT;
5450  outl(tmp.addr[0], DE4X5_OMR);
5451  break;
5452 
5453  case DE4X5_GET_REG: /* Get the DE4X5 Registers */
5454  j = 0;
5455  tmp.lval[0] = inl(DE4X5_STS); j+=4;
5456  tmp.lval[1] = inl(DE4X5_BMR); j+=4;
5457  tmp.lval[2] = inl(DE4X5_IMR); j+=4;
5458  tmp.lval[3] = inl(DE4X5_OMR); j+=4;
5459  tmp.lval[4] = inl(DE4X5_SISR); j+=4;
5460  tmp.lval[5] = inl(DE4X5_SICR); j+=4;
5461  tmp.lval[6] = inl(DE4X5_STRR); j+=4;
5462  tmp.lval[7] = inl(DE4X5_SIGR); j+=4;
5463  ioc->len = j;
5464  if (copy_to_user(ioc->data, tmp.lval, ioc->len))
5465  return -EFAULT;
5466  break;
5467 
5468 #define DE4X5_DUMP 0x0f /* Dump the DE4X5 Status */
5469 /*
5470  case DE4X5_DUMP:
5471  j = 0;
5472  tmp.addr[j++] = dev->irq;
5473  for (i=0; i<ETH_ALEN; i++) {
5474  tmp.addr[j++] = dev->dev_addr[i];
5475  }
5476  tmp.addr[j++] = lp->rxRingSize;
5477  tmp.lval[j>>2] = (long)lp->rx_ring; j+=4;
5478  tmp.lval[j>>2] = (long)lp->tx_ring; j+=4;
5479 
5480  for (i=0;i<lp->rxRingSize-1;i++){
5481  if (i < 3) {
5482  tmp.lval[j>>2] = (long)&lp->rx_ring[i].status; j+=4;
5483  }
5484  }
5485  tmp.lval[j>>2] = (long)&lp->rx_ring[i].status; j+=4;
5486  for (i=0;i<lp->txRingSize-1;i++){
5487  if (i < 3) {
5488  tmp.lval[j>>2] = (long)&lp->tx_ring[i].status; j+=4;
5489  }
5490  }
5491  tmp.lval[j>>2] = (long)&lp->tx_ring[i].status; j+=4;
5492 
5493  for (i=0;i<lp->rxRingSize-1;i++){
5494  if (i < 3) {
5495  tmp.lval[j>>2] = (s32)le32_to_cpu(lp->rx_ring[i].buf); j+=4;
5496  }
5497  }
5498  tmp.lval[j>>2] = (s32)le32_to_cpu(lp->rx_ring[i].buf); j+=4;
5499  for (i=0;i<lp->txRingSize-1;i++){
5500  if (i < 3) {
5501  tmp.lval[j>>2] = (s32)le32_to_cpu(lp->tx_ring[i].buf); j+=4;
5502  }
5503  }
5504  tmp.lval[j>>2] = (s32)le32_to_cpu(lp->tx_ring[i].buf); j+=4;
5505 
5506  for (i=0;i<lp->rxRingSize;i++){
5507  tmp.lval[j>>2] = le32_to_cpu(lp->rx_ring[i].status); j+=4;
5508  }
5509  for (i=0;i<lp->txRingSize;i++){
5510  tmp.lval[j>>2] = le32_to_cpu(lp->tx_ring[i].status); j+=4;
5511  }
5512 
5513  tmp.lval[j>>2] = inl(DE4X5_BMR); j+=4;
5514  tmp.lval[j>>2] = inl(DE4X5_TPD); j+=4;
5515  tmp.lval[j>>2] = inl(DE4X5_RPD); j+=4;
5516  tmp.lval[j>>2] = inl(DE4X5_RRBA); j+=4;
5517  tmp.lval[j>>2] = inl(DE4X5_TRBA); j+=4;
5518  tmp.lval[j>>2] = inl(DE4X5_STS); j+=4;
5519  tmp.lval[j>>2] = inl(DE4X5_OMR); j+=4;
5520  tmp.lval[j>>2] = inl(DE4X5_IMR); j+=4;
5521  tmp.lval[j>>2] = lp->chipset; j+=4;
5522  if (lp->chipset == DC21140) {
5523  tmp.lval[j>>2] = gep_rd(dev); j+=4;
5524  } else {
5525  tmp.lval[j>>2] = inl(DE4X5_SISR); j+=4;
5526  tmp.lval[j>>2] = inl(DE4X5_SICR); j+=4;
5527  tmp.lval[j>>2] = inl(DE4X5_STRR); j+=4;
5528  tmp.lval[j>>2] = inl(DE4X5_SIGR); j+=4;
5529  }
5530  tmp.lval[j>>2] = lp->phy[lp->active].id; j+=4;
5531  if (lp->phy[lp->active].id && (!lp->useSROM || lp->useMII)) {
5532  tmp.lval[j>>2] = lp->active; j+=4;
5533  tmp.lval[j>>2]=mii_rd(MII_CR,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5534  tmp.lval[j>>2]=mii_rd(MII_SR,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5535  tmp.lval[j>>2]=mii_rd(MII_ID0,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5536  tmp.lval[j>>2]=mii_rd(MII_ID1,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5537  if (lp->phy[lp->active].id != BROADCOM_T4) {
5538  tmp.lval[j>>2]=mii_rd(MII_ANA,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5539  tmp.lval[j>>2]=mii_rd(MII_ANLPA,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5540  }
5541  tmp.lval[j>>2]=mii_rd(0x10,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5542  if (lp->phy[lp->active].id != BROADCOM_T4) {
5543  tmp.lval[j>>2]=mii_rd(0x11,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5544  tmp.lval[j>>2]=mii_rd(0x12,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5545  } else {
5546  tmp.lval[j>>2]=mii_rd(0x14,lp->phy[lp->active].addr,DE4X5_MII); j+=4;
5547  }
5548  }
5549 
5550  tmp.addr[j++] = lp->txRingSize;
5551  tmp.addr[j++] = netif_queue_stopped(dev);
5552 
5553  ioc->len = j;
5554  if (copy_to_user(ioc->data, tmp.addr, ioc->len)) return -EFAULT;
5555  break;
5556 
5557 */
5558  default:
5559  return -EOPNOTSUPP;
5560  }
5561 
5562  return status;
5563 }
5564 
5565 static int __init de4x5_module_init (void)
5566 {
5567  int err = 0;
5568 
5569 #ifdef CONFIG_PCI
5570  err = pci_register_driver(&de4x5_pci_driver);
5571 #endif
5572 #ifdef CONFIG_EISA
5573  err |= eisa_driver_register (&de4x5_eisa_driver);
5574 #endif
5575 
5576  return err;
5577 }
5578 
5579 static void __exit de4x5_module_exit (void)
5580 {
5581 #ifdef CONFIG_PCI
5582  pci_unregister_driver (&de4x5_pci_driver);
5583 #endif
5584 #ifdef CONFIG_EISA
5585  eisa_driver_unregister (&de4x5_eisa_driver);
5586 #endif
5587 }
5588 
5589 module_init (de4x5_module_init);
5590 module_exit (de4x5_module_exit);