kuba | 23 Jan 19:11 2014

Auto detection vs specifying ports

  Hi,

I have a PCI parallel port card based on the CH353L chip and a built-in 
parallel port. When I load parport_pc, it successfully finds the built-in 
parallel port but does not recognize the card. If I specify ports for 
the card and the built-in port, it successfully finds my card but not 
the built-in port.

When I look at the parport_pc_init function:

	if (io[0]) {
		int i;
		/* Only probe the ports we were given. */
		user_specified = 1;
		for (i = 0; i < PARPORT_PC_MAX_PORTS; i++) {
			if (!io[i])
				break;
			if (io_hi[i] == PARPORT_IOHI_AUTO)
				io_hi[i] = 0x400 + io[i];
			parport_pc_probe_port(io[i], io_hi[i],
					irqval[i], dmaval[i], NULL, 0);
		}
	} else
		parport_pc_find_ports(irqval[0], dmaval[0]);

it autodetects only if I don't specify ports. Why not probe the 
specified ports and then autodetect as well?

if I do something like this:

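The poster's code is truncated in the archive. As a stand-in, here is a hedged user-space sketch (my illustration, not the poster's actual patch and not kernel code) of the suggested flow: probe the user-given ports first, then autodetect the standard bases that were not already claimed.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PORTS 8

/* Hypothetical simulation of "probe specified ports AND autodetect".
 * The standard legacy bases; in the real driver these come from
 * parport_pc_find_ports(). */
static const int autodetect_bases[] = { 0x378, 0x278, 0x3bc };

static int probed[MAX_PORTS];
static int nprobed;

static int already_probed(int base)
{
    for (int i = 0; i < nprobed; i++)
        if (probed[i] == base)
            return 1;
    return 0;
}

static void probe_port(int base)
{
    if (nprobed < MAX_PORTS)
        probed[nprobed++] = base;
}

/* io[] mimics the module-parameter list, terminated by a 0 entry */
static int init_ports(const int *io)
{
    for (int i = 0; i < MAX_PORTS && io[i]; i++)
        probe_port(io[i]);                   /* user-specified first */
    for (size_t i = 0;
         i < sizeof(autodetect_bases) / sizeof(autodetect_bases[0]); i++)
        if (!already_probed(autodetect_bases[i]))
            probe_port(autodetect_bases[i]); /* ...then autodetect */
    return nprobed;
}
```

In the real driver, the equivalent change would touch parport_pc_init() and would have to skip bases that a user-specified probe already claimed, as the skip check above does.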

Sebastian Andrzej Siewior | 4 Dec 21:08 2013

[PATCH] parport: parport_pc: fix id print of a device

Since commit 7106b4e3 ("8250: Oxford Semiconductor Devices") the debug
print of the device id no longer matches the real device if it is
located in the "enum" behind oxsemi_pcie_pport. The reason is that the
code assumes that each id contributes exactly one entry to the PCI table.
The fix is to look up the currently used id from the id parameter.
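A minimal sketch of the bug being described, using a made-up table (names and IDs are illustrative, not the real parport_pc_pci_tbl): when one enum value owns more than one PCI table row, indexing the table by enum offset drifts, whereas reporting from the matched entry stays correct.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative miniature: one driver_data value (card B) owns two
 * table rows, so "enum index == table index" breaks. */
struct pci_id { int vendor, device, driver_data; };

static const struct pci_id tbl[] = {
    { 0x1111, 0x0001, 0 },   /* card A */
    { 0x2222, 0x0002, 1 },   /* card B, first variant  */
    { 0x2222, 0x0003, 1 },   /* card B, second variant */
    { 0x3333, 0x0004, 2 },   /* card C */
};

/* buggy scheme: assume row index equals driver_data; for card C
 * (driver_data 2) this lands on a card-B row */
static const struct pci_id *by_enum(int driver_data)
{
    return &tbl[driver_data];
}

/* fixed scheme: report the row that actually matched, which is what
 * printing id->vendor / id->device achieves */
static const struct pci_id *by_match(int vendor, int device)
{
    for (size_t i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++)
        if (tbl[i].vendor == vendor && tbl[i].device == device)
            return &tbl[i];
    return NULL;
}
```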

Cc: Lee Howard <lee.howard <at> mainpine.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy <at> linutronix.de>
---
 drivers/parport/parport_pc.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
index b0a0d53..3cdb60d 100644
--- a/drivers/parport/parport_pc.c
+++ b/drivers/parport/parport_pc.c
@@ -2817,16 +2817,12 @@ static int parport_pc_pci_probe(struct pci_dev *dev,
 		if (irq == IRQ_NONE) {
 			printk(KERN_DEBUG
 	"PCI parallel port detected: %04x:%04x, I/O at %#lx(%#lx)\n",
-				parport_pc_pci_tbl[i + last_sio].vendor,
-				parport_pc_pci_tbl[i + last_sio].device,
-				io_lo, io_hi);
+				id->vendor, id->device, io_lo, io_hi);
 			irq = PARPORT_IRQ_NONE;
 		} else {
 			printk(KERN_DEBUG
 	"PCI parallel port detected: %04x:%04x, I/O at %#lx(%#lx), IRQ %d\n",
-				parport_pc_pci_tbl[i + last_sio].vendor,
-				parport_pc_pci_tbl[i + last_sio].device,

Sebastian Andrzej Siewior | 27 Nov 17:43 2013

[PATCH] parport: parport_pc: remove double PCI ID for NetMos

In commit 85747f ("[PATCH] parport: add NetMOS 9805 support") Max added
the PCI ID for the NetMOS 9805 based on a Debian bug report from 2004,
which was in the v2.4.26 time frame. The patch made it into 2.6.14.
Shortly before that patch, akpm merged commit 296d3c783b ("[PATCH] Support
NetMOS based PCI cards providing serial and parallel ports"), which made
it into v2.6.9-rc1.
Now we have two different entries for the same PCI ID.
I have here a NetMos 9805 which claims to support SPP/EPP/ECP modes.
This patch takes Max's entry for titan_1284p1 (base != -1 specifies the
ioport for ECP mode) and replaces akpm's entry for netmos_9805, which
specified -1 (= none). Both share the same PCI ID (my card has subsystem
0x1000 / 0x0020, so it should match PCI_ANY).

While here I also drop the entry for titan_1284p2 which is the same as
netmos_9815.
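The effect of the duplicate rows can be sketched as follows (an illustrative model, not the real table; the driver_data values are made up). PCI-style matching scans the table linearly, so the first row for an ID always wins and the second is dead weight:

```c
#include <assert.h>

/* Two rows with the same vendor/device but different driver_data,
 * like the duplicated NetMos 9805 id (0x9710/0x9805). */
struct entry { int vendor, device, driver_data; };

static const struct entry tbl[] = {
    { 0x9710, 0x9805, 100 },  /* earlier entry: ECP base = -1 (none)  */
    { 0x9710, 0x9805, 200 },  /* later entry: ECP-capable, never hit  */
};

/* linear first-match scan, as PCI id matching effectively does */
static int match(int vendor, int device)
{
    for (int i = 0; i < 2; i++)
        if (tbl[i].vendor == vendor && tbl[i].device == device)
            return tbl[i].driver_data;
    return -1;
}
```

Which is why the patch keeps only one (the ECP-capable) row.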

Cc: Maximilian Attems <maks <at> stro.at>
Cc: Andrew Morton <akpm <at> linux-foundation.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy <at> linutronix.de>
---
 drivers/parport/parport_pc.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
index 903e128..b0a0d53 100644
--- a/drivers/parport/parport_pc.c
+++ b/drivers/parport/parport_pc.c
@@ -2596,8 +2596,6 @@ enum parport_pc_pci_cards {
 	syba_2p_epp,
 	syba_1p_ecp,

Jingoo Han | 25 Nov 03:16 2013

[PATCH 1/2] parport_serial: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure, so there is no need for the driver to clear it
to NULL manually.
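A toy model of the point above (my simplification, not the real driver core): the core clears the data after a failed probe, so the driver's own clearing on its error path is redundant.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of a bus/driver split. */
struct device { void *drvdata; };

static int probe(struct device *dev)
{
    dev->drvdata = (void *)0x1;  /* driver stashes private data... */
    return -1;                   /* ...and then probe fails */
}

static void bus_probe(struct device *dev)
{
    if (probe(dev) != 0)
        dev->drvdata = NULL;     /* core cleans up, not the driver */
}
```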

Signed-off-by: Jingoo Han <jg1.han <at> samsung.com>
---
 drivers/parport/parport_serial.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/parport/parport_serial.c b/drivers/parport/parport_serial.c
index 1b8bdb7..ff53314 100644
--- a/drivers/parport/parport_serial.c
+++ b/drivers/parport/parport_serial.c
@@ -596,13 +596,11 @@ static int parport_serial_pci_probe(struct pci_dev *dev,

 	err = pci_enable_device (dev);
 	if (err) {
-		pci_set_drvdata (dev, NULL);
 		kfree (priv);
 		return err;
 	}

 	if (parport_register (dev, id)) {
-		pci_set_drvdata (dev, NULL);
 		kfree (priv);
 		return -ENODEV;
 	}
@@ -611,7 +609,6 @@ static int parport_serial_pci_probe(struct pci_dev *dev,
 		int i;
 		for (i = 0; i < priv->num_par; i++)

Jingoo Han | 22 Aug 04:14 2013

[PATCH] parport: amiga: remove unnecessary platform_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure, so there is no need for the driver to clear it
to NULL manually.

Signed-off-by: Jingoo Han <jg1.han <at> samsung.com>
---
 drivers/parport/parport_amiga.c |    1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/parport/parport_amiga.c b/drivers/parport/parport_amiga.c
index 09503b8..26ecdea 100644
--- a/drivers/parport/parport_amiga.c
+++ b/drivers/parport/parport_amiga.c
@@ -232,7 +232,6 @@ static int __exit amiga_parallel_remove(struct platform_device *pdev)
 	if (port->irq != PARPORT_IRQ_NONE)
 		free_irq(IRQ_AMIGA_CIAA_FLG, port);
 	parport_put_port(port);
-	platform_set_drvdata(pdev, NULL);
 	return 0;
 }

--

-- 
1.7.10.4
John Coppens | 14 Feb 18:46 2013

Parport woes

Hello people.

I have this homebrew parallel-port PIC programmer which I've been using for
years. With the advent of parport-less PCs, I transferred the programmer to
an older PC. When I tried to run the program, I found that the programmer
acted erratically.

I wrote a small program in C which just ioperm's the ports, writes a
counter to the parport a number of times, and releases the port,
and noticed that there were glitches where the bits were set to 0. So I
modified the test program to just write 1's, at a rate of 1000/second.
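For reference, a program along those lines might look like the sketch below (my reconstruction, not John's actual code; LPT_BASE is an assumption, and the port I/O calls are x86-Linux-only and need root or CAP_SYS_RAWIO):

```c
#include <stdio.h>
#include <unistd.h>
#if defined(__i386__) || defined(__x86_64__)
#include <sys/io.h>      /* ioperm()/outb() exist only on x86 Linux */
#endif

#define LPT_BASE 0x378   /* typical first parallel port; may differ */

/* microseconds of sleep per byte for a given output rate */
static int delay_us(int per_second)
{
    return 1000000 / per_second;
}

/* claim the data register, write an incrementing counter at a fixed
 * rate, release the port; returns bytes written or -1 on failure */
static int write_counter(int base, int count, int per_second)
{
#if defined(__i386__) || defined(__x86_64__)
    if (ioperm(base, 1, 1) < 0) {
        perror("ioperm");          /* not root, or no port at base */
        return -1;
    }
    for (int i = 0; i < count; i++) {
        outb(i & 0xff, base);      /* data register = low byte */
        usleep(delay_us(per_second));
    }
    ioperm(base, 1, 0);            /* release the port again */
    return count;
#else
    (void)base; (void)count; (void)per_second;
    return -1;                     /* no port I/O on this arch */
#endif
}
```

Run it as root while watching the data lines on the scope; if the glitches appear even with no parport modules loaded, something outside the kernel parport stack is touching the port.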

On the oscilloscope I could then see that the port was regularly (about twice
per second, though not precisely) cleared to 0.

I tried rmmod'ing lp, ppdev, parport, and parport_pc (in different
combinations and one by one). I stopped cupsd. I went into the BIOS and
set the parport to basic SPP operation.

Does anyone have suggestions? Who could be clearing the parport output?
Is there any way to detect which program accesses the port?

John

parport does not start




-- how do I start parport


_______________________________________________
Linux-parport mailing list
Linux-parport <at> lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-parport
Johann Klammer | 18 Oct 07:31 2012

[PATCH/RFC] ppdev: Adds support for async I/O with DMA and ECP

Hello,
This patch adds AIO+ECP+DMA capability to the userspace ppdev device.
For an example on how this may be used, see:
https://github.com/klammerj/dspar/blob/master/dump/main.c

Patch is against linux 3.7.0-rc1 from kernel.org

Was once used for this piece of hardware:
https://github.com/klammerj/dspar

I had to edit the code to match current kernel sources, but had no
opportunity to test it, as the new kernel breaks the screen here (it does
compile).

Old code can be found here:
http://members.aon.at/~aklamme4/parport/index.html

The plip2.c does not work very well and is not included in the patch.

regards,
JK
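One building block of the patch below, the loop in pp_aio_read_retry() that scatters the linear DMA buffer across the caller's iovec, can be exercised stand-alone in user space. This is a sketch of the same logic, not the kernel code itself, with memcpy standing in for copy_to_user:

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>   /* struct iovec */

/* Distribute a linear buffer across an iovec, stopping when either
 * the buffer or the vector runs out; returns bytes copied. */
static ssize_t scatter_copy(const struct iovec *iov, unsigned long count,
                            const void *buf, size_t total)
{
    ssize_t len = 0;
    const char *from = buf;

    for (unsigned long i = 0; i < count && total > 0; i++) {
        size_t chunk = iov[i].iov_len < total ? iov[i].iov_len : total;

        memcpy(iov[i].iov_base, from, chunk);  /* copy_to_user in-kernel */
        from += chunk;
        len += chunk;
        total -= chunk;
    }
    return len;
}
```

The kernel version additionally has to handle a partial copy_to_user failure, returning -EFAULT only if nothing was copied yet.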

diff -uprN prev/drivers/char/ppdev.c new/drivers/char/ppdev.c
--- prev/drivers/char/ppdev.c	2012-10-14 23:41:04.000000000 +0200
+++ new/drivers/char/ppdev.c	2012-10-16 16:03:18.000000000 +0200
@@ -104,6 +104,300 @@ static inline void pp_enable_irq (struct
 	port->ops->enable_irq (port);
 }

+struct pp_ki
+{
+	int len_real;
+	void * buffer;
+	struct kiocb *cb;
+	const struct iovec *iov;
+	struct parport *pport;
+	unsigned long count;
+};
+
+
+/**
+ * pp_aio_cancel - cancel an async parport_read/write
+ * Context: !in_interrupt()
+ *
+ * May fail with -EAGAIN if transfer has already started
+ */
+static int pp_aio_cancel(struct kiocb *iocb, struct io_event *e)
+{
+	struct pp_ki * u=iocb->private;
+	e->res=0;
+	e->res2=-EAGAIN;
+	if(!u->pport->ops->cancel_transaction(u->pport, u->buffer))
+		return -EAGAIN;
+	e->res=0;
+	e->res2=-ECANCELED;
+	kfree(u->buffer);
+	kfree(u);
+	return 0;
+}
+
+/**
+ * pp_aio_read_retry - entry point for copying and returning read data
+ *
+ * The aio core sets mm context up so that copy_to_user works as expected here.
+ */
+static ssize_t pp_aio_read_retry(struct kiocb *iocb)
+{
+	struct pp_ki * u=iocb->private;
+	ssize_t			len, total;
+	void			*to_copy;
+	int			i;
+
+	total = u->len_real;
+	len = 0;
+	to_copy = u->buffer;
+	for (i=0; i < u->count; i++) {
+		ssize_t this = min((ssize_t)(u->iov[i].iov_len), total);
+
+		if (copy_to_user(u->iov[i].iov_base, to_copy, this)) {
+			if (len == 0)
+				len = -EFAULT;
+			break;
+		}
+
+		total -= this;
+		len += this;
+		to_copy += this;
+		if (total == 0)
+			break;
+	}
+	kfree(u->buffer);
+	kfree(u);
+	return len;
+}
+
+/**
+ * pp_aio_read_cb - gets called by the parport driver
+ *
+ */
+static void pp_aio_read_cb(int *data,void * buffer,int len_real)
+{
+	struct pp_ki * u=(struct pp_ki *)data;
+	struct kiocb * cb=u->cb;
+	u->len_real=len_real;	
+	cb->private=u;
+	kick_iocb(cb);
+}
+
+/**
+ * pp_aio_read - submits an async read request
+ * @iocb: the aio control block to work on
+ * @iov: io vector to fill
+ * @count: size of the io vector
+ * @o: file offset (ignored)
+ * Context: !in_interrupt()
+ *
+ * Uses the async parport interface. Requires the peripheral to use ECP handshakes.
+ * Submits the request to the lowlevel parport driver,
+ * sets the retry function and returns -EIOCBRETRY.
+ * The notification callback pp_aio_read_cb() will later kick_iocb() to
+ * retry and return the read data.
+ * May return -ENOMEM if allocation of dma buffer or user data fails.
+ * May read less data than asked to if DMA memory is low or the request is too big.
+ * May return other errors if parport_submit_transaction() fails for any reason.
+ * Page Migration would be nice.
+ */
+static ssize_t pp_aio_read (struct kiocb * iocb, const struct iovec * iov, 
+			unsigned long count, loff_t o)
+{
+	struct file* file;
+	struct pp_struct * pp;
+	struct parport *pport;
+	struct pp_ki * u;
+	char * bufptr;
+	int len,rv;
+	file=iocb->ki_filp;
+	if(!file)
+		return -EBADF;
+	pp=file->private_data;
+	if(!pp)
+		return -EBADF;
+	pport=pp->pdev->port;
+	if(!pport)
+		return -EBADF;
+	
+	if (!(pp->flags & PP_CLAIMED)) {
+		/* Don't have the port claimed */
+		printk (KERN_DEBUG CHRDEV " claim the port first\n");
+		return -EINVAL;
+	}
+	
+	if(!pport->ops->submit_transaction)
+	{
+		return -ENOSYS;
+	}
+	
+	len=iov_length(iov,count);
+	bufptr=kmalloc(len,GFP_DMA);
+	//degrade gracefully
+	while((len>PAGE_SIZE)&&(!bufptr))
+	{
+		len/=2;
+		if(len<PAGE_SIZE)
+			len=PAGE_SIZE;
+		bufptr=kmalloc(len,GFP_DMA);
+	}
+	if(!bufptr)//not even a single page
+		return -ENOMEM;
+	
+	u=kmalloc(sizeof(struct pp_ki),GFP_KERNEL);
+	if(!u)
+	{
+		kfree(bufptr);
+		return -ENOMEM;
+	}
+		
+	u->buffer=bufptr;
+	u->cb=iocb;
+	u->iov=iov;
+	u->count=count;
+	u->pport=pport;
+	iocb->private=u;
+	iocb->ki_cancel = pp_aio_cancel;
+	iocb->ki_retry=pp_aio_read_retry;
+
+	rv=pport->ops->submit_transaction(pport, bufptr, len, 0, PARPORT_AIO_ACT_ECP_R_D, (int *)u, pp_aio_read_cb);
+	if(rv)
+	{
+		kfree(bufptr);
+		kfree(u);
+		iocb->private=NULL;
+		return rv;
+	}
+	return -EIOCBRETRY;
+}
+
+/**
+ * pp_aio_write_cb - gets called by the parport driver
+ *
+ */
+static void pp_aio_write_cb(int *data,void * buf,int len_real)
+{
+	struct pp_ki * u=(struct pp_ki *)data;
+	struct kiocb * cb=u->cb;
+	u->len_real=len_real;
+	cb->private = NULL;
+	aio_complete(cb, len_real, 0);
+	kfree(u->buffer);
+	kfree(u);
+}
+
+/**
+ * pp_aio_write - submits an async write request
+ * @iocb: the aio control block to work on
+ * @iov: io vector to fill
+ * @count: size of the io vector
+ * @o: file offset (ignored)
+ * Context: !in_interrupt()
+ *
+ * very similar to pp_aio_read().
+ * Does not use or need a retry.
+ */
+static ssize_t pp_aio_write (struct kiocb * iocb, const struct iovec * iov, 
+			unsigned long count, loff_t o)
+{
+	struct file* file;
+	struct pp_struct * pp;
+	struct parport *pport;
+	struct pp_ki * u;
+	ssize_t			len, total;
+	void			*to_copy;
+	int			i;
+	char * bufptr;
+	int rv;
+	file=iocb->ki_filp;
+	if(!file)
+		return -EBADF;
+	pp=file->private_data;
+	if(!pp)
+		return -EBADF;
+	pport=pp->pdev->port;
+	if(!pport)
+		return -EBADF;
+	
+	if (!(pp->flags & PP_CLAIMED)) {
+		/* Don't have the port claimed */
+		printk (KERN_DEBUG CHRDEV " claim the port first\n");
+		return -EINVAL;
+	}
+	
+	if(!pport->ops->submit_transaction)
+	{
+		return -ENOSYS;
+	}
+	
+	len=iov_length(iov,count);
+	bufptr=kmalloc(len,GFP_DMA);
+	//degrade gracefully
+	while((len>PAGE_SIZE)&&(!bufptr))
+	{
+		len/=2;
+		if(len<PAGE_SIZE)
+			len=PAGE_SIZE;
+		bufptr=kmalloc(len,GFP_DMA);
+	}
+	if(!bufptr)//not even a single page
+		return -ENOMEM;
+	
+	u=kmalloc(sizeof(struct pp_ki),GFP_KERNEL);
+	if(!u)
+	{
+		kfree(bufptr);
+		return -ENOMEM;
+	}
+		
+	u->buffer=bufptr;
+	u->cb=iocb;
+	u->iov=iov;
+	u->count=count;
+	u->pport=pport;
+	iocb->private=u;
+	iocb->ki_cancel = pp_aio_cancel;
+	/* no ki_retry: writes complete via aio_complete() in pp_aio_write_cb */
+
+	total = len;
+	len = 0;
+	to_copy = u->buffer;
+	for (i=0; i < u->count; i++) {
+		ssize_t this = min((ssize_t)(u->iov[i].iov_len), total);
+
+		if (copy_from_user(to_copy, u->iov[i].iov_base, this)) {
+			if (len == 0)
+				len = -EFAULT;
+			break;
+		}
+
+		total -= this;
+		len += this;
+		to_copy += this;
+		if (total == 0)
+			break;
+	}
+	
+	if(len<0)
+	{
+		kfree(bufptr);
+		kfree(u);
+		iocb->private=NULL;
+		return len;
+	}
+	
+	rv=pport->ops->submit_transaction(pport, bufptr, len, 0, PARPORT_AIO_ACT_ECP_W_D, (int *)u, pp_aio_write_cb);
+	if(rv)
+	{
+		kfree(bufptr);
+		kfree(u);
+		iocb->private=NULL;
+		return rv;
+	}
+	
+	return -EIOCBQUEUED;
+}
+
 static ssize_t pp_read (struct file * file, char __user * buf, size_t count,
 			loff_t * ppos)
 {
@@ -750,6 +1044,8 @@ static const struct file_operations pp_f
 	.unlocked_ioctl	= pp_ioctl,
 	.open		= pp_open,
 	.release	= pp_release,
+	.aio_read = 	pp_aio_read,
+	.aio_write = 	pp_aio_write,
 };

 static void pp_attach(struct parport *port)
diff -uprN prev/drivers/parport/parport_pc.c new/drivers/parport/parport_pc.c
--- prev/drivers/parport/parport_pc.c	2012-10-14 23:41:04.000000000 +0200
+++ new/drivers/parport/parport_pc.c	2012-10-18 07:16:58.000000000 +0200
@@ -892,6 +892,777 @@ static size_t parport_pc_ecp_write_block

 	return written;
 }
+
+static int parport_end_dma(struct parport *port,struct parport_aio_te *te);
+static int parport_finish_dma(struct parport *port,struct parport_aio_te *te);
+static int parport_init_dma(struct parport *port,struct parport_aio_te *te);
+static int parport_start_dma(struct parport *port,struct parport_aio_te *te);
+
+/* ecp_forward_to_reverse - reverses link in ECP mode. AUTOFD, STROBE and SELECTIN DCR bits should be zero before doing this
+
+Extended Capabilities Port: Specifications
+Revision 1.06
+July 14, 1993
+Microsoft Corporation
+
+page 34:
+
+1. Complete the current forward transfer.
+2. Place the ECP port into PS2 mode (001).
+3. Set the direction bit to 1 (reverse), causing the ECP port data drivers to tri-state.
+4. Set the ECP port into ECP mode (011), enabling the hardware assist.
+5. Write to the DCR, causing nInit to go low. This requests a reverse transfer from the
+   peripheral.
+6. The peripheral will drive pe low when it has started the reverse transfer. Hardware will
+   automatically move data into the ECP FIFO from the ECP data lines.
+7. Set up a ReadString or execute a ReadByte operation.
+
+number 1 and 2 should be done before calling this function
+*/
+static
+int ecp_forward_to_reverse (struct parport *port)
+{
+	int retval;
+	
+/*3. Set the direction bit to 1 (reverse), causing the ECP port data drivers to tri-state.*/
+	parport_data_reverse (port);
+
+/*4. Set the ECP port into ECP mode (011), enabling the hardware assist.*/
+	frob_econtrol (port, (7<<5), (ECR_ECP<<5));
+	
+/*5. Write to the DCR, causing nInit to go low. This requests a reverse transfer from the
+   peripheral.*/
+	parport_frob_control (port,
+			      PARPORT_CONTROL_INIT,
+			      0);
+
+/*6. The peripheral will drive pe low when it has started the reverse transfer. Hardware will
+   automatically move data into the ECP FIFO from the ECP data lines.*/
+	retval = parport_wait_peripheral (port,
+					  PARPORT_STATUS_PAPEROUT, 0);
+	
+	if (!retval) {
+		DPRINTK (KERN_DEBUG "%s: ECP direction: reverse\n",
+			 port->name);
+		port->ieee1284.phase = IEEE1284_PH_REV_IDLE;
+	} else {
+		parport_data_forward (port);
+		DPRINTK (KERN_DEBUG "%s: ECP direction: failed to reverse\n",
+			 port->name);
+		port->ieee1284.phase = IEEE1284_PH_ECP_DIR_UNKNOWN;
+	}
+
+	return retval;
+}
+
+/* ecp_reverse_to_forward - switches link back to forward in ECP mode. AUTOFD, STROBE and SELECTIN DCR bits should be zero before doing this
+
+Extended Capabilities Port: Specifications
+Revision 1.06
+July 14, 1993
+Microsoft Corporation
+
+page 36:
+
+Reverse to Forward Negotiation
+After the ECP port has moved data in ECP mode (011) in the reverse direction and a change of
+direction is required, the following steps must be taken:
+1. First, negotiate the state of the ECP port (the peripheral) back into forward mode. This is
+    done by setting nInit high and waiting for the state of pe go high. This causes the peripheral
+    to terminate any ongoing reverse transfer.
+2. The mode of the ECP port is changed to PS2 mode 001.
+3. The direction bit is changed to 0. At this point, the bus and the ECP port are in the
+    forward-idle state.
+*/
+
+static
+int ecp_reverse_to_forward (struct parport *port)
+{
+	int retval;
+
+	/* 1. First, negotiate the state of the ECP port (the peripheral) back into forward mode. This is
+    done by setting nInit high and waiting for the state of pe go high. This causes the peripheral
+    to terminate any ongoing reverse transfer. */
+	parport_frob_control (port,
+			      PARPORT_CONTROL_INIT,
+			      PARPORT_CONTROL_INIT);
+
+	retval = parport_wait_peripheral (port,
+					  PARPORT_STATUS_PAPEROUT,
+					  PARPORT_STATUS_PAPEROUT);
+
+	if (!retval) {
+		/* 2. The mode of the ECP port is changed to PS2 mode 001. */
+		frob_econtrol (port, (7<<5), (ECR_PS2<<5));
+
+		/* 3. The direction bit is changed to 0. At this point, the bus and the ECP port are in the
+	    forward-idle state. */
+		parport_data_forward (port);
+		DPRINTK (KERN_DEBUG "%s: ECP direction: forward\n",
+			 port->name);
+		port->ieee1284.phase = IEEE1284_PH_FWD_IDLE;
+	} else {
+		DPRINTK (KERN_DEBUG
+			 "%s: ECP direction: failed to switch forward\n",
+			 port->name);
+		port->ieee1284.phase = IEEE1284_PH_ECP_DIR_UNKNOWN;
+	}
+
+
+	return retval;
+}
+
+/**
+ * wait_ecr - wait for flag(s) to change
+ * @port: the port to use
+ * @which: which bits to check
+ * @state: state to wait for
+ * @timeout: how long to wait, in jiffies
+ *
+ * Busy-waits for at least 1000 microseconds before falling back to sleeping.
+ */
+static int wait_ecr(struct parport *port, unsigned char which, unsigned char state, unsigned long timeout)
+{
+	unsigned char st;
+	unsigned long deadline;
+	unsigned long ctr;//timeout for busy waiting
+	deadline=jiffies+timeout;
+	st=inb (ECONTROL (port));
+	st&=which;
+	
+	ctr=0;
+	while(ctr<1000)
+	{
+		if(st==state)
+			return 0;
+		udelay(1);
+		ctr++;
+		st=inb (ECONTROL (port));
+		st&=which;
+	}
+	
+	while(st!=state)
+	{
+		DPRINTK( KERN_DEBUG "waiting\n");
+		if(time_after (jiffies, deadline))
+			return 1;
+		schedule_timeout_uninterruptible(1);
+		st=inb (ECONTROL (port));
+		st&=which;
+	}
+	return 0;
+}
+
+
+/**
+ * set_dir - changes direction appropriately
+ * @port: the port to work on
+ * @te: the transaction to set up direction for
+ * Context: !in_interrupt()
+ *
+ * Reverses the link direction according to the action code of the te.
+ * May sleep.
+ * Returns -EIO on error (peripheral timeout), 0 on success.
+ */
+static int set_dir(struct parport *port,struct parport_aio_te *te)
+{
+//	struct parport_pc_private *priv = port->physport->private_data;
+	if(te->flags&PARPORT_AIO_FLG_NOSETUP)
+		return 0;
+		
+	
+	if((inb (ECONTROL (port))&(7<<5))!=(ECR_ECP<<5))//to be certain state machine starts
+	{
+		parport_write_control(port,parport_read_control(port)&~(0x0B));
+	}
+	
+	if(te->action==PARPORT_AIO_ACT_ECP_W_D)
+	{
+		DPRINTK (KERN_DEBUG " te->action==PARPORT_AIO_ACT_ECP_W_D\n");
+		if (port->ieee1284.phase != IEEE1284_PH_FWD_IDLE)
+		{//we don't try to drain fifo here.
+			DPRINTK (KERN_DEBUG " ecp_reverse_to_forward (port)\n");
+			if (ecp_reverse_to_forward (port))
+			{
+				DPRINTK (KERN_DEBUG " failed\n");
+				return -EIO;
+			}
+		}
+		port->ieee1284.phase = IEEE1284_PH_FWD_IDLE;
+	}
+	else
+	{
+		DPRINTK (KERN_DEBUG " te->action!=PARPORT_AIO_ACT_ECP_W_D\n");
+		if (port->ieee1284.phase != IEEE1284_PH_REV_IDLE)
+		{
+			unsigned char ectr;
+			DPRINTK (KERN_DEBUG " ecp_forward_to_reverse (port)\n");
+			ectr = inb (ECONTROL (port));
+			if((port->ieee1284.phase == IEEE1284_PH_FWD_IDLE)&&((ectr&(7<<5))==(ECR_ECP<<5)))
+			{//we just wrote in ECP mode. wait some time for fifo to go empty before reversing direction. If it does not work, give warning but continue.
+				const unsigned long FLUSH_DELAY=4*HZ/100;
+				const unsigned char FIFO_EMPTY=0x01;
+				if(wait_ecr(port,FIFO_EMPTY,FIFO_EMPTY,FLUSH_DELAY))
+					printk(KERN_WARNING "Couldn't flush fifo post-write in %lu jiffies. Continuing anyway.\n", (unsigned long)FLUSH_DELAY);
+			}
+			parport_wait_peripheral (port,
+							  PARPORT_STATUS_BUSY,
+							  PARPORT_STATUS_BUSY);
+
+			frob_econtrol (port, (7<<5), (ECR_PS2<<5));
+			if (ecp_forward_to_reverse (port))
+			{
+				DPRINTK (KERN_DEBUG " failed\n");
+				return -EIO;
+			}
+		}
+		port->ieee1284.phase = IEEE1284_PH_REV_IDLE;
+	}
+	return 0;
+}
+
+/**
+ * parport_aio_done - called from interrupt context on dma block completion.
+ * @port: the port to work on
+ *
+ */
+void parport_aio_done(struct parport *port)
+{
+	unsigned long flags;
+	unsigned long dmaflag,count,residue;
+	struct parport_aio_te *te;
+	struct list_head * l;
+	struct parport_pc_private *priv = port->physport->private_data;
+	if(!(inb (ECONTROL (port)) & (1<<2)))
+	{
+		return;
+	}
+	
+	spin_lock_irqsave(&(priv->aio_lock),flags);
+	if(!priv->dma_active)
+	{
+		DPRINTK (KERN_DEBUG " dma_not_active\n");
+		spin_unlock_irqrestore(&(priv->aio_lock),flags);
+		return;
+	}
+
+	outb (0x74, ECONTROL (port));
+
+	if(!del_timer(&priv->aio_action_timeout))//if we are too late
+	{
+		spin_unlock_irqrestore(&(priv->aio_lock),flags);
+		return;
+	}
+	DPRINTK (KERN_DEBUG " parport_aio_done Ok\n");
+	l=(struct list_head *)&(priv->aio_action_list);
+	if(list_empty(l))
+	{
+		DPRINTK (KERN_DEBUG " list empty\n");
+		spin_unlock_irqrestore(&(priv->aio_lock),flags);
+		return;
+	}
+
+	DPRINTK (KERN_DEBUG " first entry\n");
+	te=list_first_entry(l,struct parport_aio_te,aio_action_list);
+	
+	dmaflag = claim_dma_lock();
+
+	DPRINTK (KERN_DEBUG " disable\n");
+	disable_dma(port->dma);
+	residue = get_dma_residue(port->dma);
+	DPRINTK (KERN_DEBUG " residue: %lu\n",residue);
+
+	release_dma_lock(dmaflag);
+	count=priv->dma_blocksize-residue;
+
+	if((priv->size_done+count)<te->size_to_transfer)//another one
+	{
+		DPRINTK (KERN_DEBUG " end dma\n");
+		parport_end_dma(port,te);
+		DPRINTK (KERN_DEBUG " another one\n");
+		parport_start_dma(port,te);
+	}
+	else//next te
+	{
+		DPRINTK (KERN_DEBUG " schedule_work\n");
+		schedule_work(&priv->aio_softirq);
+	}
+	spin_unlock_irqrestore(&(priv->aio_lock),flags);
+}
+
+EXPORT_SYMBOL (parport_aio_done);
+
+/**
+ * parport_aio_soft - task for handling notifications and dma init
+ * @work: work item
+ */
+void parport_aio_soft(struct work_struct * work)
+{
+	struct parport_pc_private *priv = container_of(work,struct parport_pc_private,aio_softirq);
+	struct parport *port=priv->port;
+//	unsigned long flags;
+	unsigned long ret;
+	unsigned char ectr;
+	struct parport_aio_te *te;
+	struct list_head * l;
+	mutex_lock(&(priv->aio_mutex));
+	DPRINTK (KERN_DEBUG " aio_soft\n");
+	l=&(priv->aio_action_list);
+	DPRINTK (KERN_DEBUG " next_te\n");
+	te=list_first_entry(l,struct parport_aio_te,aio_action_list);
+	DPRINTK (KERN_DEBUG " list_del\n");
+	list_del(l->next);
+
+	DPRINTK (KERN_DEBUG " end dma\n");
+	parport_end_dma(port,te);
+	DPRINTK (KERN_DEBUG " finish\n");
+	parport_finish_dma(port,te);
+	priv->dma_active=0;
+	
+	if(te->notify)
+	{
+		DPRINTK (KERN_DEBUG " notify\n");
+		mutex_unlock(&(priv->aio_mutex));
+		te->notify(te->res_ptr,te->buf,te->size_done);
+		mutex_lock(&(priv->aio_mutex));
+	}
+	else if(te->res_ptr)
+	{
+		DPRINTK (KERN_DEBUG " te->res_ptr\n");
+		*(te->res_ptr)=te->size_done;
+	}
+	DPRINTK (KERN_DEBUG " kfree\n");
+	kfree(te);
+
+
+	l=&(priv->aio_action_list);
+	if(list_empty(l))
+	{
+		DPRINTK (KERN_DEBUG " list_empty, active=0\n");
+		frob_econtrol (port, (7<<5), (ECR_PS2<<5));
+		priv->dma_active=0;
+	}
+	else if(priv->dma_active==0)//if notify called submit_...
+	{
+		DPRINTK (KERN_DEBUG " next entry\n");
+		//start next
+		do{
+			te=list_first_entry(l,struct parport_aio_te,aio_action_list);
+			DPRINTK (KERN_DEBUG " init_dma\n");
+			if((ret=(set_dir(port,te)||parport_init_dma(port,te))))//assign!
+			{
+				DPRINTK (KERN_DEBUG " error, remove\n");
+				list_del(l->next);
+				if(te->notify)
+				{
+					DPRINTK (KERN_DEBUG " notify\n");
+					mutex_unlock(&(priv->aio_mutex));
+					te->notify(te->res_ptr,te->buf,0);
+					mutex_lock(&(priv->aio_mutex));
+				}
+				else if(te->res_ptr)
+				{
+					DPRINTK (KERN_DEBUG " te->res_ptr\n");
+					*(te->res_ptr)=te->size_done;
+				}
+				DPRINTK (KERN_DEBUG " kfree\n");
+				kfree(te);
+			}
+		}while((priv->dma_active==0)&&(ret!=0)&&(!list_empty(l)));
+
+		if((ret==0)&&(priv->dma_active==0))
+		{
+			DPRINTK (KERN_DEBUG " start_dma\n");
+			priv->dma_active=1;
+			parport_start_dma(port,te);
+		}
+		
+		if(ret)
+		{
+			ectr = inb (ECONTROL (port));
+			if((port->ieee1284.phase == IEEE1284_PH_FWD_IDLE)&&((ectr&(7<<5))==(ECR_ECP<<5)))
+			{//we just wrote in ECP mode. wait some time for fifo to go empty before reversing direction. If it does not work, give warning but continue.
+				const unsigned long FLUSH_DELAY=4*HZ/100;
+				const unsigned char FIFO_EMPTY=0x01;
+				if(wait_ecr(port,FIFO_EMPTY,FIFO_EMPTY,FLUSH_DELAY))
+					printk(KERN_WARNING "Couldn't flush FIFO in %lu jiffies. Continuing anyway.\n", (unsigned long)FLUSH_DELAY);
+			}
+			frob_econtrol (port, (7<<5), (ECR_PS2<<5));
+		}
+	}
+	mutex_unlock(&(priv->aio_mutex));
+}
+
+/*disable dma
+*returns 1 if not all has been transmitted
+*/
+static int parport_end_dma(struct parport *port,struct parport_aio_te *te)
+{
+	unsigned long dmaflag;
+	unsigned long count,residue;
+	struct parport_pc_private *priv = port->physport->private_data;
+	DPRINTK (KERN_DEBUG " parport_end_dma Ok\n");
+	dmaflag = claim_dma_lock();
+
+	DPRINTK (KERN_DEBUG " disable\n");
+	disable_dma(port->dma);
+	residue = get_dma_residue(port->dma);
+	DPRINTK (KERN_DEBUG " residue: %lu\n",residue);
+
+	release_dma_lock(dmaflag);
+	count=priv->dma_blocksize-residue;
+	DPRINTK (KERN_DEBUG " count: %lu\n",count);
+	priv->dma_aio_addr+=count;//increment address
+	priv->size_done+=count;
+	DPRINTK (KERN_DEBUG " done: %i\n",priv->size_done);
+	if(residue)
+		return 1;
+	return 0;
+}
+
+/*
+dma transfer has timed out.
+*/
+void parport_aio_timeout(unsigned long p)
+{
+	struct list_head * l;
+	unsigned long flags;
+	struct parport *port=(struct parport *)p;
+	struct parport_pc_private *priv = port->physport->private_data;
+
+	outb (0x74, ECONTROL (port));
+
+	DPRINTK (KERN_DEBUG " aio_timeout\n");
+	spin_lock_irqsave(&(priv->aio_lock),flags);
+	if(!priv->dma_active)
+	{
+		DPRINTK (KERN_DEBUG " dma_not_active\n");
+		spin_unlock_irqrestore(&(priv->aio_lock),flags);
+		return;
+	}
+	l=(struct list_head *)&(priv->aio_action_list);
+	if(list_empty(l))
+	{
+		DPRINTK (KERN_DEBUG " list empty\n");
+		spin_unlock_irqrestore(&(priv->aio_lock),flags);
+		return;
+	}
+
+	DPRINTK (KERN_DEBUG " schedule_work\n");
+	schedule_work(&priv->aio_softirq);
+	spin_unlock_irqrestore(&(priv->aio_lock),flags);
+
+}
+
+
+/*finish dma
+*/
+static int parport_finish_dma(struct parport *port,struct parport_aio_te *te)
+{
+	struct device *dev = port->physport->dev;
+	struct parport_pc_private *priv = port->physport->private_data;
+	DPRINTK (KERN_DEBUG " parport_finish_dma Ok\n");
+	if (priv->dma_aio_handle) {
+		if(te->action==PARPORT_AIO_ACT_ECP_W_D)
+		{
+			DPRINTK (KERN_DEBUG " unmap, DMA_TO_DEVICE\n");
+			dma_unmap_single(dev, priv->dma_aio_handle, te->size_to_transfer, DMA_TO_DEVICE);
+		}
+		else
+		{
+			DPRINTK (KERN_DEBUG " unmap, DMA_FROM_DEVICE\n");
+			dma_unmap_single(dev, priv->dma_aio_handle, te->size_to_transfer, DMA_FROM_DEVICE);
+		}
+  }
+	DPRINTK (KERN_DEBUG " add completed request\n");
+	te->size_done=priv->size_done;
+	return 0;
+}
+
+
+/* I/O to memory, no autoinit, increment, demand mode */
+#define DMA_MODE_READ_DM		0x04
+/* memory to I/O, no autoinit, increment, demand mode */
+#define DMA_MODE_WRITE_DM		0x08
+
+/*start a dma transfer
+*/
+static int parport_start_dma(struct parport *port,struct parport_aio_te *te)
+{
+	unsigned long dmaflag;
+	size_t count;
+	struct parport_pc_private *priv = port->physport->private_data;
+	size_t left = te->size_to_transfer - priv->size_done;
+	size_t maxlen = 0x10000; /* max 64k per DMA transfer */
+	unsigned long start = (unsigned long) te->buf + priv->size_done;
+	unsigned long end = (unsigned long) te->buf + te->size_to_transfer - 1;
+
+	DPRINTK (KERN_DEBUG " parport_start_dma Ok\n");
+
+	count = left;
+	if ((start ^ end) & ~0xffffUL)
+	{
+		maxlen = 0x10000 - (start & 0xffff);
+		DPRINTK (KERN_DEBUG " buffer crosses 64k boundary, maxlen:%zu\n",maxlen);
+	}
+	if (count > maxlen)
+	{
+		DPRINTK (KERN_DEBUG " count > maxlen ... count=maxlen\n");
+		count = maxlen;
+	}
+	priv->dma_blocksize = count;
+
+	dump_parport_state ("Before",port);
+
+	dmaflag = claim_dma_lock();
+	DPRINTK (KERN_DEBUG " disable_dma\n");
+	disable_dma(port->dma);
+	DPRINTK (KERN_DEBUG " clear_dma\n");
+	clear_dma_ff(port->dma);
+	if(te->action==PARPORT_AIO_ACT_ECP_W_D)
+	{
+		DPRINTK (KERN_DEBUG " set_dma_mode (write)\n");
+		set_dma_mode(port->dma, DMA_MODE_WRITE);
+	}
+	else
+	{
+		DPRINTK (KERN_DEBUG " set_dma_mode (read)\n");
+		set_dma_mode(port->dma, DMA_MODE_READ);
+	}
+	DPRINTK (KERN_DEBUG " set_dma_addr: %i, %x\n",port->dma, priv->dma_aio_addr);
+	set_dma_addr(port->dma, priv->dma_aio_addr);
+	DPRINTK (KERN_DEBUG " set_dma_count: %zu\n", count);
+	set_dma_count(port->dma, count);
+
+	/* set ECP mode, disable(set) serviceIntr, disable dma, disable(set) err intr*/
+	DPRINTK (KERN_DEBUG " frob_econtrol....\n");
+	frob_econtrol (port, (7<<5)|(1<<3)|(1<<2)|(1<<4),(ECR_ECP<<5)|(1<<2)|(1<<4));
+
+	DPRINTK (KERN_DEBUG " frob_econtrol\n");
+	/* Set DMA mode */
+	frob_econtrol (port, 1<<3, 1<<3);
+
+	/* Clear serviceIntr */
+	frob_econtrol (port, 1<<2, 0);
+
+
+	DPRINTK (KERN_DEBUG " enable\n");
+	enable_dma(port->dma);
+	release_dma_lock(dmaflag);
+
+/*	DPRINTK (KERN_DEBUG " autofd dwn\n");
+	parport_frob_control (port,
+			       PARPORT_CONTROL_AUTOFD,
+			       0);*/
+	
+	dump_parport_state ("After",port);
+	
+	DPRINTK (KERN_DEBUG " init_timer\n");
+	init_timer(&priv->aio_action_timeout);
+	priv->aio_action_timeout.expires=jiffies+(PARPORT_INACTIVITY_O_NONBLOCK*count/10)+10*HZ/100;
+	DPRINTK (KERN_DEBUG " expires:%lu\n",priv->aio_action_timeout.expires);
+	priv->aio_action_timeout.data=(unsigned long)port;
+	priv->aio_action_timeout.function=parport_aio_timeout;
+	add_timer(&priv->aio_action_timeout);
+	return 0;
+}
+
+/* one-time (per-te) initialisation of a DMA transfer */
+static int parport_init_dma(struct parport *port,struct parport_aio_te *te)
+{
+	struct parport_pc_private *priv = port->physport->private_data;
+	struct device *dev = port->physport->dev;
+	size_t count;
+	size_t left = te->size_to_transfer;
+	unsigned long end = (unsigned long) te->buf + te->size_to_transfer - 1;
+
+	DPRINTK (KERN_DEBUG " parport_init_dma Ok\n");
+	priv->size_done=0;
+	dump_parport_state (" Before Init",port);
+
+	/* We don't want to be interrupted every ack. */
+	DPRINTK (KERN_DEBUG " parport_pc_disable_irq (port)\n");
+	parport_pc_disable_irq (port);
+	/* set ECP mode, disable(set) serviceIntr, disable dma, disable(set) err intr*/
+	DPRINTK (KERN_DEBUG " frob_econtrol....\n");
+//	frob_econtrol (port, (7<<5)|(1<<3)|(1<<2)|(1<<4),(ECR_PS2<<5)|(1<<2)|(1<<4));
+//	frob_econtrol (port, (7<<5)|(1<<3)|(1<<2)|(1<<4),(ECR_ECP<<5)|(1<<2)|(1<<4));
+	
+	//set 1284 active
+	//do this before
+//	parport_frob_control (port,
+ 	//					PARPORT_CONTROL_SELECT,
+		//	       0);
+
+	count=left;
+	if (end < MAX_DMA_ADDRESS) {
+		if(te->action==PARPORT_AIO_ACT_ECP_W_D)
+		{
+			DPRINTK (KERN_DEBUG " dma_map... DMA_TO_DEVICE\n");
+			priv->dma_aio_addr = priv->dma_aio_handle = dma_map_single(dev,
+					(void *)te->buf, te->size_to_transfer, DMA_TO_DEVICE);
+		}
+		else
+		{
+			DPRINTK (KERN_DEBUG " dma_map... DMA_FROM_DEVICE\n");
+			priv->dma_aio_addr = priv->dma_aio_handle = dma_map_single(dev,
+					(void *)te->buf, te->size_to_transfer, DMA_FROM_DEVICE);
+		}
+		if(dma_mapping_error(dev,priv->dma_aio_handle))
+		{
+			DPRINTK (KERN_DEBUG " failed\n");
+			return -EFAULT;
+		}
+	} else {
+		DPRINTK (KERN_DEBUG " error: DMA Buffer not valid\n");
+		return -EFAULT;
+	}
+	DPRINTK (KERN_DEBUG " first_byte:0x%02x\n",te->buf[0]);
+//	schedule_delayed_work(&priv->aio_action_timeout,(PARPORT_INACTIVITY_O_NONBLOCK*te->size_to_transfer/10)+1);
+	return 0;
+}
+
+/**
+ * parport_submit_transaction - submit a buffer for asynchronous read or write
+ * @port: the port to write to or read from
+ * @buf: pointer to a buffer suitable for DMA transfer
+ * @len: length of the buffer
+ * @flags: PARPORT_AIO_FLG_NOSETUP skips setting up direction and signals
+ * before starting the transfer; this then has to be done manually.  May be
+ * necessary for unusual setups (host-to-host).  Use with caution: the lowest
+ * four bits of the DCR must be 0000 or 0100 for the ECP state machine to
+ * start, and with PARPORT_AIO_FLG_NOSETUP this will _not_ be set up on the
+ * first transfer (PS2 -> ECP).
+ * @action_code: PARPORT_AIO_ACT_ECP_W_D or PARPORT_AIO_ACT_ECP_R_D
+ * @complete: if not NULL and @notify is NULL, this must point to an integer.
+ * It is set to -1 by this function; once the transfer completes, it holds
+ * the number of bytes successfully transferred, so it can be used for
+ * polling: while (complete != -1) ...
+ * If @notify is not NULL, @complete is _not_ written to but is passed as
+ * user data to the notify function.
+ * @notify: notification callback, called from a workqueue (process context).
+ * A driver may use it to trigger actions on transfer completion.  The
+ * arguments to notify() are the @complete pointer, the buffer address and
+ * the length of the data successfully transferred.
+ *
+ * Multiple transfers may be submitted without waiting for completion of
+ * previous ones; they are started (and finished) in the order they were
+ * submitted.
+ *
+ * Returns -EINVAL if the action code is invalid, -ENOMEM if the transaction
+ * could not be allocated, -EIO if there was a problem setting up the initial
+ * link direction, and 0 on successful queueing of the request.  Notification
+ * occurs in the last case _only_.
+ */
+int parport_submit_transaction (struct parport *port, const void *buf,
+				  size_t len, int flags, int action_code, int * complete,void (*notify)(int *data,void * buf,int len_real))
+{
+	unsigned long iflags;
+	int ret;
+	struct parport_pc_private *priv = port->physport->private_data;
+	struct parport_aio_te * te=NULL;
+	struct list_head * l=NULL;
+	if((action_code<0)||(action_code>=PARPORT_AIO_ACT_BAD))
+		return -EINVAL;
+	//ready
+	te = kmalloc(sizeof(struct parport_aio_te), GFP_KERNEL);
+	if(!te)
+		return -ENOMEM;
+	DPRINTK (KERN_DEBUG " parport_submit_transaction Ok\n");
+	te->buf=(char *)buf;
+	te->size_to_transfer=len;
+	te->size_done=0;
+	te->action=action_code;
+	te->flags=flags;
+	te->res_ptr=complete;
+	te->notify=notify;
+	if((!te->notify)&&(te->res_ptr))
+		*(te->res_ptr)=-1;
+
+	mutex_lock(&(priv->aio_mutex));
+	spin_lock_irqsave(&(priv->aio_lock),iflags);
+//	act=priv->dma_active;
+//	spin_unlock_irqrestore(&(priv->aio_lock),iflags);
+
+	//spin_lock_irqsave(&(priv->aio_lock),iflags);
+	//set
+	DPRINTK (KERN_DEBUG " list_add_tail\n");
+	list_add_tail(&te->aio_action_list, &priv->aio_action_list);
+	//go
+	if(priv->dma_active) {
+		DPRINTK (KERN_DEBUG " dma already active\n");
+		spin_unlock_irqrestore(&(priv->aio_lock),iflags);
+		mutex_unlock(&(priv->aio_mutex));
+		return 0;
+	}
+	else 
+	{
+		//ok there is no dma going on so we are the only ones accessing this data... or not?
+		spin_unlock_irqrestore(&(priv->aio_lock),iflags);
+		l=&(priv->aio_action_list);
+		if(list_empty(l))	{
+			DPRINTK (KERN_DEBUG " list empty\n");
+			mutex_unlock(&(priv->aio_mutex));
+			return 0;
+		}
+		DPRINTK (KERN_DEBUG " get first entry\n");
+		te=list_first_entry(l,struct parport_aio_te,aio_action_list);
+		
+		//we need to be able to sleep here (for set_dir())
+		ret = set_dir(port, te);
+		if (!ret)
+			ret = parport_init_dma(port, te);
+		if (ret)
+		{
+			//remove
+			list_del(l->next);
+			frob_econtrol (port, (7<<5), (ECR_PS2<<5));
+			DPRINTK (KERN_DEBUG " error, remove\n");
+			priv->dma_active=0;
+			DPRINTK (KERN_DEBUG " kfree\n");
+			kfree(te);
+			mutex_unlock(&(priv->aio_mutex));
+			return ret;
+		}
+		spin_lock_irqsave(&(priv->aio_lock),iflags);
+		priv->dma_active=1;
+		DPRINTK (KERN_DEBUG " dma_active=1\n");
+		parport_start_dma(port,te);
+		spin_unlock_irqrestore(&(priv->aio_lock),iflags);
+	}
+
+	mutex_unlock(&(priv->aio_mutex));
+	return 0;
+}
+
+/**
+ * parport_cancel_transaction - remove the first queued te with a matching buf pointer
+ * @port: port to work on
+ * @buf: the pointer to look for (these had better be unique)
+ *
+ * Cancelled transfers are not notified.
+ *
+ * Returns 1 if an inactive queued transfer was removed; returns 0 if nothing
+ * was removed, either because it was not found or because it had already
+ * started.
+ */
+int parport_cancel_transaction (struct parport *port, const void *buf)
+{
+	unsigned long flags;
+	struct parport_pc_private *priv = port->physport->private_data;
+	struct parport_aio_te * te;
+	struct list_head * l;
+	mutex_lock(&(priv->aio_mutex));
+	spin_lock_irqsave(&(priv->aio_lock),flags);
+	list_for_each(l, &(priv->aio_action_list))
+	{
+		te=list_entry(l,struct parport_aio_te,aio_action_list);
+		if(te->buf==buf)
+		{
+			if(l==priv->aio_action_list.next)//already active
+			{
+				spin_unlock_irqrestore(&(priv->aio_lock),flags);
+				mutex_unlock(&(priv->aio_mutex));
+				return 0;
+			}
+			else
+			{
+				//remove
+				list_del(l);
+				kfree(te);
+				spin_unlock_irqrestore(&(priv->aio_lock),flags);
+				mutex_unlock(&(priv->aio_mutex));
+				return 1;
+			}
+		}
+	}
+	spin_unlock_irqrestore(&(priv->aio_lock),flags);
+	mutex_unlock(&(priv->aio_mutex));
+	return 0;
+}
+
+
 #endif /* IEEE 1284 support */
 #endif /* Allowed to use FIFO/DMA */

@@ -1986,6 +2757,14 @@ static int parport_dma_probe(struct parp
 	return p->dma;
 }

+irqreturn_t parport_pc_irq_handler(int irq, void *dev_id)
+{
+	struct parport *port = dev_id;
+	parport_aio_done(port);
+	return parport_irq_handler(irq,dev_id);
+}
+
+
 /* --- Initialisation code -------------------------------- */

 static LIST_HEAD(ports_list);
@@ -2046,6 +2825,17 @@ struct parport *parport_pc_probe_port(un
 	INIT_LIST_HEAD(&priv->list);
 	priv->port = p;

+	priv->dma_active = 0;
+	priv->dma_aio_addr = 0;
+	priv->dma_aio_handle = 0;
+	priv->dma_blocksize = 0;
+	priv->size_done = 0;
+
+	INIT_LIST_HEAD(&priv->aio_action_list);
+	spin_lock_init(&priv->aio_lock);
+	mutex_init(&priv->aio_mutex);
+	INIT_WORK(&priv->aio_softirq,parport_aio_soft);
+	
 	p->dev = dev;
 	p->base_hi = base_hi;
 	p->modes = PARPORT_MODE_PCSPP | PARPORT_MODE_SAFEININT;
@@ -2157,7 +2947,7 @@ struct parport *parport_pc_probe_port(un
 		EPP_res = NULL;
 	}
 	if (p->irq != PARPORT_IRQ_NONE) {
-		if (request_irq(p->irq, parport_irq_handler,
+		if (request_irq(p->irq, parport_pc_irq_handler,
 				 irqflags, p->name, p)) {
 			printk(KERN_WARNING "%s: irq %d in use, "
 				"resorting to polled operation\n",
@@ -2187,6 +2977,10 @@ struct parport *parport_pc_probe_port(un
 						p->name);
 					free_dma(p->dma);
 					p->dma = PARPORT_DMA_NONE;
+				} else {
+					printk(KERN_INFO "%s: async ops available\n",
+						p->name);
+					p->ops->submit_transaction=parport_submit_transaction;
+					p->ops->cancel_transaction=parport_cancel_transaction;
 				}
 			}
 		}
diff -uprN prev/drivers/parport/share.c new/drivers/parport/share.c
--- prev/drivers/parport/share.c	2012-10-14 23:41:04.000000000 +0200
+++ new/drivers/parport/share.c	2012-10-16 16:14:36.000000000 +0200
@@ -64,6 +64,12 @@ static size_t dead_write (struct parport
 { return 0; }
 static size_t dead_read (struct parport *p, void *b, size_t l, int f)
 { return 0; }
+static int dead_submit(struct parport *port, const void *buf,
+			size_t len, int flags, int action_code, int *complete,
+			void (*notify)(int *data, void *buf, int len_real))
+{ return -ENODEV; }
+static int dead_cancel(struct parport *port, const void *buf)
+{ return -ENODEV; }
+
 static struct parport_operations dead_ops = {
 	.write_data	= dead_write_lines,	/* data */
 	.read_data	= dead_read_lines,
@@ -93,6 +99,9 @@ static struct parport_operations dead_op
 	.ecp_read_data	= dead_read,
 	.ecp_write_addr	= dead_write,

+ 	.submit_transaction=dead_submit,/* async */
+ 	.cancel_transaction=dead_cancel,
+
 	.compat_write_data	= dead_write,	/* compat */
 	.nibble_read_data	= dead_read,	/* nibble */
 	.byte_read_data		= dead_read,	/* byte */
diff -uprN prev/include/linux/parport.h new/include/linux/parport.h
--- prev/include/linux/parport.h	2012-10-14 23:41:04.000000000 +0200
+++ new/include/linux/parport.h	2012-10-16 16:22:25.000000000 +0200
@@ -103,6 +103,19 @@ struct parport_operations {
 	size_t (*ecp_write_addr) (struct parport *port, const void *buf,
 				  size_t len, int flags);

+/* action values describe what to do */
+#define PARPORT_AIO_ACT_ECP_W_D 0
+#define PARPORT_AIO_ACT_ECP_R_D 1
+#define PARPORT_AIO_ACT_BAD 2
+	
+/* flags */
+#define PARPORT_AIO_FLG_NOSETUP (1<<4)
+
+	int (*submit_transaction) (struct parport *port, const void *buf,
+				  size_t len, int flags, int action_code, int * complete,void (*notify)(int *data,void * buf,int len_real));
+				  
+	int (*cancel_transaction) (struct parport *port, const void *buf);
+	
 	size_t (*compat_write_data) (struct parport *port, const void *buf,
 				     size_t len, int flags);
 	size_t (*nibble_read_data) (struct parport *port, void *buf,
diff -uprN prev/include/linux/parport_pc.h new/include/linux/parport_pc.h
--- prev/include/linux/parport_pc.h	2012-10-14 23:41:04.000000000 +0200
+++ new/include/linux/parport_pc.h	2012-10-16 15:14:05.000000000 +0200
@@ -2,6 +2,7 @@
 #define __LINUX_PARPORT_PC_H

 #include <asm/io.h>
+#include <linux/interrupt.h>

 /* --- register definitions ------------------------------- */

@@ -14,6 +15,24 @@
 #define CONTROL(p)  ((p)->base    + 0x2)
 #define STATUS(p)   ((p)->base    + 0x1)
 #define DATA(p)     ((p)->base    + 0x0)
+/**
+ * parport asynchronous transaction entry - entries are organised as a
+ * first-in first-out queue
+ */
+struct parport_aio_te{
+	/* links us to the list */
+	struct list_head aio_action_list;
+	/* the buffer pointer */
+	char * buf;
+	/* how much is left, how much is done. We memorize the latter to return in the notification */
+	size_t size_to_transfer,size_done;
+	/* what to do see parport.h*/
+	int action;
+	/* flags, ignored */
+	int flags;
+	/* user data or notification var, may be NULL */
+	int * res_ptr;
+	/* notification callback, may be NULL */
+	void (*notify)(int *data,void * buf,int len_real);
+};

 struct parport_pc_private {
 	/* Contents of CTR. */
@@ -40,6 +59,26 @@ struct parport_pc_private {
 	dma_addr_t dma_handle;
 	struct list_head list;
 	struct parport *port;
+
+	/* whether dma is currently going on */
+	int dma_active;
+	/* size of current block */
+	int dma_blocksize;
+	/* dma_address(variable) and handle */
+	dma_addr_t dma_aio_addr, dma_aio_handle;
+	/* accumulated size for current transfer*/
+	size_t size_done;
+	
+	/* work_struct to notify, start next transfer etc */
+	struct work_struct aio_softirq;
+	/* timeout if something goes wrong */
+	struct timer_list aio_action_timeout;
+	/* list(queue) of all (pending and active) transaction states for this port */
+	struct list_head aio_action_list;
+	/* protects accesses to this structure (especially dma_active) */
+	spinlock_t aio_lock;
+	/* protects against concurrent accesses between work_structs */
+	struct mutex aio_mutex;
 };

 struct parport_pc_via_data
_______________________________________________
Linux-parport mailing list
Linux-parport@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-parport
Andre Puschmann | 25 Jun 10:08 2012

IRQ storm from Express card parallel port

Hi,

I've been looking at using a Express card to parallel port adapter to
connect an external device to a PC without on-board parallel port. I got
an adapter from Delock [1] which seems to work fine at first glance.
Unfortunately, the IRQ handling seems to have problems. Whenever I
trigger the first IRQ (I've just shorted pin 10 to ground), I receive
an infinite number of IRQ from the parallel port which only stop after
unloading the kernel modules. The testing code [2] works fine on another
machine with on-board parallel port. I am using the standard Ubuntu
12.04 LTS kernel (3.2.0-25). I am wondering whether anybody else has
observed such a strange behaviour before? It might be also related to
Guan Xin's post "[BUG] IRQ storm from linux/drivers/char/ppdev.c".

Thanks
Andre

[1] http://www.delock.de/produkte/F_263_Parallel_66220/merkmale.html
[2] https://github.com/andrepuschmann/lptirq
Matt Porter | 20 Apr 17:28 2012

[PATCH] parport: remove unused dead code from lowlevel drivers

This unused code has been untouched for over 7 years and must
go.

Signed-off-by: Matt Porter <mporter@ti.com>
---
 drivers/parport/parport_amiga.c  |   36 -----
 drivers/parport/parport_atari.c  |    9 --
 drivers/parport/parport_mfc3.c   |   35 -----
 drivers/parport/parport_pc.c     |  276 --------------------------------------
 drivers/parport/parport_sunbpp.c |   21 ---
 5 files changed, 0 insertions(+), 377 deletions(-)

diff --git a/drivers/parport/parport_amiga.c b/drivers/parport/parport_amiga.c
index 8bef6d6..ee78e0e 100644
--- a/drivers/parport/parport_amiga.c
+++ b/drivers/parport/parport_amiga.c
@@ -48,23 +48,6 @@ static unsigned char amiga_read_data(struct parport *p)
 	return ciaa.prb;
 }

-#if 0
-static unsigned char control_pc_to_amiga(unsigned char control)
-{
-	unsigned char ret = 0;
-
-	if (control & PARPORT_CONTROL_SELECT) /* XXX: What is SELECP? */
-		;
-	if (control & PARPORT_CONTROL_INIT) /* INITP */
-		/* reset connected to cpu reset pin */;
-	if (control & PARPORT_CONTROL_AUTOFD) /* AUTOLF */
-		/* Not connected */;
-	if (control & PARPORT_CONTROL_STROBE) /* Strobe */
-		/* Handled only directly by hardware */;
-	return ret;
-}
-#endif
-
 static unsigned char control_amiga_to_pc(unsigned char control)
 {
 	return PARPORT_CONTROL_SELECT |
@@ -95,25 +78,6 @@ static unsigned char amiga_frob_control( struct parport *p, unsigned char mask,
 	return old;
 }

-#if 0 /* currently unused */
-static unsigned char status_pc_to_amiga(unsigned char status)
-{
-	unsigned char ret = 1;
-
-	if (status & PARPORT_STATUS_BUSY) /* Busy */
-		ret &= ~1;
-	if (status & PARPORT_STATUS_ACK) /* Ack */
-		/* handled in hardware */;
-	if (status & PARPORT_STATUS_PAPEROUT) /* PaperOut */
-		ret |= 2;
-	if (status & PARPORT_STATUS_SELECT) /* select */
-		ret |= 4;
-	if (status & PARPORT_STATUS_ERROR) /* error */
-		/* not connected */;
-	return ret;
-}
-#endif
-
 static unsigned char status_amiga_to_pc(unsigned char status)
 {
 	unsigned char ret = PARPORT_STATUS_BUSY | PARPORT_STATUS_ACK | PARPORT_STATUS_ERROR;
diff --git a/drivers/parport/parport_atari.c b/drivers/parport/parport_atari.c
index 0b28fcc..7ad59ac 100644
--- a/drivers/parport/parport_atari.c
+++ b/drivers/parport/parport_atari.c
@@ -130,15 +130,6 @@ parport_atari_data_forward(struct parport *p)
 static void
 parport_atari_data_reverse(struct parport *p)
 {
-#if 0 /* too dangerous, can kill sound chip */
-	unsigned long flags;
-
-	local_irq_save(flags);
-	/* Soundchip port B as input. */
-	sound_ym.rd_data_reg_sel = 7;
-	sound_ym.wd_data = sound_ym.rd_data_reg_sel & ~0x40;
-	local_irq_restore(flags);
-#endif
 }

 static struct parport_operations parport_atari_ops = {
diff --git a/drivers/parport/parport_mfc3.c b/drivers/parport/parport_mfc3.c
index 1c0c642..7578d79 100644
--- a/drivers/parport/parport_mfc3.c
+++ b/drivers/parport/parport_mfc3.c
@@ -147,25 +147,6 @@ DPRINTK(KERN_DEBUG "frob_control mask %02x, value %02x\n",mask,val);
 	return old;
 }

-#if 0 /* currently unused */
-static unsigned char status_pc_to_mfc3(unsigned char status)
-{
-	unsigned char ret = 1;
-
-	if (status & PARPORT_STATUS_BUSY) /* Busy */
-		ret &= ~1;
-	if (status & PARPORT_STATUS_ACK) /* Ack */
-		ret |= 8;
-	if (status & PARPORT_STATUS_PAPEROUT) /* PaperOut */
-		ret |= 2;
-	if (status & PARPORT_STATUS_SELECT) /* select */
-		ret |= 4;
-	if (status & PARPORT_STATUS_ERROR) /* error */
-		ret |= 16;
-	return ret;
-}
-#endif
-
 static unsigned char status_mfc3_to_pc(unsigned char status)
 {
 	unsigned char ret = PARPORT_STATUS_BUSY;
@@ -184,14 +165,6 @@ static unsigned char status_mfc3_to_pc(unsigned char status)
 	return ret;
 }

-#if 0 /* currently unused */
-static void mfc3_write_status( struct parport *p, unsigned char status)
-{
-DPRINTK(KERN_DEBUG "write_status %02x\n",status);
-	pia(p)->ppra = (pia(p)->ppra & 0xe0) | status_pc_to_mfc3(status);
-}
-#endif
-
 static unsigned char mfc3_read_status(struct parport *p)
 {
 	unsigned char status;
@@ -201,14 +174,6 @@ DPRINTK(KERN_DEBUG "read_status %02x\n", status);
 	return status;
 }

-#if 0 /* currently unused */
-static void mfc3_change_mode( struct parport *p, int m)
-{
-	/* XXX: This port only has one mode, and I am
-	not sure about the corresponding PC-style mode*/
-}
-#endif
-
 static int use_cnt = 0;

 static irqreturn_t mfc3_interrupt(int irq, void *dev_id)
diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
index 0cb64f5..4029563 100644
--- a/drivers/parport/parport_pc.c
+++ b/drivers/parport/parport_pc.c
@@ -197,54 +197,6 @@ static int change_mode(struct parport *p, int m)
 	ECR_WRITE(p, oecr);
 	return 0;
 }
-
-#ifdef CONFIG_PARPORT_1284
-/* Find FIFO lossage; FIFO is reset */
-#if 0
-static int get_fifo_residue(struct parport *p)
-{
-	int residue;
-	int cnfga;
-	const struct parport_pc_private *priv = p->physport->private_data;
-
-	/* Adjust for the contents of the FIFO. */
-	for (residue = priv->fifo_depth; ; residue--) {
-		if (inb(ECONTROL(p)) & 0x2)
-				/* Full up. */
-			break;
-
-		outb(0, FIFO(p));
-	}
-
-	printk(KERN_DEBUG "%s: %d PWords were left in FIFO\n", p->name,
-		residue);
-
-	/* Reset the FIFO. */
-	frob_set_mode(p, ECR_PS2);
-
-	/* Now change to config mode and clean up. FIXME */
-	frob_set_mode(p, ECR_CNF);
-	cnfga = inb(CONFIGA(p));
-	printk(KERN_DEBUG "%s: cnfgA contains 0x%02x\n", p->name, cnfga);
-
-	if (!(cnfga & (1<<2))) {
-		printk(KERN_DEBUG "%s: Accounting for extra byte\n", p->name);
-		residue++;
-	}
-
-	/* Don't care about partial PWords until support is added for
-	 * PWord != 1 byte. */
-
-	/* Back to PS2 mode. */
-	frob_set_mode(p, ECR_PS2);
-
-	DPRINTK(KERN_DEBUG
-	     "*** get_fifo_residue: done residue collecting (ecr = 0x%2.2x)\n",
-							inb(ECONTROL(p)));
-	return residue;
-}
-#endif  /*  0 */
-#endif /* IEEE 1284 support */
 #endif /* FIFO support */

 /*
@@ -940,234 +892,6 @@ static size_t parport_pc_ecp_write_block_pio(struct parport *port,

 	return written;
 }
-
-#if 0
-static size_t parport_pc_ecp_read_block_pio(struct parport *port,
-					     void *buf, size_t length,
-					     int flags)
-{
-	size_t left = length;
-	size_t fifofull;
-	int r;
-	const int fifo = FIFO(port);
-	const struct parport_pc_private *priv = port->physport->private_data;
-	const int fifo_depth = priv->fifo_depth;
-	char *bufp = buf;
-
-	port = port->physport;
-	DPRINTK(KERN_DEBUG "parport_pc: parport_pc_ecp_read_block_pio\n");
-	dump_parport_state("enter fcn", port);
-
-	/* Special case: a timeout of zero means we cannot call schedule().
-	 * Also if O_NONBLOCK is set then use the default implementation. */
-	if (port->cad->timeout <= PARPORT_INACTIVITY_O_NONBLOCK)
-		return parport_ieee1284_ecp_read_data(port, buf,
-						       length, flags);
-
-	if (port->ieee1284.mode == IEEE1284_MODE_ECPRLE) {
-		/* If the peripheral is allowed to send RLE compressed
-		 * data, it is possible for a byte to expand to 128
-		 * bytes in the FIFO. */
-		fifofull = 128;
-	} else {
-		fifofull = fifo_depth;
-	}
-
-	/* If the caller wants less than a full FIFO's worth of data,
-	 * go through software emulation.  Otherwise we may have to throw
-	 * away data. */
-	if (length < fifofull)
-		return parport_ieee1284_ecp_read_data(port, buf,
-						       length, flags);
-
-	if (port->ieee1284.phase != IEEE1284_PH_REV_IDLE) {
-		/* change to reverse-idle phase (must be in forward-idle) */
-
-		/* Event 38: Set nAutoFd low (also make sure nStrobe is high) */
-		parport_frob_control(port,
-				      PARPORT_CONTROL_AUTOFD
-				      | PARPORT_CONTROL_STROBE,
-				      PARPORT_CONTROL_AUTOFD);
-		parport_pc_data_reverse(port); /* Must be in PS2 mode */
-		udelay(5);
-		/* Event 39: Set nInit low to initiate bus reversal */
-		parport_frob_control(port,
-				      PARPORT_CONTROL_INIT,
-				      0);
-		/* Event 40: Wait for  nAckReverse (PError) to go low */
-		r = parport_wait_peripheral(port, PARPORT_STATUS_PAPEROUT, 0);
-		if (r) {
-			printk(KERN_DEBUG "%s: PE timeout Event 40 (%d) "
-				"in ecp_read_block_pio\n", port->name, r);
-			return 0;
-		}
-	}
-
-	/* Set up ECP FIFO mode.*/
-/*	parport_pc_frob_control(port,
-				 PARPORT_CONTROL_STROBE |
-				 PARPORT_CONTROL_AUTOFD,
-				 PARPORT_CONTROL_AUTOFD); */
-	r = change_mode(port, ECR_ECP); /* ECP FIFO */
-	if (r)
-		printk(KERN_DEBUG "%s: Warning change_mode ECR_ECP failed\n",
-								port->name);
-
-	port->ieee1284.phase = IEEE1284_PH_REV_DATA;
-
-	/* the first byte must be collected manually */
-	dump_parport_state("pre 43", port);
-	/* Event 43: Wait for nAck to go low */
-	r = parport_wait_peripheral(port, PARPORT_STATUS_ACK, 0);
-	if (r) {
-		/* timed out while reading -- no data */
-		printk(KERN_DEBUG "PIO read timed out (initial byte)\n");
-		goto out_no_data;
-	}
-	/* read byte */
-	*bufp++ = inb(DATA(port));
-	left--;
-	dump_parport_state("43-44", port);
-	/* Event 44: nAutoFd (HostAck) goes high to acknowledge */
-	parport_pc_frob_control(port,
-				 PARPORT_CONTROL_AUTOFD,
-				 0);
-	dump_parport_state("pre 45", port);
-	/* Event 45: Wait for nAck to go high */
-	/* r = parport_wait_peripheral(port, PARPORT_STATUS_ACK,
-						PARPORT_STATUS_ACK); */
-	dump_parport_state("post 45", port);
-	r = 0;
-	if (r) {
-		/* timed out while waiting for peripheral to respond to ack */
-		printk(KERN_DEBUG "ECP PIO read timed out (waiting for nAck)\n");
-
-		/* keep hold of the byte we've got already */
-		goto out_no_data;
-	}
-	/* Event 46: nAutoFd (HostAck) goes low to accept more data */
-	parport_pc_frob_control(port,
-				 PARPORT_CONTROL_AUTOFD,
-				 PARPORT_CONTROL_AUTOFD);
-
-
-	dump_parport_state("rev idle", port);
-	/* Do the transfer. */
-	while (left > fifofull) {
-		int ret;
-		unsigned long expire = jiffies + port->cad->timeout;
-		unsigned char ecrval = inb(ECONTROL(port));
-
-		if (need_resched() && time_before(jiffies, expire))
-			/* Can't yield the port. */
-			schedule();
-
-		/* At this point, the FIFO may already be full. In
-		 * that case ECP is already holding back the
-		 * peripheral (assuming proper design) with a delayed
-		 * handshake.  Work fast to avoid a peripheral
-		 * timeout.  */
-
-		if (ecrval & 0x01) {
-			/* FIFO is empty. Wait for interrupt. */
-			dump_parport_state("FIFO empty", port);
-
-			/* Anyone else waiting for the port? */
-			if (port->waithead) {
-				printk(KERN_DEBUG "Somebody wants the port\n");
-				break;
-			}
-
-			/* Clear serviceIntr */
-			ECR_WRITE(port, ecrval & ~(1<<2));
-false_alarm:
-			dump_parport_state("waiting", port);
-			ret = parport_wait_event(port, HZ);
-			DPRINTK(KERN_DEBUG "parport_wait_event returned %d\n",
-									ret);
-			if (ret < 0)
-				break;
-			ret = 0;
-			if (!time_before(jiffies, expire)) {
-				/* Timed out. */
-				dump_parport_state("timeout", port);
-				printk(KERN_DEBUG "PIO read timed out\n");
-				break;
-			}
-			ecrval = inb(ECONTROL(port));
-			if (!(ecrval & (1<<2))) {
-				if (need_resched() &&
-				    time_before(jiffies, expire)) {
-					schedule();
-				}
-				goto false_alarm;
-			}
-
-			/* Depending on how the FIFO threshold was
-			 * set, how long interrupt service took, and
-			 * how fast the peripheral is, we might be
-			 * lucky and have a just filled FIFO. */
-			continue;
-		}
-
-		if (ecrval & 0x02) {
-			/* FIFO is full. */
-			dump_parport_state("FIFO full", port);
-			insb(fifo, bufp, fifo_depth);
-			bufp += fifo_depth;
-			left -= fifo_depth;
-			continue;
-		}
-
-		DPRINTK(KERN_DEBUG
-		  "*** ecp_read_block_pio: reading one byte from the FIFO\n");
-
-		/* FIFO not filled.  We will cycle this loop for a while
-		 * and either the peripheral will fill it faster,
-		 * tripping a fast empty with insb, or we empty it. */
-		*bufp++ = inb(fifo);
-		left--;
-	}
-
-	/* scoop up anything left in the FIFO */
-	while (left && !(inb(ECONTROL(port) & 0x01))) {
-		*bufp++ = inb(fifo);
-		left--;
-	}
-
-	port->ieee1284.phase = IEEE1284_PH_REV_IDLE;
-	dump_parport_state("rev idle2", port);
-
-out_no_data:
-
-	/* Go to forward idle mode to shut the peripheral up (event 47). */
-	parport_frob_control(port, PARPORT_CONTROL_INIT, PARPORT_CONTROL_INIT);
-
-	/* event 49: PError goes high */
-	r = parport_wait_peripheral(port,
-				     PARPORT_STATUS_PAPEROUT,
-				     PARPORT_STATUS_PAPEROUT);
-	if (r) {
-		printk(KERN_DEBUG
-			"%s: PE timeout FWDIDLE (%d) in ecp_read_block_pio\n",
-			port->name, r);
-	}
-
-	port->ieee1284.phase = IEEE1284_PH_FWD_IDLE;
-
-	/* Finish up. */
-	{
-		int lost = get_fifo_residue(port);
-		if (lost)
-			/* Shouldn't happen with compliant peripherals. */
-			printk(KERN_DEBUG "%s: DATA LOSS (%d bytes)!\n",
-				port->name, lost);
-	}
-
-	dump_parport_state("fwd idle", port);
-	return length - left;
-}
-#endif  /*  0  */
 #endif /* IEEE 1284 support */
 #endif /* Allowed to use FIFO/DMA */

diff --git a/drivers/parport/parport_sunbpp.c b/drivers/parport/parport_sunbpp.c
index 9390a53..983a2d2 100644
--- a/drivers/parport/parport_sunbpp.c
+++ b/drivers/parport/parport_sunbpp.c
@@ -82,27 +82,6 @@ static unsigned char parport_sunbpp_read_data(struct parport *p)
 	return sbus_readb(&regs->p_dr);
 }

-#if 0
-static void control_pc_to_sunbpp(struct parport *p, unsigned char status)
-{
-	struct bpp_regs __iomem *regs = (struct bpp_regs __iomem *)p->base;
-	unsigned char value_tcr = sbus_readb(&regs->p_tcr);
-	unsigned char value_or = sbus_readb(&regs->p_or);
-
-	if (status & PARPORT_CONTROL_STROBE) 
-		value_tcr |= P_TCR_DS;
-	if (status & PARPORT_CONTROL_AUTOFD) 
-		value_or |= P_OR_AFXN;
-	if (status & PARPORT_CONTROL_INIT) 
-		value_or |= P_OR_INIT;
-	if (status & PARPORT_CONTROL_SELECT) 
-		value_or |= P_OR_SLCT_IN;
-
-	sbus_writeb(value_or, &regs->p_or);
-	sbus_writeb(value_tcr, &regs->p_tcr);
-}
-#endif
-
 static unsigned char status_sunbpp_to_pc(struct parport *p)
 {
 	struct bpp_regs __iomem *regs = (struct bpp_regs __iomem *)p->base;
--

-- 
1.7.5.4

John Heim | 26 Feb 00:40 2012

calling request_resource

I’m sorry this is a little off topic but I’m pretty desperate. I need to find out where I can get help fixing a bug in a kernel module for a serial device. The bug is in the driver for a speech synthesizer. I’ve already emailed the original developer and he is not interested in continuing to work on the driver.  I’m blind and I need my hardware speech synthesizer to work.
 
I’ve traced the bug to code that calls the function request_resource. The code for the request_region function says its copyright Linus Torvalds so I’m guessing its part of the linux core functions. Below is a code snippet that is like the code that is failing. An error code of –16 is always returned.  According to the comments in the driver module, this code is supposed to “steal” the serial port.   But it gets the -16 error code and errors out. 
 
int error;
struct resource myres;

myres.name  = "ltlk";
myres.start = 0x3F8;
myres.end   = 0x3FF;
myres.flags = IORESOURCE_BUSY;
error = request_resource(&ioport_resource, &myres);
 
The actual code is in drivers/staging/speakup/synth.c. Its not exactly like the code snippet above but it does exactly the same thing. I pasted this code into the module, recompiled the kernel, and it has all the same values and has the same result. So if anyone could tell me what’s wrong with the above code, I could probably fix the real code.But if you want to look at the real code its in the kernel code in drivers/staging/speakup/synth.c.
 
The code in the driver module has not changed. They must have changed something elsewhere in the kernel code that broke this module. I don’t know exactly when it started happening but it was sometime after 2.6.32 and before 2.6.37.  The problem applies only to 64 bit hardware but it doesn’t matter if the kernel is compiled for 686 or amd64. So, for example, I can get speech with the 32-bit version of the  grml live CD on a 32 bit machine but not on a 64 bit machine. And I can’t get speech at all with the 64 bit grml CD. Same is true for stock debian kernels and kernels I compile myself.  But a 2.6.32-amd64 stock debian kernel does work.
 
 
 
