Robert Lucente - Pipeline | 21 Oct 03:47 2014


Other Python signal processing resources that might be of interest

1) Python for Signal Processing by José Unpingco
2) Gavish, Noam (
3) Think DSP by Allen B. Downey

-----Original Message-----
From: scipy-dev-bounces <at> [mailto:scipy-dev-bounces <at>] On
Behalf Of scipy-dev-request <at>
Sent: Monday, October 20, 2014 1:00 PM
To: scipy-dev <at>
Subject: SciPy-Dev Digest, Vol 132, Issue 23

Send SciPy-Dev mailing list submissions to
	scipy-dev <at>

To subscribe or unsubscribe via the World Wide Web, visit
or, via email, send a message with subject or body 'help' to
	scipy-dev-request <at>

You can reach the person managing the list at
	scipy-dev-owner <at>

When replying, please edit your Subject line so it is more specific than
"Re: Contents of SciPy-Dev digest..."

Today's Topics:


Lars Buitinck | 20 Oct 17:53 2014

extended scipy.sparse.linalg LinearOperator interface

Hi all,

[TL;DR I want to extend scipy.sparse.linalg.LinearOperator and I'm
looking for current usage patterns.]

To solve a few problems that I and others [1,2] encountered with the
scipy.sparse.linalg LinearOperator interface, I decided to expand and
refactor it. In a nutshell:

* Linear operators cannot be transposed. They do have an rmatvec that
implements A^H * x, but no matrix-matrix multiplication version of the
same, so this has to be implemented as a loop.
* While a lot of subclasses exist in various parts of
scipy.sparse.linalg, there is no documentation on how to roll your
own operator by subclassing. Instead you have to call the constructor
with a few functions, which become the methods on the custom operator.
This doesn't scale if we want to add more methods.
* The current implementation uses monkey-patching, causing memory
leaks due to reference cycles and making the code hard to read.

I've already submitted an early PR [3] with the main parts of my proposal:

* LinearOperator is now an abstract base class. An overloaded __new__
makes sure you can still call the constructor with the old calling
conventions; it returns a subclass.
* Subclassing is possible (and documented): you must provide a method
_matvec and optionally a few more. These get used to implement the
public matvec method, which uses _matvec but adds input validation
boilerplate (the "template method pattern").
* An "adjoint" method is added, and can be overridden by supplying an
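For readers following along, the subclassing scheme described above can be sketched roughly like this (a hypothetical diagonal operator; the hook names _matvec and _adjoint are taken from the proposal, everything else is an assumed illustration):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator


class DiagonalOperator(LinearOperator):
    """Hypothetical example: wrap a diagonal matrix as a LinearOperator
    by subclassing and providing only the private hooks."""

    def __init__(self, diag):
        self.diag = np.asarray(diag)
        n = len(self.diag)
        super(DiagonalOperator, self).__init__(dtype=self.diag.dtype,
                                               shape=(n, n))

    def _matvec(self, x):
        # The public matvec() adds shape/dtype validation around this hook
        # (the "template method pattern" mentioned above).
        return self.diag * x.ravel()

    def _adjoint(self):
        # Supplying _adjoint makes the adjoint cheap instead of a loop
        # over rmatvec calls.
        return DiagonalOperator(self.diag.conj())


op = DiagonalOperator([1.0, 2.0, 3.0])
```

With only _matvec defined, the public matvec and dot come from the base class; _adjoint additionally backs the adjoint()/A.H machinery.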

Haslwanter Thomas | 19 Oct 22:15 2014


Over the last few years, I have created a collection of tools that comprise functions for:

- Working with quaternions and rotation matrices
- Analyzing signals from inertial sensors and video systems
- Working with sound
- Signal processing
- Interactive data analysis


They are available from PyPI as the package “thLib”.

More info is available under


I would be grateful for any kind of feedback.


Thomas Haslwanter

SciPy-Dev mailing list
SciPy-Dev <at>
Noam Gavish | 18 Oct 23:35 2014

Advice on an open source signal processing package


I'm an Israeli undergrad student, new to the open source community. I've noticed that SciPy might benefit from a solid signal processing toolbox, and have begun writing one.
The package presents a natural interface for signals and simplifies signal processing code.

This package emerged from research at the LEIsec lab at Tel Aviv University and is meant to be a one-stop shop for DSP in Python. I think that as the module matures, and if adoption is positive, it could eventually become part of SciPy.

Your advice about the API, design, existing Python signal processing resources, and roadmap for this module would be sincerely appreciated.
(Note that the inner implementation is basic at the moment, until the interface is decided.)

ipython notebook example:

Noam Gavish


- - -
If you walk the footsteps of a stranger
You will learn things you never knew you never knew
Nils Wagner | 12 Oct 18:39 2014

make latex failure

Attachment (scipy-ref.log.gz): application/x-gzip, 128 KiB
Robert Lucente - Pipeline | 12 Oct 00:03 2014

Re: scipy.optimize.anneal - deprecation

Did not hear back. Not sure how to interpret that. I would really like to know why anneal was deprecated and why basin hopping is recommended, since it does not apply to discrete optimization. I realize that simulated annealing requires a lot of tweaking.


From: Robert Lucente - Pipeline [mailto:rlucente <at>]
Sent: Tuesday, September 30, 2014 9:45 PM
To: 'scipy-dev <at>'
Cc: Robert Gmail Backup 1 Lucente Gmail Backup 1 (robert.backup.lucente <at>
Subject: scipy.optimize.anneal - deprecation


Hi everyone,


I am a newbie to open source, so I am not sure what the appropriate tribal norms are. If the “To” list is too broad, I apologize ahead of time. Please let me know if there is a more appropriate mailing list.


I am also a newbie to Python. I haven’t gotten much past the hello world stage.


However, as I am poking around, simulated annealing got my attention because of a project that I am tangentially involved with. As usual, I love open source because, well, it is open: I can look at the code and know exactly what is going on.

I was surprised to learn that scipy.optimize.anneal is being deprecated. It is a standard, widely used mathematical optimization technique, and there is a decent amount of literature on it. For some references, see the blog post “Simulated Annealing (SA) for Mathematical Optimization.” The recommendation seems to be to use basinhopping. Unfortunately, basinhopping assumes a “smooth scalar function”, and this smoothness does not apply in my case.

I am sure that the deprecation of anneal was given a lot of thought. Is the reasoning documented anywhere, or would someone be willing to share why it was deprecated?
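For context, the recommended replacement looks roughly like this on a smooth problem (the objective below is a made-up toy function, not from this thread; it is precisely the smoothness assumption that breaks down for discrete problems):

```python
import numpy as np
from scipy.optimize import basinhopping


# Made-up smooth 1-D objective with several local minima.
def f(x):
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]


# basinhopping perturbs x, runs a smooth local minimizer, and applies a
# Metropolis accept/reject step -- all of which presume a smooth landscape.
result = basinhopping(f, x0=[1.0], niter=100)
# result.x ends up near the global minimum around x ~ -0.195
```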


Sturla Molden | 11 Oct 14:53 2014

Use vecLibFort instead of Accelerate?

There is a library called vecLibFort that re-exports all the BLAS and
LAPACK symbols in Accelerate with gfortran ABI:

For scipy.linalg vecLibFort would solve several issues:

- Special wrappers for Accelerate would no longer be needed and can be
removed from the source.

- For the new Cython layer, the fortranname bug in f2py would go away,
since the wrappers would no longer be needed.

- The sgemv AVX segfault on Mavericks is worked around by calling sgemm
when data are not 32-byte aligned.


Saullo Castro | 10 Oct 12:00 2014

Function to check if sparse matrix is symmetric

I developed a function (shown below) to check whether a sparse matrix is symmetric, and would like to know whether the community is interested in including it in scipy.sparse.


import numpy as np
from scipy.sparse import coo_matrix


def is_symmetric(m):
    """Check if a sparse matrix is symmetric.

    Parameters
    ----------
    m : array or sparse matrix
        A square matrix.

    Returns
    -------
    check : bool
        The check result.
    """
    if m.shape[0] != m.shape[1]:
        raise ValueError('m must be a square matrix')

    if not isinstance(m, coo_matrix):
        m = coo_matrix(m)

    # m.data was missing in the original post; the values are needed below.
    r, c, v = m.row, m.col, m.data
    tril_no_diag = r > c
    triu_no_diag = c > r

    if triu_no_diag.sum() != tril_no_diag.sum():
        return False

    rl = r[tril_no_diag]
    cl = c[tril_no_diag]
    vl = v[tril_no_diag]
    ru = r[triu_no_diag]
    cu = c[triu_no_diag]
    vu = v[triu_no_diag]

    # Sort the strict lower triangle by (row, col) and the strict upper
    # triangle by (col, row) so entry k of vl should correspond to entry
    # k of vu; the positions themselves must also match.
    sortl = np.lexsort((cl, rl))
    sortu = np.lexsort((ru, cu))
    if not (np.array_equal(rl[sortl], cu[sortu])
            and np.array_equal(cl[sortl], ru[sortu])):
        return False
    vl = vl[sortl]
    vu = vu[sortu]

    check = np.allclose(vl, vu)

    return check
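As a point of comparison only (an alternative sketch, not part of the proposal above), the same check can be expressed with sparse arithmetic, letting SciPy match up the sparsity patterns:

```python
import numpy as np
from scipy.sparse import coo_matrix


def is_symmetric_alt(m, atol=1e-10):
    """Alternative sketch: compare m against its transpose directly."""
    if m.shape[0] != m.shape[1]:
        raise ValueError('m must be a square matrix')
    # The subtraction aligns the two sparsity patterns for us.
    d = (m - m.T).tocoo()
    # d.data is empty when the patterns cancel exactly.
    return np.allclose(d.data, 0.0, atol=atol)
```

This trades the explicit index bookkeeping for one sparse subtraction; which variant is faster depends on the format and density of m.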

Andrew Nelson | 5 Oct 05:54 2014

Global optimisation test functions

Dear team,
Andrea Gavana has kindly made the code for his benchmark suite and the AMPGO solver available. I am working on getting the benchmark suite ready for inclusion into SciPy (pull request to follow). It is essentially going to be a vastly expanded version of scipy/optimize/benchmarks/. The benchmark suite needs to be accompanied by tests, but I'm not sure where to put them; presumably in a tests directory below the benchmarks directory.
However, the benchmark directory is not a SciPy module, so how would I import go_benchmark_functions from the testing code?
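One possible approach (a sketch, not an established SciPy convention): load the module straight from its file path, so the benchmarks directory never has to be an importable package. Only the name go_benchmark_functions comes from the post; the helper and the throwaway stand-in file are hypothetical, and modern importlib is used rather than the imp module current at the time:

```python
import importlib.util
import os
import tempfile


def load_module_from_path(name, path):
    """Load a Python module from an explicit file path."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


# Demo with a throwaway file standing in for go_benchmark_functions.py;
# in the real layout, path would point into the benchmarks directory.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "go_benchmark_functions.py")
    with open(src, "w") as fh:
        fh.write("NUM_FUNCTIONS = 184\n")
    gbf = load_module_from_path("go_benchmark_functions", src)
```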


Dr. Andrew Nelson

Todd | 2 Oct 19:28 2014

Circular statistics revisted

Recently I added a vector strength function to scipy. I was interested in also adding some circular statistics tests, which scipy currently lacks, and noticed that circmean, circvar, and circstd functions already exist. Matlab and R both have circular statistics toolboxes.

I don't think this is something that would qualify for an entirely separate namespace, but what about moving "vector_strength" to scipy.stats and having a separate section for circular statistics in the scipy.stats documentation? Then we might be able to add some more circular statistics functions there.
Andrea Gavana | 1 Oct 21:39 2014

AMPGO - Take 3

So, after receiving a bunch of requests for the AMPGO global optimization code and the associated benchmarks, I decided to put what I have into the public domain. I don't have much time to work on it myself *but* I would still appreciate bug reports and feature requests.

The AMPGO algorithm code and the 184 ND test functions - plus 20 1D test functions - live here:

The algorithm itself can be tuned to use various local optimizers from the SciPy stack, and also from the OpenOpt stack, although the overall performance of the OpenOpt algorithms ranges between poor and abysmal, with the exception of the BOBYQA algorithm; I'd say it's high time whoever is developing SciPy included it as a numerical optimization option.

The AMPGO code as it stands is *not* suitable for inclusion into SciPy: the docstrings are almost non-existent, the return values are probably not compatible with SciPy standards, and in any case I am still using SciPy 0.12b1. Some time ago a kind soul provided me with the analytic definition of the gradient of the Tunnelling function inside AMPGO, but I never managed to code it. This would give a great boost to AMPGO on problems with readily available gradient information.

My main page of results and conclusions about global optimization algorithms available in Python is here:

All the conclusions, results and ranking criteria are based on the *number of function evaluations*: everything else, like CPU time and internal algorithm performance, has been scrapped in the most brutal and merciless way. Most of us deal with real-life problems where a single function evaluation (a simulation) may take hours to complete, so a few milliseconds gained in the internal algorithm processing feels like a joke. That, of course, has not stopped any of the "big" Global Optimization projects (see COCONUT and funny friends), nor any of the reported benchmarks for "famous" solvers (BARON, I'm looking at you), from using ridiculous CPU-time metrics.

Comments and suggestions are most welcome, especially if they help improve the overall performance of the AMPGO code (the Tunnelling function gradient is a good start) or fix bugs in the test functions. I'm sure there are a few lurking; it's not that easy to stay focused while coding so much numerical optimization brouhaha.


"Imagination Is The Only Weapon In The War Against Reality."

# ------------------------------------------------------------- #
def ask_mailing_list_support(email):

    if mention_platform_and_version() and include_sample_app():
# ------------------------------------------------------------- #