Jiri Krtek | 23 Apr 13:38 2014

kmeans with weights to scipy.cluster

Hi all,

 

I need to take weights into account in k-means. Specifically, the first (assignment) step of the algorithm stays the same, but in the second (update) step a weighted average replaces the simple average. I searched for a Python implementation of this kind of algorithm but wasn't successful, so I hacked the kmeans2 function from scipy.cluster a bit. Does this sound interesting? Would you add it to the scipy.cluster module?
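A minimal sketch of what I have in mind (illustrative only, not my actual patch; `weighted_kmeans` is a made-up name, not the scipy.cluster.vq API):

```python
import numpy as np

def weighted_kmeans(X, k, w, n_iter=100, seed=0):
    """k-means where the update step uses a weighted average.

    The assignment step is the usual nearest-centroid rule; only the
    centroid update changes: observation i contributes with weight w[i].
    """
    rng = np.random.RandomState(seed)
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: label each observation with its nearest centroid.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: weighted average instead of a simple mean.
        new = centroids.copy()
        for j in range(k):
            mask = labels == j
            if w[mask].sum() > 0:
                new[j] = np.average(X[mask], axis=0, weights=w[mask])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```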

 

Regards,

Jiri

_______________________________________________
SciPy-Dev mailing list
SciPy-Dev <at> scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-dev
Ralf Gommers | 22 Apr 22:40 2014

welcome to Scipy GSoC'14 students!

Hi all,

The accepted projects for Google Summer of Code have been announced: congratulations and a warm welcome to Janani Padmanabhan and Richard Tsai!

Janani will be working on spherical harmonic and hypergeometric functions in scipy.special [1], with Pauli and Stefan as main mentors, and Evgeni and me also pitching in.

Richard will be working on rewriting scipy.cluster in Cython and improving it further [2], with Charles as main mentor and David W-F as domain expert providing help.

From now until May 19th is the community bonding period [3] - the time when students get to know the project further, figure out its people and processes in more detail, and prepare for the actual coding period. Please help them find their way! And Richard, Janani, feel free to ask questions, and to explore or work on issues / documentation / website / whatever you find interesting.

Cheers,
Ralf

Warren Weckesser | 22 Apr 14:22 2014

Re: GSoC Draft Proposal: Rewrite and improve cluster package in Cython

On 4/22/14, Richard Tsai <richard9404 <at> gmail.com> wrote:
> 2014-03-21 22:18 GMT+08:00 Richard Tsai <richard9404 <at> gmail.com>:
>
>> Hi all,
>>
>> I've posted my proposal to melange but there are still some potential
>> features of the package (cluster) I want to discuss here.
>>
>> The first one is about the stopping criterion of kmeans/kmeans2. These two
>> functions currently use the average distance from observations to their
>> corresponding centroids. A more accurate exit condition
>> would be the average *squared* distance. Besides, the average centroid
>> moving distance and the changes in the results of vq would both be
>> better than
>> the original criterion.
>> Second, finding convex hulls of hierarchical clusterings seems interesting,
>> but I'm not sure there's a demand for it.
>> The third one is gap statistics for automatic determination of k in
>> kmeans. David suggested that it is scikit-learn territory, so I
>> plan
>> to put it at the end.
>>
>> I'm not sure whether these features are appropriate to integrate into
>> cluster, and Ralf suspects there's some overlap with scikit-learn, so I'm
>> posting them
>> here for discussion at his suggestion. I've also made my proposal public:
>> http://www.google-melange.com/gsoc/proposal/public/google/gsoc2014/richardtsai/5629499534213120
>> Comments/suggestions are welcome.
>>
>> Regards,
>> Richard
>>
>
> Hi all,
>
> I've received emails from GSoC saying that my proposal has been
> accepted. Thanks
> to those who have helped me with my application!
>
> I'll submit the required materials soon, then make a more detailed plan and
> prepare for coding. If you have any thoughts about my project, please
> discuss with me!
>
> Richard
>
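To make the squared-distance exit condition proposed above concrete, a rough sketch (`has_converged` is an invented helper, not the current scipy.cluster.vq code):

```python
import numpy as np

def has_converged(X, labels, centroids, prev_avg_sq, thresh=1e-5):
    """Exit test based on the average *squared* distance from each
    observation to its assigned centroid, instead of the average
    (unsquared) distance the current code uses."""
    diff = X - centroids[labels]
    # Per-observation squared distance to its centroid, then the mean.
    avg_sq = np.einsum('ij,ij->i', diff, diff).mean()
    return abs(prev_avg_sq - avg_sq) <= thresh, avg_sq
```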

Congratulations, Richard!  That's great news.

Warren
Matthew Brett | 22 Apr 03:05 2014

Powell failure on MingW windows build - any insights?

Hi,

I'm experimenting with Carl Kleffner's MingW-w64 builds of numpy and scipy.

Numpy now passes all tests for me (building with Carl's toolchain and
ATLAS 64-bit).

Scipy fails 2 tests only, both using the Powell routine.

Errors look like this:

======================================================================
FAIL: Powell (direction set) optimization routine
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\devel\py27\lib\site-packages\nose\case.py", line 197, in runTest
    self.test(*self.arg)
  File "D:\devel\py27\lib\site-packages\scipy\optimize\tests\test_optimize.py",
line 209, in test_powell
    atol=1e-14, rtol=1e-7)
  File "D:\devel\py27\lib\site-packages\numpy\testing\utils.py", line
1181, in assert_allclose
    verbose=verbose, header=header)
  File "D:\devel\py27\lib\site-packages\numpy\testing\utils.py", line
644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-07, atol=1e-14

(mismatch 100.0%)
 x: array([[ 0.75077639, -0.44156936,  0.47100962],
       [ 0.75077639, -0.44156936,  0.48052496],
       [ 1.50155279, -0.88313872,  0.95153458],...
 y: array([[ 0.72949016, -0.44156936,  0.47100962],
       [ 0.72949016, -0.44156936,  0.48052496],
       [ 1.45898031, -0.88313872,  0.95153458],...

Does anyone have any insight as to what might be going on here?  Same
failure on Windows 32 and 64 bit, with ATLAS or OpenBLAS...

Many thanks for any pointers,

Matthew
Wim R. Cardoen | 22 Apr 01:07 2014

Errors in Scipy 0.13.3 & Python 3.3.5

Hello

I compiled scipy 0.13.3 successfully
on a Centos 6 machine.
When I ran the test suite I obtained the following 5 errors
(I used OpenBlas (single threaded) for the BLAS/LAPACK library and SuiteSparse for the AMD,UMFPACK libraries)


======================================================================
FAIL: test_arpack.test_real_nonsymmetric_modes(False, <std-real-nonsym>, 'f', 2, 'LM', None, 0.1, <function asarray at 0x7f989ca49b00>, 'r')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/nose-1.3.1-py3.3.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 259, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 1183, in assert_allclose
    verbose=verbose, header=header)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigs:standard, typ=f, which=LM, sigma=0.1, mattype=asarray, OPpart=r, mode=normal
(mismatch 100.0%)
 x: array([[-0.11649324+0.j, -0.05435310+0.j],
       [ 0.13801208+0.j, -0.05894032+0.j],
       [-0.21229853+0.j, -0.02815625+0.j],...
 y: array([[-0.05698580+0.08018474j, -0.05435318+0.j        ],
       [ 0.06439485-0.09061003j, -0.05894023+0.j        ],
       [-0.12503998+0.17594382j, -0.02815617+0.j        ],...

======================================================================
FAIL: test_arpack.test_real_nonsymmetric_modes(False, <std-real-nonsym>, 'f', 2, 'LR', None, None, <function aslinearoperator at 0x7f988cd9fb90>, None)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/nose-1.3.1-py3.3.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 259, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 1183, in assert_allclose
    verbose=verbose, header=header)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigs:standard, typ=f, which=LR, sigma=None, mattype=aslinearoperator, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[ 0.13884316+0.j, -1.07112074+0.j],
       [-0.08962911+0.j, -1.39801252+0.j],
       [ 0.21701422+0.j, -0.93968379+0.j],...
 y: array([[ 0.06020537+0.08471544j, -1.07112110+0.j        ],
       [-0.07497437-0.10549703j, -1.39801240+0.j        ],
       [ 0.13326693+0.18752094j, -0.93968397+0.j        ],...

======================================================================
FAIL: test_basic.test_xlogy
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/nose-1.3.1-py3.3.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/special/tests/test_basic.py", line 2736, in test_xlogy
    assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/special/_testutils.py", line 87, in assert_func_equal
    fdata.check()
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/special/_testutils.py", line 292, in check
    assert_(False, "\n".join(msg))
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 44, in assert_
    raise AssertionError(msg)
AssertionError:
Max |adiff|: 712.209
Max |rdiff|: 1027.5
Bad results (3 out of 6) for the following points (in output 0):
                            0j                        (nan+0j) =>                        (-0+0j) !=                     (nan+nanj)  (rdiff                            0.0)
                        (1+0j)                          (2+0j) => (-711.5155851371305+0.7853981633976757j) !=        (0.6931471805599453+0j)  (rdiff             1027.5006309578175)
                        (1+0j)                              1j => (-711.5155851371305+0.7853981633976757j) !=            1.5707963267948966j  (rdiff              452.9651658054808)

======================================================================
FAIL: test_lambertw.test_values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/nose-1.3.1-py3.3.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", line 21, in test_values
    assert_equal(lambertw(inf,1).real, inf)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 304, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: nan
 DESIRED: inf

======================================================================
FAIL: test_lambertw.test_ufunc
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 581, in chk_same_position
    assert_array_equal(x_id, y_id)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 718, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not equal

(mismatch 66.66666666666666%)
 x: array([False,  True,  True], dtype=bool)
 y: array([False, False, False], dtype=bool)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/nose-1.3.1-py3.3.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", line 93, in test_ufunc
    lambertw(r_[0., e, 1.]), r_[0., 1., 0.567143290409783873])
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 607, in assert_array_compare
    chk_same_position(x_isnan, y_isnan, hasval='nan')
  File "/software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy/testing/utils.py", line 587, in chk_same_position
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals

x and y nan location mismatch:
 x: array([  0.+0.j,  nan+0.j,  nan+0.j])
 y: array([ 0.   ,  1.   ,  0.567])

----------------------------------------------------------------------
Ran 8775 tests in 239.751s

FAILED (KNOWNFAIL=114, SKIP=220, failures=5)
Running unit tests for scipy
NumPy version 1.8.1
NumPy is installed in /software/pkg/python/3.3.5/lib/python3.3/site-packages/numpy
SciPy version 0.13.3
SciPy is installed in /software/pkg/python/3.3.5/lib/python3.3/site-packages/scipy
Python version 3.3.5 (default, Apr 16 2014, 19:42:58) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]
nose version 1.3.1

Do you have any fix for these errors?

Thanks in advance.

Wim


--
---------------------------------------------------------------
Wim R. Cardoen, PhD
Staff Scientist,
Center for High Performance Computing
University of Utah
---------------------------------------------------------------
Christoph Sawade | 16 Apr 14:16 2014

Fwd: Binomial proportion confidence interval

Hi,

I would like to add approximate confidence intervals [1] for binomial proportions to the scipy project. Approximate intervals often have higher statistical power than exact intervals for binomial random variables.
Example usage can be found in the R "binom" package [2]. By analogy with [3], I see two possible ways to integrate these methods into the scipy API:

(a,b) = scipy.stats.binom.approx_interval(alpha, loc, scale, method='wilson')
(a,b) = scipy.stats.binom.wilson_interval(alpha, loc, scale)

where (a,b) are the end-points of the range that contains 100*alpha % of the rv's possible values.
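For illustration, the Wilson method could look roughly like this (a sketch only: the name `wilson_interval` and its signature here are illustrative, not an existing scipy API; the formula is the standard Wilson score interval):

```python
from math import sqrt
from statistics import NormalDist

def wilson_interval(k, n, alpha=0.05):
    """Wilson score interval for a binomial proportion.

    k successes out of n trials; returns (lower, upper) bounds of an
    approximate 100*(1-alpha)% confidence interval.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)  # standard normal quantile
    phat = k / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return center - half, center + half
```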

What do you think?

Best, Christoph

[1] http://www-stat.wharton.upenn.edu/~tcai/paper/Binomial-StatSci.pdf
[2] http://cran.r-project.org/web/packages/binom/index.html
[3] 
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.interval.html#scipy.stats.rv_discrete.interval




Manoj Kumar | 15 Apr 15:24 2014

Mp3 support for scipy.io and some help

Hello Scipy-Devs,

I was working on an application that involves analyzing sound files. Basically, I would like to extract information such as frequencies and amplitudes so that I can plot a time-frequency transform. Since mp3 is the most widely used format, I would have to write my own function to decode an mp3 file into a numpy array (similar to what scipy.io does for .wav). Would scipy be interested in such a thing?

However, Lars tells me there are a lot of patent issues. Is there any workaround for this?
Sorry if this was off topic; any help would be greatly appreciated.
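For what it's worth, one workaround is to ship no decoder at all and shell out to an external tool instead. A sketch (assumes an `ffmpeg` binary on the PATH; `read_mp3` is a hypothetical helper, not a scipy API):

```python
import subprocess
import numpy as np

def read_mp3(path, rate=44100):
    """Decode an mp3 to a 1-D int16 numpy array via an external
    ffmpeg process, so no patent-encumbered decoder is linked in.
    Output is mono, resampled to the requested rate.
    """
    raw = subprocess.check_output([
        'ffmpeg', '-v', 'quiet', '-i', path,
        '-f', 's16le', '-acodec', 'pcm_s16le',  # raw signed 16-bit PCM
        '-ac', '1', '-ar', str(rate), '-',      # mono, to stdout
    ])
    return np.frombuffer(raw, dtype=np.int16)
```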

--
Regards,
Manoj Kumar,
Mech Undergrad
http://manojbits.wordpress.com
Benny Malengier | 15 Apr 08:32 2014

libflame integrated LAPACK

I think the announcement of libflame is of interest for scipy: http://www.cs.utexas.edu/~flame/web/.
It seems it is all under a license scipy can use:

From: Field G. Van Zee field <at> cs.utexas.edu
Date: April 07, 2014
Subject: The union of libflame and LAPACK

Sponsored by an NSF Software Infrastructure for Sustained
Innovation grant, we have been developing a new, vertically
integrated dense linear algebra software stack. At the bottom of
this software stack is the BLAS-like Library Instantiation Software
(BLIS). Above this, targeting sequential and multithreaded
architectures is libflame. At the top of the stack is Elemental
for distributed memory architectures.

libflame targets roughly the same layer as does LAPACK, and now we
have incorporated the LAPACK code base into libflame. For those
operations where libflame has the native functionality, the LAPACK
code becomes an interface. For all other operations, the netlib
implementation provides that functionality. We affectionately call
this new union "flapack", which offers the following benefits:

1) The libflame implementation of LAPACK is entirely coded in C.
No Fortran libraries or compilers are required.
2) The libflame library builds upon the BLIS interface. This
interface, unlike the BLAS, allows for arbitrary row and column
stride. While some applications may benefit from this (e.g., those
that perform computation with slices of tensors), from a
development and maintainability point of view it allows more
functionality to be supported with less code.
3) The union of the two libraries allows users to benefit from both
the LAPACK and libflame code base, within one package.
4) "flapack" passes the LAPACK test suite on platforms where we
have tested this. (There is one exception of a test case that
involves packed matrices that we believe is not in general use.)
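Point 2 above can be illustrated from Python: a 2-D slice of a tensor can be non-contiguous in both dimensions, which classic BLAS (requiring a unit stride along one dimension) cannot accept without a copy, while an interface taking arbitrary row and column strides could use it in place:

```python
import numpy as np

# A 2-D slice of a 3-D tensor that is non-contiguous in *both*
# dimensions. GEMM in classic BLAS needs one unit-stride dimension,
# so this view would have to be copied first; a BLIS-style call
# taking both strides could operate on it directly.
T = np.arange(4 * 6 * 5, dtype=np.int64).reshape(4, 6, 5)
A = T[::2, ::3, 2]              # shape (2, 2)
print(A.strides)                # neither stride is the 8-byte itemsize
print(A.flags['C_CONTIGUOUS'], A.flags['F_CONTIGUOUS'])
```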

The library is available under a 3-clause BSD license at:
https://github.com/flame/libflame

Charles R Harris | 13 Apr 07:42 2014

Errors with numpy-devel

Hi All,

I get 75 errors and 3 failures when testing against current numpy on my machine. Most of the errors are due either to the deprecation of the binary '-' operator for booleans or to the deprecation of the double ellipsis for indexing, i.e. '(..., ...)'. The remainder look like two numerical-precision problems and one I can't immediately identify.

The main question I have is what is the best way to deal with the deprecations?
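For the boolean '-' cases, one mechanical fix is to replace the subtraction with an explicit xor (same truth table) or an integer cast where real arithmetic was intended (a sketch, not the actual scipy call sites):

```python
import numpy as np

a = np.array([True, True, False])
b = np.array([True, False, False])

# Old code relied on 'a - b' for boolean arrays, which is deprecated.
# Xor reproduces the old truth table:
diff_logical = np.logical_xor(a, b)                    # [False, True, False]
# Cast to an integer dtype when actual arithmetic is intended:
diff_numeric = a.astype(np.int8) - b.astype(np.int8)   # [0, 1, 0]
```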


FAIL: test_lsmr.TestLSMR.testBidiagonalA
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/tests/test_lsmr.py", line 60, in testBidiagonalA
    self.assertCompatibleSystem(A,xtrue)
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/tests/test_lsmr.py", line 40, in assertCompatibleSystem
    assert_almost_equal(norm(x - xtrue), 0, 6)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 486, in assert_almost_equal
    raise AssertionError(_build_err_msg())
AssertionError:
Arrays are not almost equal to 6 decimals
 ACTUAL: 6.048630163037888e-07
 DESIRED: 0

======================================================================
FAIL: test_qhull.TestUtilities.test_degenerate_barycentric_transforms
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/spatial/tests/test_qhull.py", line 296, in test_degenerate_barycentric_transforms
    assert_(bad_count < 20, bad_count)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 50, in assert_
    raise AssertionError(smsg)
AssertionError: 20

======================================================================
FAIL: test_trim (test_mstats_basic.TestTrimming)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/stats/tests/test_mstats_basic.py", line 270, in test_trim
    assert_equal(trimx._mask.ravel(),[1]*20+[0]*70+[1]*20)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 123, in assert_equal
    return assert_array_equal(actual, desired, err_msg)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 196, in assert_array_equal
    header='Arrays are not equal')
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 189, in assert_array_compare
    verbose=verbose, header=header)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 660, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not equal

(mismatch 9.09090909091%)
 x: array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True, False, False, False, False, False, False, False,...
 y: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...

Chuck


Nathaniel Smith | 10 Apr 21:45 2014

Fwd: [Python-Dev] PEP 465: A dedicated infix operator for matrix multiplication

Hey all,

Given the sometimes rocky history of collaboration between numerical
Python and core Python, I thought it might be helpful to flag this
posting for broader distribution -- it gives one perspective on how
the core devs see things. (And is certainly consistent with my
experience around PEP 465.)

(Nick is, among other things, core Python's packaging czar, and
previously on record [1] as wishing for more feedback on how the
current energy around python packaging could take our needs into
account.)

-n

[1] https://ncoghlan_devs-python-notes.readthedocs.org/en/latest/pep_ideas/core_packaging_api.html#a-long-caveat-on-this-essay

---------- Forwarded message ----------
From: Nick Coghlan <ncoghlan <at> gmail.com>
Date: Tue, Apr 8, 2014 at 1:32 PM
Subject: Re: [Python-Dev] PEP 465: A dedicated infix operator for
matrix multiplication
To: Björn Lindqvist <bjourne <at> gmail.com>
Cc: Sturla Molden <sturla.molden <at> gmail.com>, "python-dev <at> python.org"
<python-dev <at> python.org>

On 8 April 2014 21:24, Björn Lindqvist <bjourne <at> gmail.com> wrote:
> 2014-04-08 12:23 GMT+02:00 Sturla Molden <sturla.molden <at> gmail.com>:
>> Björn Lindqvist <bjourne <at> gmail.com> wrote:
>>
>>> import numpy as np
>>> from numpy.linalg import inv, solve
>>>
>>> # Using dot function:
>>> S = np.dot((np.dot(H, beta) - r).T,
>>>            np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))
>>>
>>> # Using dot method:
>>> S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
>>>
>>> Don't keep your reader hanging! Tell us what the magical variables H,
>>> beta, r and V are. And why import solve when you aren't using it?
>>> Curious readers that aren't very good at matrix math, like me, should
>>> still be able to follow your logic. Even if it is just random data,
>>> it's better than nothing!
>>
>> Perhaps. But you don't need to know matrix multiplication to see that those
>> expressions are not readable. And by extension, you can still imagine that
>> bugs can easily hide in unreadable code.
>>
>> Matrix multiplications are used extensively in anything from engineering to
>> statistics to computer graphics (2D and 3D). This operator will be a good
>> thing for a lot of us.
>
> All I ask for is to be able to see that with my own eyes. Maybe there
> is some drastic improvement I can invent to make the algorithm much
> more readable? Then the PEP's motivation falls down. I don't want to
> have to believe that the code the PEP author came up with is the most
> optimal. I want to prove that for myself.

Note that the relationship between the CPython core development team
and the Numeric Python community is based heavily on trust - we don't
expect them to teach us to become numeric experts, we just expect them
to explain themselves well enough to persuade us that a core language
or interpreter change is the only realistic way to achieve a
particular goal. This does occasionally result in quirky patterns of
feature adoption, as things like extended slicing, full rich
comparison support, ellipsis support, rich buffer API support, and now
matrix multiplication support, were added for the numeric community's
benefit without necessarily offering any immediately obvious benefit
for those not using the numeric Python stack - it was only later that
they became pervasively adopted throughout the standard library (with
matmul, for example, a follow on proposal to allow tuples, lists and
arrays to handle vector dot products may make sense).
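(For readers following along: with the PEP 465 operator, the expression quoted earlier in the thread reads like this. The arrays below are random conformable stand-ins, since the original H, V, beta, r were never specified.)

```python
import numpy as np
from numpy.linalg import inv

# Conformable random stand-ins for H, V, beta, r.
rng = np.random.RandomState(0)
H = rng.rand(3, 4)
V = rng.rand(4, 4)
beta = rng.rand(4)
r = rng.rand(3)

# PEP 465 '@' spelling:
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

# The np.dot spelling from the quoted message computes the same value:
S_dot = np.dot((np.dot(H, beta) - r).T,
               np.dot(inv(np.dot(np.dot(H, V), H.T)),
                      np.dot(H, beta) - r))
assert np.allclose(S, S_dot)
```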

This particular problem has been kicking around long enough, and is
sufficiently familiar to several of us, that what's in the PEP already
presents a compelling rationale for the *folks that need to be
convinced* (which is primarily Guido, but if enough of the other core
devs think something is a questionable idea, we can often talk him out
of it - that's not the case here though).

Attempting to condense that preceding 10+ years of history *into the
PEP itself* wouldn't be a good return on investment - the links to the
earlier PEPs are there, as are the links to these discussion threads.

Cheers,
Nick.

P.S. We've been relatively successful in getting a similar trust based
dynamic off the ground for the packaging and distribution community
over the past year or so. The next big challenge in trust based
delegation for the core development team will likely be around a
Python 3.5+ only WSGI 2.0 (that can assume the Python 3 text model,
the restoration of binary interpolation, the availability of asyncio,
etc), but most of the likely principals there are still burned out
from the WSGI 1.1 debate and the Python 3 transition in general :(

>
>
> --
> mvh/best regards Björn Lindqvist
> _______________________________________________
> Python-Dev mailing list
> Python-Dev <at> python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com

--
Nick Coghlan   |   ncoghlan <at> gmail.com   |   Brisbane, Australia
_______________________________________________
Python-Dev mailing list
Python-Dev <at> python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/njs%40pobox.com

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
Warren Weckesser | 8 Apr 14:00 2014

wiki.scipy.org is down.

The wiki site, wiki.scipy.org, appears to be down.

Warren
