Fix: SciPy Not Working — Sparse Matrix Errors, Optimization Convergence, and Import Issues
Quick Answer
How to fix SciPy errors — ImportError cannot import name, sparse matrix deprecation warning, optimize.minimize did not converge, integrate.quad divergence, interpolate extrapolation, scipy.stats API changes, and Fortran compiler missing.
The Error
You import a SciPy function and the symbol isn’t there:
ImportError: cannot import name 'lena' from 'scipy.misc'
ImportError: cannot import name 'imread' from 'scipy.misc'

Or you build a sparse matrix and get a deprecation warning:
SparseEfficiencyWarning: The matrix subclass is not the recommended way to
represent matrices. Please use scipy.sparse.csr_array instead.

Or your optimization runs to the iteration limit without converging:
scipy.optimize.OptimizeResult:
success: False
message: 'Maximum number of function evaluations has been exceeded.'
nit: 1000

Or scipy.integrate.quad returns garbage:
IntegrationWarning: The integral is probably divergent, or slowly convergent.
result: (42.0, 1.5e+08) # Huge error bar — result is meaningless

Or you install SciPy and pip builds from source for an hour, then fails:
error: Fortran compiler not found

SciPy is a massive library — optimization, integration, statistics, signal processing, sparse matrices, linear algebra, interpolation, and more. Each submodule has its own evolution and deprecations. The removal of scipy.misc, the rise of _array types alongside _matrix types, and the 1.11+ API cleanup have invalidated many commonly copied Stack Overflow answers.
Why This Happens
SciPy builds on NumPy and adds scientific algorithms. Many parts of SciPy predate NumPy best practices — scipy.misc held helper functions that have since moved to better homes (scipy.datasets, scipy.special, imageio). The scipy.sparse module had csr_matrix (which follows np.matrix semantics) as its main type for decades, but SciPy 1.8+ is transitioning to csr_array, which follows np.ndarray semantics and behaves consistently with the rest of NumPy.
Optimization and integration problems often fail because the default algorithms assume well-behaved inputs. Real-world problems have local minima, steep gradients, or singular points that break defaults. Understanding which solver fits your problem is the difference between convergence and garbage.
Fix 1: scipy.misc Import Errors
ImportError: cannot import name 'lena' from 'scipy.misc'
ImportError: cannot import name 'imread' from 'scipy.misc'
ImportError: cannot import name 'imresize' from 'scipy.misc'

Most of scipy.misc was deprecated starting in SciPy 1.0 and removed piece by piece; the module itself is gone in SciPy 1.12+.
Migration table:
| Old (scipy.misc) | New |
|---|---|
| scipy.misc.imread | imageio.imread or PIL.Image.open |
| scipy.misc.imsave | imageio.imwrite or Image.save (Pillow) |
| scipy.misc.imresize | PIL.Image.resize or skimage.transform.resize |
| scipy.misc.imrotate | scipy.ndimage.rotate |
| scipy.misc.lena() | Removed; use scipy.datasets.face() or scipy.datasets.ascent() |
| scipy.misc.face() | scipy.datasets.face() |
| scipy.misc.ascent() | scipy.datasets.ascent() |
| scipy.misc.factorial | scipy.special.factorial |
| scipy.misc.comb | scipy.special.comb |
| scipy.misc.logsumexp | scipy.special.logsumexp |
| scipy.misc.derivative | Removed; use numerical differentiation (e.g. np.gradient) |
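The scipy.special moves in the table are drop-in renames — only the import path changed. A quick sketch:

```python
import numpy as np
from scipy.special import comb, factorial, logsumexp

# These used to live in scipy.misc; the functions themselves are unchanged
n_choose_k = comb(5, 2)      # number of ways to choose 2 of 5 -> 10.0
fact = factorial(5)          # 120.0
# logsumexp computes log(sum(exp(a))) without overflow
lse = logsumexp(np.log([1.0, 2.0, 3.0]))  # log(1 + 2 + 3) = log(6)
print(n_choose_k, fact, lse)
```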
Example — replacing scipy.misc.imread:
# OLD — deprecated in SciPy 1.0, removed in 1.2
from scipy.misc import imread
img = imread("photo.jpg")
# NEW — use Pillow
from PIL import Image
import numpy as np
img = np.array(Image.open("photo.jpg"))
# Or imageio
import imageio.v3 as iio
img = iio.imread("photo.jpg")

For test images that used to come from scipy.misc:
from scipy.datasets import face, ascent
img = face() # RGB raccoon face, shape (768, 1024, 3)
img = ascent() # Grayscale image, shape (512, 512)

scipy.datasets needs the optional pooch package; if the import fails, run pip install pooch.
Fix 2: Sparse Matrix _matrix vs _array
SparseEfficiencyWarning: The matrix subclass is not the recommended way
to represent matrices. Please use csr_array instead.

SciPy is migrating from scipy.sparse.csr_matrix (which follows np.matrix semantics) to scipy.sparse.csr_array (which follows np.ndarray semantics). The _array versions behave like regular NumPy arrays — * is element-wise, @ is matrix multiplication.
Old pattern:
from scipy.sparse import csr_matrix
import numpy as np
# csr_matrix uses np.matrix conventions — * is matrix multiply
A = csr_matrix(np.eye(3))
B = csr_matrix(np.eye(3))
A * B # Matrix multiplication (returns csr_matrix)
A.multiply(B) # Element-wise

New pattern (SciPy 1.8+):
from scipy.sparse import csr_array
import numpy as np
# csr_array uses ndarray conventions — * is element-wise, @ is matrix multiply
A = csr_array(np.eye(3))
B = csr_array(np.eye(3))
A * B # Element-wise (same as NumPy)
A @ B # Matrix multiplication (same as NumPy)

Convert between types:
from scipy.sparse import csr_matrix, csr_array
old_matrix = csr_matrix(some_array)
# Convert to _array form
new_array = csr_array(old_matrix)
# Or go back
back_to_matrix = csr_matrix(new_array)

Sparse format reference:
| Format | Best for | Drawback |
|---|---|---|
| csr_* | Row operations, matrix-vector multiply | Slow column slicing |
| csc_* | Column operations | Slow row slicing |
| coo_* | Building matrices from (row, col, value) triplets | No indexing or item assignment |
| lil_* | Incremental construction (setting individual elements) | Slow for arithmetic |
| dok_* | Dict-based incremental construction | Slow for large data |
| bsr_* | Block-sparse matrices | Complex to construct |
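The table suggests a common workflow: assemble the matrix in COO form from triplets, then convert to CSR for arithmetic. A minimal sketch (the triplet values here are arbitrary):

```python
import numpy as np
from scipy.sparse import coo_array

# (row, col, value) triplets; duplicate coordinates are summed on conversion
rows = np.array([0, 1, 2, 0])
cols = np.array([0, 1, 2, 0])
vals = np.array([1.0, 2.0, 3.0, 4.0])

# Build in COO, convert to CSR for fast row access and arithmetic
A = coo_array((vals, (rows, cols)), shape=(3, 3)).tocsr()
print(A.toarray())
# [[5. 0. 0.]
#  [0. 2. 0.]
#  [0. 0. 3.]]
```

Note how the two entries at (0, 0) were summed (1 + 4 = 5) — handy when accumulating contributions, e.g. in finite-element assembly.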
Convert sparse to dense (only for small matrices):
from scipy.sparse import csr_array
import numpy as np

sparse_mat = csr_array(np.eye(100))
dense = sparse_mat.toarray() # Convert to a NumPy ndarray
# WARNING: materializes all zeros too — can OOM on large sparse matrices

Pro Tip: Always use csr_array for new code. SciPy plans to eventually deprecate and remove csr_matrix. Converting a codebase mid-project is painful because the multiplication semantics differ — fix it early, or maintain compatibility by using .multiply() and @ consistently.
Fix 3: optimize.minimize Did Not Converge
from scipy.optimize import minimize
result = minimize(f, x0=[1, 1])
# message: 'Maximum number of function evaluations has been exceeded.'
# success: False

The default algorithm (BFGS) assumes a smooth, unconstrained function with well-behaved gradients. Many real problems break this assumption.
Choose the right algorithm:
from scipy.optimize import minimize
import numpy as np
# For smooth unconstrained problems — BFGS (default), L-BFGS-B (memory-efficient)
result = minimize(f, x0, method='L-BFGS-B')
# For non-smooth functions — Nelder-Mead (gradient-free)
result = minimize(f, x0, method='Nelder-Mead')
# For problems with bounds — L-BFGS-B with bounds
result = minimize(f, x0, method='L-BFGS-B', bounds=[(0, 1), (0, 10)])
# For problems with constraints — SLSQP or trust-constr
from scipy.optimize import LinearConstraint, NonlinearConstraint
linear_con = LinearConstraint([1, 1], -np.inf, 1) # x0 + x1 <= 1
result = minimize(f, x0, method='trust-constr', constraints=[linear_con])
# For global optima — differential evolution (no gradient required)
from scipy.optimize import differential_evolution
result = differential_evolution(f, bounds=[(0, 10), (0, 10)])

Provide analytical gradients for a 10–100x speedup when possible:
from scipy.optimize import minimize
import numpy as np
def f(x):
    return x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([2*x[0], 2*x[1]])

result = minimize(f, x0=[1, 1], jac=grad_f, method='BFGS')  # Uses the analytical gradient

Increase iteration limits:
result = minimize(
    f, x0,
    method='L-BFGS-B',
    options={
        'maxiter': 10000,  # Maximum iterations
        'maxfun': 100000,  # Maximum function evaluations
        'gtol': 1e-8,      # Gradient tolerance (convergence criterion)
        'ftol': 1e-10,     # Function value tolerance
    },
)

Scale your variables if they span different orders of magnitude:
# Variables on mismatched scales: price on the order of 100, count between 0 and 1
# WRONG — BFGS struggles with mismatched scales
def f(x):
    price, count = x
    return (price - 50)**2 + (count - 0.5)**2

# CORRECT — normalize to similar scales
def f_scaled(x):
    price_scaled, count = x
    price = price_scaled * 100
    return (price - 50)**2 + (count - 0.5)**2

Common Mistake: Checking only result.x without checking result.success. A failed optimization still returns values — they’re just wrong. Always check:
result = minimize(f, x0)
if not result.success:
    raise RuntimeError(f"Optimization failed: {result.message}")
print(f"Converged at x = {result.x}")

Fix 4: scipy.integrate.quad Warnings
IntegrationWarning: The integral is probably divergent, or slowly convergent.

quad is adaptive quadrature — it works well for smooth integrands but struggles with singularities, oscillations, and unbounded intervals.
For integrals with singularities, split the interval:
from scipy.integrate import quad
import numpy as np
# Singular at x=0 — poor convergence
def f(x):
    return 1 / np.sqrt(x)
# WRONG — quad struggles near the singularity
result, err = quad(f, 0, 1) # Warning
# CORRECT — use the `points` argument to mark singularities
result, err = quad(f, 0, 1, points=[0]) # Treats 0 as a breakpoint
# Or compute analytically near the singular point

For oscillatory integrals, use a specialized method:
from scipy.integrate import quad
import numpy as np

# Oscillating integrand
def g(x):
    return np.cos(1000 * x) * np.exp(-x)

# WRONG — default quad misses the oscillations
result, err = quad(g, 0, 10)
# CORRECT — weight by the oscillatory factor
result, err = quad(lambda x: np.exp(-x), 0, 10, weight='cos', wvar=1000)

For infinite intervals:
from scipy.integrate import quad
import numpy as np

# Use np.inf and -np.inf explicitly
result, err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
# Should return ~sqrt(pi) = 1.7724...

Vector-valued integration:
from scipy.integrate import quad_vec
import numpy as np

def f_vec(x):
    return np.array([np.sin(x), np.cos(x), x**2])

result, err = quad_vec(f_vec, 0, 1)

Fix 5: Interpolation and Extrapolation
from scipy.interpolate import interp1d
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 1, 3, 5])
f = interp1d(x, y)
f(10)  # ValueError: A value in x_new is above the interpolation range

Default interp1d doesn’t extrapolate. Provide fill_value or bounds_error=False:
from scipy.interpolate import interp1d
import numpy as np

# Fill outside the range with NaN (silent, not an error)
f = interp1d(x, y, bounds_error=False, fill_value=np.nan)
# Fill with a specific value
f = interp1d(x, y, bounds_error=False, fill_value=0)
# Extrapolate (linearly beyond the data)
f = interp1d(x, y, fill_value='extrapolate')
# Use different values below and above the range
f = interp1d(x, y, bounds_error=False, fill_value=(0, 100))  # (below, above)

interp1d is considered legacy in recent SciPy releases — use the newer interfaces:
# OLD — still works but deprecated
from scipy.interpolate import interp1d
f = interp1d(x, y, kind='cubic')
# NEW — recommended replacement
from scipy.interpolate import make_interp_spline, CubicSpline, PchipInterpolator
# For cubic splines
spline = CubicSpline(x, y)
# For monotonic cubic Hermite (overshoot-free)
pchip = PchipInterpolator(x, y)
# For linear interpolation
from scipy.interpolate import RegularGridInterpolator
# (for N-D) or just use np.interp for 1D:
y_new = np.interp(x_new, x, y)  # 1D linear, no SciPy needed

2D interpolation — interp2d was deprecated in SciPy 1.10 and removed in 1.14:
# OLD
from scipy.interpolate import interp2d
f = interp2d(x, y, z)
# NEW
from scipy.interpolate import RegularGridInterpolator
interp = RegularGridInterpolator((x, y), z)
z_new = interp([[x1, y1], [x2, y2]])  # Points to evaluate at

Fix 6: scipy.stats API Changes
AttributeError: 'scipy.stats.norm' object has no attribute 'interval'
# Or
TypeError: interval() missing 1 required positional argument: 'confidence'

SciPy 1.9+ changed the interval() method signature. The keyword argument was renamed:
from scipy.stats import norm
# OLD
ci = norm.interval(alpha=0.95, loc=0, scale=1)
# NEW
ci = norm.interval(confidence=0.95, loc=0, scale=1)

Hypothesis tests in scipy.stats now return result objects with named fields:
from scipy.stats import ttest_ind
# Most tests return an object with .statistic and .pvalue
result = ttest_ind(group1, group2)
print(result.statistic) # t statistic
print(result.pvalue) # p-value
print(result.df) # degrees of freedom (SciPy 1.11+)
print(result.confidence_interval(0.95)) # CI method

Common distributions:
from scipy import stats
# Normal distribution
stats.norm.pdf(x, loc=0, scale=1) # PDF
stats.norm.cdf(x, loc=0, scale=1) # CDF
stats.norm.ppf(0.95) # Inverse CDF (quantile)
stats.norm.rvs(size=100) # Random samples
# Student's t
stats.t.pdf(x, df=10)
# Chi-squared
stats.chi2.pdf(x, df=5)
# Fit a distribution to data
mu, sigma = stats.norm.fit(data)

Random number generation — prefer the new Generator API:
# OLD — uses NumPy's global random state
from scipy import stats
import numpy as np

np.random.seed(42)
samples = stats.norm.rvs(size=100)
# NEW — explicit random state
rng = np.random.default_rng(seed=42)
samples = stats.norm.rvs(size=100, random_state=rng)

For NumPy random number patterns, see NumPy not working.
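Putting the pieces above together — fit a distribution to data and sanity-check the fit. A sketch with synthetic data (the seed, location, and sample size are arbitrary):

```python
import numpy as np
from scipy import stats

# Synthetic data with a known distribution, using the Generator API
rng = np.random.default_rng(seed=0)
data = stats.norm.rvs(loc=5.0, scale=2.0, size=2000, random_state=rng)

# Maximum-likelihood fit of a normal distribution
mu, sigma = stats.norm.fit(data)

# Kolmogorov-Smirnov test against the fitted distribution.
# Caveat: fitting the parameters on the same data makes this test optimistic.
ks = stats.kstest(data, 'norm', args=(mu, sigma))
print(mu, sigma, ks.pvalue)
```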
Fix 7: Installation — Fortran Compiler Missing
error: Fortran compiler not found.
error: library dfftpack has Fortran sources but no Fortran compiler found

SciPy contains Fortran code (LAPACK, FFTPACK, and friends) that needs compilation. Most users get pre-built wheels, but on unusual platforms pip falls back to source builds.
Install from a wheel — the normal path:
pip install --upgrade pip setuptools wheel
pip install scipy

For conda (usually easiest on Windows and older Linux):
conda install -c conda-forge scipy

If you must build from source, install a Fortran compiler:
# Debian/Ubuntu
sudo apt install gfortran liblapack-dev libblas-dev
# macOS
brew install gcc # Provides gfortran
# Windows — much harder, prefer conda or wheels

For general pip build failures, see pip could not build wheels.
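After installing, a quick smoke test confirms the compiled extensions actually load — a sketch (the matrix values are arbitrary):

```python
# Exercises SciPy's compiled LAPACK bindings; fails loudly if the build is broken
import numpy as np
import scipy
import scipy.linalg

print(scipy.__version__)
x = scipy.linalg.solve(np.array([[3.0, 0.0], [0.0, 2.0]]),
                       np.array([6.0, 4.0]))
print(x)  # [2. 2.]
```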
Fix 8: scipy.signal — Filter Design and FFT
from scipy.signal import butter, filtfilt
import numpy as np
# Design a Butterworth lowpass filter
fs = 1000 # Sample rate (Hz)
cutoff = 100 # Cutoff frequency (Hz)
order = 4
# Transfer-function form — returns (b, a) coefficients; numerically fragile at high order
# b, a = butter(order, cutoff, btype='low', fs=fs)
# Modern: use second-order sections for numerical stability
sos = butter(order, cutoff, btype='low', fs=fs, output='sos')
# Apply filter
from scipy.signal import sosfiltfilt
filtered = sosfiltfilt(sos, noisy_signal)

sosfiltfilt vs filtfilt — SOS (second-order sections) is more numerically stable for high-order filters. Use it by default.
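To verify the filter behaves as intended before applying it, inspect its frequency response with sosfreqz — a sketch using the same filter parameters as above:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 1000
sos = butter(4, 100, btype='low', fs=fs, output='sos')

# Evaluate the response on a dense frequency grid (in Hz, thanks to fs=)
w, h = sosfreqz(sos, worN=4000, fs=fs)

gain_pass = np.abs(h[np.argmin(np.abs(w - 10))])   # well inside the passband
gain_stop = np.abs(h[np.argmin(np.abs(w - 400))])  # well past the cutoff
print(gain_pass, gain_stop)  # expect ~1.0 and near 0
```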
FFT for signal analysis:
from scipy.fft import fft, fftfreq
import numpy as np
# Faster than np.fft for large arrays
y_fft = fft(signal)
freqs = fftfreq(len(signal), d=1/fs) # d = sample spacing
# Only positive frequencies (real signal)
mask = freqs >= 0
import matplotlib.pyplot as plt
plt.plot(freqs[mask], np.abs(y_fft[mask]))
plt.xlabel("Frequency (Hz)")

For Matplotlib display of scientific plots, see Matplotlib not working.
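For real-valued signals, rfft computes only the non-negative frequencies, which halves the work and removes the masking step above. A sketch with a synthetic 50 Hz tone:

```python
import numpy as np
from scipy.fft import rfft, rfftfreq

fs = 1000                          # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)        # 1 second of samples
sig = np.sin(2 * np.pi * 50 * t)   # pure 50 Hz tone

spec = np.abs(rfft(sig))           # magnitudes for frequencies 0..fs/2
freqs = rfftfreq(len(sig), d=1 / fs)

peak_hz = freqs[np.argmax(spec)]
print(peak_hz)  # 50.0
```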
Still Not Working?
Performance — Use SciPy Functions, Not Python Loops
SciPy routines are C/Fortran under the hood. A loop in Python calling scipy.stats.norm.pdf for each element is slow; call it once on an array:
import numpy as np
from scipy.stats import norm
x = np.linspace(-3, 3, 100000)
# SLOW
y = np.array([norm.pdf(xi) for xi in x])
# FAST — vectorized
y = norm.pdf(x)

Linear Algebra — scipy.linalg vs numpy.linalg
scipy.linalg has more functionality (eigenvalue problems, LU decomposition options, specialized solvers) but numpy.linalg is enough for most cases:
import numpy as np
from scipy import linalg
# Both work
x = np.linalg.solve(A, b)
x = linalg.solve(A, b)
# scipy.linalg adds options like assume_a='pos' for positive-definite matrices
x = linalg.solve(A, b, assume_a='pos') # Faster for symmetric positive-definite
# SciPy has specialized decompositions NumPy lacks
lu, piv = linalg.lu_factor(A)  # returns the combined LU factors and pivot indices
x = linalg.lu_solve((lu, piv), b)

Sparse Linear Solvers
For sparse systems, use scipy.sparse.linalg:
from scipy.sparse import csr_array
from scipy.sparse.linalg import spsolve, cg
A_sparse = csr_array(A) # Convert to sparse if A has many zeros
x = spsolve(A_sparse, b) # Direct solve (LU)
# Iterative methods for very large systems
x, info = cg(A_sparse, b)  # Conjugate gradient (for symmetric positive-definite A)

Integration with scikit-learn
Many scikit-learn internals use SciPy sparse matrices. When transform() or fit_transform() returns a surprising type, check whether you’re getting a sparse or dense result. For sklearn-specific patterns, see scikit-learn not working.
Scientific Plotting
For histogram and distribution visualization after running scipy.stats analyses, see Seaborn not working for statistical plots and Matplotlib not working for lower-level control.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.