3. dataList

A dataList contains a list of dataArrays for several datasets.

  • A list subclass holding dataArrays (allowing variable sizes).
  • Basic list routines such as read/save, append, selection, filter, sort, prune, interpolate, spline, ...
  • Multidimensional least-squares fit that uses the attributes of the dataArray elements.
  • Read/write of multiple files in one run as human-readable ASCII text (gzip possible).

A dataList can be created from ASCII files or ndarrays, e.g. js.dL('filename.dat').

A file may contain several datasets.

See dataList for details.

Example:

import jscatter as js
import numpy as np

p=js.grace()
dlist2=js.dL()
x=np.r_[0:10:0.5]
D=0.45
for q in np.r_[0.1:2:0.2]:
   dlist2.append(js.dA(np.vstack([x,np.exp(-q**2*D*x),np.random.rand(len(x))*0.05])) )
   dlist2[-1].q=q
p.clear()
p.plot(dlist2,legend='Q=$q')
p.legend()
dlist2.save('test.dat.gz')

The dataarray module can be run standalone in a new project.

3.1. Attributes

dataList.attr Returns all attribute names (including commonAttr of elements) of the dataList.
dataList.commonAttr Returns list of attribute names existing in elements.
dataList.dtype Returns the dtype of the elements.
dataList.names List of element names.
dataList.whoHasAttributes Lists which attribute is found in which element.
dataList.showattr([maxlength, exclude]) Show data specific attributes for all elements.
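
A minimal sketch of these attribute views, continuing the dlist2 example above (each element got an attribute q there):

dlist2.attr          # all attribute names, including common attributes of the elements
dlist2.commonAttr    # attribute names present in all elements (includes 'q' here)
dlist2.names         # element names (e.g. source filenames)
dlist2.showattr()    # tabulated overview of the element attributes
dlist2.q             # attribute list with the q value of each element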

3.2. Fitting

dataList.fit(model[, freepar, fixpar, …]) Least square fit of model that minimizes chi**2 (uses scipy.optimize.leastsq).
dataList.modelValues(**kwargs) Calculates modelValues of model after a fit.
dataList.setlimit(**kwargs) Set upper and lower limits for parameters in least square fit.
dataList.has_limit Returns existing limits.
dataList.makeErrPlot([title, showfixpar]) Creates a GracePlot for intermediate output from fit with residuals.
dataList.makeNewErrPlot(**kwargs) Creates a NEW ErrPlot without destroying the last.
dataList.detachErrPlot() Detaches ErrPlot without killing it and returns a reference to it.
dataList.killErrPlot([filename]) Kills ErrPlot
dataList.showlastErrPlot([title, modelValues]) Shows last ErrPlot as created by makeErrPlot with last fit result.
dataList.errPlot(*args, **kwargs) Plot into an existing ErrPlot.
dataList.savelastErrPlot(filename[, format, …]) Saves errplot to file with filename.
dataList.interpolate([func, invfunc, deg]) Interpolates Y at given attribute values for X values.
dataList.polyfit([func, invfunc, xfunc, …]) Inter-/extrapolates values along an attribute for all given X values using a polyfit.
dataList.extrapolate([func, invfunc, xfunc, …]) Inter-/extrapolates values along an attribute for all given X values using a polyfit.
dataList.bispline([func, invfunc, tx, ta, …]) Weighted least-squares bivariate spline approximation for interpolation of Y at given attribute values for X values.

3.3. Housekeeping

dataList.setColumnIndex(*arg, **kwargs) Set the columnIndex where to find X, Y, Z, eY, eX, eZ.
dataList.append([objekt, index, usecols, …]) Reads/creates new dataArrays and appends to dataList.
dataList.extend([objekt, index, usecols, …]) Reads/creates new dataArrays and appends to dataList.
dataList.insert(i[, objekt, index, usecols, …]) Reads/creates new dataArrays and inserts in dataList.
dataList.prune(*args, **kwargs) Reduce number of values between upper and lower limits.
dataList.savetxt([name, exclude, fmt]) Saves dataList as ASCII text file, optional compressed (gzip).
dataList.sort([key, reverse]) Sort dataList -> INPLACE!!!
dataList.reverse() Reverse dataList -> INPLACE!!!
dataList.delete(index) Delete element at index
dataList.extractAttribut(parName[, func, …]) Extract a simpler attribute from a complex attribute in each element of dataList.
dataList.filter(filterfunction) Filter elements according to filterfunction.
dataList.index(value[, start, stop]) original doc from list
dataList.merge(indices[, isort]) Merges elements of dataList.
dataList.mergeAttribut(parName[, limit, …]) Merges elements of dataList if attribute values are closer than limit (in place).
dataList.pop([i]) original doc from list
dataList.copyattr2elements([maxndim, exclude]) Copy dataList specific attributes to all elements.
dataList.getfromcomment(attrname) Extract a non-number parameter from a comment line with attrname in front.
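
A minimal housekeeping sketch, continuing the dlist2 example above (attribute and file names are arbitrary):

dlist2.sort('q')                         # sort elements by attribute q (in place)
dlow = dlist2.filter(lambda a: a.q < 1)  # new dataList with elements where q < 1
dlist2.delete(0)                         # remove the first element
dlist2.savetxt('test2.dat.gz')           # save as gzip-compressed ASCII text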

class jscatter.dataarray.dataList(objekt=None, block=None, usecols=None, delimiter=None, takeline=None, index=slice(None, None, None), replace=None, skiplines=None, ignore='#', XYeYeX=(0, 1, 2), lines2parameter=None)[source]

Bases: jscatter.dataarray.dataListBase

A list of dataArrays with attributes for analysis, fitting and plotting.

  • Allows reading, appending, selection, filter, sort, prune, least-squares fitting, ...
  • Saves to human-readable ASCII text format (optionally gzipped). For the file format see dataArray.
  • The dataList allows a simultaneous fit of all dataArrays dependent on attributes, and with different parameters for the individual dataArrays (see fit).
  • The dataList creation parameters (below) mainly determine how data are read from a file.
Parameters:

objekt : strings, list of array or dataArray

Objects or filename(s) to read.
Filenames with extension '.gz' are decompressed (gzip).
Accepts filenames containing an asterisk like exda=dataList(objekt='aa12*') for multiple file input.

usecols : list of integer

Use only given columns and ignore others.

skiplines : boolean function, list of string or single string

Skip line if line meets condition. Function gets the list of words in a line. Examples:

  • lambda words: any(w in words for w in ['', ' ', 'NAN', '*'])   # with exact match
  • lambda words: any(float(w)>3.1411 for w in words)
  • lambda words: len(words)==1

If a list is given, the lambda function is generated automatically as in the above example. If a single string is given, it is tested whether the string is a substring of a word (e.g. 'abc' in '12 3abc4 56').

block : None, list of int, string

A block separates parts of a file.
If a block is found, a new dataArray is created from that part and appended.
block can be a separator string like "#next",
or the first parameter name of a new block as block='Temp'.
block=slice(2,100,3) slices the lines in the file as lines[i:j:k].

index : integer, slice, list of integer; default is a slice for all.

Which data block to use from a single read file if multiple blocks are found. Can be an integer, a list of integers or slice notation.

XYeYeX : list integers, default=[0,1,2,None,None,None]

Columns for X, Y, eY, eX, Z, eZ. Change later by: data.setColumnIndex(3,5,-1).

delimiter : string, default any whitespace

Separator between words (data fields) in a line, e.g. '\t' for tabulator.

ignore : string, default ‘#’

Ignore lines starting with string e.g. ‘#’. For more complex lines to ignore use skiplines.

replace : dictionary of string:string

String replacement in read lines as {'old':'new', ...}. String pairs in this dictionary are replaced in each line. This is done prior to determining the line type and can be used e.g. to convert ',' to '.' so that strings become numbers.

takeline : string

Take a line only if takeline is the first word in the line. E.g. if data lines start with 'atom' in PDB files, use takeline='atom' to select these specific lines.

lines2parameter : list of integer

List of line numbers i to prepend with 'line_i' so they can be found as parameter line_i. Used to mark lines with parameters without a name (only numbers in a line, as in the header of .pdh files). E.g. to skip the first lines.

Returns:

dataList : list of dataArray

Notes

Attribute access as atlist
Attributes of the dataArray elements can be accessed like in dataArrays by the .name notation. The difference is that a dataList returns an atlist - a subclass of list with some additional methods - holding the attribute values of the dataList elements. This is necessary as dataList elements are allowed to miss an attribute (indicated as None) or to have different types. A numpy ndarray can be retrieved by the array property (as .name.array).
Global attributes
We have to discriminate attributes stored individually in each dataArray from attributes stored in the dataList as a kind of global attribute. dataArray attributes belong to a dataArray and are saved with the dataArray, while global dataList attributes are only saved once with the whole dataList at the beginning of a file. If dataArrays are saved as single files, global attributes are lost.

Examples

import jscatter as js
ex=js.dL('aa12*')        #read aa files
ex.extend('bb12*')      #extend with other bb files
ex.sort('q')            # sort by attribute "q"
ex.prune(number=100)    # reduce number of points; default is to calc the mean in an interval
ex.filter(lambda a:a.Temperature>273)  # filter for an attribute "Temperature" or e.g. a .X.mean() value
# do linear fit
ex.fit(model=lambda a,b,t:a*t+b,freepar={'a':1,'b':0},mapNames={'t':'X'})
# fit using parameters in example the Temperature stored as parameter.
ex.fit(model=lambda Temperature,b,x:Temperature*x+b,freepar={'b':0},mapNames={'x':'X'})

more Examples

import jscatter as js
import numpy as np
t=np.r_[1:100:5];D=0.05;amp=1
# using list comprehension creating a numpy array
i5=js.dL([np.c_[t,amp*np.exp(-q*q*D*t),np.ones_like(t)*0.05].T for q in np.r_[0.2:2:0.4]])
# calling a function returning dataArrays
i5=js.dL([js.dynamic.simpleDiffusion(q,t,amp,D) for q in np.r_[0.2:2:0.4]])
# define a function and add dataArrays to dataList
ff=lambda q,D,t,amp:np.c_[t,amp*np.exp(-q*q*D*t),np.ones_like(t)*0.05].T
i5=js.dL()  # empty list
for q in np.r_[0.2:2:0.4]:
   i5.append(ff(q,D,t,amp))

Get elements of dataList with specific attribute values.

i5=js.dL([js.dynamic.simpleDiffusion(q,t,amp,D) for q in np.r_[0.2:2:0.4]])
# get q=0.6
i5[i5.q.array==0.6]
# get q > 0.5
i5[i5.q.array > 0.5]
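
A sketch of the attribute handling described in the Notes above (temperature is an arbitrary example name):

i5.q                    # atlist with one q per element
i5.q.array              # the same as numpy ndarray
i5.temperature = 300    # a global dataList attribute, saved once at the beginning of a file
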
append(objekt=None, index=slice(None, None, None), usecols=None, skiplines=None, replace=None, ignore='#', delimiter=None, takeline=None, lines2parameter=None)

Reads/creates new dataArrays and appends to dataList.

See dataList for description of all keywords. If objekt is dataArray or dataList all options are ignored.

original doc from list L.append(object) – append object to end

aslist

Return as simple list.

attr

Returns all attribute names (including commonAttr of elements) of the dataList.

bispline(func=None, invfunc=None, tx=None, ta=None, deg=[3, 3], eps=None, addErr=False, **kwargs)

Weighted least-squares bivariate spline approximation for interpolation of Y at given attribute values for X values.

Uses scipy.interpolate.LSQBivariateSpline. eY values are used as weights (1/eY**2) if present.

Parameters:

kwargs :

Keyword arguments. The first keyword argument found as an attribute is used for interpolation, e.g. conc=0.12 defines the attribute 'conc' to be interpolated to 0.12. Special kwargs see below.

X : array

List of X values where to evaluate. If X is not given, the .X of the first element are used as default.

func : numpy ufunction or lambda

Simple function to be used on Y values before interpolation. see dataArray.polyfit

invfunc : numpy ufunction or lambda

To invert func after extrapolation again.

tx,ta : array like, None, int

Strictly ordered 1-D sequences of knot coordinates for X and the attribute. If None the X or attribute values are used. If an integer < len(X or attribute) is given, the respective number of equidistant points in the interval between min and max is used.

deg : [int,int], optional

Degrees of the bivariate spline for X and attribute. Default is 3. If single integer given this is used for both.

eps : float, optional

A threshold for determining the effective rank of an over-determined linear system of equations. eps should have a value between 0 and 1, the default is 1e-16.

addErr : bool

If errors are present, spline the error column and add it to the result.

Returns:

dataArray

Notes

  • The spline interpolation results in a good approximation if the data are narrowly spaced (dense). Around peaks, values are underestimated if the data are not dense enough, as the flank values are included in the spline between the maxima. See Examples.
  • Without peaks there should be no artefacts.
  • To estimate new errors for the splined data use .setColumnIndex(iy=ii,iey=None) with ii as index of the errors. Then spline the errors and add these as a new column.
  • Interpolation cannot be as good as fitting with a previously known model and using that for extrapolation.

Examples

import jscatter as js
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax1 = fig.add_subplot(211, projection='3d')
ax2 = fig.add_subplot(212, projection='3d')
i5=js.dL([js.formel.gauss(np.r_[-50:50:5],mean,10) for mean in np.r_[-15:15.1:3]])
i5b=i5.bispline(mean=np.r_[-15:15:1],X=np.r_[-25:25:1],tx=10,ta=5)
fig.suptitle('Spline comparison with different spacing of data')
ax1.set_title("Narrow spacing result in good interpolation")
ax1.scatter3D(i5.X.flatten, np.repeat(i5.mean,[x.shape[0] for x in i5.X]), i5.Y.flatten,s=20,c='red')
ax1.scatter3D(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
ax1.tricontour(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
i5=js.dL([js.formel.gauss(np.r_[-50:50:5],mean,10) for mean in np.r_[-15:15.1:15]])
i5b=i5.bispline(mean=np.r_[-15:15:1],X=np.r_[-25:25:1])
ax2.set_title("Wide spacing result in artefacts between peaks")
ax2.scatter3D(i5.X.flatten, np.repeat(i5.mean,[x.shape[0] for x in i5.X]), i5.Y.flatten,s=20,c='red')
ax2.scatter3D(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
ax2.tricontour(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
plt.show(block=False)
commonAttr

Returns list of attribute names existing in elements.

copy()

Deepcopy of dataList

To make a normal shallow copy use copy.copy

copyattr2elements(maxndim=1, exclude=['comment'])

Copy dataList specific attributes to all elements.

Parameters:

exclude : list of str

List of attribute names to exclude from copying.

maxndim : int, default 1

Maximum dimension, e.g. to prevent copying of 2d arrays like the covariance matrix.

Notes

Main use is for copying fit parameters
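
A minimal sketch of this main use case (the fit results are hypothetical attributes created by a previous fit):

data.copyattr2elements()   # copy dataList attributes, e.g. fit results, into each element
data[0].attr               # the element now also lists the copied attribute names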

count(value) → integer -- return number of occurrences of value
delete(index)

Delete element at index

detachErrPlot()[source]

Detaches ErrPlot without killing it and returns a reference to it.

dtype

return dtype of elements

errPlot(*args, **kwargs)[source]

Plot into an existing ErrPlot. See Graceplot.plot for details.

errPlotTitle(title)[source]
extend(objekt=None, index=slice(None, None, None), usecols=None, skiplines=None, replace=None, ignore='#', delimiter=None, takeline=None, lines2parameter=None)

Reads/creates new dataArrays and appends to dataList.

See dataList for description of all keywords. If objekt is dataArray or dataList all options are ignored.

original doc from list L.append(object) – append object to end

extractAttribut(parName, func=None, newParName=None)

Extract a simpler attribute from a complex attribute in each element of dataList.

E.g. extract the mean value from a list stored in an attribute.

Parameters:

parName : string

name of the parameter to process

func : function or lambda

A function (e.g. a lambda) that creates the new content for the parameter from the original content, e.g. lambda a: np.mean(a)*5.123. The function gets the content of the parameter, whatever it is.

newParName : string

If None the old parameter is overwritten; otherwise this is the new parameter name.
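
A minimal sketch, assuming an attribute 'average' that holds a list of values per element:

import numpy as np
# keep only the mean of the list; the result is stored in the new attribute 'averagemean'
data.extractAttribut('average', func=np.mean, newParName='averagemean')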

extrapolate(func=None, invfunc=None, xfunc=None, invxfunc=None, exfunc=None, **kwargs)

Inter-/extrapolates values along an attribute for all given X values using a polyfit.

To extrapolate along an attribute a polyfit is used twice (first along X, then along the attribute), e.g. to extrapolate a concentration series to concentration zero.

Parameters:

**kwargs :

Keyword arguments. The first keyword argument found as an attribute is used for extrapolation, e.g. q=0.01 gives the attribute value where to extrapolate to. Special kwargs see below.

X : arraylike

List of X values where to evaluate.

func : function or lambda

Function to be used on Y values before extrapolating. See Notes.

invfunc : function or lambda

To invert function after extrapolation again.

xfunc : function or lambda

Function to be used on X values before interpolating along X.

invxfunc : function or lambda

To invert xfunction again.

exfunc : function or lambda

Weight for the extrapolation along X.

degx, degy : integer, default degx=0, degy=1

Polynomial degree for extrapolation in x and y. If degx=0 (default) no extrapolation for X is done and values are linearly interpolated.

Returns:

dataArray

Notes

func is used to transfer the data to a simpler, smoother or polynomial form.
  • Think of data describing diffusion like I~exp(-q**2*D*t) where we want to interpolate along attribute q. If func is np.log we interpolate on a simpler form, parabolic in q and linear in t.
  • The same can be done with the X axis; think in the above case of subdiffusion t**a with a < 1.

Examples

Task: extrapolate to zero q at 3 X values for an exponentially decaying function. First log(Y) is used (problem linearized), then linearly extrapolated, and the exp function is applied to the result. This is like linear extrapolation of the exponent:

i5.polyfit(q=0,X=[0,1,11],func=lambda y:np.log(y),invfunc=lambda y:np.exp(y),deg=1)

Concentration data with attribute conc, extrapolated to conc=0:

data.polyfit(conc=0,X=data[0].X,deg=1)
filter(filterfunction)

Filter elements according to filterfunction.

Parameters:

filterfunction : function or lambda function returning boolean

Return those items of sequence for which function(item) is true.

Examples

i5=js.dL('exampleData/iqt_1hho.dat')
i1=i5.filter(lambda a:a.q>0.1)
i1=i5.filter(lambda a:(a.q>0.1) )
i5.filter(lambda a:(a.q>0.1) & (a.average[0]>1)).average
i5.filter(lambda a:(max(a.q*a.X)>0.1) & (a.average[0]>1))
fit(model, freepar={}, fixpar={}, mapNames={}, method='leastsq', xslice=slice(None, None, None), condition=None, output=True, **kw)

Least square fit of model that minimizes chi**2 (uses scipy.optimize.leastsq).

  • A least square fit of the .Y values dependent on X (, Z) and attributes (multidimensional fitting).
  • Data attributes are used automatically in model if they have the same name as a parameter.
  • Resulting parameter errors are 1-sigma errors, if the data errors are 1-sigma errors.
  • Results can be simulated with changed parameters in .modelValues or .showlastErrPlot.
Parameters:

model : function or lambda

Model function, should accept arrays as input (use numpy ufunctions in model).

  • Example: diffusion=lambda A,D,t,wavevector: A*np.exp(-wavevector**2*D*t)
  • The return value should be a dataArray (.Y is used) or only Y values.
  • Errors in the model should return a negative integer.

freepar : dictionary

Fit parameter names with startvalues.
  • {'D':2.56, ...} one common value for all.
  • {'D':[1,2.3,4.5,...], ...} individual parameters for an independent fit.
  • [..] is extended with missing values equal to the last given value: [1] -> [1,1,1,1,1,1]

fixpar : dictionary

Fixed parameters, overwrites data attributes. (see freepar for syntax)

mapNames : dictionary

Map parameter names from the model to attribute names in the data, e.g. {'t':'X','wavevector':'q'}.

method : default ‘leastsq’, ‘differential_evolution’, ‘BFGS’, ‘Nelder-Mead’ or from scipy.optimize.minimize

Type of solver for minimization, for options see scipy.optimize. See last example for a comparison.
  • Only ‘leastsq’ and ‘BFGS’ return errors for the fit parameters.
  • ‘leastsq’ is fastest. ‘leastsq’ is a wrapper around MINPACK’s lmdif and lmder algorithms which are
    a modification of the Levenberg-Marquardt algorithm.
  • All use bounds set in setlimits to allow bounds as described there.
  • ‘differential_evolution’ uses automatic bounds as (x0/10**0.5,x0*10**0.5) if no explicit limits are set for a freepar. x0 is start value from freepar.
  • For some methods the Jacobian is required.

xslice : slice object

Use selected X values by slicing.
  • xslice=slice(2,-3,2) to skip the first 2 and last 3 points and take every second point.

condition : function or lambda

A lambda function to determine which datapoints to include.
  • The function should evaluate to boolean with a dataArray as input and combines with xslice applied to the full set (first xslice, then the condition is used).
  • Logical operations on numpy arrays as "&" (and), "|" (or), "^" (xor):
    • lambda a:(a.X>1) & (a.Y<1)
    • lambda a:(a.X>1) & (a.X<100)
    • lambda a: a.X>a.q * a.X

output : None, 'last'

  • != None returns best parameters and errors.
  • None returns a string.
  • 'last' returns lastfit.

debug : 1, 2, >2

Debug mode returns:
  • 1: free and fixed parameters, but not mappedNames.
  • 2: parameters in modelValues as dict to call the model as model(**kwargs) with mappedNames.
  • >2: prints the parameters sent to the model and returns the output of the model without fitting.

kw : additional keyword arguments

Forwarded to minimizer as given in method.

Returns:
  • Dependent on the output parameter.

    • Final results with errors are in .lastfit.
    • Fit parameters are additionally set in the dataList object as .parname with corresponding errors as .parname_err.

Notes

  • The concept is to use data attributes as fixed parameters for the fit (multidimensional fit). This is realized by using a data attribute with the same name as a fixed parameter if it is not given in freepar or fixpar.

  • Fit parameters can be set equal for all elements ('par':1) or independent ('par':[1]) just by writing the start value as a single float or as a list of floats. The same holds for fixed parameters.

  • Changing the fit is easily done by moving 'par':[1] between freepar and fixpar.

  • Limits for parameters can be set prior to the fit as .setlimit(D=[1,4,0,10]). The first two numbers (min, max) are soft limits (increase chi2), the second two are hard limits to avoid extreme values (values outside are set hard to these borders, increasing chi2).

  • If errors exist (.eY) and are not zero, weighted chi**2 is minimized. Without error or with single errors equal zero an unweighted chi**2 is minimized (equal weights).

  • The change of parameters can be simulated by .modelValues(D=3), which overrides attributes and fit parameters.

  • .makeErrPlot creates an errorplot with residuals prior to the fit for intermediate output.

  • The last errPlot can be recreated after the fit with showlastErrPlot.

  • The simulated data can be shown in errPlot with .showlastErrPlot(D=3).

  • Each dataArray in a dataList can be fit individually (same model function) like this

    # see Examples for dataList creation
    for dat in datlist:
        dat.fit(model,freepar,fixpar,.....)
    

Additional kwargs for 'leastsq'

all additional optional arguments passed to leastsq (see scipy.optimize.leastsq)
col_deriv    default  0
ftol         default  1.49012e-08
xtol         default  1.49012e-08
gtol         default  0.0
maxfev       default  200*(N+1).
epsfcn       default  0.0
factor       default  100
diag         default  None

Parameter result by name in lastfit

exda.D                    eg freepar 'D' with errors; same for fixpar but no error
                          use exda.lastfit.attr to see attributes of model
exda.lastfit[i].D         parameter D result of best fit
exda.lastfit[i].D_err     parameter D error as 1-sigma error, if errors of data have also 1-sigma errors in .eY
exda.lastfit.chi2         sum((y-model(x,best))**2)/dof;should be around 1 if  1-sigma errors in .eY
exda.lastfit.cov          hessian**-1 * chi2
exda.lastfit.dof          degrees of freedom   len(y)-len(best)
exda.lastfit.func_name    name of used model
exda.lastfit.func_code    where to find code of used model
exda.lastfit.X            X values in fit
exda.lastfit.Y            Y values in fit
exda.lastfit.eY           Yerrors in fit

If intermediate output is desired (calculation of modelValues in an errorplot), use exda.makeErrPlot() to create an output plot with parameter output inside.

How to construct a model:
The model function gets .X (.Z, .eY, .eX, .eZ) as ndarray and parameters (from attributes) as scalar input. It should return an ndarray (used as Y values) or a dataArray (.Y is used). It is therefore advised to use numpy ufunctions in the model because these handle arrays automatically in the correct way. Instead of math.sin use numpy.sin, which is achieved by "import numpy as np" and np.sin; see http://docs.scipy.org/doc/numpy/reference/ufuncs.html

A bunch of models as templates can be found in formel.py, formfactor.py, structurefactor.py.

Examples

Basic examples with synthetic data. Usually data are loaded from a file.

  • An error plot with residuals can be created for intermediate output

    import jscatter as js
    import numpy as np
    data=js.dL('exampleData/iqt_1hho.dat')
    diffusion=lambda t,wavevector,A,D,b:A*np.exp(-wavevector**2*D*t)+b
    data.setlimit(D=(0,2))               # set a limit for diffusion values
    data.makeErrPlot()                   # create errorplot which is updated
    data.fit(model=diffusion ,
         freepar={'D':0.1,               # one value for all (as a first try)
                  'A':[1,2,3]},          # extended to [1,2,3,3,3,3,...3] independent parameters
         fixpar={'b':0.} ,               # fixed parameters here, [1,2,3] possible
         mapNames= {'t':'X',             # maps time t of the model as .X column for the fit.
                    'wavevector':'q'},   # and map model parameter 'wavevector' to data attribute .q
         condition=lambda a:(a.Y>0.1) )  # set a condition
    
  • Fit sine to simulated data

    import jscatter as js
    import numpy as np
    x=np.r_[0:10:0.1]
    data=js.dA(np.c_[x,np.sin(x)+0.2*np.random.randn(len(x)),x*0+0.2].T)           # simulate data with error
    data.fit(lambda x,A,a,B:A*np.sin(a*x)+B,{'A':1.2,'a':1.2,'B':0},{},{'x':'X'})  # fit data
    data.showlastErrPlot()                                                         # show fit
    print(  data.A,data.A_err)                                                        # access A and error
    
  • Fit sine to simulated data using an attribute in data with same name

    x=np.r_[0:10:0.1]
    data=js.dA(np.c_[x,1.234*np.sin(x)+0.1*np.random.randn(len(x)),x*0+0.1].T)     # create data
    data.A=1.234                                                                   # add attribute
    data.makeErrPlot()                                                             # makes errorplot prior to fit
    data.fit(lambda x,A,a,B:A*np.sin(a*x)+B,{'a':1.2,'B':0},{},{'x':'X'})          # fit using .A
    
  • Fit sine to simulated data using an attribute in data with different name and fixed B

    x=np.r_[0:10:0.1]
    data=js.dA(np.c_[x,1.234*np.sin(x)+0.1*np.random.randn(len(x)),x*0+0.1].T)       # create data
    data.dd=1.234                                                                    # add attribute
    data.fit(lambda x,A,a,B:A*np.sin(a*x)+B,{'a':1.2,},{'B':0},{'x':'X','A':'dd'})   # fit data
    data.showlastErrPlot()                                                           # show fit
    
  • Fit sine to simulated dataList using an attribute in the data with a different name and fixed B from the data. First one common parameter, then as a parameter list in [].

    x=np.r_[0:10:0.1]
    data=js.dL()
    ef=0.1  # increase this to increase error bars of final result
    for ff in [0.001,0.4,0.8,1.2,1.6]:                                                      # create data
        data.append( js.dA(np.c_[x,(1.234+ff)*np.sin(x+ff)+ef*ff*np.random.randn(len(x)),x*0+ef*ff].T) )
        data[-1].B=0.2*ff/2                                                                 # add attributes
    # fit with a single parameter for all data, obviously wrong result
    data.fit(lambda x,A,a,B,p:A*np.sin(a*x+p)+B,{'a':1.2,'p':0,'A':1.2},{},{'x':'X'})
    data.showlastErrPlot()                                                                 # show fit
    # now allowing multiple p,A,B as indicated by the list starting value
    data.fit(lambda x,A,a,B,p:A*np.sin(a*x+p)+B,{'a':1.2,'p':[0],'B':[0,0.1],'A':[1]},{},{'x':'X'})
    # plot p against A , just as demonstration
    p=js.grace()
    p.plot(data.A,data.p,data.p_err)
    
  • 2D fit of data with an X,Z grid and Y values. For the 3D fit we calculate Y values from X,Z coordinates (only for scalar Y data). For fitting we need the data in X,Z,Y column format.

    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D
    from matplotlib import cm
    #
    # create 3D data with X,Z axes and Y values as Y=f(X,Z)
    x,z=np.mgrid[-5:5:0.25,-5:5:0.25]
    xyz=js.dA(np.c_[x.flatten(),z.flatten(),0.3*np.sin(x*z/np.pi).flatten()+0.01*np.random.randn(len(x.flatten())),0.01*np.ones_like(x).flatten() ].T)
    # set columns where to find X,Y,Z
    xyz.setColumnIndex(ix=0,iz=1,iy=2,iey=3)
    #
    ff=lambda x,z,a,b:a*np.sin(b*x*z)
    xyz.fit(ff,{'a':1,'b':1/3.},{},{'x':'X','z':'Z'})
    #
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(xyz.X,xyz.Z,xyz.Y)
    ax.tricontour(xyz.lastfit.X,xyz.lastfit.Z,xyz.lastfit.Y, cmap=cm.coolwarm,linewidth=0, antialiased=False)
    plt.show(block=False)
    
  • Comparison of fit methods

    import numpy as np
    import jscatter as js
    diffusion=lambda A,D,t,elastic,wavevector=0:A*np.exp(-wavevector**2*D*t)+elastic
    
    i5=js.dL(js.examples.datapath+'/iqt_1hho.dat')
    i5.makeErrPlot(title='diffusion model residual plot')
    i5.fit(model=diffusion,freepar={'D':0.2,'A':1}, fixpar={'elastic':0.0},
           mapNames= {'t':'X','wavevector':'q'},  condition=lambda a:a.X>0.01  )
    # 22 evaluations; error YES -> 'leastsq'
    #with D=[0.2]  130 evaluations
    
    i5.fit(model=diffusion,freepar={'D':0.2,'A':1}, fixpar={'elastic':0.0},
           mapNames= {'t':'X','wavevector':'q'},  condition=lambda a:a.X>0.01 ,method='BFGS' )
    # 52 evaluations, error YES
    
    i5.fit(model=diffusion,freepar={'D':0.2,'A':1}, fixpar={'elastic':0.0},
           mapNames= {'t':'X','wavevector':'q'},  condition=lambda a:a.X>0.01 ,method='differential_evolution' )
    # 498 evaluations, error NO ; needs >20000 evaluations using D=[0.2]; use only with low number of parameters
    
    i5.fit(model=diffusion,freepar={'D':0.2,'A':1}, fixpar={'elastic':0.0},
           mapNames= {'t':'X','wavevector':'q'},  condition=lambda a:a.X>0.01 ,method='Powell' )
    # 121 evaluations; error NO
    
    i5.fit(model=diffusion,freepar={'D':0.2,'A':1}, fixpar={'elastic':0.0},
           mapNames= {'t':'X','wavevector':'q'},  condition=lambda a:a.X>0.01 ,method='SLSQP' )
    # 37 evaluations, error NO
    
    i5.fit(model=diffusion,freepar={'D':0.2,'A':1}, fixpar={'elastic':0.0},
           mapNames= {'t':'X','wavevector':'q'},  condition=lambda a:a.X>0.01 ,method='COBYLA' )
    # 308 evaluations, error NO
    
getfromcomment(attrname)

Extract a non number parameter from comment with attrname in front

If multiple comment lines start with attrname the first one is used. The used comment line is deleted from the comments.

Parameters:

attrname : string

Name of the parameter at the first position of the comment line.
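
A minimal sketch, assuming the read file contained a comment line like 'model ellipsoid with shell':

data.getfromcomment('model')   # creates the attribute from the comment line and removes that line
data.model                     # access the extracted value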

has_limit

Return existing limits

without limits returns None

index(value, start=0, stop=-1)

original doc from list L.index(value, [start, [stop]]) -> integer – return first index of value. Raises ValueError if the value is not present.

insert(i, objekt=None, index=0, usecols=None, skiplines=None, replace=None, ignore='#', delimiter=None, takeline=None, lines2parameter=None)

Reads/creates new dataArrays and inserts in dataList.

If objekt is dataArray or dataList all options are ignored.

Parameters:

i : int, default 0

Position where to insert.

objekt,index,usecols,skiplines,replace,ignore,delimiter,takeline,lines2parameter : options

See dataArray or dataList

original doc from list

L.insert(index, object) – insert object before index

interpolate(func=None, invfunc=None, deg=1, **kwargs)

Interpolates Y at given attribute values for X values.

Uses a linear interpolation twice (first along X, then along the attribute). If X and attribute values are equal to existing ones, these datapoints are returned.

Parameters:

**kwargs :

Keyword arguments as float or array-like. The first keyword argument found as an attribute is used for interpolation, e.g. conc=0.12 defines the attribute 'conc' to be interpolated to 0.12. Special kwargs see below.

X : array

List of X values where to evaluate (linear interpolation). If X < or > self.X the corresponding min/max border is used. If X is not given, the .X of the first element are used as default.

func : function or lambda

Function to be used on Y values before interpolation. See dataArray.polyfit.

invfunc : function or lambda

To invert func after extrapolation again.

deg : integer, default =1

Polynomial degree for interpolation along the attribute. Outliers result in NaN.

Returns:

dataArray

Notes

  • This interpolation results in a good approximation if the data are narrowly spaced (dense). Around peaks, values are underestimated if the data are not dense enough. See Examples.
  • To estimate new errors for the splined data use .setColumnIndex(iy=ii,iey=None) with ii as index of the errors. Then spline the errors and add these as a new column.
  • Interpolation cannot be as good as fitting with a previously known model and using that for extrapolation.

Examples

import jscatter as js
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax1 = fig.add_subplot(211, projection='3d')
ax2 = fig.add_subplot(212, projection='3d')
# try different kinds of polynomial degree
deg=2
i5=js.dL([js.formel.gauss(np.r_[-50:50:5],mean,10) for mean in np.r_[-15:15.1:3]])
i5b=i5.interpolate(mean=np.r_[-15:15:1],X=np.r_[-25:25:1],deg=deg)
fig.suptitle('Interpolation comparison with different spacing of data')
ax1.set_title("Narrow spacing result in good interpolation")
ax1.scatter3D(i5.X.flatten, np.repeat(i5.mean,[x.shape[0] for x in i5.X]), i5.Y.flatten,s=20,c='red')
ax1.scatter3D(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
ax1.tricontour(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
i5=js.dL([js.formel.gauss(np.r_[-50:50:5],mean,10) for mean in np.r_[-15:15.1:15]])
i5b=i5.interpolate(mean=np.r_[-15:15:1],X=np.r_[-25:25:1],deg=deg)
ax2.set_title("Wide spacing result in artefacts between peaks")
ax2.scatter3D(i5.X.flatten, np.repeat(i5.mean,[x.shape[0] for x in i5.X]), i5.Y.flatten,s=20,c='red')
ax2.scatter3D(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
ax2.tricontour(i5b.X.flatten,np.repeat(i5b.mean,[x.shape[0] for x in i5b.X]), i5b.Y.flatten,s=2)
plt.show(block=False)
killErrPlot(filename=None)[source]

Kills ErrPlot

If filename given the plot is saved.

makeErrPlot(title=None, showfixpar=True, **kwargs)[source]

Creates a GracePlot for intermediate output from fit with residuals.

ErrPlot is updated only if consecutive steps need more than 2 seconds.

Parameters:

title : string

title of plot

residuals : string

Plot type of residuals: 'absolut' or 'a' for absolute residuals, 'relative' or 'r' for relative residuals (=res/y).

showfixpar : boolean (None,False,0 or True,Yes,1)

show the fixed parameters in errplot

yscale,xscale : ‘n’,’l’ for ‘normal’, ‘logarithmic’

y scale, log or normal (linear)

fitlinecolor : int, [int,int,int]

Color for the fit lines (or line style as in plot). If not given, the same color as the data is used.
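
A minimal sketch using the parameters above (the title text is arbitrary):

data.makeErrPlot(title='fit progress', residuals='r', yscale='l')
# a subsequent data.fit(...) updates this plot with intermediate results and residuals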

makeNewErrPlot(**kwargs)[source]

Creates a NEW ErrPlot without destroying the last. See makeErrPlot for details.

Parameters:

**kwargs

keyword arguments passed to makeErrPlot

merge(indices, isort=None)

Merges elements of dataList.

The merged dataArray is stored in the lowest indices. Others are removed.

Parameters:

indices : list of integer, 'all'

List of indices to merge; 'all' merges all elements into one.

isort : 'X', 'Y', or 0,1,2..., default None

Column to argsort along after merging, e.g. isort='X'. None means no sorting.

Notes

Attributes are copied as lists in the merged dataArray.
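
A minimal sketch:

data.merge([0, 1, 2], isort='X')   # merge the first three elements into one, sorted along X
data.merge('all')                  # merge all remaining elements into a single dataArray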

mergeAttribut(parName, limit=None, isort=None, func=<function mean>)

Merges elements of dataList if attribute values are closer than limit (in place).

If attribute is list the average is taken for comparison. For special needs create new parameter and merge along this.

Parameters:

parName : string

name of a parameter

limit : float

The relative limit value. If limit is None the limit is determined as the standard deviation of the sorted differences: limit=np.std(np.array(data.q[:-1])-np.array(data.q[1:]))/np.mean(np.array(data.q))

isort : ‘X’, ‘Y’ or 0,1,2…, None, default None

Column for isort. None is no sorting

func : function or lambda, default np.mean

A function to create a new value for the parameter (see extractAttribut), stored as .parName+str(func.func_name).

Examples

i5=js.dL('exampleData/iqt_1hho.dat')
i5.mergeAttribut('q',0.1)
# use qmean instead of q or calc the new value
print(  i5.qmean)
modelValues(**kwargs)

Calculates modelValues of model after a fit.

Model parameters are used from dataArray attributes or last fit parameters. Given arguments overwrite parameters and attributes to simulate modelValues e.g. to extend X range.

Parameters:

**kwargs : parname=value

Overwrite parname with value in the dataList attributes or fit results e.g. to extend the parameter range or simulate changed parameters.

debug : internal usage, documented for completeness

Dictionary passed to the model to allow calling the model as model(**kwargs) for debugging.

Returns:

dataList of modelValues with parameters as attributes.

Notes

Example: extend time range

data=js.dL('iqt_1hho.dat')
diffusion=lambda A,D,t,wavevector: A*np.exp(-wavevector**2*D*t)
data.fit(diffusion,{'D':[2],'A':[1]},{},{'t':'X'})    # do fit
# overwrite t to extend range
newmodelvalues=data.modelValues(t=np.r_[0:100])   # with more t

Example: 1-sigma interval for D

data=js.dL('exampleData/iqt_1hho.dat')
diffusion=lambda A,D,t,q: A*np.exp(-q**2*D*t)
data.fit(diffusion,{'D':[0.1],'A':[1]},{},{'t':'X'})    # do fit
# add errors of D for confidence limits
upper=data.modelValues(D=data.D+data.D_err)
lower=data.modelValues(D=data.D-data.D_err)
data.showlastErrPlot()
data.errPlot(upper,sy=0,li=[2,1,1])
data.errPlot(lower,sy=0,li=[2,1,1])
nakedcopy()

Returns copy without attributes, thus only the data.

names

List of element names.

polyfit(func=None, invfunc=None, xfunc=None, invxfunc=None, exfunc=None, **kwargs)

Inter-/extrapolates values along an attribute for all given X values using a polyfit.

The description, parameters, notes and examples are identical to extrapolate above.
pop(i=-1)

original doc from list L.pop([index]) -> item – remove and return item at index (default last). Raises IndexError if list is empty or index is out of range.

prune(*args, **kwargs)

Reduce number of values between upper and lower limits.

Prune reduces a dataset to a smaller number of data points in the interval between lower and upper, by selection or by averaging including errors.

Parameters:

*args,**kwargs :

arguments and keyword arguments see below

lower : float

Lower bound; default is the minimum of the data.

upper : float

Upper bound; default is the maximum of the data.

number : int

Number of values in the result.

kind : {'log','lin'}, default 'lin'

Type of the new point distribution:
  • 'log' : closest values in a log distribution with number points in [lower,upper]
  • 'lin' : closest values in a lin distribution with number points in [lower,upper]
  • if number==None all points between min and max are used

type : {None,'mean','error','mean+error'}, default 'mean'

How to determine the value for a new point:
  • None : original Y value of the X closest to the new X value.
  • 'mean' : mean of the values in the interval between 2 X points. weight==None -> equal weights; with weight!=None the weight is 1/col[weight]**2 and the weight column gets values according to error propagation.
  • 'mean+std' : calculates the mean and adds error columns with the standard deviation of the intervals; can be used if no errors are present. For single values the error is interpolated from neighbouring values. Note that the error may be badly defined if only a few points are averaged.

col : ‘X’,’Y’….., or int, default ‘X’

column to prune along X,Y,Z or index of column

weight : None, 'eX', 'eY' or int

Column used for the weight as 1/err**2 in the 'mean' calculation; None means equal weights.
The weight column gets the new error sqrt(1/sum_i(1/err_i**2)).
If None or not existing, equal weights are used.

keep : list of int

list of indices to keep in any case

Returns:

dataArray with values pruned to number of values

Notes

Attention: depending on the distribution of the original data a lower number of points can be the result. E.g. think of noisy data between 4 and 5 and a lin distribution of 9 points from 1 to 10: as there are no data between 5 and 10, these all map to 5 and are reduced to a single unique point.

Examples

i5.prune(number=13,col='X',type='mean',weight='eY')
i5.prune(number=13)
remove()

L.remove(value) – remove first occurrence of value. Raises ValueError if the value is not present.

reverse()

Reverse dataList -> INPLACE!!! original doc from list L.reverse() – reverse IN PLACE

sCI(*arg, **kwargs)

Set the columnIndex where to find X, Y, Z, eY, eX, eZ.

Shortcut for setColumnIndex; see setColumnIndex below for parameters and notes.

save(name=None, exclude=['comment', 'lastfit'], fmt='%.5e')

Saves dataList as ASCII text file, optional compressed (gzip).

Saves the dataList with attributes to one file that can be reread. Dynamically created attributes such as X, Y, eY are not saved. If the name extension is '.gz' the file is compressed (gzip).

Parameters:

name : string

filename

exclude : list of str, default [‘comment’,’lastfit’]

List of dataList attribute names to exclude from being saved.

fmt : string, default ‘%.5e’

Format specifier for writing float as e.g. ‘%.5e’ is exponential with 5 digits precision.

Notes

Saves a sequence of the dataArray elements.

Format rules:

Dataset consists of tabulated data with optional attributes and comments. Datasets are separated by empty lines, attributes and comments come before data.

First two strings decide for a line:
  • string + value -> attribute as attribute name + list of values
  • string + string -> comment line
  • value + value -> data (line of an array; in sequence without break)
  • single words -> are appended to comments
optional:
  • string + @name -> as attribute, but links to another dataArray with .name="name" stored in the same file after this dataset.
  • internal parameters starting with underscore ('_') are ignored for writing, as are X,Y,Z,eX,eY,eZ.
  • only ndarray content is stored; no dictionaries in parameters.
  • @name is used as identifier or filename and can be accessed as name.
  • attributes of the dataList are saved as common attributes, marked with a line "@name header_of_common_parameters".
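
A minimal sketch of a save / re-read round trip (filename and attribute name are arbitrary):

dlist2.temperature = 300            # a global dataList attribute
dlist2.save('diffusion.dat.gz')     # one gzip-compressed ASCII file with all elements and attributes
back = js.dL('diffusion.dat.gz')    # re-read; element attributes like q and the global temperature are restored
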
savelastErrPlot(filename, format='agr', size=(1012, 760), dpi=300, **kwargs)[source]

Saves errplot to file with filename.

savetext(name=None, exclude=['comment', 'lastfit'], fmt='%.5e')

Saves dataList as ASCII text file, optionally compressed (gzip). Identical to save; see save above for parameters and format rules.
savetxt(name=None, exclude=['comment', 'lastfit'], fmt='%.5e')

Saves dataList as ASCII text file, optionally compressed (gzip). Identical to save; see save above for parameters and format rules.
setColumnIndex(*arg, **kwargs)

Set the columnIndex where to find X, Y, Z, eY, eX, eZ.

Default is ix=0,iy=1,iey=2,iz=None,iex=None,iez=None as it is the most used. There is no limitation and each dataArray can have different ones.

Parameters:

ix,iy,iey,iz,iex,iez: integer, None; default ix=0,iy=1,iey=2,iz=None,iex=None,iez=None

usability wins iey=2!!
if columnIndex differs in dataArrays set them individually

Notes

A list of all X in the dataArray is dataArray.X.
  • integer column index as 0,1,2,-1; should be in range
  • None means not used, e.g. iex=None -> no errors for X
  • anything else does not change the setting

Shortcut sCI
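
A minimal sketch for data stored as X, Z, Y, eY in columns 0-3 (as in the 2D fit example in fit):

data.setColumnIndex(ix=0, iz=1, iy=2, iey=3)
# or, using the shortcut
data.sCI(ix=0, iz=1, iy=2, iey=3)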

setlimit(**kwargs)

Set upper and lower limits for parameters in least square fit.

Parameters:

parname : list of 4 x (float/None), default None

Use as setlimit(parname=(lowerlimit, upperlimit, lowerhardlimit, upperhardlimit)).
  • lowerlimit, upperlimit : float, default None; soft limits: chi2 is increased with distance from the limit, a non-float resets the limit.
  • lowerhardlimit, upperhardlimit : float, default None; hard limits: values outside are set to the border and chi2 is increased strongly.

Notes

Penalty methods are a certain class of algorithms for solving constrained optimization problems. Here the penalty function increases chi2 by a factor f_constrain:
  • no limit overrun : 1
  • soft limits : 1+abs(val-limit)*10 per limit
  • hard limits : 10+abs(val-limit)*10 per limit

Examples

setlimit(D=(1,100), A=(0.2,0.8,0.0001))   # set lower=1 and upper=100 for D; A with a hard limit to avoid zero
setlimit(D=(None,100))                    # reset lower and set upper=100
setlimit(D=(1,'thisisnotfloat','',))      # set lower=1 and reset upper

shape

Tuple with shapes of dataList elements.

showattr(maxlength=75, exclude=['comment', 'lastfit'])

Show data specific attributes for all elements.

Parameters:

maxlength : integer

truncate string representation

exclude : list of str

list of attribute names to exclude from show

showlastErrPlot(title=None, modelValues=None, **kwargs)[source]

Shows last ErrPlot as created by makeErrPlot with last fit result.

Same arguments as in makeErrPlot.

Additional keyword arguments are passed as in modelValues and simulate changes in the parameters. Without parameters the last fit is retrieved.

sort(key=None, reverse=False)

Sort dataList -> INPLACE!!!

Parameters:

key : function

A function that is applied to all elements; the output is used for sorting, e.g. 'Temp' or lambda a: a.Temp. Convenience: if key is an attribute name, this attribute is used.

reverse : True, False

Normal or reverse order.

Examples

dlist.sort('q',True)
dlist.sort(key=lambda ee:ee.X.mean() )
dlist.sort(key=lambda ee:ee.temperatur )
dlist.sort(key=lambda ee:ee.Y.mean() )
dlist.sort(key=lambda ee:ee[:,0].sum() )
dlist.sort(key=lambda ee:getattr(ee,parname))
dlist.sort(key='parname')
whoHasAttributes

Lists which attribute is found in which element.

Returns:

Dictionary of {attribute name: list of indices}.

Keys are the attribute names; values are the indices of the dataList elements in which the attribute exists.
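
A minimal sketch (attribute names and indices are hypothetical):

d = data.whoHasAttributes
# e.g. {'q': [0, 1, 2, 3], 'temperature': [0, 2]} : 'temperature' exists only in elements 0 and 2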