Reference

aIQ(net_efficiency, accuracy, weight)

Calculate the artificial intelligence quotient

The artificial intelligence quotient (aIQ) is a simple metric that balances neural network efficiency against task performance. Although not enforced, the accuracy argument is assumed to be a float ranging from 0.0-1.0, with higher values indicating a more accurate model.

aIQ = (net_efficiency * accuracy ** weight) ** (1/(weight+1))

The weight argument is an integer, with higher values giving more weight to the accuracy of the model.

Parameters:
  • net_efficiency ([float]) – A float ranging from 0.0-1.0
  • accuracy ([float]) – A float ranging from 0.0-1.0
  • weight ([int]) – An integer with value >=1
Raises:
  ValueError – Raised if weight <= 0
Returns:
  The artificial intelligence quotient
Return type:
  [float]

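As a quick check of the formula (assuming aIQ is exposed at the top level of the TensorState package):

    import TensorState as ts  # assumed top-level import

    # With weight=2, accuracy is emphasized over efficiency in the combined score.
    net_efficiency, accuracy = 0.6, 0.9
    print(ts.aIQ(net_efficiency, accuracy, weight=2))
    print((net_efficiency * accuracy ** 2) ** (1 / 3))  # same value, ~0.7862
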
build_efficiency_model(model, attach_to, exclude=[], method='after', storage_path=None)

Attach state capture methods to a neural network

This method takes an existing neural network model and attaches either layers or hooks to the model to capture the states of neural network layers.

For Tensorflow, only keras.Model networks can serve as inputs to this function. When a Tensorflow model is fed into this function, a new network is returned where StateCapture layers are inserted into the network at the designated locations.

For PyTorch, a neural network that implements the Module class will have hooks added to the layers. A new network is not generated, but for consistency the model is returned from this function.

Parameters:
  • model ([keras.Model, torch.nn.Module]) – A Keras model or PyTorch module
  • attach_to (list, optional) – List of strings indicating the types of layers to attach to. Names of layers can also be specified to attach StateCapture to specific layers.
  • exclude (list, optional) – List of strings indicating the names of layers to not attach StateCapture layers to. This will override the attach_to keyword, so that a Conv2D layer with the name specified by exclude will not have a StateCapture layer attached to it. Defaults to [].
  • method (str, optional) – The location to attach the StateCapture layer to. Must be one of [‘before’,’after’,’both’]. Defaults to ‘after’.
  • storage_path ([str, pathlib.Path], optional) – Path on disk to store states in zarr format. If None, states are stored in memory. Defaults to None.
Returns:
  A model of the same type as the input model
Return type:
  model

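A hypothetical sketch of probing a small Keras model is shown below. The top-level import, the layer-type strings, and the 'logits' layer name are assumptions for illustration:

    import tensorflow as tf
    import TensorState as ts  # assumed top-level import

    # Small classifier; 'logits' is an arbitrary layer name used with exclude.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(8, 3, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(10, name='logits'),
    ])

    # Attach StateCapture layers after every Conv2D and Dense layer except 'logits'.
    eff_model = ts.build_efficiency_model(model,
                                          attach_to=['Conv2D', 'Dense'],
                                          exclude=['logits'],
                                          method='after')
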
entropy(counts, alpha=1)

Calculate the Renyi entropy

The Renyi entropy is a general definition of entropy that encompasses Shannon’s entropy, Hartley (maximum) entropy, and min-entropy. It is defined as:

(1-alpha)**-1 * log2( sum(p**alpha) )

By default, this method sets alpha=1, which corresponds to Shannon’s entropy (the limit of the formula above as alpha approaches 1).

Parameters:
  • counts (numpy.ndarray) – Array of counts representing the number of times each state is observed.
  • alpha ([int,float], optional) – Entropy order. Defaults to 1.
Returns:
  The entropy of the count data.
Return type:
  [float]

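For example (assuming entropy is exposed at the top level of the TensorState package), four states observed 50, 25, 15, and 10 times give a Shannon entropy of roughly 1.74 bits:

    import numpy as np
    import TensorState as ts  # assumed top-level import

    counts = np.array([50, 25, 15, 10])
    print(ts.entropy(counts))           # Shannon entropy (alpha=1), ~1.74 bits
    print(ts.entropy(counts, alpha=0))  # Hartley entropy, log2(4) = 2 bits
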
network_efficiency(efficiencies)

Calculate the network efficiency

This method calculates the neural network efficiency, defined as the geometric mean of the efficiency values calculated for the network.

Parameters:
  • efficiencies ([list, keras.Model, torch.nn.Module]) – A list of efficiency values (floats), a keras.Model, or a torch.nn.Module
Returns:
  The network efficiency
Return type:
  [float]
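
For example (assuming network_efficiency is exposed at the top level of the TensorState package), passing a plain list of per-layer efficiencies returns their geometric mean:

    import TensorState as ts  # assumed top-level import

    # (0.5 * 0.8 * 0.9) ** (1/3) ~= 0.711
    print(ts.network_efficiency([0.5, 0.8, 0.9]))
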
reset_efficiency_model(model)

Reset all efficiency layers/hooks in a model

This method resets all efficiency layers or hooks in a model, setting state_count to 0 for each. This is useful for repeated evaluation of a model during a single session.

Parameters:
  • model ([keras.Model, torch.nn.Module]) – Model to reset
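
A hypothetical sketch of the intended reset-between-evaluations pattern, continuing the Keras example under build_efficiency_model (x_test and x_other are assumed datasets):

    # Evaluate two datasets in one session without mixing their captured states.
    eff_model.predict(x_test)
    print(ts.network_efficiency(eff_model))

    ts.reset_efficiency_model(eff_model)  # every probe's state_count -> 0

    eff_model.predict(x_other)
    print(ts.network_efficiency(eff_model))
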
class AbstractStateCapture(name, disk_path=None, **kwargs)

Bases: abc.ABC

Base class for capturing state space information in a neural network.

This class implements the infrastructure used to capture, quantize, and process state space information. For Tensorflow, a subclass is constructed to inherit these methods as a layer to be inserted into the network. For PyTorch, a subclass is constructed to implement these methods as layer hooks.

This class captures state information and quantizes layer outputs as firing or not firing based on whether the values are >0 or <=0, respectively. Although this layer is intended to be attached before or after a neural layer, it can be attached to any layer type. After recording the firing state of all neurons, the original input is returned unaltered. Thus, this layer can be thought of as a “probe”, since it does not alter the function of the network.

Layer states are stored in a zarr array, which permits compressed storage of data in memory or on disk. Only blosc compression is used to ensure fast compression/decompression speeds. By default, data is stored in memory, but data can be stored on disk to reduce memory consumption by using the disk_path keyword.

NOTE: This layer currently only works within Tensorflow Keras models and PyTorch models.

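The quantization rule can be illustrated with a small, purely conceptual NumPy sketch; this is not the library's internal implementation:

    import numpy as np

    # Two observations of a 4-neuron layer: values > 0 count as firing.
    activations = np.array([[ 0.5, -0.2, 0.0, 1.3],
                            [-1.0,  0.7, 0.2, 0.0]])
    firing = activations > 0                 # boolean firing states
    states = np.packbits(firing, axis=1)     # one compact byte array per observation
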
counts()

Layer state counts

This method returns a numpy.array of integers, where each integer is the number of times a state is observed. The identity of the states can be obtained by calling the state_ids method.

NOTE: The list only contains counts for observed states, so all values will be >0

Returns:
  Counts of state occurrences
Return type:
  [list of int]

efficiency(alpha1=1, alpha2=None)

Calculate the efficiency of the layer

This method returns the efficiency of the layer. Originally, the efficiency was defined as the ratio of Shannon’s entropy to the theoretical maximum entropy based on the number of neurons in the layer; calling this method with no arguments returns that value. The method now also permits specifying the order of the Renyi entropy, so that the efficiency is calculated as the Renyi entropy of order alpha1 divided by the maximum theoretical entropy (or by the Renyi entropy of order alpha2, if alpha2 is specified).

Parameters:
  • alpha1 ([float, int], optional) – Order of Renyi entropy in numerator
  • alpha2 ([float, int, None], optional) – Order of Renyi entropy in denominator
Returns:
  The efficiency of the layer
Return type:
  [float]

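The ratios described above can be summarized in a short conceptual sketch (not the library's internal code), where layer is assumed to be a StateCapture probe that has already observed data:

    # Default call: Shannon entropy over the theoretical maximum.
    shannon_eff = layer.entropy(alpha=1) / layer.max_entropy()  # == layer.efficiency()

    # Renyi entropy of order 2 over the theoretical maximum.
    renyi_eff = layer.entropy(alpha=2) / layer.max_entropy()    # == layer.efficiency(alpha1=2)
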
entropy(alpha=1)

Calculate the entropy of the layer

Calculate the entropy from the observed states. The alpha value is the order of entropy calculated using the formula for Renyi entropy. When alpha=1, this returns Shannon’s entropy.

Parameters:
  • alpha (int, None) – Order of entropy to calculate. If None, then use max_entropy()
Returns:
  The entropy of the layer
Return type:
  [float]

max_entropy()

Theoretical maximum entropy for the layer

The maximum entropy for the layer is equal to the number of neurons in the layer. This is different from the maximum entropy value that would be returned by the TensorState.entropy method with alpha=0, which is the log2 of the number of observed states.

Returns:
  Theoretical maximum entropy value
Return type:
  [float]

reset_states(input_shape=None)

Initialize the state space

This method initializes the layer and resets any previously held data. The zarr array is initialized in this method.

Parameters:
  • input_shape (TensorShape, tuple, list) – Shape of the input.

state_count

The total number of observed states, including repeats.

state_ids()

Identity of observed states

This method returns a list of byte arrays. Each byte array corresponds to a unique observed state, where each bit in the byte array corresponds to a neuron. The list returned by this method matches the list returned by counts, so that the value in state_ids at position i is associated with the counts value at position i.

For example, if the StateCapture layer is attached to a convolutional layer with 8 neurons, then each item in the list will be a byte array of length 1. If the byte for a state is \x00 (a null byte), then none of the neurons fired for that state.

NOTE: Only observed states are contained in the list.

Returns:
  Unique states observed by the layer
Return type:
  [list of Bytes]
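
As a hypothetical sketch, the byte arrays can be unpacked with NumPy to inspect which neurons fired in each observed state; the mapping of bit positions to neurons is an assumption here:

    import numpy as np

    # `layer` is assumed to be a StateCapture probe attached to an 8-neuron layer.
    for state, count in zip(layer.state_ids(), layer.counts()):
        bits = np.unpackbits(np.frombuffer(state, dtype=np.uint8))
        print(bits, count)  # 1 = firing neuron; state observed `count` times
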
class StateCapture(name, disk_path=None, **kwargs)

Bases: TensorState.Layers.AbstractStateCapture

Tensorflow keras layer to capture states in keras models

This class is designed to be used in a Tensorflow keras model to automate the capture of neuron states as data is passed through the network.

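A hypothetical sketch of inserting the layer by hand into a functional Keras model; the import path is an assumption, and build_efficiency_model can normally do this automatically:

    import tensorflow as tf
    from TensorState.Layers import StateCapture  # assumed import path

    inputs = tf.keras.Input(shape=(16,))
    x = tf.keras.layers.Dense(8, activation='relu')(inputs)
    x = StateCapture('dense_states')(x)  # records firing states, passes data through unchanged
    outputs = tf.keras.layers.Dense(2)(x)
    model = tf.keras.Model(inputs, outputs)
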
build(input_shape)

Build the StateCapture Keras Layer

This method initializes the layer and resets any previously held data. The zarr array is initialized in this method.

Parameters:
  • input_shape (TensorShape) – Either a TensorShape or list of TensorShape instances.

call(inputs)

Record the firing states of the inputs and return the inputs unaltered.

class StateCaptureHook(name, disk_path=None, **kwargs)

Bases: TensorState.Layers.AbstractStateCapture

StateCapture hook for PyTorch

This class implements all methods in AbstractStateCapture, but is designed to be a pre or post hook for a layer.
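
A hypothetical end-to-end sketch for PyTorch, assuming the documented functions are exposed at the top level of the TensorState package; the network, layer-type strings, and accuracy value are illustrative:

    import torch
    import TensorState as ts  # assumed top-level import

    net = torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, 3), torch.nn.ReLU(),
        torch.nn.Flatten(), torch.nn.Linear(8 * 26 * 26, 10),
    )

    # Attach StateCaptureHook pre/post hooks to the Conv2d and Linear layers.
    net = ts.build_efficiency_model(net, attach_to=['Conv2d', 'Linear'], method='after')

    with torch.no_grad():
        net(torch.randn(32, 1, 28, 28))      # hooks record layer states during the forward pass

    eff = ts.network_efficiency(net)         # geometric mean over the probed layers
    print(ts.aIQ(eff, accuracy=0.95, weight=2))  # assumed accuracy for illustration
    ts.reset_efficiency_model(net)           # clear captured states before the next run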