This searches for the best hyperparameters (learning rate, number of training cycles, loss function, optimization method, and layers) that result in the most accurate model. It will run 'do when a trial finishes' with the results of each experiment, and after 'number of experiments' it will run 'do with parameters' with the best parameters found. The parameters can then be passed along to the "Create and train a neural net named ... base upon ..." block.
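As a plain-Python sketch of what such a search does (the trial function, parameter names, and score here are invented for illustration and are not the block's actual implementation):

```python
import random

# Toy "experiment": pretend accuracy peaks at a learning rate of 0.1
# and that more training cycles help up to a point. Purely illustrative.
def run_trial(params):
    lr_penalty = abs(params["learning rate"] - 0.1)
    cycle_bonus = min(params["training cycles"], 200) / 200
    return max(0.0, 1.0 - lr_penalty) * cycle_bonus  # stand-in "accuracy"

search_space = {
    "learning rate": [0.001, 0.01, 0.1, 0.5],
    "training cycles": [50, 100, 200],
    "optimization method": ["Stochastic Gradient Descent",
                            "Adaptive Moment Estimation"],
}

random.seed(0)
best_params, best_score = None, -1.0
for _ in range(10):                      # 'number of experiments'
    trial = {k: random.choice(v) for k, v in search_space.items()}
    score = run_trial(trial)             # 'do when a trial finishes'
    if score > best_score:
        best_params, best_score = trial, score

print(best_params)                       # passed to 'do with parameters'
```

Each trial picks one value per hyperparameter at random; only the best-scoring combination survives.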
The named model must already exist and its training data (and optionally its validation data) already made available.

Loads training or validation data previously saved using the "tensorflow.js" page opened using the "Open support panel ..." block. The data kind can be 'training' or 'validation'.

Reports a list of 'n' numbers between 1 and 'maximum'. Multiplying and dividing by 1000 produces numbers with 3 decimal places.

Train the model using 'n' learning steps. Each step will change the weights by 'learning rate'. If it is too big, the learning will fail. If it is too small, it will go very slowly. So that the learning doesn't rely upon the order of the data, it can be 'shuffled' each step. If validation data hasn't been created, then an alternative is to use a 'validation split' fraction of the data for validation.

Train the model using 'n' learning steps.
There is a full-featured version of this block.

Creates a neural network with a 'name' that is used in other blocks for training and prediction. 'Layers' is a list of positive numbers that determine the size of each layer. 'Input size' should be a number or a list of numbers that describes what the input dimensions are. You don't need to provide this if you have already sent or loaded training data.
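The create-and-train idea can be illustrated with a toy model in plain Python; this is only a sketch of gradient-descent training and of why the learning rate matters, not the blocks' actual implementation:

```python
# Fit y = w * x to data generated with w = 2, using gradient descent
# on mean squared error. A stand-in for the create/train blocks.
data = [(x, 2.0 * x) for x in range(1, 6)]

def train(learning_rate, steps):
    w = 0.0
    for _ in range(steps):                      # 'n' learning steps
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad               # step scaled by learning rate
    return w

w_good = train(0.01, 200)   # small enough: converges toward 2.0
w_bad = train(1.0, 200)     # too big: each step overshoots and diverges
```

With a small learning rate the weight settles near the true value 2.0; with a rate that is too large, each step overshoots further and the weight blows up, which is the failure mode the help text warns about.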
There is another version of this block with more inputs.

Creates a neural network with a 'name' that is used in other blocks for training and prediction. 'Layers' is a list of positive numbers that determine the size of each layer. The 'optimizer' is the name of the method that will be used during training. The 'loss function' is used to measure the difference between predictions and outputs during training. 'Input size' should be a number or a list of numbers that describes what the input dimensions are. You don't need to provide this if you have already sent or loaded training data. The optimizer can be one of:
Stochastic Gradient Descent
Adaptive Stochastic Gradient Descent
Adaptive Learning Rate Gradient Descent
Adaptive Moment Estimation
Adaptive Moment Estimation Max
Momentum
Root Mean Squared Prop
The loss function can be one of:
Absolute Distance
Compute Weighted Loss
Cosine Distance
Hinge Loss
Huber Loss
Log Loss
Mean Squared Error
Sigmoid Cross Entropy
Softmax Cross Entropy

Ask the trained model to make predictions for each value in 'inputs'. Can also average the predictions from a list of models.

This will replace the named model with the best model found by the 'Search for good neural net model ...' command.

Open an interface page for different machine learning models. The page can be one of:
training using camera
training using microphone
posenet
tensorflow.js

Loads a model previously saved using the interface on the 'tensorflow.js' page, which can be opened using the 'Open support panel ...' block.

Sends either training or validation data to the support panel. The data kind can be 'training', 'validation', or 'test'.

Will display 'message' in a dialog box with 'title'. The user needs to click 'OK' to remove it.
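Two of the loss functions listed earlier (Mean Squared Error and Absolute Distance/Difference) are simple enough to write out directly; this plain-Python sketch is illustrative only:

```python
# Mean squared error: average of squared prediction errors.
def mean_squared_error(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# Absolute difference: average of absolute prediction errors.
def absolute_difference(predictions, targets):
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)

preds, targets = [1.0, 2.0, 4.0], [1.0, 2.0, 2.0]
mse = mean_squared_error(preds, targets)      # (0 + 0 + 4) / 3
mad = absolute_difference(preds, targets)     # (0 + 0 + 2) / 3
```

Squaring punishes large errors much more than small ones, which is why the two losses can rank the same predictions differently.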
Can suppress duplicate messages.

Creates a neural network with a 'name' that is used in other blocks for training and prediction. 'Layers' is a list of positive numbers that determine the size of each layer. The 'optimizer' is the name of the method that will be used during training. The 'loss function' is used to measure the difference between predictions and outputs during training. The 'activation function' is applied after the weighted inputs are summed. 'Dropout rate' adds some noise to training that helps prevent overfitting to the training data. 'Input size' should be a number or a list of numbers that describes what the input dimensions are. You don't need to provide this if you have already sent or loaded training data. The optimizer can be one of:
Stochastic Gradient Descent
Adaptive Stochastic Gradient Descent
Adaptive Learning Rate Gradient Descent
Adaptive Moment Estimation
Adaptive Moment Estimation Max
Momentum
Root Mean Squared Prop
The loss function can be one of:
Absolute Difference
Log Loss
Mean Squared Error
The activation function can be one of:
elu
hardSigmoid
linear
relu
relu6
selu
sigmoid
softmax
softplus
softsign
tanh
swish
mish

Ask the trained model to predict what the output should be for the 'input'. There is also a block for computing many predictions all at once. 'Model names' can be a list of names or a single name. Categories should only be provided if the prediction is for labelling the input.

This block customises what is seen if the 'Open support panel tensorflow.js' block is run. If 'display graphs' is true, then a graph of the loss over each iteration is displayed. You can control its dimensions and the range of values. If 'display layers' is true, then statistics about each layer are displayed. If 'display confusion matrices' is true and the outputs are labels (not numbers), then confusion matrices are displayed.

Reports the value of the 'key' in a table that is a list of pairs of keys and values.

This searches for the best hyperparameters (learning rate, number of training cycles, loss function, optimization method, and layers) that result in the most accurate model. It will run 'do when a trial finishes' with the results of each experiment. Each experiment will average the results of 'number of samples' runs with the same hyperparameters. After 'number of experiments' it will run 'do with parameters' with the best parameters found. The parameters can then be passed along to the "Create and train a neural net named ... base upon ..." block.
The named model must already exist and its training data (and optionally its validation data) already made available. It scores each experiment by adding the weighted values of the loss, accuracy, time to train, number of parameters, and degree of variability in the outcome scores.

No longer needed in Snap! 7+ but retained to avoid errors from blocks that expect it.

Train the model using 'n' learning steps. If there is no progress for 'stop after' cycles, it will stop early. Each step will change the weights by 'learning rate'. If it is too big, the learning will fail. If it is too small, it will go very slowly. So that the learning doesn't rely upon the order of the data, it can be 'shuffled' each step. If validation data hasn't been created, then an alternative is to use a 'validation split' fraction of the data for validation. The training will process 'batch size' data items all at once.

Sends either training or validation data to be used by training blocks. Note that if the outputs are labels of categories and you are also providing validation data, provide the same list of labels. All data whose output is not one of the labels is automatically removed. The data kind can be one of:
training
validation
test

Normalizing involves computing the mean and variance and then subtracting the mean from all data items and dividing by the square root of the variance (i.e. the standard deviation). So the mean of the normalized data should be very near zero and the variance one.
Works with lists, lists of lists, lists of lists of lists, etc.

Reports normalized data, mean, and variance so the result can be 'unnormalized'.

Undoes normalization by multiplying by the standard deviation and adding the mean.

Normalizes the data by subtracting the mean and dividing by the standard deviation.
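A minimal plain-Python sketch of normalizing a flat list and undoing it (the blocks also handle nested lists; function names here are illustrative):

```python
import math

# Normalize by subtracting the mean and dividing by the standard deviation,
# reporting mean and variance so the result can be unnormalized later.
def normalize(data):
    mean = sum(data) / len(data)
    variance = sum((x - mean) ** 2 for x in data) / len(data)
    std = math.sqrt(variance)
    return [(x - mean) / std for x in data], mean, variance

# Undo normalization by multiplying by the standard deviation
# and adding the mean back.
def unnormalize(data, mean, variance):
    std = math.sqrt(variance)
    return [x * std + mean for x in data]

normalized, mean, variance = normalize([2.0, 4.0, 6.0, 8.0])
restored = unnormalize(normalized, mean, variance)
```

After normalizing, the data's mean is essentially zero and its variance one; unnormalizing with the saved mean and variance recovers the original values.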