Title: Interpretable Neural Network Based on Generalized Additive Models
Description: Neural network framework based on Generalized Additive Models from Hastie & Tibshirani (1990, ISBN:9780412343902), which trains a separate neural network to estimate the contribution of each feature to the response variable. The networks are trained independently, leveraging the local scoring and backfitting algorithms to ensure that the Generalized Additive Model converges and is additive. The resulting neural network is a highly accurate and interpretable deep learning model, suitable for high-risk AI practices where decision-making must be based on accountable and interpretable algorithms.
Authors: Ines Ortega-Fernandez [aut, cre, cph], Marta Sestelo [aut, cph]
Maintainer: Ines Ortega-Fernandez <[email protected]>
License: MPL-2.0
Version: 1.1.1
Built: 2024-11-12 06:12:53 UTC
Source: https://github.com/inesortega/neuralgam
autoplot.neuralGAM: Advanced neuralGAM visualization with the ggplot2 library
## S3 method for class 'neuralGAM'
autoplot(object, select, xlab = NULL, ylab = NULL, ...)
object: a fitted neuralGAM object.
select: selects the term to be plotted.
xlab: a title for the x axis.
ylab: a title for the y axis.
...: other graphics parameters to pass on to plotting commands. See the details of ggplot2::geom_line for the available options (a usage sketch follows this list).
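With ngam fitted as in the examples below, and assuming the dots are forwarded to ggplot2::geom_line() as the argument description states, line aesthetics can be passed directly; a hedged sketch:

# hypothetical styling via ...; colour and linetype are standard
# ggplot2::geom_line() parameters
autoplot(ngam, select = "x1", colour = "red", linetype = "dashed")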
A ggplot object, so you can use common features from the ggplot2 package to manipulate the plot.
Ines Ortega-Fernandez, Marta Sestelo.
## Not run:
n <- 24500
seed <- 42
set.seed(seed)

x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
f1 <- x1^2
f2 <- 2 * x2
f3 <- sin(x3)
f1 <- f1 - mean(f1)
f2 <- f2 - mean(f2)
f3 <- f3 - mean(f3)
eta0 <- 2 + f1 + f2 + f3
epsilon <- rnorm(n, 0.25)
y <- eta0 + epsilon
train <- data.frame(x1, x2, x3, y)

library(neuralGAM)
ngam <- neuralGAM(y ~ s(x1) + x2 + s(x3), data = train,
                  num_units = 1024, family = "gaussian",
                  activation = "relu", learning_rate = 0.001,
                  bf_threshold = 0.001,
                  max_iter_backfitting = 10, max_iter_ls = 10,
                  seed = seed)

autoplot(ngam, select = "x1")

# add custom title
autoplot(ngam, select = "x1") + ggplot2::ggtitle("Main Title")

# add labels
autoplot(ngam, select = "x1") + ggplot2::xlab("test") + ggplot2::ylab("my y lab")

# plot multiple terms:
plots <- lapply(c("x1", "x2", "x3"), function(x) autoplot(ngam, select = x))
gridExtra::grid.arrange(grobs = plots, ncol = 3, nrow = 1)
## End(Not run)
Creates a conda environment (installing Miniconda if required) and sets up the Python requirements to run neuralGAM (TensorFlow and Keras).
Miniconda and related environments are generated in the user's cache directory given by:
tools::R_user_dir('neuralGAM', 'cache')
install_neuralGAM()
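A minimal first-time setup sketch; it assumes a working internet connection and that installing Miniconda in the user cache directory is acceptable on the machine:

# one-time per user: create the conda environment with TensorFlow and Keras
library(neuralGAM)
install_neuralGAM()

# the environment is cached here, so later sessions reuse it
tools::R_user_dir("neuralGAM", "cache")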
neuralGAM: Fit a neuralGAM model

Fits a neuralGAM model by building a neural network to attend to each covariate.
neuralGAM(
  formula,
  data,
  num_units,
  family = "gaussian",
  learning_rate = 0.001,
  activation = "relu",
  kernel_initializer = "glorot_normal",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  bias_initializer = "zeros",
  activity_regularizer = NULL,
  loss = "mse",
  w_train = NULL,
  bf_threshold = 0.001,
  ls_threshold = 0.1,
  max_iter_backfitting = 10,
  max_iter_ls = 10,
  seed = NULL,
  verbose = 1,
  ...
)
formula: an object of class "formula": a description of the model to be fitted. Smooth terms can be added using s() (e.g. y ~ s(x1) + x2).
data: a data frame containing the model response variable and the covariates required by the formula. Additional terms not present in the formula are ignored.
num_units: defines the architecture of each neural network. If a scalar value is provided, each network has a single hidden layer with that number of units. If a vector is provided, each network is multi-layer, with each element of the vector giving the number of units in the corresponding hidden layer (see the sketch after this argument list).
family: a family object specifying the distribution and link to use for fitting. Defaults to "gaussian".
learning_rate: learning rate for the neural network optimizer.
activation: activation function of the neural network. Defaults to "relu".
kernel_initializer: kernel initializer for the Dense layers. Defaults to the Xavier initializer ("glorot_normal").
kernel_regularizer: optional regularizer function applied to the kernel weights matrix.
bias_regularizer: optional regularizer function applied to the bias vector.
bias_initializer: optional initializer for the bias vector. Defaults to "zeros".
activity_regularizer: optional regularizer function applied to the output of the layer.
loss: loss function to use during neural network training. Defaults to the mean squared error ("mse").
w_train: optional sample weights.
bf_threshold: convergence criterion of the backfitting algorithm. Defaults to 0.001.
ls_threshold: convergence criterion of the local scoring algorithm. Defaults to 0.1.
max_iter_backfitting: an integer with the maximum number of iterations of the backfitting algorithm. Defaults to 10.
max_iter_ls: an integer with the maximum number of iterations of the local scoring algorithm. Defaults to 10.
seed: a positive integer specifying the random number generator seed, for algorithms that depend on randomization.
verbose: verbosity mode (0 = silent, 1 = print messages). Defaults to 1.
...: additional parameters for the Adam optimizer (see ?keras::optimizer_adam).
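To make num_units concrete, a minimal sketch contrasting the scalar and vector forms; the toy data below is illustrative only and assumes install_neuralGAM() has already been run:

library(neuralGAM)
set.seed(1)
train <- data.frame(x1 = runif(500, -2.5, 2.5))
train$y <- train$x1^2 + rnorm(500, sd = 0.25)

# scalar: each feature network has one hidden layer with 64 units
ngam_shallow <- neuralGAM(y ~ s(x1), data = train, num_units = 64)

# vector: each feature network has three hidden layers (128, 64, 32 units)
ngam_deep <- neuralGAM(y ~ s(x1), data = train, num_units = c(128, 64, 32))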
The function builds one neural network to attend to each feature in x, using the backfitting and local scoring algorithms to fit a weighted additive model with neural networks as function approximators. The adjustment of the dependent variable and the weights is determined by the distribution of the response y, set through the family parameter. A conceptual sketch of the backfitting loop follows.
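For intuition, here is a conceptual sketch of the backfitting loop for the Gaussian case, with stats::smooth.spline() standing in for the per-feature neural networks; this illustrates the algorithm, not the package's internal implementation:

# backfitting: iteratively re-estimate each centered partial effect f_j
# on the partial residuals left by the other effects
backfit <- function(X, y, tol = 1e-3, max_iter = 10) {
  p <- ncol(X)
  f <- matrix(0, nrow(X), p)  # one partial effect per feature
  alpha <- mean(y)            # model intercept
  for (it in seq_len(max_iter)) {
    f_old <- f
    for (j in seq_len(p)) {
      # partial residual: remove intercept and all other effects
      r <- y - alpha - rowSums(f[, -j, drop = FALSE])
      fit <- smooth.spline(X[, j], r)
      f[, j] <- predict(fit, X[, j])$y
      f[, j] <- f[, j] - mean(f[, j])  # center for identifiability
    }
    if (max(abs(f - f_old)) < tol) break  # converged
  }
  list(alpha = alpha, f = f)
}

set.seed(1)
X <- cbind(x1 = runif(500, -2.5, 2.5), x2 = runif(500, -2.5, 2.5))
y <- X[, "x1"]^2 + 2 * X[, "x2"] + rnorm(500, sd = 0.25)
fit <- backfit(X, y)

In neuralGAM, each smoother above is replaced by a neural network, and for non-Gaussian families the backfitting loop is wrapped in local scoring iterations that adjust the working response and weights according to the family.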
A trained neuralGAM object. Use summary(ngam) to see details.
Ines Ortega-Fernandez, Marta Sestelo.
Hastie, T., & Tibshirani, R. (1990). Generalized Additive Models. London: Chapman and Hall.
## Not run:
n <- 24500
seed <- 42
set.seed(seed)

x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
f1 <- x1^2
f2 <- 2 * x2
f3 <- sin(x3)
f1 <- f1 - mean(f1)
f2 <- f2 - mean(f2)
f3 <- f3 - mean(f3)
eta0 <- 2 + f1 + f2 + f3
epsilon <- rnorm(n, 0.25)
y <- eta0 + epsilon
train <- data.frame(x1, x2, x3, y)

library(neuralGAM)
ngam <- neuralGAM(y ~ s(x1) + x2 + s(x3), data = train,
                  num_units = 1024, family = "gaussian",
                  activation = "relu", learning_rate = 0.001,
                  bf_threshold = 0.001,
                  max_iter_backfitting = 10, max_iter_ls = 10,
                  seed = seed)
ngam
## End(Not run)
plot.neuralGAM: Visualization of a neuralGAM object with base graphics

Plots the partial effects learned by the neuralGAM object.
## S3 method for class 'neuralGAM'
plot(x, select = NULL, xlab = NULL, ylab = NULL, ...)
x: a fitted neuralGAM object.
select: allows plotting a set of selected terms; e.g., to plot only the first term, use select = "X0".
xlab: if supplied, this value will be used as the x axis label.
ylab: if supplied, this value will be used as the y axis label.
...: other graphics parameters to pass on to plotting commands.
Returns the partial effects plot.
Ines Ortega-Fernandez, Marta Sestelo.
## Not run:
n <- 24500
seed <- 42
set.seed(seed)

x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
f1 <- x1^2
f2 <- 2 * x2
f3 <- sin(x3)
f1 <- f1 - mean(f1)
f2 <- f2 - mean(f2)
f3 <- f3 - mean(f3)
eta0 <- 2 + f1 + f2 + f3
epsilon <- rnorm(n, 0.25)
y <- eta0 + epsilon
train <- data.frame(x1, x2, x3, y)

library(neuralGAM)
ngam <- neuralGAM(y ~ s(x1) + x2 + s(x3), data = train,
                  num_units = 1024, family = "gaussian",
                  activation = "relu", learning_rate = 0.001,
                  bf_threshold = 0.001,
                  max_iter_backfitting = 10, max_iter_ls = 10,
                  seed = seed)

plot(ngam)
## End(Not run)
predict.neuralGAM: Produce predictions from a fitted neuralGAM object

Takes a fitted neuralGAM object produced by neuralGAM() and produces predictions given a new set of values for the model covariates.
## S3 method for class 'neuralGAM'
predict(object, newdata = NULL, type = "link", terms = NULL, verbose = 1, ...)
object: a fitted neuralGAM object.
newdata: a data frame or list containing the values of the covariates at which predictions are required. If not provided, the function returns predictions for the original training data.
type: when type = "link" (the default), the linear predictor is returned. When type = "terms", each component of the linear predictor is returned separately. When type = "response", predictions on the scale of the response are returned.
terms: if type = "terms", the terms whose components should be returned. Defaults to all terms.
verbose: verbosity mode (0 = silent, 1 = print messages). Defaults to 1.
...: other options.
Predicted values according to the type parameter.
## Not run:
n <- 24500
seed <- 42
set.seed(seed)

x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
f1 <- x1^2
f2 <- 2 * x2
f3 <- sin(x3)
f1 <- f1 - mean(f1)
f2 <- f2 - mean(f2)
f3 <- f3 - mean(f3)
eta0 <- 2 + f1 + f2 + f3
epsilon <- rnorm(n, 0.25)
y <- eta0 + epsilon
train <- data.frame(x1, x2, x3, y)

library(neuralGAM)
ngam <- neuralGAM(y ~ s(x1) + x2 + s(x3), data = train,
                  num_units = 1024, family = "gaussian",
                  activation = "relu", learning_rate = 0.001,
                  bf_threshold = 0.001,
                  max_iter_backfitting = 10, max_iter_ls = 10,
                  seed = seed)

n <- 5000
x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
test <- data.frame(x1, x2, x3)

# Obtain linear predictor
eta <- predict(ngam, test, type = "link")

# Obtain predicted response
yhat <- predict(ngam, test, type = "response")

# Obtain each component of the linear predictor
terms <- predict(ngam, test, type = "terms")

# Obtain only certain terms:
terms <- predict(ngam, test, type = "terms", terms = c("x1", "x2"))
## End(Not run)
print.neuralGAM: neuralGAM summary

Default print statement for a neuralGAM object.
## S3 method for class 'neuralGAM'
print(x, ...)
x: a fitted neuralGAM object.
...: other arguments.
The printed output of the object:
Distribution family
Formula
Intercept value
Mean Squared Error (MSE)
Training sample size
Ines Ortega-Fernandez, Marta Sestelo.
## Not run:
n <- 24500
seed <- 42
set.seed(seed)

x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
f1 <- x1^2
f2 <- 2 * x2
f3 <- sin(x3)
f1 <- f1 - mean(f1)
f2 <- f2 - mean(f2)
f3 <- f3 - mean(f3)
eta0 <- 2 + f1 + f2 + f3
epsilon <- rnorm(n, 0.25)
y <- eta0 + epsilon
train <- data.frame(x1, x2, x3, y)

library(neuralGAM)
ngam <- neuralGAM(y ~ s(x1) + x2 + s(x3), data = train,
                  num_units = 1024, family = "gaussian",
                  activation = "relu", learning_rate = 0.001,
                  bf_threshold = 0.001,
                  max_iter_backfitting = 10, max_iter_ls = 10,
                  seed = seed)

print(ngam)
## End(Not run)
summary.neuralGAM: neuralGAM summary

Summary of a fitted neuralGAM object. Prints the distribution family, model formula, intercept value, and sample size, as well as the neural network architecture and training history.
## S3 method for class 'neuralGAM'
summary(object, ...)
object: a fitted neuralGAM object.
...: other options.
The summary of the object:
Distribution family
Formula
Intercept value
Mean Squared Error (MSE)
Training sample size
Training History
Model Architecture
Ines Ortega-Fernandez, Marta Sestelo.
## Not run:
n <- 24500
seed <- 42
set.seed(seed)

x1 <- runif(n, -2.5, 2.5)
x2 <- runif(n, -2.5, 2.5)
x3 <- runif(n, -2.5, 2.5)
f1 <- x1^2
f2 <- 2 * x2
f3 <- sin(x3)
f1 <- f1 - mean(f1)
f2 <- f2 - mean(f2)
f3 <- f3 - mean(f3)
eta0 <- 2 + f1 + f2 + f3
epsilon <- rnorm(n, 0.25)
y <- eta0 + epsilon
train <- data.frame(x1, x2, x3, y)

library(neuralGAM)
ngam <- neuralGAM(y ~ s(x1) + x2 + s(x3), data = train,
                  num_units = 1024, family = "gaussian",
                  activation = "relu", learning_rate = 0.001,
                  bf_threshold = 0.001,
                  max_iter_backfitting = 10, max_iter_ls = 10,
                  seed = seed)

summary(ngam)
## End(Not run)