Title: Robust Variable Selection with Exponential Squared Loss
Description: Computationally efficient tool for performing variable selection and obtaining robust estimates, which implements the robust variable selection procedure proposed by Wang, X., Jiang, Y., Huang, M., and Zhang, H. (2013) <doi:10.1080/01621459.2013.766613>. Users can enjoy the near optimal, consistent, and oracle properties of the procedures.
Authors: Jin Zhu [cre, aut]
Maintainer: Jin Zhu <[email protected]>
License: GPL-3
Version: 0.1.0
Built: 2025-02-22 05:40:29 UTC
Source: https://github.com/cran/robustlm
This function provides estimated coefficients from a fitted "robustlm" object.
## S3 method for class 'robustlm'
coef(object, ...)
object | An "robustlm" object.
... | Other arguments.
A list consisting of the intercept and regression coefficients of the fitted model.
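A minimal usage sketch (the simulated data and object names below are illustrative assumptions, not part of the package):

library(MASS)
library(robustlm)
set.seed(1)
X <- mvrnorm(100, rep(0, 5), diag(5))               # simulated design matrix
Y <- drop(X %*% c(1, -1, 0.5, 0, 0)) + rnorm(100)   # sparse linear signal plus noise
fit <- robustlm(X, Y)
coef(fit)  # list containing the intercept and the regression coefficients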
Returns predictions from a fitted "robustlm" object.
## S3 method for class 'robustlm'
predict(object, newx, ...)
object | Output from the robustlm function, i.e., a fitted "robustlm" object.
newx | New data used for prediction.
... | Additional arguments affecting the predictions produced.
The predicted responses.
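A brief prediction sketch on held-out rows (the train/test split and names are illustrative assumptions):

library(MASS)
library(robustlm)
set.seed(2)
X <- mvrnorm(120, rep(0, 5), diag(5))
Y <- drop(X %*% c(2, -1, 0, 0, 1)) + rnorm(120)
fit <- robustlm(X[1:100, ], Y[1:100])        # fit on the first 100 observations
pred <- predict(fit, newx = X[101:120, ])    # predict the remaining 20 responses
mean((pred - Y[101:120])^2)                  # out-of-sample mean squared error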
Print the primary elements of the "robustlm" object.
## S3 method for class 'robustlm'
print(x, ...)
x | A "robustlm" object.
... | Additional print arguments.
Print a "robustlm" object.
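For illustration (simulated data; the names are assumptions):

library(MASS)
library(robustlm)
set.seed(3)
X <- mvrnorm(100, rep(0, 4), diag(4))
Y <- drop(X %*% c(1, 0, -1, 0)) + rnorm(100)
fit <- robustlm(X, Y)
print(fit)  # shows the primary elements of the fitted "robustlm" object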
robustlm carries out robust variable selection with exponential squared loss.
A block coordinate gradient descent algorithm is used to minimize the loss function.
robustlm(x, y, gamma = NULL, weight = NULL, intercept = TRUE)
x | Input matrix, of dimension nobs * nvars; each row is an observation vector. Should be in matrix format.
y | Response variable. Should be a numerical vector or a matrix with a single column.
gamma | Tuning parameter in the loss function, which controls the degree of robustness and efficiency of the regression estimators. The loss function is defined as $\phi_{\gamma}(t) = 1 - \exp(-t^2/\gamma)$. When gamma is large, the loss behaves like the squared error loss, so the estimator is close to the least squares estimator (efficient but less robust); a small gamma bounds the influence of outliers and yields a more robust estimator (see the short sketch after this table). Chosen adaptively from the data by default.
weight | Weight in the adaptive LASSO penalty. The penalty is given by $n \sum_{j=1}^{d} \lambda_{nj} |\beta_j|$, with the regularization parameters $\lambda_{nj}$ given by weight. Chosen adaptively by a BIC-type criterion by default (see Details).
intercept | Should an intercept be fitted (TRUE) or set to zero (FALSE)?
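A short sketch of the exponential squared loss itself, written out only to illustrate how gamma trades robustness against efficiency (phi is a hypothetical helper, not a function exported by the package):

phi <- function(t, gamma) 1 - exp(-t^2 / gamma)  # exponential squared loss
t <- c(0.5, 2, 10)      # small, moderate, and outlying residuals
phi(t, gamma = 100)     # large gamma: approximately t^2 / gamma, close to least squares behaviour
phi(t, gamma = 1)       # small gamma: loss is bounded by 1, so outliers have limited influence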
robustlm solves the following optimization problem to obtain robust estimators of the regression coefficients:

$$\arg\min_{\beta} \sum_{i=1}^{n} \left[ 1 - \exp\left( -\frac{(y_i - x_i^{\top}\beta)^2}{\gamma_n} \right) \right] + n \sum_{j=1}^{d} p_{\lambda_{nj}}(|\beta_j|),$$

where $p_{\lambda_{nj}}(|\beta_j|) = \lambda_{nj} |\beta_j|$ is the adaptive LASSO penalty. A block coordinate gradient descent algorithm is used to efficiently solve the optimization problem.
The tuning parameter gamma and the regularization parameter weight are chosen adaptively by default, although they can also be supplied by the user. Specifically, the default weight is selected by a BIC-type criterion; see Wang et al. (2013) for its exact form.
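For concreteness, a sketch of the penalized objective described above, ignoring the intercept for brevity; this is only an illustration of the criterion, not the package's internal block coordinate gradient descent solver:

objective <- function(beta, x, y, gamma, lambda) {
  resid <- y - drop(x %*% beta)
  # exponential squared loss plus adaptive LASSO penalty
  sum(1 - exp(-resid^2 / gamma)) + nrow(x) * sum(lambda * abs(beta))
}
set.seed(4)
x <- matrix(rnorm(50 * 3), 50, 3)
y <- drop(x %*% c(1, -1, 0)) + rnorm(50)
objective(c(1, -1, 0), x, y, gamma = 1, lambda = rep(0.1, 3))  # objective at the true coefficients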
An object with S3 class "robustlm", which is a list with the following components:
beta | The regression coefficients.
alpha | The intercept.
gamma | The tuning parameter used in the loss.
weight | The regularization parameters.
loss | Value of the loss function calculated on the training set.
Borui Tang, Jin Zhu, Xueqin Wang
Xueqin Wang, Yunlu Jiang, Mian Huang & Heping Zhang (2013) Robust Variable Selection With Exponential Squared Loss, Journal of the American Statistical Association, 108:502, 632-643, DOI: 10.1080/01621459.2013.766613
Tseng, P., Yun, S. A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117, 387-423 (2009). https://doi.org/10.1007/s10107-007-0170-0
library(MASS)
N <- 100
p <- 8
rho <- 0.2
mu <- rep(0, p)
Sigma <- rho * outer(rep(1, p), rep(1, p)) + (1 - rho) * diag(p)  # equicorrelated design
ind <- 1:p
beta <- (-1)^ind * exp(-2 * (ind - 1) / 20)   # true coefficients with alternating signs
X <- mvrnorm(N, mu, Sigma)
Z <- rnorm(N, 0, 1)
k <- sqrt(var(X %*% beta) / (3 * var(Z)))     # scale the noise to a fixed signal-to-noise ratio
Y <- X %*% beta + drop(k) * Z
robustlm(X, Y)