Related: TFIDF
[1906.06847] Structured Pruning of Recurrent Neural Networks through Neuron Selection
[1802.06944] DeepThin: A Self-Compressing Library for Deep Neural Networks
[1909.11556] Reducing Transformer Depth on Demand with Structured Dropout
[1707.01662] An Embedded Deep Learning based Word Prediction
[1902.00918] MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression
[1804.09461] Structured Pruning for Efficient ConvNets via Incremental Regularization
[1707.06342] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
[1910.01740] AntMan: Sparse Low-Rank Compression to Accelerate RNN inference
[1910.06720] Distilled embedding: non-linear embedding factorization using knowledge distillation
[1907.06288v2] Learning Neural Networks with Adaptive Regularization
Related: Semantic Math
[1910.04732] Structured Pruning of Large Language Models
[1905.07659] On Selecting Stable Predictors in Time Series Models
[1906.03717] Argument Generation with Retrieval, Planning, and Realization
[1612.06061] Mixing Times and Structural Inference for Bernoulli Autoregressive Processes
[1611.06906] Multi-Scale Anisotropic Fourth-Order Diffusion Improves Ridge and Valley Localization
[1810.09519] Adversarial Risk Bounds via Function Transformation
[1807.08229] Optimal Continuous State POMDP Planning with Semantic Observations: A Variational Approach
[1811.06687] Deep Knockoffs
[1811.02506] Variational Bayes Inference in Digital Receivers