Next meeting:

Total votes: 4 (last vote was 2 years ago)

Total votes: 2 (last vote was 1 year ago)
Title:

Total votes: 2 (last vote was 1 year ago)
Author:
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 3 months ago)
Author: J. François, L. Ravera
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 5 months ago)
Author: Gerardo Aldazabal, Eduardo Andrés, Anamaría Font, Kumar Narain, Ida G. Zadeh
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 1 year ago)
Author: Claudio Andrea Manzari, Yujin Park, Benjamin R. Safdi, Inbar Savoray
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 1 year ago)
Title:
Author: David Alesini, Danilo Babusci, Paolo Beltrame, Fabio Bossi, Paolo Ciambrone, Alessandro D'Elia, Daniele Di Gioacchino, Giampiero Di Pirro et al.
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 1 year ago)
Title:

Total votes: 1 (last vote was 1 year ago)
Author: Florian Goertz, Álvaro Pastor-Gutiérrez
Discussion leader: Nobody volunteered yet

5 years ago
Title: Trans-Planckian Censorship and the Swampland
Link: https://arxiv.org/pdf/1909.11063.pdf
Description:

5 years ago
Title: Trans-Planckian Censorship and Inflationary Cosmology
Link: https://arxiv.org/pdf/1909.11106.pdf
Description:

5 years ago
Title: Black Hole Shadows, Photon Rings, and Lensing Rings
Link: https://arxiv.org/pdf/1906.00873.pdf
Description:

6 years ago
Title: Effective field theory for black holes with induced scalar charges
Link: https://arxiv.org/abs/1903.07080
Description:

6 years ago
Title: A new type of dark compact objects in massive tensor-multi-scalar theories of gravity
Link: https://arxiv.org/pdf/1901.06379.pdf
Description:

6 years ago
Title: “Why Should I Trust You?” Explaining the Predictions of Any Classifier
Link: https://arxiv.org/pdf/1602.04938.pdf
Description: Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification.
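The core move described above — fitting a simple surrogate to the black box in a weighted neighborhood of one prediction — can be sketched in a few lines. A minimal Python sketch, not the authors' implementation: the Gaussian perturbations, the exponential proximity kernel, and the ridge surrogate are illustrative assumptions (LIME proper perturbs interpretable binary features and typically fits a sparse linear model).

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(model, x, n_samples=5000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around one instance x.

    model: any fitted classifier exposing predict_proba (the black box).
    Returns per-feature weights approximating the model near x.
    """
    d = x.shape[0]
    # Sample the neighborhood of x with Gaussian perturbations.
    samples = x + np.random.normal(scale=1.0, size=(n_samples, d))
    # Weight each sample by its proximity to x (exponential kernel).
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-dists**2 / kernel_width**2)
    # The black box's class-1 probability is the regression target.
    targets = model.predict_proba(samples)[:, 1]
    # The weighted linear fit is the local, interpretable explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, targets, sample_weight=weights)
    return surrogate.coef_
```

The returned coefficients indicate which features push this one prediction up or down in the model's local behavior, which is the per-prediction trust signal the abstract argues for.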
6 years ago
Title: Dark, Cold, and Noisy: Constraining Secluded Hidden Sectors with Gravitational Waves
Link: https://arxiv.org/pdf/1811.11175.pdf
Description:

6 years ago
Title: GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo
Link: https://dcc.ligo.org/public/0156/P1800307/005/o2catalog.pdf
Description:

6 years ago
Title: Electromagnetic emission from axionic clouds and the quenching of superradiant instabilities
Link: https://arxiv.org/pdf/1811.04950.pdf
Description:

6 years ago
Title: Quantized Back-Propagation: Training Binarized Neural Networks with Quantized Gradients
Link: https://openreview.net/pdf?id=Bye10KkwG
Description: Binarized Neural Networks (BNNs) have been shown to be effective in improving network efficiency during the inference phase, after the network has been trained. However, BNNs only binarize the model parameters and activations during propagation.
We show there is no inherent difficulty in training BNNs using "Quantized Back-Propagation" (QBP), in which we also quantize the error gradients and, in the extreme case, ternarize them. To avoid significant degradation in test accuracy, we apply stochastic ternarization and increase the number of filter maps in each convolution layer. Using QBP has the potential to significantly improve execution efficiency (e.g., reduce the dynamic memory footprint and computational energy, and speed up the training process), even after such an increase in network size.
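The stochastic ternarization mentioned above reduces each error-gradient entry to one of three values while staying unbiased in expectation. A minimal NumPy sketch of that single step; the per-tensor max-magnitude scale is an assumption for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def stochastic_ternarize(grad, rng=None):
    """Stochastically map a gradient tensor onto {-s, 0, +s}.

    Each entry keeps its sign and survives with probability |g|/s,
    so the expected value of the output equals the input gradient.
    """
    rng = rng or np.random.default_rng()
    s = np.max(np.abs(grad))           # per-tensor scale (assumed choice)
    if s == 0:
        return np.zeros_like(grad)
    p = np.abs(grad) / s               # keep-probability per entry
    mask = rng.random(grad.shape) < p  # stochastic rounding to 0 or +-s
    return np.sign(grad) * s * mask
```

In a QBP-style backward pass, such a ternary tensor would stand in for the dense error gradient, trading extra variance (offset in the paper by widening the convolution layers) for much cheaper arithmetic.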