Video interview from the Heidelberg Laureate Forum. I tell the story of my life, from larval stage. Interviewed by Marc Pachter (former director of the National Portrait Gallery) about my childhood, parents, schools, mentors, studies, and how I became a scientist. [youtu.be]

[Project] Visualize papers from arXiv as a graph[reddit]/r/MachineLearning

Hello all,

I am often in a situation where I read a couple of very similar papers and want to understand how they are related. To make that easy, I worked on a project that builds citation maps from arXiv links.
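The core of such a project can be quite small. A minimal sketch (hypothetical names, plain Python, no graph library): extract new-style arXiv identifiers from each paper's reference text and build a directed citation graph as an adjacency dict:

```python
import re
from collections import defaultdict

# Regex for new-style arXiv identifiers such as 2001.07791, optionally versioned.
ARXIV_ID = re.compile(r"\b(\d{4}\.\d{4,5})(?:v\d+)?\b")

def build_citation_graph(papers):
    """papers: dict mapping an arXiv id to the raw text of its reference list.
    Returns a directed adjacency dict: paper id -> set of cited arXiv ids."""
    graph = defaultdict(set)
    for paper_id, references_text in papers.items():
        for match in ARXIV_ID.finditer(references_text):
            cited = match.group(1)
            if cited != paper_id:  # ignore self-references
                graph[paper_id].add(cited)
    return dict(graph)

# Toy input: two papers whose reference sections mention arXiv links.
papers = {
    "2001.07791": "See arXiv:1711.10925v4 and https://arxiv.org/abs/1603.04467 ...",
    "2001.08055": "Builds on arXiv:2001.07791 ...",
}
graph = build_citation_graph(papers)
```

From there, any graph library or visualizer can render the adjacency dict; the hard part in practice is getting reference text out of the PDFs, which this sketch assumes is already done.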

[R] [P] Resources to learn to implement Knowledge Graph[reddit]/r/MachineLearning

Could someone please suggest resources that will give me a direction to implementing Knowledge Graphs?

I have understood what knowledge graphs are and their subproblems (Entity Extraction, Entity Linking, Entity Resolution, Coreference Resolution, etc.). However, I don't know how to solve these subproblems (except Entity Extraction) or how to connect all these solutions to build the knowledge graph.

I would appreciate it if someone could guide me on how to implement this or point me to resources (blogs, videos, research papers, etc.) that could get me started.
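To make the pipeline concrete, here is a deliberately toy sketch (all names and the alias table are made up): dictionary-based entity extraction, entity resolution via an alias table mapping surface forms to canonical ids, and pattern-based relation extraction, emitting (subject, relation, object) triples, i.e. knowledge-graph edges. A real system would replace each stage with a learned model:

```python
# Entity extraction here is a toy dictionary lookup; entity resolution is a
# simple alias table. Real systems would use an NER model and an entity linker.
ALIASES = {"Marie Curie": "Q7186", "Curie": "Q7186", "Sorbonne": "Q209842"}

def extract_entities(sentence):
    """Return (surface form, canonical id) pairs found in the sentence."""
    return [(surface, canonical) for surface, canonical in ALIASES.items()
            if surface in sentence]

def extract_triples(sentence, relation_patterns):
    """Very naive relation extraction: if a pattern occurs between two
    entity mentions, emit a (subject, relation, object) triple."""
    triples = set()  # set: two aliases of one entity yield the same triple
    ents = extract_entities(sentence)
    for s_surf, s_id in ents:
        for o_surf, o_id in ents:
            if s_id == o_id:
                continue
            for pattern, relation in relation_patterns.items():
                if f"{s_surf} {pattern} {o_surf}" in sentence:
                    triples.add((s_id, relation, o_id))
    return triples

patterns = {"taught at": "EMPLOYER"}
kg = extract_triples("Marie Curie taught at Sorbonne.", patterns)
```

The point of the sketch is the shape of the pipeline: mentions are resolved to canonical ids before relation extraction, so "Curie" and "Marie Curie" produce one edge, not two. That is the entity-resolution step doing its job.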

2020-01-21: Deep Depth Prior for Multi-View Stereo https://arxiv.org/abs/2001.07791v1 We leverage the recently proposed idea of using a neural network as a prior for natural color images, and introduce three new loss terms that reconstruct a clean and complete depth image

It was recently shown that the structure of convolutional neural networks
induces a strong prior favoring natural color images, a phenomenon referred to
as a deep image prior (DIP), which can be an effective regularizer in inverse
problems such as image denoising, inpainting, etc. In this paper, we investigate
a similar idea for depth images, which we call a deep depth prior.
Specifically, given a color image and a noisy and incomplete target depth map
from the same viewpoint, we optimize a randomly initialized CNN model to
reconstruct an RGB-D image where the depth channel gets restored by virtue of
using the network structure as a prior. We propose using deep depth priors for
refining and inpainting noisy depth maps within a multi-view stereo pipeline.
We optimize the network parameters to minimize two losses: 1) an RGB-D
reconstruction loss based on the noisy depth map and 2) a multi-view
photoconsistency-based loss, which is computed using images from a
geometrically calibrated camera from nearby viewpoints. Our quantitative and
qualitative evaluation shows that our refined depth maps are more accurate and
complete and, after fusion, produce dense 3D models of higher quality.
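As an illustration of the first loss, here is a minimal NumPy sketch of a masked RGB-D reconstruction term (an assumption about the loss's shape, not the paper's code): the color channels are penalized everywhere, while the depth channel is penalized only where the noisy input depth actually has a measurement:

```python
import numpy as np

def rgbd_reconstruction_loss(pred_rgbd, target_rgb, target_depth, depth_valid):
    """Masked L2 loss for an RGB-D reconstruction term.
    pred_rgbd:    (H, W, 4) network output, channels = R, G, B, depth
    target_rgb:   (H, W, 3) input color image
    target_depth: (H, W)    noisy, incomplete depth map
    depth_valid:  (H, W)    boolean mask, True where a depth measurement exists
    """
    rgb_loss = np.mean((pred_rgbd[..., :3] - target_rgb) ** 2)
    diff = pred_rgbd[..., 3] - target_depth
    # Average only over valid depth pixels; invalid pixels are unconstrained,
    # so the network structure (the prior) fills them in.
    depth_loss = np.sum((diff ** 2) * depth_valid) / max(depth_valid.sum(), 1)
    return rgb_loss + depth_loss

# Toy check: a prediction matching the targets has zero loss.
H, W = 4, 4
rgb = np.zeros((H, W, 3)); depth = np.ones((H, W))
mask = np.zeros((H, W), dtype=bool); mask[0, 0] = True
pred = np.concatenate([rgb, depth[..., None]], axis=-1)
loss = rgbd_reconstruction_loss(pred, rgb, depth, mask)
```

In the DIP setting, this scalar would be backpropagated through a randomly initialized CNN whose input is fixed noise; the paper's second, photoconsistency loss additionally warps nearby calibrated views into this one.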

2020-01-17: Up to two billion times acceleration of scientific simulations with deep neural architecture search https://arxiv.org/abs/2001.08055v1 The combined update steps from equations (??) and (??), and the use of a ranking function in assigning rewards, make DENSE a robust algorithm to simultaneously learn the weights and find the right architecture for a given problem

Computer simulations are invaluable tools for scientific discovery. However,
accurate simulations are often slow to execute, which limits their
applicability to extensive parameter exploration, large-scale data analysis,
and uncertainty quantification. A promising route to accelerate simulations by
building fast emulators with machine learning requires large training datasets,
which can be prohibitively expensive to obtain with slow simulations. Here we
present a method based on neural architecture search to build accurate
emulators even with a limited number of training data. The method successfully
accelerates simulations by up to 2 billion times in 10 scientific cases
including astrophysics, climate science, biogeochemistry, high energy density
physics, fusion energy, and seismology, using the same super-architecture,
algorithm, and hyperparameters. Our approach also inherently provides emulator
uncertainty estimation, adding further confidence in their use. We anticipate
this work will accelerate research involving expensive simulations, allow more
extensive parameter exploration, and enable new, previously unfeasible
computational discovery.
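The pull-quote above mentions a ranking function for assigning rewards. A common way to do this (a generic sketch, not necessarily DENSE's exact formula) is to reward candidate architectures by their rank in the population rather than by raw loss, which makes the search insensitive to the loss scale:

```python
def rank_rewards(losses):
    """Assign rewards by rank: the lowest-loss candidate gets reward 1.0,
    the highest gets 0.0, linearly spaced in between. Using ranks rather
    than raw losses makes the search robust to outliers and loss scale."""
    n = len(losses)
    if n == 1:
        return [1.0]
    order = sorted(range(n), key=lambda i: losses[i])  # best first
    rewards = [0.0] * n
    for rank, i in enumerate(order):
        rewards[i] = 1.0 - rank / (n - 1)
    return rewards

# A huge outlier loss doesn't distort the rewards of the others.
rewards = rank_rewards([0.3, 5000.0, 0.1])  # -> [0.5, 0.0, 1.0]
```

This kind of rank transformation is standard in evolution-strategy-style search, where a single diverged candidate would otherwise dominate a raw-loss-weighted update.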

Does anyone know where to find a dataset of insider transactions for public companies?[reddit]/r/datasets

The SEC has an online database (EDGAR) with millions of insider-trading disclosures logged as txt files. However, it would take me weeks to scrape all of this. Is anyone familiar with a publicly available dataset for this information, either for free or for a reasonable cost?

Side-note: Apparently Thomson Reuters charges something like $15,000 for this info, which blows my mind. Where do they get off charging such an obscene sum for publicly available information? But I digress...
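One way to avoid scraping filing-by-filing is EDGAR's quarterly form index files, which list every filing in a period; Form 4 is the insider-transaction form. A sketch (the sample index lines in the test are illustrative, and the SEC requires a descriptive User-Agent header when actually fetching these files):

```python
# EDGAR publishes a quarterly form index listing every filing; filtering it
# for form type "4" gives the insider-transaction filings for that quarter.
BASE = "https://www.sec.gov/Archives/edgar/full-index"

def quarterly_index_urls(start_year, end_year):
    """URLs of the quarterly form index files for a range of years."""
    return [f"{BASE}/{year}/QTR{q}/form.idx"
            for year in range(start_year, end_year + 1)
            for q in range(1, 5)]

def form4_lines(index_text):
    """Keep index lines whose form-type column is exactly '4'.
    The index file is whitespace-aligned with the form type first."""
    return [line for line in index_text.splitlines()
            if line.split()[:1] == ["4"]]

urls = quarterly_index_urls(2019, 2020)  # 8 quarterly index files
```

Each matching index line points at the filing document, so one download per quarter replaces millions of per-filing requests; the filings themselves still need parsing.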

On the obstacles to deploying AI in radiology. Authors' prediction: "The future of diagnostic radiology is remote AI-augmented reporting" [hardianhealth.com]

[News] Hallo! Hallo! KU Leuven & TU Berlin Introduce ‘RobBERT,’ a SOTA Dutch BERT[reddit]/r/MachineLearning

A group of researchers from the Katholieke Universiteit Leuven and the Technical University of Berlin recently introduced a Dutch RoBERTa-based language model, RobBERT.

2020-01-17: Gradient descent with momentum --- to accelerate or to super-accelerate? https://arxiv.org/abs/2001.06472v1 In this Section, we provided details and derivations related to the message presented in the Introduction (Figure ??): that super-accelerating momentum-based gradient descent is beneficial for minimization in the one-dimensional parabolic case

We consider gradient descent with `momentum', a widely used method for loss
function minimization in machine learning. This method is often used with
`Nesterov acceleration', meaning that the gradient is evaluated not at the
current position in parameter space, but at the estimated position after one
step. In this work, we show that the algorithm can be improved by extending
this `acceleration' --- by using the gradient at an estimated position several
steps ahead rather than just one step ahead. How far one looks ahead in this
`super-acceleration' algorithm is determined by a new hyperparameter.
Considering a one-parameter quadratic loss function, the optimal value of the
super-acceleration can be exactly calculated and analytically estimated. We
show explicitly that super-accelerating the momentum algorithm is beneficial,
not only for this idealized problem, but also for several synthetic loss
landscapes and for the MNIST classification task with neural networks.
Super-acceleration is also easy to incorporate into adaptive algorithms like
RMSProp or Adam, and is shown to improve these algorithms.
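A minimal sketch of the idea (my own reading of the abstract, not the paper's code): generalize the Nesterov lookahead so the gradient is evaluated an adjustable number of momentum steps ahead, and try it on the one-parameter quadratic loss:

```python
def super_accel_momentum(grad, x0, lr=0.1, mu=0.9, sigma=1.0, steps=200):
    """Gradient descent with momentum where the gradient is evaluated at an
    estimated position `sigma` steps ahead. sigma=0 recovers classical
    momentum; sigma=1 is (roughly) Nesterov acceleration; sigma>1 is the
    'super-acceleration' regime. A sketch of the idea, not the paper's
    exact update rule."""
    x, v = x0, 0.0
    for _ in range(steps):
        lookahead = x + sigma * mu * v  # estimated position sigma steps on
        v = mu * v - lr * grad(lookahead)
        x = x + v
    return x

# One-parameter quadratic loss f(x) = x^2 / 2, so grad(x) = x; minimum at 0.
x_plain = super_accel_momentum(lambda x: x, x0=5.0, sigma=0.0)
x_super = super_accel_momentum(lambda x: x, x0=5.0, sigma=2.0)
```

On this quadratic, the lookahead damps the oscillations of plain momentum (the effective momentum coefficient on the velocity shrinks from mu to mu*(1 - lr*sigma)), so with sigma=2 the iterate reaches the minimum noticeably faster than with sigma=0, consistent with the abstract's claim for the idealized problem.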