
Wednesday, May 29, 2019

Applying for jobs at the Lending Club

We tried to figure out Lending Club's tech stack for 2019. Our analysis shows Lending Club asks for skills in Python, Tableau, SQL, and R. Here's a sample job posting that we looked at:

Credit Risk Manager, Lending
Strong analytical skills and problem solving skills.
Strong communication skills to work with various teams at various levels.
Participated or worked in cross-team and cross-functional projects (e.g., worked with Engineering teams before)
Be able to lead conversations in meetings to gather requirements and/or to participate in technical design discussions
Be able to produce BI reports (e.g., Tableau)
SQL is a must
Bachelor of Computer Science or Statistics or equivalent
4+ years work experience in an analytics role with 1+ years project/product exposure.
Nice to have skills but not necessary:

Tableau/SAS experience
Product analytics experience
Data warehouse experience

Full List of Udacity Nanodegrees Offered 2018-2019

  • AI For Trading
  • AI Programming with Python
  • Android Basics
  • Android Developer
  • Artificial Intelligence
  • Blockchain
  • Business Analytics
  • C++
  • Cloud DevOps
  • Cloud Engineer
  • Computer Vision
  • Data Analyst
  • Data Engineer
  • Data Scientist
  • Data Structures and Algorithms
  • Deep Learning
  • Deep Reinforcement Learning
  • Design Sprint
  • Digital Marketing
  • Flying Car
  • Front End Web Developer
  • Full Stack Web Developer
  • Google Ads
  • Google Analytics
  • Intro to Machine Learning
  • Intro to Programming
  • Intro to Self Driving Car
  • iOS Developer
  • Machine Learning Engineer
  • Marketing Analytics
  • Mobile Web Specialist
  • Natural Language Processing
  • Predictive Analytics for Business
  • Programming for Data Science
  • Programming for DS - R
  • React
  • Robotics ND
  • Self-Driving Car Engineer
  • VR Foundations
  • VR High Immersion
  • VR Mobile 360

Tuesday, May 14, 2019

Understand the Softmax Function in Minutes

Reposted from Uniqtech's Medium publication with permission. It was retrieved on May 14, 2019; Uniqtech may have a newer version.


Understanding Softmax in Minutes by Uniqtech
Learning machine learning? Specifically trying out neural networks for deep learning? You have likely run into the Softmax function, a wonderful activation function that turns numbers, aka logits, into probabilities that sum to one. The Softmax function outputs a vector that represents the probability distribution of a list of potential outcomes. It's also a core element used in deep learning classification tasks (more on this soon). We will help you understand the Softmax function in a beginner-friendly manner by showing you exactly how it works — by coding your very own Softmax function in Python.
This article has gotten really popular — 1800 claps and counting, and it is updated constantly. Latest update April 4, 2019: updated word choice, listed out assumptions, and added advanced usage of the Softmax function in Bahdanau Attention for neural machine translation. See the full list of updates below. You are welcome to translate it. We would appreciate it if the English version is not reposted elsewhere. If you want a free read, just use incognito mode in your browser. A link back is always appreciated. Comment below and share your links so that we can link to you in this article. Claps on Medium help us earn $$$. Thank you in advance for your support!
Skill prerequisites: the demonstration code is written with Python list comprehensions (scroll down to see an entire section explaining list comprehension). The math operations demonstrated are intuitive and code agnostic: it comes down to taking exponentials, taking a sum, and dividing, aka the normalization step.
Udacity Deep Learning Slide on Softmax
The above Udacity lecture slide shows that the Softmax function turns the logits [2.0, 1.0, 0.1] into the probabilities [0.7, 0.2, 0.1], and the probabilities sum to 1.
In deep learning, the term logits layer is popularly used for the last neuron layer of neural network for classification task which produces raw prediction values as real numbers ranging from [-infinity, +infinity ]. — Wikipedia
Logits are the raw scores output by the last layer of a neural network, before activation takes place.
Softmax is not a black box. It has two components: the special number e raised to some power, divided by a normalizing sum.
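In symbols, for a logits vector y, the standard Softmax formula this article walks through is:

\[ \mathrm{softmax}(y_i) = \frac{e^{y_i}}{\sum_{j} e^{y_j}} \]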
y_i refers to each element in the logits vector y. Python and Numpy code will be used in this article to demonstrate math operations. Let’s see it in code:
logits = [2.0, 1.0, 0.1]
import numpy as np
exps = [np.exp(i) for i in logits]
We use numpy.exp(power) to take the special number e to any power we want. We use a Python list comprehension to iterate through each i of the logits and compute np.exp(i). Logit is another name for a numeric score. The result is stored in a list called exps. The variable name is short for exponentials.
Replacing i with logit is another, more verbose way to write out exps = [np.exp(logit) for logit in logits]. Note the use of plural and singular nouns. It's intentional.
We just computed the top part of the Softmax function. For each logit, we took it to an exponential power of e. Each transformed logit j needs to be normalized by another number in order for all the final outputs, which are probabilities, to sum to one. Again, this normalization gives us nice probabilities that sum to one!
We compute the sum of all the transformed logits and store the sum in a single variable sum_of_exps, which we will use to normalize each of the transformed logits.
sum_of_exps = sum(exps)
Now we are ready to write the final part of our Softmax function: each transformed logit j needs to be normalized by sum_of_exps, which is the sum of all the transformed logits, including itself.
softmax = [j/sum_of_exps for j in exps]
Again, we use a Python list comprehension: we grab each transformed logit using [j for j in exps] and divide each j by sum_of_exps.
List comprehension gives us a list back. When we print the list we get
>>> softmax
[0.6590011388859679, 0.2424329707047139, 0.09856589040931818]
>>> sum(softmax)
1.0
The output rounds to [0.7, 0.2, 0.1] as seen on the slide at the beginning of this article. They sum nicely to one!
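Putting the three steps together, here is a minimal sketch of a complete Softmax function in plain Python plus NumPy, mirroring the snippets above (the function name softmax is our own choice):

import numpy as np

def softmax(logits):
    # take e to the power of each logit (the top of the formula)
    exps = [np.exp(i) for i in logits]
    # normalize by the sum of all exponentials so the outputs sum to one
    sum_of_exps = sum(exps)
    return [j / sum_of_exps for j in exps]

print(softmax([2.0, 1.0, 0.1]))  # [0.659..., 0.242..., 0.098...]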

Extra — Understanding List Comprehension

This post uses a lot of Python list comprehension, which is more concise than Python loops. If you need help understanding Python list comprehension, type the following code into your interactive Python console (on a Mac, launch Terminal and type python after the dollar sign $ to launch it).
sample_list = [1,2,3,4,5]
# the console displays nothing for an assignment statement
sample_list # console returns [1,2,3,4,5]
#print the sample list using list comprehension
[i for i in sample_list] # console returns [1,2,3,4,5]
# note anything before the keyword 'for' will be evaluated
# in this case we just display 'i' each item in the list as is
# for i in sample_list is a short hand for 
# Python for loop used in list comprehension
[i+1 for i in sample_list] # returns [2,3,4,5,6]
# can you guess what the above code does?
# yes, 1) it will iterate through each element of the sample_list
# that is the second half of the list comprehension
# we are reading the second half first
# what do we do with each item in the list?
# 2) we add one to it and then display the value
# 1 becomes 2, 2 becomes 3
# note the entire expression 1st half & 2nd half are wrapped in []
# so the final return type of this expression is also a list
# hence the name list comprehension
# my tip to understand list comprehension is
# read the 2nd half of the expression first
# understand what kind of list we are iterating through
# what is the individual item aka 'each'
# then read the 1st half
# what do we do with each item
# can you guess the list comprehension for 
# squaring each item in the list?
[i*i for i in sample_list] # returns [1, 4, 9, 16, 25]

Intuition and Behaviors of Softmax Function

If we hard code our label data to the vectors below, in a format typically used to turn categorical data into numbers (one hot encoding), the data will look like this:
[[1,0,0], #cat
 [0,1,0], #dog
 [0,0,1],] #bird
Optional Reading: FYI, this is an identity matrix in linear algebra. Note that only the diagonal positions have the value 1; the rest are all zero. This format is useful when the data is not numerical but categorical, and each category is independent of the others. For example, 1 star, 2 stars, 3 stars, 4 stars, and 5 stars in a Yelp review can be one hot encoded, but note that the five are related. They may be better encoded as 1 2 3 4 5, since we can infer that 4 stars is twice as good as 2 stars. Can we say the same about names of dogs? Ginger, Mochi, Sushi, Bacon, Max: is Bacon 2x better than Mochi? There's no such relationship. In this particular encoding, the first column represents cat, the second column dog, the third column bird.
The output probabilities say: 70% sure it is a cat, 20% a dog, 10% a bird. One can see that the initial differences in the logits [2.0, 1.0, 0.1] are adjusted to percentages; the ratio is not 2 : 1 : 0.1. Before Softmax, we could not say that it is 2x more likely to be a cat, because the scores were not normalized to sum to one.
The output probability vector is [0.7, 0.2, 0.1]. Can we compare this with the ground truth of cat, [1,0,0], as in one hot encoding? Yes! That's what is commonly done in cross entropy loss (we have a cool trick to understand cross entropy loss and will write a tutorial about it soon). In fact, cross entropy loss is the "best friend" of Softmax. It is the most commonly used cost function, aka loss function, aka criterion, used with Softmax in classification problems. More on that in a different article.
Why do we still need fancy machine learning libraries with a fancy Softmax function? The nature of machine learning training requires tens of thousands of training samples. Something as concise as the Softmax function still needs to be optimized to process every element efficiently. Some say that Tensorflow broadcasting is not necessarily faster than numpy's matrix broadcasting, though.

Watch this Softmax tutorial on Youtube

Visual learner? Prefer watching a YouTube video instead? See our tutorial below.

Deeper Dive into Softmax

Softmax is an activation function. Other activation functions include ReLU and Sigmoid. It is frequently used in classification. Softmax output is large if the score (the input, called a logit) is large; its output is small if the score is small. The proportion is not uniform: Softmax is exponential and enlarges differences, pushing one result closer to 1 while pushing another closer to 0. It turns scores, aka logits, into probabilities. Cross entropy (a cost function) is often computed between the output of Softmax and the true labels (encoded in one hot encoding). Here's an example of Tensorflow's cross entropy computing function; it computes softmax cross entropy between logits and labels. Softmax outputs sum to 1, which makes them great for probability analysis; Sigmoid outputs don't sum to 1. Remember the takeaway: the essential goal of Softmax is to turn numbers into probabilities.
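The TensorFlow call mentioned above looks roughly like this (a minimal sketch assuming TensorFlow 2.x; the exact function name has shifted slightly between versions):

import tensorflow as tf
labels = tf.constant([[1.0, 0.0, 0.0]])   # one hot ground truth: cat
logits = tf.constant([[2.0, 1.0, 0.1]])   # raw scores, before any activation
# applies Softmax to the logits internally, then computes cross entropy against the labels
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss)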
Thanks. I can now deploy this to production. Uh, no. Hold on! Our implementation is meant to help everyone understand what the Softmax function does. It uses for loops and list comprehensions, which are not efficient operations for a production environment. That's why top machine learning frameworks such as Tensorflow and Pytorch are implemented in C++. These frameworks offer much faster and more efficient computations, especially when the dimensions of the data get large, and can leverage parallel processing. So no, you cannot use this code in production. However, technically, if you train on a few thousand examples (generally ML needs more than 10K records), your machine can still handle it, and inference is possible even on mobile devices (thanks, Apple Core ML). Can I use this softmax on ImageNet data? Uh, definitely not; there are millions of images. Use Sklearn if you want to prototype, Tensorflow for production; Pytorch 1.0 added support for production as well. For research, the Pytorch and Sklearn softmax implementations are great.
Best Loss Function / Cost Function / Criterion to Use with Softmax
You have decided to choose Softmax as the final function for classifying your data. What loss function and cost function should you use with Softmax? The theoretical answer is Cross Entropy Loss (let us know if you want an article on that. We have a full pipeline of topics waiting for your vote).
Tell me more about Cross Entropy Loss. Sure thing! Cross Entropy Loss in this case measures how similar your predictions are to the actual labels. For example, if the probabilities are supposed to be [0.7, 0.2, 0.1] but you predicted [0.3, 0.3, 0.4] on the first try and [0.6, 0.2, 0.2] on the second try, you can expect the cross entropy loss of the first try, which is totally inaccurate, almost like a random guess, to be higher than that of the second scenario, where you aren't too far off from the expected.
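As a sanity check, here is a small sketch using the textbook formula cross entropy = -sum(target * log(predicted)) on the numbers above (the helper function is our own, not code from the original article):

import numpy as np

def cross_entropy(target, predicted):
    # negative sum of target probabilities times the log of the predicted probabilities
    return -np.sum(np.array(target) * np.log(np.array(predicted)))

target = [0.7, 0.2, 0.1]
print(cross_entropy(target, [0.3, 0.3, 0.4]))  # ~1.18, the inaccurate first try
print(cross_entropy(target, [0.6, 0.2, 0.2]))  # ~0.84, the closer second try has lower loss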

Deep Dive Softmax

Coming soon: a deep dive of Softmax and how it is used in practice, in deep learning models. How do you graph the Softmax function? Is there a more efficient way to calculate Softmax for big datasets? Stay tuned. Get alerts: subscribe@uniqtech.co. In progress as of May 11, 2019: Softmax source code, and Softmax Beyond the Basics (post under construction): implementation of Softmax in Pytorch and Tensorflow, Softmax in practice, in production.
Bahdanau Attention for Neural Machine Translation — Softmax Function in Real Time
The neural machine translation architecture outlined by Dzmitry Bahdanau in "Neural Machine Translation by Jointly Learning to Align and Translate" (2014) uses Softmax outputs as weights to weigh each of the hidden states right before producing the final output.
Softmax Function Behavior
Because the Softmax function outputs numbers that represent probabilities, each number's value is between 0 and 1, the valid value range of probabilities. The range is denoted [0,1]. The numbers are zero or positive. The entire output vector sums to 1. That is to say, when all probabilities are accounted for, that's 100%.
Logits are useful too (need more citation and details; section under construction)
Logits, aka the scores before Softmax activation, are useful too. Is there a reason to delay activation with Softmax? Softmax turns logits into numbers between zero and one. In deep learning, where there are many multiplication operations, a small number subsequently multiplied by more small numbers results in tiny numbers that are hard to compute on. Hint: this sounds like the vanishing gradient problem. Sometimes logits are used in a numerically stable loss calculation before applying the activation (need more citation and details; section under construction).
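One standard trick related to this numerical-stability point (a common practice, not something prescribed by this article): shift the logits by their maximum before exponentiating. The result is mathematically identical, but it avoids overflow when the scores are large.

import numpy as np

def stable_softmax(logits):
    logits = np.array(logits)
    shifted = logits - np.max(logits)  # subtracting a constant does not change the Softmax output
    exps = np.exp(shifted)
    return exps / np.sum(exps)

print(stable_softmax([1000.0, 1001.0, 1002.0]))  # works; a naive np.exp(1000.0) would overflow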

Update History

  • May 11, 2019, in progress: a deep dive on the Softmax source code; Softmax Beyond the Basics (post under construction): implementation of Softmax in Pytorch and Tensorflow, Softmax in practice, in production.
  • Coming soon: a discussion on graphing Softmax function
  • Coming soon: a discussion on cross entropy evaluation of Softmax
  • In progress May 11, 2019: Softmax Beyond the Basics; Softmax in textbooks and university lecture slides
  • Coming soon: cross entropy loss tutorial
  • April 16, 2019: added an explanation of one hot encoding.
  • April 12, 2019: added wording explaining the outputs of the Softmax function: a probability distribution over the potential outcomes. In other words, a vector or list of probabilities associated with each outcome. The higher the probability, the more likely the outcome. The highest probability wins and is used to classify the final result.
  • April 4, 2019: updated word choices, advanced use of Softmax in Bahdanau attention, assumptions, clarifications, 1800 claps. Logits are useful too.
  • January 2019: best loss function / cost function / criterion to go with Softmax.

Wednesday, April 3, 2019

Getting Started with Automated Data Pipelines, Day 2: Validation and URL...







  • Data validation creating data from URL
  • When do you need data from URL? Maps, getting shapes for maps

Kaggle Challenge (LIVE)





  • Architecture: UNet
  • Use Google Colab to avoid dependency issues
  • Salt correlated with oil and gas where salt is heavy
  • !pip install imageio
  • for image processing
  • !pip install torch






Kaggle Live-Coding: Code Reviews! | Kaggle







  • Make code robust and reproducible: if column names change later, can you still handle it?
  • Use R functions for column querying such as starts_with(), ends_with(), and contains(); they make the query more robust and harder to break downstream.
  • Avoid numeric column indexing, as the order of columns may change.
  • Avoid redundancy in code and comments.
  • If you want to make the file a bit shorter, avoid inline images and use a script to generate images instead.
  • Make sure the logic matches the code comments and the function signature.

Sunday, March 31, 2019

Django Girls - friendly events that teach women to build websites using Django




Two amazing ladies from Poland teamed up with coaches around the world to teach girls and women how to use the web development framework Django.

Sunday, March 17, 2019

Machine Learning with No Code





AutoML: machine learning and deep learning without code, by Uber. Ludwig allows users to train and make inferences with deep learning models without coding (caveat: you still have to use the command line). Previously it was an internal tool at Uber; it is now open sourced to gather contributions. It's a Python library.



Siraj Raval gives this tutorial using Ludwig in Google Colab.

Siraj Raval's quote: "Don't hate. Copy & Paste." To install Ludwig, copy and paste the installation code from Uber's GitHub page.
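A minimal sketch of what that copy-and-paste workflow looks like, assuming the 2019-era Ludwig command line interface (the flag names and file names below are illustrative and may differ across versions; follow the GitHub page for the exact commands):

# install the library
!pip install ludwig
# train and evaluate a model from a CSV plus a YAML model definition, no Python code required
!ludwig experiment --data_csv my_dataset.csv --model_definition_file model_definition.yaml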

Wednesday, March 13, 2019

Flatten, Reshape, and Squeeze Explained - Tensors for Deep Learning with...





A matrix is a rank 2 tensor. There are two axes: one indexes the arrays (rows), the other indexes the individual numbers.
Check the dimensions of a tensor using .size() or .shape
Obtain the rank of the tensor by checking the length of its shape
len(tensor.shape) #returns 2 for a matrix
The number of elements in the tensor is the product of the component values in the shape: torch.tensor(my_tensor.shape).prod()
my_tensor.numel() #number of elements
The number of elements is important in reshaping
Reshaping does not change the underlying data, just the shape
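A small sketch of these calls in PyTorch (the tensor values are arbitrary):

import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(t.shape)                       # torch.Size([3, 3]); t.size() returns the same thing
print(len(t.shape))                  # 2, the rank of a matrix
print(t.numel())                     # 9 elements, the product of the shape values
print(torch.tensor(t.shape).prod())  # also 9
print(t.reshape(1, 9))               # same 9 elements, new shape [1, 9]
print(t.reshape(1, 9).squeeze())     # squeeze drops the length-1 axis, giving shape [9]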

Sunday, March 10, 2019

How to Build a Compelling Data Science Portfolio & Resume | Kaggle Quora







  • Make every single line of the resume count, because recruiters and hiring managers may only spend 20 seconds on it.
  • Quora engineer's advice for Kaggle CareerCon attendees.
  • Resume basics: one page, one column, clean, simple. The best resumes are easy to skim. Remove distractions. Use bullet points that the reviewer can dive into. The less busy, the better.
There are even LaTeX templates.
You need to make it easy to skim.
For tech and data science, hiring managers potentially care more about the skill set than the cover letter or the objectives.
Relevant coursework includes: Machine Learning, Linear Algebra, Data Analysis, Statistics, Statistical Modeling, NLP... Order by what is most relevant to the resume.

Relevant coursework that William Chen of Quora recommends. He also recommends ordering the courses by what is most relevant to the technical job. List Python and R first; SAS or Excel have different connotations.


Word Embeddings Word2Vec LSTM, Recurrent Neural Network, GRU Review and Notes - Udacity Deep Learning Nanodegree Part 2


Word embeddings can use math to represent relations between words, such as man and woman, or work and worked.

Embedding weights are learned during training
Embedding lookup: finding the corresponding row in the embedding layer

The embedding dimension is the number of hidden units
Encode each word as an integer

The embedding matrix is a weight matrix
The embedding layer is a hidden layer


Each row of the learned embedding matrix is a vector representation of the input word
The number of columns of the embedding matrix is the number of stacked hidden units? Usually in the hundreds?

Words in similar contexts are expected to have similar embeddings, such as: I drink water throughout the day, I drink coffee in the morning, I drink tea in the afternoon.

such as water, coffee, and tea
such as morning, throughout the day, afternoon

Vector arithmetic

map a verb A from present to past
map a verb B from present to past
should be the same embedding weights, or vector transformation
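To make the lookup idea in these notes concrete, here is a tiny NumPy sketch (the vocabulary and matrix values are made up; in a real network the embedding matrix holds learned weights):

import numpy as np

vocab = {"work": 0, "worked": 1, "drink": 2}        # each word encoded as an integer
embedding_matrix = np.random.rand(len(vocab), 4)    # rows = words, columns = hidden units (4 here)

def embed(word):
    # the "lookup" is just selecting the row whose index is the word's integer id
    return embedding_matrix[vocab[word]]

print(embed("drink"))  # a 4-dimensional vector representation of "drink"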


Saturday, March 9, 2019

Kaggle Earthquake Prediction Challenge





Objective:

Think like a data scientist

Categorical Gradient Boosting: the CatBoost algorithm

Support Vector Machine for regression (it is more commonly known for classification)

Syllabus
Earthquake prediction background & helpful resources
Step 1 - installing dependencies
Step 2 - importing dataset
Step 3 - Exploratory data analysis
Step 4 - Feature engineering (statistical features added)
Step 5 - Implement Catboost model
Step 6 - Implement support vector machine + radial basis functional model
Step 7 - Future Directions (Genetic programming, recurrent networks etc.)




Comment: maybe we can use an advanced RNN for earthquake prediction, since the data has a time series element.

Install important libraries. Installations & Dependencies
!pip install kaggle
!pip install numpy==1.15.0
!pip install catboost 
import pandas as pd
import numpy as np
from catboost import CatBoostRegressor, Pool
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sklearn.svm import NuSVR, SVR
#kernel ridge model for SVM
from sklearn.kernel_ridge import KernelRidge
"Kernel methods are a way of improving Support Vector Machine Predictions. Make sure we can create a classifier line or regression line in a feature space we can visualize. You know? A lower dimension feature space"
#data visualization
import matplotlib.pyplot as plt

# Google Colab file access feature
# allows Colab to import data directly into colab
from google.colab import files
# retrieve uploaded file
uploaded = files.upload()
# move kaggle.json into the folder where the Kaggle API expects to find the json file
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
#we will upload the kaggle.json file here so that colab knows our kaggle authentication
#Go to my account create new API token, which will be downloaded as a JSON file
# now we can access the kaggle competition list
!kaggle competitions list
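For Step 5, a minimal sketch of fitting the CatBoost regressor imported above (the data here is a tiny synthetic stand-in, not the competition's engineered acoustic features):

import numpy as np
from catboost import CatBoostRegressor

X = np.random.rand(200, 5)   # placeholder for statistical features per signal segment
y = np.random.rand(200)      # placeholder for the time-to-failure target
model = CatBoostRegressor(iterations=100, loss_function='MAE', verbose=False)
model.fit(X, y)
print(model.predict(X[:3]))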


Advanced Udacity Deep Learning Nanodegree

I have noticed that some of my classmates, Facebook PyTorch scholars, went fast and far beyond what's required of the nanodegree. Here are some of the impressive things they did.

  • Train model from scratch rather than using a pretrained model. "Try using more convolution layers, increasing the depth, decreasing learning rate, and keep the final fc layer simple, I used only one fc layer after the convolution layers" "It depends on how many epochs to train. I just did 35 epochs. Similar to VGG."
  • How long did it take for you all to train your CNN-from-scratch model? GPU use matters; some moved the training to Google Colab.

Codecademy tutorials and classes


New Codecademy Pro offers some great classes. 
  • Python3
  • Minimax
  • Machine Learning Neural Networks
  • Recursion - Python
  • Web Design
  • Test Driven Development
  • C++ Vectors and Functions
  • Build Projects Using C++
  • Additional Codecademy Offerings https://pro.codecademy.com/offerings/

Intermediate Machine Learning Deep Learning Cheat Sheet


  • Traditional machine learning algorithms are mostly not designed for sequential data (do B after A, then C). That kind of step-wise output cannot be comfortably generated by traditional machine learning algorithms.

Deep Learning Deployment - Udacity Deep Learning Nanodegree Part 6

Note on Udacity Deep Learning nanodegree deployment in the machine learning workflow. Tool: Amazon Sagemaker service https://aws.amazon.com/sagemaker/

Problem introduction: the Kaggle Boston Housing competition tries to predict the median housing price based on features such as the number of rooms. It makes sense that a house is more expensive if it has more rooms. However, there are always variances and noise in the data that cause the results to fluctuate from the true trend.

Kaggle Intermediate Cheat Sheet


  • Intermediate Concepts. Source: Kaggle Live Coding
    • Bloom filter (a data structure) for looking at overlap in data, e.g. checking whether there is any overlap or crossover between train and test data. It tests whether an element is a member of a set (see the sketch after this list).
    • Used in NLP on n-grams: 8-grams are arbitrary, 20-grams are typical because sentences are around 20 words, and 7-grams match the human memory span of around seven words (average spoken language may be 7-grams). You can do both to see the amount of overlap. Look at all sets of n-grams and do a pairwise comparison: what number of n-grams already exist in the set? An empty bloom filter is a bit array of m bits, all set to 0 (Wikipedia). k hash functions each map, or hash, an element to one of the m bits; k is much smaller than m.
  • Kaggle competition with Google Cloud New York Taxi Fare Competition https://www.kaggle.com/c/new-york-city-taxi-fare-prediction
  • Playground competition in partnership with Google Cloud, Coursera and Kaggle
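Here is a minimal bloom filter sketch in Python matching the description above (m bits, k hash functions; the hashing scheme and the n-grams are our own illustration):

import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k       # m bits, k hash functions
        self.bits = [0] * m
    def _positions(self, item):
        # derive k bit positions by hashing the item with k different salts
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1
    def might_contain(self, item):
        # False means definitely absent; True means probably present (false positives possible)
        return all(self.bits[p] for p in self._positions(item))

# flag test-set n-grams that probably also appear in the training set
train_ngrams = ["the cat sat", "sat on the", "on the mat"]
bf = BloomFilter()
for gram in train_ngrams:
    bf.add(gram)
print(bf.might_contain("sat on the"))     # True: probable overlap with training data
print(bf.might_contain("dogs run fast"))  # False: definitely not in the training n-grams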

Using Kaggle on Google Colab
Install Kaggle, and also install catboost
!pip install kaggle


# Google Colab file access feature
# allows Colab to import data directly into colab
from google.colab import files
# retrieve uploaded file
uploaded = files.upload()
# move kaggle.json into the folder where the Kaggle API expects to find the json file
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
#we will upload the kaggle.json file here so that colab knows our kaggle authentication 
#Go to my account create new API token, which will be downloaded as a JSON file
# now we can access the kaggle competition list
!kaggle competitions list

Friday, March 8, 2019

Google Cloud Next 2019 Conference - Extremely Exciting Sessions

Here I highlight a list of amazing sessions, mostly in DATA ENGINEERING and the unique GOOGLE CLOUD CLIENT SPACE and GOOGLE CLOUD USE CASES.

Google Cloud Next 2019
Interesting Data Engineer Sessions at Google Cloud:
Moving from Cassandra to Auto-Scaling Bigtable at Spotify
Data Management: The New Best Practice for Incident Response
Google Cloud Platform from 1 to 100 Million Users
Google Cloud: Data Protection and Regulatory Compliance
Organizing Your Resources for Cost Management on GCP
TensorFlow 2.0 on Google Cloud Platform
Chatbots Will Empower Students and Teachers
Deploy Your Next Application to Google Kubernetes Engine
Fast and Lean Data Science With TPUs
Creating Interactive Cost and KPI Dashboards Using BigQuery
From Blobs to Tables: Where and How to Store Your Stuff
Data Processing in Google Cloud: Hadoop, Spark, and Dataflow
Enabling Healthcare in the Cloud: Mitigating Risks and Addressing Security and Compliance Requirements with GCP
An Insider's Look: Google's Data Centers
Data Integration at Google Cloud
G Suite Data Controls and Transparency
How AI Computer Vision and IoT Is Transforming Businesses
Smart Pallets for a Smart Warehouse: Building Advanced Computer Vision Systems Using Google Cloud IoT
30 Ways Google Sheets Can Help Your Company Uncover and Share Data Insights
Data for Good: Driving Social & Environmental Impact with Big Data Solutions
Data Warehousing With BigQuery: Best Practices
Extracting Value With a Cloud Clinical Data Warehouse
Future of Google Sites
Best Practices for Storage Classes, Reliability, Performance, and Scalability
Best Practices in Building a Cloud-Based SaaS Application
Building a Global Data Presence
End-to-end Training of a Model and Prediction Generation Using BigQuery ML
How to Secure and Protect Your Data in Cloud Storage
Rethinking Business: Data Analytics With Google Cloud
The Future of Health. Powered by Google
Accelerating Machine Learning App Development with Kubeflow Pipelines
Backup, Disaster Recovery and Archival in the Cloud
Bringing the Cloud to You AMA (Ask-Me-Anything)
Building and Securing a Data Lake on Google Cloud Platform
Deploy and Manage Virtual Workstations on GCP
Google Cloud DevOps: Speed With Reliability and Security
Understanding Google Cloud IoT: Connectivity Options and Examples
Unlocking the Power of Google BigQuery
Data Analytics
Building AI-Powered Customer Service Virtual Agents for Healthcare
Case Study: Using GCP to Measure Package Sizes in 3D Images
Cloud Native Application Development, Delivery and Persistent Storage
How to Run Millions of Self-Driving Car Simulations on GCP
Cruise Automation is a leading developer of autonomous vehicle technology. In this session, we will dive into the infrastructure which allows us to run hundreds of thousands of autonomous simulations every day and analyze the results quickly and efficiently. Cruise runs the vast majority of our testing on Google Cloud, taking advantage of high scalability of compute and GPU resources for our diverse workloads. Our simulation frameworks allow us to replay data gathered from road testing or generate complex variations
Integrating Smart Devices With the Google Assistant and Google Cloud IoT
Kaggle: Where 2 Million+ Data Scientists Learn, Compete, and Collaborate on AI Projects
Kaggle is the world's largest community of data scientists and AI engineers. You'll learn how its 2 million+ users leverage Kaggle to learn AI, sharpen their skills on public competitions, incorporate tens of thousands of public datasets into their projects, and analyze data in hosted Jupyter notebooks.
Ben Hamner, CTO, Kaggle
Migrating Data Analytics Solutions to Google Cloud
Take Care of Data Privacy in a Serverless World with Firebase
Machine Learning with TensorFlow and PyTorch on Apache Hadoop using Cloud Dataproc
Medical Imaging 2.0

Medical imaging is one of the largest sources of healthcare data. Join us in this talk to learn how cloud technologies and artificial intelligence enable new applications in the medical imaging domain, improving patient care and reducing physician burnout.
Python 3 and Me: Upgrading Your Python 2 Application
The Path From Cloud AutoML to Custom Model
Transforming Healthcare With Machine Learning
With the wealth of medical imaging and text data available, there’s a big opportunity for machine learning to optimize healthcare workflows. In this talk, we’ll provide an overview of the Cloud ML products that can help with healthcare scenarios, including AutoML Vision, Natural Language, and BQML. Then we’ll hear from IDEXX, a veterinary diagnostics company using AutoML Vision to classify radiology images.
How to Grow a Spreadsheet into an Application
Integrate Firebase into Your Existing Infrastructure
Customer Case for Anomaly Detection in MMORPG
Genomic Analyses on Google Cloud Platform
Description
Using Google Cloud Platform and other open source tools such as GATK Best Practices and DeepVariant, learn how to perform end-to-end analysis of genomic data. Starting with raw files from a sequencer, progress through variant calling, importing to BigQuery, variant annotation, quality control, BigQuery analysis and visualization with phenotypic data. All the datasets will be publicly available and all the work done will be provided for participants to explore on their own.







Saving Even More Money on Compute Engine

Notable Clients of Google Cloud:
Journey to the Cloud Confidently With Citrix and Google Cloud
Square's Move to Cloud Spanner
Forbes' Road to the Cloud
Why Small and Medium Businesses are Going Google
Clorox Data Cleanup Using Advanced Cloud Dataprep Techniques
How Gordon Food Service Reimagined Collaboration Using G Suite
How Airbnb Secured Access to Their Cloud With Context-Aware Access


How Twitter Is Migrating 300 PB of Hadoop Data to GCP

Twitter has been migrating their complex Hadoop workload to Google Cloud. In this session, we deep dive into how Twitter's components use Cloud Storage Connector and describe our initial usage, features we implemented, and how Google helped us build those features in open source. We describe how Cloud Storage fits into our ecosystems and the experience and features which have helped us. We'll also talk about unique challenges we discovered in data management at scale.

Optimizing File Storage for Your Use Case



Music Recommendations at Scale with Cloud Bigtable
Spotify serves personalized music recommendations to hundreds of millions of happy customers worldwide, and powers a lot of this infrastructure with Google Cloud Bigtable. In this talk, we'll go into detail about how Cloud Bigtable allows us to deliver recommendations at scale, roll out experiments quickly, and ingest terabytes every day via Cloud Dataflow. We'll discuss a number of challenges we overcame when designing our recommendations infrastructure on top of Cloud Bigtable, including tips about how to design a good schema, how to avoid latency when ingesting new data, and effective caching strategies to scale to tens of millions of data points per second.
Real-Time, Serverless Predictions With Google Cloud Healthcare API
Target's Application Platform (TAP)


Google Cloud for Its Business Partners, Use Case Showcase
Automate Cancer MCA using Cloud Vision API and GCMLE
Learn how Pluto7 built a model to extract text from clinical protocols using the Cloud Vision API and automatically predict whether clinical treatments, based on their criteria, were classified as covered by the researcher of the clinical trial or by the patient's insurance. We used the customer's cancer clinical trial protocols to train word embeddings, and we constructed a dataset of short free-text snippets labeled R or S (Researcher or Sponsor).
GitLab's Journey from Azure to GCP and How We Made it Happen
How Booking.com Uses BigQuery ML to Assess Data Quality and Other Features
How News Corp Transformed into a Data-Driven Organisation
Future of Work With Cisco and Google
How Schlumberger is Building Enterprise Solutions for the Future with Google
Kaiser Permanente's Journey Towards an API-First IT Strategy
Everyone Flies Faster When BigQuery Fuels the BI Engines at AirAsia
How Pandora Is Migrating Its On-Premises BI & Analytics to GCP
Composing Pandora's Move to GCP With Composer
A Glimpse Into CBS Interactive’s AI/ML Group
State of the Art: SAP on Google Cloud
What Did the Doctor Say? Mining Clinical Notes With GCP
Marianne Slight, Product Manager, Google Cloud Healthcare & Life Sciences, Google Cloud


How Equifax Accelerates Time-to-Market with Microservices and APIs
How Macy's Executes DevOps at Scale on GCP

How HSBC Leverages GCP For Regulatory Reporting
Using Google's Data and AI Technologies with Kaggle

HSBC Invents New Technology as They Migrate to BigQuery
Learn How Cardinal Health Migrated Thousands of VMs to GCP
Using AI to Transform Your Fleet Operations


What Did the Doctor Say? Mining Clinical Notes With GCP
Data Analytics | April 11 | 2:35–3:25 PM
Description
With note bloat now at 80%, it has become harder than ever to trace medical decision-making in the electronic medical record. But the physician's clinical notes provide that context along with nuggets of gold that aren't easily documented in the structured EMR. Join this session to discover how to mine clinical concepts from the physician notes, map them to standard vocabularies, augment the EHR data with them, and use them in your CDW analysis or FHIR applications.
Breakout | Intermediate | Healthcare
Speaker: Marianne Slight, Product Manager, Google Cloud Healthcare & Life Sciences, Google Cloud
