You Can Learn By Teaching

If you cannot teach someone something, chances are you do not understand it well yourself. One way to understand something better is to start teaching it to people, learn from the process, and improve on it.

Someone (Einstein, Richard Feynman, or somebody else) put it well:

“If you can’t explain it simply, you don’t understand it well enough.”

Contrary to that idea, to get better at anything we need to rinse and repeat. So, instead of waiting to fully understand a subject before teaching it, we can start teaching it now; the process of teaching someone clarifies things in our own heads and improves our understanding of the subject.

If You Want To Do Something, Don’t Think About It, Go And Do It

I have come to this point many times, sometimes many times in a single day. I have a good idea and then think and think and think about it, imagining all the things that could go wrong and all the things that could go right. I weigh the pros and the cons rationally, and of course the goal is to come to a reasonable decision. The problem is that the original idea doesn’t actually get done; all that thinking usually makes the idea less interesting, and I lose the excitement I had in the beginning.

This line, “If you want to do something, don’t think about it, go and do it,” from James Dyson, founder of Dyson, in a podcast with Guy Raz made me think about this a bit. He has a point: why should you spend time thinking about the idea? What’s the fun in that? All the fun is in doing the idea, not in thinking more about it.

#8 Karthish Manthiram

On Creating Fuels and Chemicals For The Next 100 Years and The Importance Of Accessible Role Models


Karthish Manthiram is an Assistant Professor in Chemical Engineering at MIT. The Manthiram Lab at MIT is focused on the molecular engineering of electrocatalysts for the synthesis of organic molecules, including pharmaceuticals, fuels, and commodity chemicals, using renewable feedstocks. Karthish’s research and teaching have been recognized with several awards, including Forbes 30 Under 30 in Science.



Our work should break us out of what we are used to, it should surprise us, disturb us –Karthish Manthiram

Lessons Learned From Ben and Jerry

We didn’t think of it as a business, we thought of it as a venture. We wanted to do it for a couple of years and then become cross country truck drivers.

A lot of our problems would be solved if we made the ice creams shittier.

Business is essentially “busyness”; it’s mostly common sense and a lot of work.

We ran a promotion in winter called “Penny Off Per Celsius degree Below Zero Winter Extravaganza (POPCBZWE)”.

Ben and Jerry are two great friends and amazing citizens of the world, delivering happiness by the pint for under $5.

A random tweet (credit: Marche’ Lawton) about Ben and Jerry; I haven’t fact-checked it, but if it is true I’m definitely buying more Chunky Monkey as well.

Jerry Greenfield, left, and Bennett Cohen, the founders of Ben & Jerry’s Homemade Inc., pose in front of their “Cowmobile” in Burlington, Vt., on June 15, 1987.
( Toby Talbot / AP Images )
Source: https://www.wnycstudios.org/podcasts/heresthething/episodes/ben-and-jerry-warm

Notes and Links

https://www.amazon.com/Ice-Cream-Wendall-S-Arbuckle/dp/1461380677

#3 Sumona Karjee Mishra

On a mission to eliminate pregnancy-related disorders in India and around the world, starting with early diagnosis of preeclampsia.


Sumona Karjee Mishra is a scientist turned entrepreneur. She co-founded Prantae Solutions along with her husband to disrupt the treatment of pregnancy-related disorders, with an initial focus on preeclampsia, which affects 5-8% of all pregnancies worldwide. She received her PhD from the International Centre for Genetic Engineering and Biotechnology, New Delhi.

“You can’t let the pregnant women die” and “Love yourself”


0 to 1 Product Manager

“0 to 1 product management is simple but very hard to get right, billions of dollars and months of time and effort could be saved if PMs follow a few basic principles”

Does product management differ before and after product-market fit (PMF)? This is the fundamental question that led me to research the idea of the 0 to 1 product manager and how that role differs from a PM who is trying to grow an established product.

What is the key objective for a pre-PMF product? It’s to find and solve a customer problem in a large market.

What is the key objective for a post-PMF product? It’s to grow the business. For example, say you are an e-commerce company selling shoes online: you have found your customers and you are able to serve them well (sales, marketing, support). Now, how can you grow this business? There are a few options –

  1. Sell other products augmenting the shoe product line, e.g. clothing, caps
  2. Sell shoes to more people (expand to other markets, e.g. international)
  3. Sell analytics to healthcare companies e.g. training data

Product management can focus on creating new products to increase ARPU or on introducing existing products to new markets (other countries, segments, etc.). But when you don’t yet know that the “shoe” is your product, or that “lack of shoes” is the problem to solve, i.e. before product-market fit, does the PM role change?

On a typical day, a PM makes many decisions, weighs tradeoffs, and prioritizes product features while keeping ROI at the center of it all. The ROI framework works very well for a post-PMF product. For a pre-PMF product, if we try to optimize ROI, we will end up building toward a local optimum. Yet we cannot ignore ROI entirely; I have made this mistake a few times. “Build it and they will come” may work, but the question is, “will they just come, or will they also buy?” What do I mean by that? Would I be able to convert the users to paid users? Am I providing value? Do they perceive value in the product that is worth spending money on? My point is that for a product that has not found PMF, while ROI is important, it alone cannot be the metric driving the decisions that help us find product-market fit.

For example, at Akamai we developed a product that can be integrated into streaming apps to enable downloads for offline consumption. It is a great product, with good market fit and positive ROI. However, other factors needed to be weighed: competitive offerings, business model, long-term company strategy, margin, and customer delight. If ROI were the only metric we cared about, we would have pushed the product ahead, but we decided not to pursue it even though we had 10 paying customers and close to one million dollars in annual recurring license revenue in the first year after launch.

So how exactly is a product manager’s role different before PMF? A PM’s main objectives before PMF are to –

  1. Identify a customer problem in a large market, and go solve that problem. This takes building, iterating, learning, and listening to the customer; there is no overnight eureka moment here. The most crucial superpower any successful pre-PMF product team can possess is experimenting, learning from data (qualitative and quantitative), and quickly iterating to reach PMF.
  2. Ensure that the customer is willing to pay for your product.
  3. Be clear on who the user and customer are for your product, sometimes they are the same people and sometimes not.

When you are before PMF, focus obsessively on getting to product/market fit. Do whatever is required to get to product/market fit. Including changing out people, rewriting your product, moving into a different market, telling customers no when you don’t want to, telling customers yes when you don’t want to, raising that fourth round of highly dilutive venture capital — whatever is required. When you get right down to it, you can ignore almost everything else. I’m not suggesting that you do ignore everything else — just that judging from what I’ve seen in successful startups, you can.

Marc Andreessen – https://a16z.com/2017/02/18/12-things-about-product-market-fit/

Let’s unpack that to get a sense of what frameworks, metrics and tools a PM could use to get to PMF.

First, identify a customer problem in a large market. How? Through build-measure-iterate loops: running experiments, being willing to fail, and learning what the market is telling us. What value do we think we are creating, and does the customer see that value as well? This is validating your value hypothesis (I’m paraphrasing Andy Rachleff).

Second, ensure that the customer is going to pay for the product; otherwise it’s a nice hobby but not a business. Again, test-measure-learn which pricing works and which business model works best. Subscriptions? Transactional? Bundling with other services? Giving the product away for free in order to grow another north-star metric?

Lastly, be clear on who the user and the customer are. Are you selling to the end user? Are you giving the product away for free to users but charging your partners? Is your user also the customer (e.g. a Netflix subscriber), or is your user just a user while your customer is a small business (e.g. Eventbrite)? Understanding this and setting the right priorities to delight both the customer and the user in different ways is important; ignore either one at your own peril.

Do Your Thing

“Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma – which is living with the results of other people’s thinking. Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition.” –Steve Jobs

What’s your thing? Not your dad’s thing, not your neighbor’s thing, not your kid’s thing, not your grandma’s thing, not your manager’s thing, but your thing. What is it? My sense is that you can’t put a finger on it; neither can about 7.5 billion others on this planet, so you are not alone. I will take a wild guess and say that if you are asking this question, you are in the 99.99th percentile of the world’s population, one of roughly 0.75 million people who are likely asking themselves this question.

It’s easy to say “just be yourself” or “do your thing,” but it’s not easy to do; otherwise the whole world would have already done it. The problem is not really in the doing but in developing conviction about your thing. For example, I love writing, art, design, coding, and running, a few things I really enjoy and wouldn’t mind spending a big chunk of my day doing, one or all of them. The challenge is two-fold: first, believing that any of these things is going to pay my bills and take care of the family, and second, even if I believe that, figuring out which of these things is my thing. Maybe it’s a new thing that combines two or more of my interests?

Is the thing already within me and not something external that is magically going to appear one fine day? I think it’s latent and my job is to surface it, support it, like protecting a small fire and kindling it until it grows to become a huge fire. It’s a marathon. It’s a process, day in and day out.

Once I figure out what the thing is, the next step is to develop conviction, or faith, in it. How does one develop conviction in something? Isn’t it through experimentation? Can faith develop overnight? Is it a flip of a switch? I don’t think so; faith develops over time, as we see small progress being made each day. For that kind of experimentation to happen, one must be ready to experiment, fail, learn, and try something new the next day. How do you know if you are failing or succeeding? It can’t be decided inside your own mind; your thing has to come in contact with the people you are trying to serve. They have to provide feedback, and that feedback becomes the lifeblood of refining your thing. All of this takes time, practice, discipline, daily hustle, and daily communication with your users.

Someone put it like this: “How much suffering are you willing to take?” Maybe that’s it. How much sacrifice and suffering are you willing to go through to accomplish your thing? Embarrassment, cold-calling people, falling flat on your face, feeling uncertain, feeling fear and yet moving on. What a way to keep it real; that’s why I love entrepreneurs’ journeys. It’s the ups and downs and facing harsh realities, facing one’s self, facing the dark abyss, the feeling of eating glass every day for breakfast, lunch, and dinner.

I digress. So how do you surface “your thing” to the forefront of your day? How do you make it your thing every day and live it, breathe it, and do it? Are there rituals, practices, and tools to help you work on your thing daily? As Stephen Covey put it, “The main thing is to keep the main thing the main thing,” and making your thing the main thing is the main thing.

With Love, A.I: TensorFlow, Keras, PyTorch And A Hodgepodge Of Other Libraries

Hodgepodge of AI Libraries

In the beginning there was FORTRAN, one of the first widely adopted high-level programming languages. Then came Algol, PL/1, Pascal, COBOL, BASIC, C, Lisp, and others; later came JavaScript, Python, PHP, Perl, Ruby, and the more widely adopted object-oriented languages C++ and Java. It took nearly 50 years to go from FORTRAN to Java.

AI research, in contrast, has been in the works for many decades, starting in the 1950s, yet as many if not more AI libraries have been created in the past 10 years as programming languages were created in the previous 50.

Here is a chart of AI libraries and how many people “follow” them on GitHub. Interestingly, one of the newest libraries, TensorFlow, appears to be far more popular than the others. That doesn’t mean it’s the best AI library; in fact, there is no such thing as a general-purpose AI library. My sense is that these are libraries that assist in developing AI applications, and some are better suited to a given application than others, depending on the problem being solved, e.g. computer vision or natural language processing.
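As an aside, here is a minimal sketch of how one could pull such popularity numbers from GitHub’s public API; the repository paths below are assumptions, and star counts are only a rough proxy for “followers”.

```python
# Minimal sketch: fetch GitHub star counts as a rough popularity proxy.
# The repo paths are assumptions; adjust to the projects you care about.
import requests

repos = [
    "tensorflow/tensorflow",
    "scikit-learn/scikit-learn",
    "BVLC/caffe",
    "keras-team/keras",
    "pytorch/pytorch",
]

for repo in repos:
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    print(f"{repo}: {data['stargazers_count']} stars")
```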

Let’s take a quick look at what each of these libraries is suited for.

TensorFlow – provides a collection of workflows to develop and train models using Python, JavaScript, or Swift, and to easily deploy in the cloud, on-prem, in the browser, or on-device no matter what language you use.
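To make the workflow idea concrete, here is a minimal sketch of TensorFlow’s core building blocks, tensors and automatic differentiation; the toy function is made up for illustration.

```python
# Minimal sketch: TensorFlow tensors and automatic differentiation.
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x      # toy function y = x^2 + 2x
dy_dx = tape.gradient(y, x)   # dy/dx = 2x + 2 -> 8.0 at x = 3
print(dy_dx.numpy())
```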

scikit-learn – a Python machine learning library with algorithms for classification, regression, clustering, and data analysis
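As a hedged illustration, here is a tiny scikit-learn regression example on synthetic data; the data and model choice are placeholders, not a recommendation.

```python
# Minimal sketch: fit a linear regression with scikit-learn on toy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data: y = 4x + 1 plus noise
rng = np.random.default_rng(0)
X = rng.random((200, 1))
y = 4.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
print("slope:", model.coef_[0], "intercept:", model.intercept_)
```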

BVLC/Caffe – the Berkeley Vision and Learning Center’s Caffe is a deep learning library for processing images. Caffe can process over 60M images per day with a single NVIDIA K40 GPU

Keras – a deep learning Python library that runs on top of TensorFlow, Theano, or CNTK. It is primarily an experimentation framework that assists in fast experimentation with models.
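For a sense of how quick that experimentation can be, here is a minimal sketch of a Keras model on the built-in MNIST digits dataset; the architecture and epoch count are arbitrary choices for illustration.

```python
# Minimal sketch: a small Keras classifier trained on MNIST digits.
from tensorflow import keras

# MNIST ships with Keras; scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```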

CNTK – Microsoft Cognitive Toolkit is a deep learning library that can be included in Python, C# or C++ code. It describes neural networks as a series of computational steps in a directed graph.

mxnet – A flexible and efficient library for deep learning.

“Deep learning denotes the modern incarnation of neural networks, and it’s the technology behind recent breakthroughs in self-driving cars, machine translation, speech recognition and more. While widespread interest in deep learning took off in 2012, deep learning has become an indispensable tool for countless industries.”

source: https://mxnet.apache.org/versions/master/faq/why_mxnet.html

PyTorch – is an open-source machine learning library for Python, based on Torch, used for applications such as natural language processing.

PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system
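Here is a minimal sketch of those two features in action; the shapes and the toy function are illustrative only.

```python
# Minimal sketch: PyTorch tensors and tape-based autograd.
import torch

# Tensor computation (uses the GPU if one is available)
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(3, 3, device=device)
b = torch.randn(3, 3, device=device)
print(a @ b)          # matrix multiplication, NumPy-style

# Tape-based automatic differentiation
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x    # toy function y = x^2 + 2x
y.backward()          # replay the recorded "tape" backward
print(x.grad)         # dy/dx = 2x + 2 -> tensor(8.)
```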

Theano – is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. 

Caffe2 – aims to provide an easy and straightforward way for you to experiment with deep learning and leverage community contributions of new models and algorithms. 

Torch7 – is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT

Ok great, this is a list of a few of the hundreds of AI libraries, so what? I can google them myself; what’s the point of this blog? I’m as confused as I was before reading it: if I am just starting out in AI, which library should I pick, and where should I start? The short answer is pick any library; you will be better off picking one, running with it, and developing something than not picking any. You have to do it, so you might as well start now rather than later.

Of course, a better answer might be: what do you seek to solve? Are you looking to programmatically recognize people’s faces or cars in a photograph of a busy street? You might want to start with BVLC/Caffe. Here is a good presentation to get you started on Caffe.

If you seek to parse and understand the written or spoken word, maybe PyTorch is the library to start with. Here is an interesting tutorial for creating a chatbot using PyTorch.

Hacking Spirituality

Spirituality is not about religion, and religion is not always about spirituality. “Spiritual” is a ritual of the spirit; without spirit, the spiritual is just that, a “ritual”.

What is this spirit? It has to do with the core of one’s self without all the mumbo jumbo (“extra layers”), i.e. job, relationships, name, skin color, character traits, physical body, and so on. Who is Madhav without the name? He is not the product director, not the runner, not the artist, not the son, not the dad, not the brother. If we take all these “extra layers” away, then who is “Madhav”?

I realize that it’s nothing but the same pure consciousness that pervades the universe, neither distinct nor distant from it. However, when I add all the “extra layers”, it turns into this thing called “Madhav”. What the heck is pure consciousness anyway? I don’t know; perhaps it’s the pristine state of something, the original state before it was modified. I’m coming from the assumption that I’m something that has been modified, and in some ways tainted, by the “extra layers” associated with me, but there must be something, a Madhav, that is untainted and pure, right?

If you are with me up to this point, then let’s talk about hacking spirituality. Spirituality is nothing but getting to, and being in, the state where there are zero “extra layers”. If activities such as meditation, yoga, prayer, or going to the temple are helping you eliminate the “extra layers”, then they are truly spiritual activities; if they are not, then they are absolutely not spiritual.

One could spend an entire life inside the sanctum sanctorum of a temple chanting hymns and yet be far away from spirituality. One of the practical ways to bring oneself closer to pure consciousness is to eliminate as many of these “extra layers” as possible. For practical purposes, it’s hard to remove your name, your body, your job title, and so on; although that may be possible, I am more interested in the path that is a process of refinement over time.

For example, one of the layers that we could eliminate is the negative emotions i.e. anger, greed, lust, arrogance, desire. These are the lower level animal qualities. Reducing these animal qualities could be the purpose of spiritual practice. As we reduce the animal qualities, there will be room made for divine qualities. We cannot pour anything into an already full vessel, can we?

If you are still with me, then you can imagine how to solve a problem that is now well defined: eliminating the lower-level animal qualities. There are many hacks for eliminating each of these qualities; here are some of my thoughts –

ANGER

Anger – use speed bumps (speed breakers) in your head whenever you realize that you are angry

ARROGANCE

Arrogance – develop appreciation for the many good things that have come about in this world that didn’t involve you; obviously there are many good people just like you, otherwise none of these things would be here. Always see the good in others, and try to be humble.

DESIRE

Desire – desire for worldly objects and pleasures; the only way I have found to manage desires (not eliminate them, I’m still working on that) is by not getting into them in the first place. For example, I don’t drink or smoke; it’s easy to have a drink now and then and tell yourself that it’s okay, but it’s even better not to start at all.

GREED

Greed – I want it all, it’s all mine, mine, and mine; it’s okay and important to have money and possessions, but when they are in excess, chances are you are going to misuse them, unless you have the mindset where money is not the reason you do what you do. Money for personal use should be like a shoe: if the shoe is smaller or bigger than your size, it gets very uncomfortable to walk.

ATTACHMENT

Attachment – I don’t know what the antidote is; I’m trying to figure it out. How do you avoid being attached to the things around you: your family, your accomplishments, your possessions, and so on? I realize that this could all disappear any minute, but it’s still quite hard to let go of these things while also living with them. I am not a hermit living under a rock; for god’s sake, I am a blogger and podcaster trying to make a living in an unconventional way, what do you want from me?

With Love, A.I: Radiology

“If AI can recognize disease progression early, then treatments and outcomes will improve.”

Isn’t it fascinating how little we understand about the brain? There is a really good case for applying deep learning here: AI that recognizes subtle patterns and changes in neuron activity can help in the early diagnosis of Alzheimer’s disease. Using Positron Emission Tomography (PET) scans, researchers are able to measure the amount of glucose a brain cell consumes.

A healthy brain cell consumes glucose to function; the more active a cell is, the more glucose it consumes. As a cell deteriorates with disease, the amount of glucose it uses drops and eventually goes to zero. If doctors can detect these patterns of declining glucose consumption sooner, they can administer drugs to help patients recover these cells, which would otherwise die and cause Alzheimer’s.

“One of the difficulties with Alzheimer’s disease is that by the time all the clinical symptoms manifest and we can make a definitive diagnosis, too many neurons have died, making it essentially irreversible.”

JAE HO SOHN, MD, MS
The brain of a person with Alzheimer’s (left) compared with the brain of a person without the disease. Source: https://www.ucsf.edu/news/2019/01/412946/artificial-intelligence-can-detect-alzheimers-disease-brain-scans-six-years

Human radiologists are really good at detecting a focal tumor, but subtle global changes over time are harder to spot with the naked eye. AI is good at analyzing time-series data and identifying micro-patterns.

Another area of research where AI is being applied to improve diagnosis is osteoporosis detection and progression tracking, through bone imaging and comparison of subtle changes across a time series of images.

Stroke management is another area where machine learning has started to assist radiologists and neurologists. For example, here is a picture of how computers are trained on stroke imaging and how that model is then used to predict whether a “new image” has infarctions or not (it’s a yes-or-no answer).

Does this new image have an infarction, yes or no? The machine says yes and color-codes the affected area of the brain in red. Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5647643

Furthermore, the ML model can identify the exact location of the stroke and highlight it for physicians, saving precious time and helping expedite treatment; in stroke care, seconds shaved off can mean the difference between life and death.

The areas in which deep learning can be useful in Radiology are lesion or disease detection, classification, quantification, and segmentation. 

“Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes).” –RSNA


Convolutional Neural Network (CNN) algorithms have become popular for identifying patterns in data automatically, without hand-engineered features, especially in image processing. CNNs were developed on the basis of biological neuron structures. Here is an example of how biological neurons detect edges from visual stimuli, i.e. seeing.

Figure 5a.
Source: RSNA.org

And here is how a similar structure can be built using CNNs.

Figure 5b.
Source: RSNA.org
CNN representation of biological neurons
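As a small illustration of the analogy, a single convolution can act like an edge-detecting neuron. Here is a hedged sketch using a hand-crafted Sobel kernel; in a trained CNN the kernel values would be learned rather than specified by hand.

```python
# Minimal sketch: a hand-crafted convolution acting as an edge detector,
# analogous to the learned filters in a CNN's first layer.
import numpy as np
from scipy.signal import convolve2d

# Toy "image": dark on the left, bright on the right, i.e. one vertical edge
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel kernel that responds to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = convolve2d(image, sobel_x, mode="valid")
print(response)  # large-magnitude values in the columns around the edge
```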

The “deep” in deep learning comes from the fact that there are multiple layers between inputs and outputs, as represented in the simplified diagram below.

Figure 6.

If we apply the above CNN structure to radiology images as inputs, to detect disease or to segment the image, we can produce an output that highlights areas of possible disease and/or an output that says what the image might represent.

Figure 7.
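To ground this, here is a hedged sketch of the kind of small CNN binary classifier described above (disease present: yes or no). The input size, layers, and placeholder data are illustrative assumptions, not a clinical-grade model.

```python
# Minimal sketch: a tiny CNN mapping an image to a yes/no disease prediction.
# Input shape, layers, and data are illustrative placeholders only.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(32, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of "disease present"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for labeled scans; real training data
# would come from an expert-labeled imaging dataset.
images = np.random.rand(32, 128, 128, 1).astype("float32")
labels = np.random.randint(0, 2, size=(32,))
model.fit(images, labels, epochs=1, batch_size=8)
print(model.predict(images[:1]))  # probability that the first scan shows disease
```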

“Many software frameworks are now available for constructing and training multilayer neural networks (including convolutional networks). Frameworks such as Theano, Torch, TensorFlow, CNTK, Caffe, and Keras implement efficient low-level functions from which developers can describe neural network architectures with very few lines of code, allowing them to focus on higher-level architectural issues (36–40).”

“Compared with traditional computer vision and machine learning algorithms, deep learning algorithms are data hungry. One of the main challenges faced by the community is the scarcity of labeled medical imaging datasets. While millions of natural images can be tagged using crowd-sourcing (27), acquiring accurately labeled medical images is complex and expensive. Further, assembling balanced and representative training datasets can be daunting given the wide spectrum of pathologic conditions encountered in clinical practice.”

“The creation of these large databases of labeled medical images and many associated challenges (54) will be fundamental to foster future research in deep learning applied to medical images.” –RSNA

Some 300 applications of deep learning in radiology have been identified; check out the survey here.

Sources:

  • https://pubs.rsna.org/doi/10.1148/rg.2017170077
  • https://www.ucsf.edu/news/2019/01/412946/artificial-intelligence-can-detect-alzheimers-disease-brain-scans-six-years
  • https://www.rheumatoidarthritis.org/ra/diagnosis/imaging/
  • https://pubs.rsna.org/doi/10.1148/radiol.2019181568
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5647643/
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5789692/