Visualize the learning path: an S-shaped curve.
Break the learning journey into smaller pieces.
Get started on the curve with "Hello, World!"
Keep moving on the path with Goldilocks Challenges.
People often use the term "learning curve" to illustrate progress in learning any particular subject or knowledge domain (for example, physics, cooking, accounting).
What does a learning curve look like?
It looks like a curve in the shape of an S.
If you dig deeper into research on learning, you will find many different shapes of learning curves, with the S-curve among them. For our purposes, we will skip all that detail. We want to visualize a learning path simply to illustrate some strategies for laying out a learning journey. The S-curve will serve well for that.
The S-curve has a related use in demonstrating how innovations grow (called diffusion of an innovation). The growth of an innovation is essentially the learning curve of an entire group of individuals ("a market") as they try a new product (or service, process, experience, etc.), learn its benefits, decide whether to continue using it, and recommend it to others to try.
Let's visualize our learning progress in a particular subject or knowledge domain. Starting to learn something new is hard. At the beginning, we have to learn basic definitions, tools, and frameworks. Progress is slow and sometimes boring ("I don't want to memorize technical words! I want to do something!"). We battle procrastination to overcome the inertia of getting going.
As we persist, the process becomes easier. Things start to fit together. We gain understanding. We are taught what kinds of problems can be solved with this new knowledge. We get more confident in our capabilities, and we see they have practical value.
Early success motivates us to continue, and our progress accelerates. We see ourselves doing better than some of our peers. As our knowledge and skills grow, we can even get paid for them, perhaps as a consultant or thought leader.
Eventually, as we become experts in our field, our progress begins to slow down again. We are near the top edge of the knowledge domain. Learning something new is difficult.
The S-curve represents this entire progression. The first part of the S-curve starts almost flat. It takes a while to reach a point where the curve turns upward. The steep middle part of the curve illustrates our progress as our learning accelerates. Then, we reach the full extent of the knowledge in our selected domain, and our progress slows, rounding out the top of the S.
Like any journey, we plan our learning journey by breaking it into smaller pieces. I often emphasize the benefits of becoming an explorer instead of a tourist. This does not mean I will never be a tourist. If I am visiting a new place for the first time, following the recommendations of a tour guide or the suggestions of those who have been there before me makes perfect sense. Learning a new knowledge domain is the same. We don't know what we are supposed to learn when we are at the beginning. Getting advice from an expert in the field makes sense. Later, we can leave the recommended path and explore some new routes of our own.
Suppose you wake up one morning with the inspiration, "I use ChatGPT all the time. Now it is time to stop being beguiled by the magic. I want to understand what goes on under the hood. I want to know how a neural network works."
You do an initial search on the Internet, which generates a lot of results, or you ask your favorite GenAI tool, which gives you a clear learning pathway, with a step-by-step explanation.
Your initial investigations help you break your general topic into several chunks:
A neural network is composed of artificial neurons inspired by the structure of the brain.
Each neuron works like a switch: "on" or "off". When it "fires", it activates the neurons connected to it.
The neural network comprises different layers of neurons, including an input layer, an output layer, and some hidden layers in-between.
The output of a neuron is determined by its input and by the numerical weights and biases assigned to it. The weights and biases are adjusted during several learning loops (you might even catch the word "epoch").
In each loop the output of the entire network is compared to test cases (known, or desired, outputs), and an error value is calculated (the difference between the neural network's output and the test data output). Weights and biases are adjusted to decrease the error and tested in the next learning loop.
There are a whole lot of technical terms to learn: "backpropagation", "gradient descent", "activation function", and the aforementioned "epoch", for example.
Whether you got this list directly from a GenAI app, or you "gleaned" it from scanning several articles from an Internet search, it captures the major chunks pretty well. It does not answer everything. There are a whole bunch of "why?" questions that are notably absent, including "Why do we use a neural network?", but it gives us a general plan for our journey.
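The learning loop described in those chunks can be sketched in a few lines of plain Python. This is only a toy illustration, not anything from a real tutorial: a single artificial neuron with made-up test cases (the logical AND function), adjusting its weights and bias over many epochs to shrink the error.

```python
import math
import random

def sigmoid(z):
    # The "activation function": squashes any input into the range (0, 1),
    # so the neuron behaves like a soft on/off switch.
    return 1.0 / (1.0 + math.exp(-z))

# Test cases: inputs with known, desired outputs (here, logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)  # weights
b = random.uniform(-1, 1)                              # bias
lr = 0.5                                               # learning rate

for epoch in range(5000):  # each pass over the data is one "epoch"
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)  # the neuron's output
        error = out - target                  # difference from the known output
        # Adjust each weight (and the bias) a little to decrease the error.
        w1 -= lr * error * x1
        w2 -= lr * error * x2
        b -= lr * error

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target:", target)
```

The update rule here (nudge each weight in proportion to its input and the error) skips the calculus behind backpropagation, but it shows the rhythm of the loop: compute an output, compare it to the known answer, adjust, repeat.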
We have broken our learning journey into small chunks. It is still hard to get launched.
There might be a burst of excitement before we take that first step. Who hasn't experienced the feeling of euphoria when we buy new school supplies and arrange our pencils in our pencil cases, all stacked upon new notebooks, ready for the first day of class? At that point, anything is possible!
Actual learning, however, invariably runs into immediate barriers. What should we do first? We don't know what we don't know. "Hello, World!" helps us over the first hurdle. "Hello, World!" is the traditional way in hacking culture to launch a learning journey with an immediate win. We are given two or three instructions on how to turn on our machine or to set up a programming environment, followed by an instruction like
print("Hello, World!")
In a few seconds, this simple act yields an immensely satisfying response from the computer. We did something. The computer responded. Success.
When I first did this in BASIC (almost 50 years ago), I immediately set up a goto-loop to run "Hello, World" over and over, forever. I just stared at the screen, savoring my small win. (Alas, both BASIC and unrestricted GOTO statements have largely disappeared in modern programming.)
The "Hello, World" action is simple but powerful for a beginner. It avoids frustrating errors in both syntax and semantics. We are given the correct, error-free code, and if we follow the instructions we achieve a meaningful (yes, simple, but meaningful) outcome. Much later, I learned how to set up Hyperledger to implement a "Hello, World!" blockchain. I ran it. I was just as satisfied as with that first "Hello, World!" running on a TRS-80 microcomputer long ago. Yep, I ran it several times, getting the same result every time. Simple pleasures.
We can find the equivalent of "Hello, World!" in other activities, too. Scuba schools, cooking schools, guitar schools, and pilates all offer introductory first lessons to give people a taste of the learning opportunity, without getting stuck in technical snafus or facing the difficult question, "How do I get started?" Good marketers know it is important to get people to use their product or service as quickly as possible and get a positive sense of accomplishment from that first experience. Great instructors and coaches do the same. "Hello, World!", or its equivalent, does this.
We can do the same if we are setting up our own learning journey. We want to understand the needs of people living in small villages in rural Thailand. We want to dig into the performance metrics of people working in a bank. These learning opportunities don't come with "Getting Started" procedures. Before we get too frustrated, just sit down and think: how can we create a simple learning experience that delivers a quick win? You will get into heavy-duty research later, but first, find an activity that gives you and the people you are working with that win. Get moving up the S-curve. Crafting good "Hello, World!"-like learning experiences is a great skill for explorers. In fact, I do this whenever I launch a new exploration project, starting out with a simple design thinking workshop that allows people to start thinking about their situation in a guided, structured way. A half-day or full-day activity is often enough to get people aligned, make problem areas clearer, and generate the excitement that creates momentum. "Hello, World."
To be clear: "ice breakers" and facilitator games are not "Hello, World!" activities. These common and often necessary training activities "break the ice" and get people comfortable in a meeting or workshop. They don't launch anyone on the path of learning. A "Hello, World!" is an actual step on the learning curve.
Now, let's take a look at our neural network learning journey. What would a "Hello, World!" activity look like? If we use either of our search tools to look for something like "first neural network for a beginner," we will quickly find many helpful, generous people out there in the digital world willing to help us. They will tell us that we can use something called Python, we can make it run in something called a Jupyter Notebook, and if we copy the code they give us, we can run our first neural network. The whole process will take us about 15 minutes. If our first win sparks our curiosity, we can play with our code and see how that changes the results of our neural network. We are learning. "Hello, World of neural networks!"
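What that copied-and-pasted first network looks like varies from tutorial to tutorial. Here is one minimal sketch of the kind of thing those generous strangers hand out, assuming only that NumPy is installed: a tiny network (two inputs, one hidden layer of four neurons, one output) trained by backpropagation on a classic beginner's exercise, the XOR function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function, a classic first exercise.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Input layer (2 neurons) -> hidden layer (4) -> output layer (1).
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    hidden = sigmoid(X @ W1 + b1)
    return hidden, sigmoid(hidden @ W2 + b2)

_, out = forward(X)
initial_error = float(np.mean((out - y) ** 2))

for epoch in range(10000):
    hidden, out = forward(X)  # forward pass through the layers
    # Backpropagation: push the error back through the network
    # and nudge every weight and bias to decrease it.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

_, out = forward(X)
final_error = float(np.mean((out - y) ** 2))
print("error before training:", round(initial_error, 4))
print("error after training: ", round(final_error, 4))
```

Run it, then play: change the number of hidden neurons, the learning rate (the 0.5), or the number of epochs, and watch the final error move. That playing is the learning.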
If "Hello, World!" gets us moving, Goldilocks Challenges keep us going on our learning journey.
Anybody who has worked with me as a student, coach, instructor, innovator, or educator knows of my great esteem for a video about skateboarding created by Dr. Tae. It teaches us how to learn anything. As Dr. Tae says, it's all about creating Goldilocks Challenges: learning challenges that are neither too easy nor too hard. If the challenge is too easy, we learn nothing new. If the challenge is too hard, it can be so demoralizing that we give up.
Wherever you are along your learning journey, seek a Goldilocks Challenge for your next learning goal: something just beyond your current ability, maybe uncomfortably so, but not so difficult that it will break you. To travel up the S-Curve, set up a series of Goldilocks Challenges out in front of you. Succeed in one and then devise the next one. As your capabilities and confidence grow, the increment of each Goldilocks Challenge increases, and you accelerate up the curve.
A good coach or instructor develops the ability to design good Goldilocks Challenges. Of course, each person learns at different speeds, and a course designed for multiple learners will not have ideal Goldilocks Challenges for each person. Some course activities will be too easy for some (and give them a false sense of success) and too hard for others (giving them an unfair sense of despair). This gives us two good reasons to develop our own Goldilocks Challenges. We set our own learning path by designing "Hello, World!" activities and Goldilocks Challenges. Fortunately, our digital world, and the help of numerous collaborators in almost every knowledge area, make this increasingly easy to do.
Back to our example: we are learning about neural networks. We need to set a Goldilocks Challenge for ourselves. That first "Hello, World!" example was fun. It showed us a little of how a neural network works, but it didn't help us learn about the benefits of neural networks. It turned out--we learned a little later, as we began to get comfortable with Python--that the data we used in our "Hello, World!" network was just randomly generated, not real-world data. Even in that simple experience, there are several takeaways:
We learned that Python has "libraries" you can import so you don't have to do everything yourself.
We learned that one of the libraries has a whole bunch of mathematical functions for generating random numbers and other random outcomes.
We learned how those random functions were used to create "dummy" data, and from that our minds might start to wander as we think of other things we might build with them--like an adventure game played with a 10-sided die.
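Those takeaways can be replayed in a couple of lines. The library (NumPy) matches what most beginner tutorials use; the dice roll is just the daydream made concrete:

```python
import numpy as np

rng = np.random.default_rng()

# "Dummy" data, like our Hello-World network used: random numbers
# standing in for real measurements.
X_dummy = rng.random((100, 3))  # 100 fake samples with 3 features each

# ...and the daydream: the same random machinery rolls a 10-sided die.
d10_roll = int(rng.integers(1, 11))  # uniform integer from 1 to 10

print("dummy data shape:", X_dummy.shape)
print("d10 roll:", d10_roll)
```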
We can set a first Goldilocks Challenge: build a neural network using real (not randomly generated) data from somebody's data science project, so we can see how a neural network is used to make real-world decisions. And, already, we can imagine another Goldilocks Challenge beyond that: build a neural network using our own data to solve a problem that we are directly working with.
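Here is a sketch of what that first Goldilocks Challenge might look like in practice, assuming scikit-learn is installed (it bundles small real-world datasets, such as the classic measurements of 150 iris flowers): train a small neural network on real data and check how well it classifies flowers it has never seen.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Real (not randomly generated) data: 150 iris flowers, 4 measurements each.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A small neural network: one hidden layer of 10 neurons.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X_train, y_train)

accuracy = net.score(X_test, y_test)
print("accuracy on flowers the network never saw:", round(accuracy, 2))
```

The point is not the particular dataset; it is that the network now makes a real-world decision (which species is this flower?) and we can measure how well it does.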
We are moving along our learning path--let's keep going!