Designing for curiosity

Chat2Learn is an AI-native mobile app that helps parents spark curiosity and language development in their preschoolers through short, voice-first conversations.

Outcome

A 700-family RCT had already validated Chat2Learn as an SMS product, but text was capping its impact. I led the design of a voice-first mobile app that became the centerpiece of the BIP Lab's MIT Solve application, won the Gates Foundation-backed prize, and shipped on iOS and Android in February 2025. Pilot families in Chicago use it today, with a Peru rollout pending.

[Image: example Chat2Learn SMS prompt]

Problem

The SMS version of Chat2Learn worked. A randomized controlled trial with 700+ families confirmed that daily prompts helped parents start open-ended conversations with their preschoolers. But SMS hit a ceiling. It couldn't track streaks or sessions, support voice, save memories, or give families a reason to come back tomorrow.

So the real design problem wasn't "build an app." It was:

Figuring out what parents would actually open during dinner, on the couch, or in the five quiet minutes before bed.

Research

Working remotely with the BIP Lab team (all parents themselves), I built on the lab's existing 700-family dataset. One finding reframed the whole product. Parents don't have long, focused sessions with their kids. They have short, natural windows. Dinner. Couch. Car. That made voice the default interaction, because typing while multitasking is friction you can't afford.

I studied Duolingo and Headspace for how color, type, and gentle motion can feel welcoming, and WhatsApp for interaction familiarity, since most families in the cohort were already fluent in messaging. We skipped formal usability testing in favor of twice-weekly walkthroughs with the team, reviewing flows live and adjusting fast.

Decisions

The hardest question was what to put at the center of the app. I explored child profile toggles on the home screen, a hands-free voice mode, ephemeral versus saved conversations, and whether to lead with saved memories or a daily prompt.

We landed on a "Question of the Day" as the anchor. It gave families a consistent starting point, removed decision fatigue, and gave the BIP Lab a clean independent variable for research, since every family answered the same question. Child profile toggles moved to Settings once we learned few families had more than one child in the target age range. The home screen went through three major revisions before the friction felt gone, and a hands-free voice mode was added after watching how parents actually used the app while multitasking.
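To make that research constraint concrete, here is a hypothetical sketch of how one shared prompt per day keeps the variable clean. The record shape, field names, and questionFor helper are my illustrative assumptions, not the shipped schema.

```ts
// Hypothetical data shape: one prompt per calendar day, shared by every family.
type QuestionOfTheDay = {
  date: string;       // ISO date, e.g. "2025-02-14"; keyed by day, not by family
  questionId: string; // stable ID the lab can join against session data
  text: string;       // the open-ended prompt every household sees
};

// Because every family on a given date resolves to the same record, variation
// in responses is attributable to the family, not to differing prompts.
function questionFor(date: string, schedule: QuestionOfTheDay[]): QuestionOfTheDay | undefined {
  return schedule.find(q => q.date === date);
}
```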

Final Design

The visual direction was bright, playful, and welcoming. I designed small sprite versions of the Chatty logo for the splash screen, arranged to resemble children curling up around a "mother" character, a quiet emotional cue of calm curiosity. Typography moved from Karla to DM Sans after testing at mobile sizes, where DM Sans held up better at small sizes and heavier weights.

The core flow became: open the app, see the Question of the Day, tap to begin, speak or type a response, and optionally save the moment as a memory. Voice was the default, with a hold-to-speak button, and the text-mode fallback was modeled closely on messaging apps to reduce cognitive load.
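A minimal sketch of that hold-to-speak default in React Native, assuming stubbed startRecording / stopRecording functions in place of the app's real audio layer:

```tsx
import React, { useState } from 'react';
import { Pressable, Text, StyleSheet } from 'react-native';

// Stub recorder. A real implementation would capture microphone audio and
// return a transcript; here we just report how long the button was held.
let heldSince = 0;
function startRecording(): void {
  heldSince = Date.now();
}
async function stopRecording(): Promise<string> {
  return `(${Math.round((Date.now() - heldSince) / 1000)}s of speech captured)`;
}

export function HoldToSpeakButton({ onTranscript }: { onTranscript: (text: string) => void }) {
  const [recording, setRecording] = useState(false);

  return (
    <Pressable
      style={[styles.button, recording && styles.active]}
      onPressIn={() => {
        // Press and hold starts capture -- no mode switch, no extra tap.
        setRecording(true);
        startRecording();
      }}
      onPressOut={async () => {
        // Release ends capture and hands the transcript to the conversation.
        setRecording(false);
        onTranscript(await stopRecording());
      }}
    >
      <Text style={styles.label}>{recording ? 'Listening…' : 'Hold to speak'}</Text>
    </Pressable>
  );
}

const styles = StyleSheet.create({
  button: { padding: 20, borderRadius: 40, alignItems: 'center', backgroundColor: '#4C7EF3' },
  active: { backgroundColor: '#2F5BD6' },
  label: { color: '#fff', fontSize: 18 },
});
```

The point of the sketch is the interaction contract: press-in starts listening, release submits, and there is never a separate record/stop decision to make while multitasking.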

Role & Delivery

I owned product design end-to-end. Research synthesis, interaction model, prototyping, and visual direction were mine, with the BIP Lab as the client and feedback loop. I presented the work in person to Ariel Kalil, the lab's director, in Chicago.

After design sign-off, I moved into a lead frontend role and built the React Native app with a junior developer reporting to me. The rest of the MJV team handled core APIs, including CRUD endpoints and chat initialization. The shipped app holds the design intent of the prototype because I carried it across the handoff myself.

Reflection

The hardest problem was the core interaction itself. How a parent talks, how a child responds, how the app captures both without making the moment feel instrumented. We got it close, but not all the way. If I revisited this, I'd kill the Memories view and pour that time into loading and waiting states, since AI responses could run 4 to 6 seconds and that's where the magic either holds or breaks.
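For illustration, the kind of waiting state I have in mind, with hypothetical copy: rotate a few gentle "thinking" lines so the pause reads as Chatty considering the question rather than the app stalling.

```tsx
import React, { useEffect, useState } from 'react';
import { ActivityIndicator, Text, View, StyleSheet } from 'react-native';

// Hypothetical copy; the real lines would come from content design.
const THINKING_LINES = ['Chatty is thinking…', 'Hmm, good question…', 'Almost there…'];

export function ThinkingState() {
  const [line, setLine] = useState(0);

  useEffect(() => {
    // Advance every 2s, so a 4-6s wait shows two or three distinct beats.
    const id = setInterval(() => setLine(i => (i + 1) % THINKING_LINES.length), 2000);
    return () => clearInterval(id);
  }, []);

  return (
    <View style={styles.row}>
      <ActivityIndicator />
      <Text style={styles.text}>{THINKING_LINES[line]}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  row: { flexDirection: 'row', alignItems: 'center', padding: 12 },
  text: { marginLeft: 8, fontSize: 16, color: '#555' },
});
```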

The bigger lesson was about scope discipline. On a 3-month timeline with a research client, the instinct is to give them everything. The work that shipped was the work that survived cuts, and I should have made those cuts earlier and more aggressively.