Post-BCC Thoughts on AI and Entrepreneurship
Preface
Before this weekend’s panel discussion about AI and entrepreneurship at the Black Career Conference hosted by the Black Founders Network and Black Rotman Commerce, I joked we could each speak for hours about the 10 questions on the menu for our 50-minute time slot.
Thank you, Faizah, Tanika, Will and Didier for the conversation!
Here are a few thoughts I shared and a few more I’d have liked to.
My venture and AI
I teach, I code, I teach coders and I code for teachers. I started Edventive, the Teacher Productivity Platform, to help teachers manage their overwhelming administrative workloads and reclaim their evenings, weekends and lives.
We’re expanding on the tools I built for myself while teaching math, science, computer science and special education at the middle and high school levels.
As much as I sprinkle AI into the software I build, my instinct isn’t to reach for AI first to solve every problem. I'll explain why later, but I'll caveat that I use AI extensively to help with my knowledge work and business.
The following tweet I saw yesterday perfectly encapsulates why I started co-working with AI early last January.
Source: https://x.com/emollick/status/1880325105668223003
My aha moment(s)
I started teaching myself how to code in 2017. When I code, I think of a desired outcome to a problem and a series of steps I need to take to go from input to my desired output, deterministically.
For better or for worse, tasks like marking short-answer questions require a bit of “intelligence” to automate because of the near-infinite permutations of possible student answers.
This wasn’t feasible before AI APIs, but it was a big part of the feedback I received when demoing an earlier version of our platform to a group of American superintendents last year.
The aha moment for me was when I realized I could inject “intelligence” into the stepwise procedure from input to output.
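To make this concrete, here’s a minimal sketch of what injecting one “intelligent” step into an otherwise deterministic pipeline can look like, assuming the Vercel AI SDK and an OpenAI model; the function name, rubric format and score scale are illustrative, not Edventive’s actual implementation:

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// One "intelligent" step inside an otherwise deterministic marking pipeline.
// The surrounding code still controls the inputs, the outputs and what happens next.
export async function markShortAnswer(question: string, rubric: string, studentAnswer: string) {
  const { object } = await generateObject({
    model: openai('gpt-4o-mini'), // placeholder; any capable model works here
    // Constraining the output to a schema keeps the rest of the pipeline predictable.
    schema: z.object({
      score: z.number().min(0).max(5),
      feedback: z.string(),
    }),
    prompt:
      `Mark this short answer out of 5 using the rubric.\n` +
      `Question: ${question}\nRubric: ${rubric}\nStudent answer: ${studentAnswer}`,
  });
  return object; // { score, feedback } flows back into regular, deterministic code
}
```

The LLM call is the only non-deterministic step; everything before and after it is ordinary code you fully control.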
I’ve been blown away many times since. Here’s a tweet I saw today that stopped me in my tracks.
I implore you to go read the whole post: https://twitter.com/DeryaTR_/status/1880742025428910424
My framework for evaluating AI business impact
Since these APIs have only recently proliferated, there aren’t many experts in the traditional sense. There are, however, people who’ve experimented more than most. We’re still figuring it all out, daily.
As a result, I can’t say I have a set framework for evaluating how I can use AI professionally. The answer I gave Yusuf at eCampus Ontario’s Technology and Education Seminar and Showcase conference back in November still holds:
Don’t get caught up in the hype, think about your current challenges instead. Think about the things you’d like to do, but can’t. The things you can't currently do because there’s not enough time, not enough staff or not enough money. Then, find or build tools or systems to bridge the gap between your ideal outcomes and where you are now.
As you get more comfortable with these tools, you’ll realize they have real - and frustrating - limitations.
Go-to AI tools
I experiment a lot with new tools. The space is moving quickly, so I feel like we have to. I keep an updated list of the tools I use for work right here.
Product differentiation
Now that AI is table stakes, here are three ways I think we can differentiate our products:
- Mix and match models to accomplish different tasks within your software. You can optimize for quality of results, speed of delivery and cost. The latest frontier models have all passed my threshold for “task-based intelligence”, so I’d suggest picking the model you vibe with the most for a given task. I use the Vercel AI SDK’s playground to compare models; there’s a rough sketch of this kind of per-task routing after this list.
- Treat time to value, or time to aha, as a core metric to improve. How quickly can you wow a user who has a goal to accomplish? As much as we need to care about the technical details, your customers likely don’t care whether you used Claude 3.5 Haiku instead of GPT-4o or Gemini 2.0 Flash. Why? Because all frontier models pass the initial vibe check when prompted well.
- Design your software tastefully. Can you design your software as an experience? Can you remove the small frictions that get in your users’ way as they work? Can you make your software a joy rather than a pain to use? Are you making users wait, or do things happen quickly?
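To illustrate the mix-and-match point, here’s a rough sketch of per-task model routing with the Vercel AI SDK; the task names and model pairings are assumptions for illustration, not recommendations:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Hypothetical routing table: cheaper, faster models for simple tasks,
// stronger models where quality matters most.
const models = {
  classify: google('gemini-2.0-flash'),                // speed and cost
  draftFeedback: anthropic('claude-3-5-haiku-latest'), // tone and speed
  gradeEssay: openai('gpt-4o'),                        // quality
};

export async function runTask(task: keyof typeof models, prompt: string) {
  const { text } = await generateText({ model: models[task], prompt });
  return text;
}
```

Swapping the model behind a given task then becomes a one-line change, which makes it cheap to re-evaluate your choices as new models ship.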
Key learnings integrating AI
As I will keep mentioning, a lot of what I build comes from extensive experimentation. I don’t always know what’s possible, so I have to try things out to learn. With AI prototyping services like v0, Bolt, Create and Lovable, it has never been easier to turn an idea into an MVP. Developers can then take it a step further with AI-powered IDEs like Cursor and Windsurf.
Common pitfalls
Because of how quickly LLMs can generate code, I find organization is key in preventing information overload. If you’re building a React app, I’d suggest sticking to a codebase architecture framework like Bulletproof React. The predictability of the framework will help you steer the LLM more smoothly, and will help you catch it when it goes astray. Compartmentalization and minimizing cognitive load are key.
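As a rough illustration, and in the spirit of Bulletproof React’s feature-based layout (the folder and file names here are made up), each feature exposes a small, typed public surface so neither you nor the LLM has to hold the whole codebase in your head at once:

```ts
// src/features/marking/index.ts (hypothetical): the feature's only public surface.
// Everything else under src/features/marking/ stays private, which keeps the
// amount of context you, or the LLM, must juggle small.

export type MarkingResult = {
  score: number;    // e.g. 0-5, matching a rubric
  feedback: string; // short comment shown to the student
};

// A thin, typed wrapper; the prompts and model calls live inside the feature,
// out of sight of the rest of the app.
export async function markShortAnswer(
  question: string,
  studentAnswer: string
): Promise<MarkingResult> {
  // ...call the LLM here and map its output into MarkingResult...
  return { score: 0, feedback: 'stub' };
}
```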
The biggest takeaway from my month-long experiment of having LLMs do 95% of my coding is that clarity of thought and clarity of organization are non-negotiable when integrating AI into your product development.
When I teach coding, I like to tell my students that a computer is like the perfect employee: it will do exactly what you tell it to, exactly as you’ve written it. Much to their chagrin, this means having to meticulously review their assumptions and code when their intentions and outputs inevitably diverge.
Unlike traditional code, LLM outputs are not inherently deterministic: you can get 100 different results from the same prompt if you run it back to back. This trips people up, and it can trip up your product if you expect consistency. That’s where evals come into play, but that’s a separate topic.
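To see what that looks like in practice, here’s a toy consistency check, the simplest possible “eval” and nothing like a production one, again assuming the Vercel AI SDK and a placeholder model:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Toy consistency check: run the same prompt several times and measure how often
// the output matches what we expect. Real evals are far more involved than this.
export async function consistencyCheck(prompt: string, expected: string, runs = 10) {
  let matches = 0;
  for (let i = 0; i < runs; i++) {
    const { text } = await generateText({ model: openai('gpt-4o-mini'), prompt });
    if (text.trim() === expected) matches++;
  }
  return matches / runs; // 1.0 means perfectly consistent for this one case
}
```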
I’ve learned that unclear communication leads to a loop of ever-increasing aggravation. When you fail to provide sufficiently clear instructions, the LLM will fill in the blanks of your thinking. That’s why domain experts can get more out of these tools than novices.
Data management and security
Although many providers claim not to train on your data, sometimes only if users explicitly opt out, I err on the side of caution when it comes to sensitive information. I’d suggest factoring companies’ data retention and training policies into your evaluation of possible AI-based solutions.
If you’re managing or processing sensitive information, you’ll likely need enterprise API plans, which might be out of reach for startups in the earliest stages.
Exciting emerging AI capabilities
Smaller, faster and more performant models mean that we can have intelligence on tap for less. Hopefully this spurs more innovation as access is further democratized.
I find projects like Ollama and Fullmoon very exciting, because they allow you to run LLMs on your own devices, meaning your information never leaves your computer. Think of what this means for industries like healthcare where protecting patient data is paramount.
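For example, here’s roughly what talking to a local model through Ollama’s HTTP API looks like; the model name and prompt are placeholders, and it assumes Ollama is already running on your machine with that model pulled:

```ts
// Talking to a locally running Ollama server: nothing leaves the machine.
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2', // placeholder: any locally pulled model
    prompt: 'Summarize this patient note in two sentences: ...',
    stream: false,
  }),
});
const { response } = await res.json();
console.log(response);
```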
As models get smaller and consumer hardware gets more powerful, I wonder how soon we’ll be able to run smarter models in people’s browsers through things like Google Chrome’s built-in AI and Transformers.js.
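Transformers.js already makes a version of this possible today. Here’s a minimal sketch assuming the @xenova/transformers package and its default sentiment model, which is downloaded once and then runs entirely on the user’s device:

```ts
import { pipeline } from '@xenova/transformers';

// Loads a small model in the browser (or Node) and runs inference locally.
const classifier = await pipeline('sentiment-analysis');
const result = await classifier('Edventive gave me my weekends back!');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99 }]
```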
Imagine a more customizable implementation of Apple Intelligence’s integration with ChatGPT where users’ data is local by default, but with opt-in cloud-based extensibility for tasks that require more processing power.
Advice for entrepreneurs
- I think it’s a great time to be mildly technical with ideas. If you can’t code yet, please go learn online for free like I did.
- Start experimenting ASAP. Figure out what works, what doesn’t and how you can improve on existing systems in real time like everyone else.
- Stop building chatbots. Most apps don’t need chatbots. Explore new interfaces.
Bonus: Environmental impact of AI
Supposedly, streaming video and eating burgers are worse for the planet than individual AI use. This was an interesting read by Andy Masley on the topic: https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for
Feel free to ask me anything or get in touch for more hands-on guidance about productivity and AI tooling.