• Take the standard image generation workflow in tools like DALL-E or Midjourney. You type a prompt, wait for a result, and if you don’t like it, you start over. It’s a linear, one-shot process that treats design like a slot machine: Input prompt, receive image, and see how lucky you get. But that’s not how design works. Real design is messy, iterative, and non-linear. We explore multiple directions simultaneously. We combine elements from different attempts. We refine and adjust constantly. Most importantly, we need to see our options side by side, comparing and contrasting until we find the right direction.

AI tools will be defined by compare and contrast: extrapolating from what we already know and applying our insights in a leveraged way.
  • Flora is the closest thing I’ve seen to a tool that emulates this process: It lets me explore multiple iterations simultaneously. Having the freedom to sketch, experiment, and pivot rapidly feels liberating. I can swiftly iterate and iterate again, nudging the results closer to my vision without losing momentum.

    The same principle applies to any tool that uses speculative decoding — Cursor, for instance, enables the same kind of rapid iteration by generating multiple plausible continuations simultaneously. The pattern generalizes: creative tools benefit from speculative output that lets users compare and select rather than generate and restart.
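A minimal sketch of that compare-and-select pattern: generate several candidates in parallel, then let the user pick, rather than regenerating one at a time. The functions `generate_candidates` and `fake_generate` below are hypothetical stand-ins, not any tool's actual API.

```python
import concurrent.futures

def generate_candidates(generate, prompt, n=4):
    """Run a generator n times in parallel and return all results,
    so the user can compare options side by side instead of
    restarting after each miss."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        # Vary the seed so each candidate explores a different direction.
        futures = [pool.submit(generate, prompt, seed=i) for i in range(n)]
        return [f.result() for f in futures]

def fake_generate(prompt, seed):
    # Stand-in for a real model call; seeded so candidates differ.
    return f"{prompt} (variation {seed})"

candidates = generate_candidates(fake_generate, "logo sketch", n=3)
```

The point is the shape of the workflow, not the implementation: the user's decision moves from "regenerate or accept" to "which of these directions is closest."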
  • The secret sauce is in how closely the Flora team worked with actual creative professionals to understand their workflow. They collaborated with New York University’s Tisch School of the Arts, design firm Pentagram, and other major creative institutions and agencies across the U.S. They’re building it for real creative workflows because they’re embedded in those communities. You can feel that closeness to the end user when you’re using the product.

  • As a designer, I need the ability to:
    • Make hyper-targeted adjustments to specific elements within generated images, like zooming into the pixel level in Photoshop to refine it manually.
    • Not just apply design principles like alignment, hierarchy, and spacing to AI outputs, but also infuse my own taste within those parameters.

    This is a prompting challenge that should be solved agentically; there is real work in identifying, alongside creatives, what those parameters are.
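One way those parameters might be made explicit, sketched below under assumptions: design principles become named, structured constraints that an agent and a creative can iterate on together, instead of being buried in free-form prompt text. The parameter names, values, and `build_prompt` helper are all hypothetical illustrations.

```python
# Hypothetical encoding of design principles as structured constraints.
design_params = {
    "alignment": "left-aligned grid",
    "hierarchy": "headline dominant, caption secondary",
    "spacing": "generous whitespace, 8px baseline",
}

def build_prompt(base, params):
    """Append each named design constraint to a base prompt so the
    constraints stay inspectable and individually adjustable."""
    constraints = "; ".join(f"{k}: {v}" for k, v in params.items())
    return f"{base}. Constraints: {constraints}"

prompt = build_prompt("poster for a jazz festival", design_params)
```

Because each constraint is a named key, an agent can adjust one dimension (say, spacing) without disturbing the others, which is exactly the kind of hyper-targeted refinement described above.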
  • This is a complete flip-flop from the consensus just a few years ago. Back then, investors and tech people wouldn’t take you seriously if you were “just building a wrapper” around existing AI models. But look at AI Drive (formerly PDF.ai): The team has been hugely successful doing one thing—letting you interact with your PDF files in new ways—really well. They identified a need and addressed it, without trying to be all things to all people.