
Hyperflow Canvas (2023)
Intro
Hyperflow Canvas was a speculative exploration into integrating spatial canvas features on top of Hyperflow Library, allowing users to arrange and directly manipulate media and information in their workspace, as well as import new items onto the canvas.
At the time (and mostly still today), most whiteboarding tools shared a few major drawbacks: they tend to become chaotic and unwieldy over time; the layout of items on the canvas is rigid and hard to rearrange dynamically; finding and navigating to items on the canvas is limited (which, among other things, leads to unnecessary asset duplication); and they don’t support a wide range of media types.
So the goal was to investigate how we could design a ‘best of all worlds’ approach fusing the structured organization of Library with the freeform nature of Canvas, with the eventual aim of supporting creative workflows from the ideation stage all the way through to presentation.
As with Library, we explored multiple use cases and user flows in Figma to validate demand. The main ones are expanded on below, and in this demo video (also 100% Figma).
Adding Items to Canvas
Media
In addition to drag and drop, we prototyped several user flows for importing media onto the canvas, with special attention paid to helping users reuse and repurpose assets already in their Library (while maintaining bidirectional links between the original asset and its instances on the canvases).
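Since this was a Figma-only prototype, no real implementation exists; the following is a minimal sketch (all names hypothetical) of how the bidirectional links between a Library asset and its canvas instances might be modeled, so that either side can resolve the other:

```python
from dataclasses import dataclass, field

# Hypothetical data model: one Library asset can appear as many
# canvas instances, and each instance keeps a back-link to its source.

@dataclass
class LibraryAsset:
    asset_id: str
    instance_ids: list = field(default_factory=list)  # forward links to canvas copies

@dataclass
class CanvasInstance:
    instance_id: str
    canvas_id: str
    source_asset_id: str  # back-link to the Library original

class LinkRegistry:
    def __init__(self):
        self.assets = {}
        self.instances = {}

    def add_asset(self, asset_id):
        self.assets[asset_id] = LibraryAsset(asset_id)

    def place_on_canvas(self, asset_id, canvas_id, instance_id):
        # Placing an asset records the link in both directions.
        inst = CanvasInstance(instance_id, canvas_id, asset_id)
        self.instances[instance_id] = inst
        self.assets[asset_id].instance_ids.append(instance_id)
        return inst

    def instances_of(self, asset_id):
        # Resolve every canvas copy of a Library original.
        return [self.instances[i] for i in self.assets[asset_id].instance_ids]

reg = LinkRegistry()
reg.add_asset("asset-1")
reg.place_on_canvas("asset-1", "canvas-A", "inst-1")
reg.place_on_canvas("asset-1", "canvas-B", "inst-2")
# both instances now resolve back to the same Library original
```

With links stored in both directions, an edit to the original can notify every canvas copy, and any copy can navigate back to its Library source.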




Collections
Users could also embed Collections on the canvas, and drag items from an embedded Collection onto the canvas.





Web Links and Spatial Browsing
Another direction we went deep on was allowing users to embed live webpages onto the canvas, and manually drag and drop items from a webpage onto the canvas ...



... bulk export all the media from a webpage onto the canvas ...


... as well as open anchor links in a new parallel block, allowing users to see all their open tabs in a single view and display their browsing history in a more visual format.
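As a rough illustration of the bulk-export idea (not the actual implementation, which only existed as a Figma prototype), a page's HTML could be scanned for media elements so their sources can be imported onto the canvas in one step:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Illustrative sketch: collect the source URLs of every media element
# on a page, resolving relative paths against the page's base URL.

class MediaCollector(HTMLParser):
    MEDIA_TAGS = {"img", "video", "audio", "source"}

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.media_urls = []

    def handle_starttag(self, tag, attrs):
        if tag in self.MEDIA_TAGS:
            src = dict(attrs).get("src")
            if src:
                self.media_urls.append(urljoin(self.base_url, src))

html = '<img src="/a.png"><p>text</p><video src="clip.mp4"></video>'
collector = MediaCollector("https://example.com/page/")
collector.feed(html)
# collector.media_urls now holds an absolute URL for each media element
```

Each collected URL would then be fetched and placed on the canvas as its own item, preserving a link back to the source page.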

Manipulation, Transformation, and Visualization
One of the main motivations for exploring a canvas-based UI was to be able to support more creative workflow tasks beyond the ‘capture, organize, annotate, search’ use cases that were covered in Library.
We focused on the early stages of creative workflows (research, ideation, conception, development, preproduction, etc.) and specific workflow tasks like collecting research materials and creating moodboards, shot lists, and other creative artifacts.
Currently, many of these tasks are performed across multiple standalone apps (which usually requires a lot of repetitive importing/exporting and context switching), so the idea was to bring some of the capabilities of those apps into Hyperflow and allow users to directly manipulate and transform content without leaving the canvas.
Examples shown below: extracting color palettes from images; separating and extracting audio from videos; extracting frames or clips from videos (individually or in bulk); extracting transcripts from audio; extracting text from images; and extracting pages from PDFs.









Alignment and Arrangement
As mentioned above, one of the problems inherent in canvas apps is that projects tend to become messy and chaotic as they grow, and although items on the canvas aren’t fixed in place, it’s almost impossible to rearrange them quickly (which is useful for reorienting and recontextualizing projects and ideas as they evolve).
So we spent a lot of time thinking about ways to make the canvas more dynamic and allow users to easily cluster, sort, group, and reorganize items, starting with the simple alignment capabilities found in most canvas apps …



... and then developing a more advanced 'Smart Arrange' feature that extended the concept of database views to a canvas UI, allowing users to reorganize items into several different configurations based on properties like creation date or media type ...




... or content-based features like dominant color.
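The core mechanic behind Smart Arrange can be sketched as a grouping operation (a hedged illustration with made-up item fields, since the feature only existed in Figma): pick a property, and each distinct value becomes one cluster on the canvas, much like switching views in a database.

```python
from itertools import groupby

# Hypothetical canvas items; 'type' and 'created' stand in for the
# properties Smart Arrange could key on (media type, creation date, etc.).
items = [
    {"name": "ref1.jpg",  "type": "image", "created": "2023-04"},
    {"name": "clip.mp4",  "type": "video", "created": "2023-03"},
    {"name": "ref2.png",  "type": "image", "created": "2023-03"},
    {"name": "notes.pdf", "type": "pdf",   "created": "2023-04"},
]

def smart_arrange(items, key):
    """Cluster items by a property; each cluster becomes one group on the canvas."""
    ordered = sorted(items, key=lambda it: it[key])  # groupby needs sorted input
    return {k: list(g) for k, g in groupby(ordered, key=lambda it: it[key])}

by_type = smart_arrange(items, "type")   # clusters: image, pdf, video
by_date = smart_arrange(items, "created")  # clusters: 2023-03, 2023-04
```

Content-based keys like dominant color work the same way, except the key is derived from the item's pixels rather than read from its metadata; the layout engine then positions each cluster as a row or grid.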

