QR codes and the adjacent possible
The decade the camera becomes the platform.
Quick Response (QR) codes have been around for a while, even featuring on the slope of enlightenment in the Gartner Hype Cycle back in 2011. They’ve actually been in existence since 1994, when Denso Wave developed them for the Japanese automotive industry.
Up until recently though (in the west at least) they’ve been pretty superfluous, right? I remember clumsily thumbing my way to a third-party camera app on iOS after being presented with a QR code in a printed manual for a kitchen appliance or something. Scanning the QR code opened up a web page that linked to a PDF version of the exact same manual. Useful!
Though they’ve always been filled with promise, it feels like it’s taken a global pandemic for them to actually be allowed to provide some sort of widespread utility.
They’re the perfect tech endpoint for the socially distanced era: quickly transferring data from a physical thing in the built environment to your phone.
One of the executions that successfully fumbled its way to prominence has been the NHS Test and Trace process in the UK. You scan a venue’s QR code, more often than not plastered on an entrance window. It opens the app and nudges you to ‘check in’. Using that data, the service can then send you exposure alerts for places you’ve visited.
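The underlying data flow is simple enough to sketch. Here’s a hypothetical, much-simplified version in Python: the venue IDs, timestamps, and 24-hour matching window are all illustrative assumptions, not how the NHS service actually works.

```python
from datetime import datetime, timedelta

# A hypothetical check-in log of (user, venue_id, time) tuples.
# Venue IDs and the exposure window below are illustrative only.
check_ins = [
    ("alice", "venue-cafe-01", datetime(2021, 3, 1, 12, 0)),
    ("bob",   "venue-cafe-01", datetime(2021, 3, 1, 12, 30)),
    ("carol", "venue-gym-07",  datetime(2021, 3, 1, 18, 0)),
]

def exposure_alerts(venue_id, reported_at, window=timedelta(hours=24)):
    """Return users who checked in at the venue within the window."""
    return [
        user for user, venue, when in check_ins
        if venue == venue_id and abs(when - reported_at) <= window
    ]

# A positive case reported at the cafe catches both visitors.
alerts = exposure_alerts("venue-cafe-01", datetime(2021, 3, 1, 13, 0))
print(alerts)  # → ['alice', 'bob']
```

The point isn’t the matching logic, which is trivial; it’s that the QR code collapses the whole data-entry step to a single camera gesture.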
The interesting thing is that national adoption of track and trace wouldn’t have been so (subjectively) successful had other advancements not paved the way.
When I notice these things, I’m always drawn back to Steven Johnson’s concept of the ‘adjacent possible’. Innovation tends to happen adjacent to the things that are already available. Better network connections laid the path for streaming services. More efficient silicon chips meant processing power could be put into almost anything.
In this case, mobile camera enhancements at the software layer afforded us new ways of reducing the friction between the physical, built environment and the networked digital layer. You no longer need a separate app to read a QR code. The camera itself becomes the bridge to the information it contains.
This is a shift from a visual lens that’s passive and reactive (mirroring the legacy use case for a camera) to an active, cognitive extension of us: helping us decipher things, adding human value to the machine readable.
What else could we do with this? Where could we extend this concept further?
A pocketable lidar that turns every vehicle into an autonomous machine, simply by placing the device on the dashboard.
I guess you only know what’s possible when you take the time to see what’s already happening in the adjacent spaces.