Google Design's best of 2018
Over the past 12 months, Google has been delving deep into the fascinating world of machine learning with articles on Google Design, plus new open-source projects and products. With AI becoming an increasingly prevalent part of the creative world, this should come as no surprise. Also unsurprisingly, the work it’s carried out in the field is innovative, exciting, and stimulating.
2018’s Review of the Year is supported by Google Design. Google Design, for the uninitiated, is an initiative led by an uber-talented selection of developers, designers, and writers at Google. They work across teams to create top-notch content and produce events that champion creativity and showcase the brilliant design work Google does day in and day out. Having celebrated a Milan Design Week debut, amongst other achievements, Google Design has built on an already exciting position.
We’ve decided to look back on a quartet of projects that turned our heads in 2018. For more, check out Google Design’s Best of 2018, where the editorial team highlights all of the year’s noteworthy design projects.
Google Fonts + 한국어 (Korean)
We’d like to think that we know our readers pretty well by this point, which means we’d also like to think that when we get excited about something, you’ll share our sense of childish glee. And we are very excited about this project from the type-fanatics at Google Fonts.
An “innovative new delivery system” for Chinese, Japanese, and Korean (CJK) character sets, it opens up virtual avenues for a “beautiful, fast, and open” internet for everybody. Using machine learning, it “slices and dices” incredibly large font files into smaller “byte-sized” chunks, “delivering only what you need, when you need it.”
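In browsers, slicing like this is typically surfaced through CSS `unicode-range` declarations, so a page only downloads the slices whose characters it actually uses. A minimal sketch of the idea (the URLs, family name, and ranges below are illustrative, not Google Fonts’ actual output):

```css
/* Each @font-face rule points at one slice of a large Korean font.
   The browser fetches a slice only if the page contains a codepoint
   inside that slice's unicode-range. */
@font-face {
  font-family: 'Example Sans KR';
  src: url('https://example.com/fonts/kr-slice-1.woff2') format('woff2');
  /* First run of Hangul syllables (illustrative range) */
  unicode-range: U+AC00-AE4B;
}

@font-face {
  font-family: 'Example Sans KR';
  src: url('https://example.com/fonts/kr-slice-2.woff2') format('woff2');
  /* Next run of Hangul syllables (illustrative range) */
  unicode-range: U+AE4C-B097;
}
```

The machine-learning part is in deciding where to cut: grouping characters that tend to appear together so that a typical page touches as few slices as possible.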
Google Lens
Ever wanted to bring the internet’s limitless pool of knowledge into reality? You have? Good, let’s be friends. And let’s both start using Google Lens immediately. Point your phone at something – a dog, say, or a really tasty-looking bowl of hitherto-unidentifiable stew, perhaps – and instantly you’ll learn everything about it you could ever need to know. This is searching IRL, and it is brilliant.
Google launched Lens in 2017, but this year saw it fully realise the capabilities of the service, bringing the technology directly to the Google Pixel’s camera app (and other supported devices), so that, in Google’s words, “you’ve got state-of-the-art machine learning in real time, wherever you may be.”
NSynth Super
If we weren’t at the forefront of telling the world about the infinite number of amazing things that happen in the creative sphere on a daily basis, we’d probably spend most of our days in a shed tinkering about with synthesisers. Until then, we’ll satisfy ourselves with this open-source musical instrument that turns heavy math into music by using neural networks to generate entirely new sounds.
NSynth grew out of a collaboration between Google Creative Lab and Magenta, a research project using machine learning to make music and art. While you probably still need some innate musical talent to get the most out of it – and our ability stretches as far as a nearly-there version of Frère Jacques on the piano – we’d still recommend reading more about the project and then giving it a go yourself.
People + AI Research Collection (PAIR)
If you’ve spent the past year trying to get your head around exactly what machine learning is and what people are talking about when they talk about “the UX of AI” or “neural networks” or “teachable machines”, then People + AI Research’s (PAIR) collection of articles and videos is likely just what you’re after.
Google Design says that “as a resource, it’s one of the only places you can find practical insights on designing with ML, and case studies that unpack the thinking behind real products – like Google Clips and Emoji Scavenger Hunt.” We’ve bookmarked it. Which means you’d do well to, too.