Streetscape augmentation using generative adversarial networks: Insights related to health and wellbeing

Jasper S. Wijnands, Kerry A. Nice, Jason Thompson, Haifeng Zhao, Mark Stevenson

Research output: Contribution to journal › Article › Research › peer-review

2 Citations (Scopus)

Abstract

Deep learning using neural networks has provided advances in image style transfer, merging the content of one image (e.g., a photo) with the style of another (e.g., a painting). Our research shows this concept can be extended to analyse the design of streetscapes in relation to health and wellbeing outcomes. An Australian population health survey (n = 34,000) was used to identify the spatial distribution of health and wellbeing outcomes, including general health and social capital. For each outcome, the most and least desirable locations formed two domains. Streetscape design was sampled using around 80,000 Google Street View images per domain. Generative adversarial networks translated these images from one domain to the other, preserving the main structure of the input image, but transforming the ‘style’ from locations where self-reported health was bad to locations where it was good. These translations indicate that areas in Melbourne with good general health are characterised by sufficient green space and compactness of the urban environment, whilst streetscape imagery related to high social capital contained more and wider footpaths, fewer fences and more grass. Beyond identifying relationships, the method is a first step towards computer-generated design interventions that have the potential to improve population health and wellbeing.
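The abstract describes translating street-view images between two domains (low vs. high self-reported health) while preserving each image's main structure. A common way to achieve this with unpaired image sets is a cycle-consistency objective, as in CycleGAN-style models: two generators map between the domains, and a round-trip reconstruction penalty keeps the input's content intact while the adversarial term transfers the domain 'style'. The paper does not publish its training code, so the following is a hypothetical minimal sketch of just the cycle-consistency term, with trivial placeholder generators standing in for the deep networks:

```python
import numpy as np

def cycle_consistency_loss(G, F, images_a, images_b):
    """L1 reconstruction error after a round trip through both generators.

    G maps domain A -> B (e.g. 'unhealthy' to 'healthy' streetscapes);
    F maps domain B -> A. Low loss means content is preserved.
    """
    loss_a = np.abs(F(G(images_a)) - images_a).mean()  # A -> B -> A
    loss_b = np.abs(G(F(images_b)) - images_b).mean()  # B -> A -> B
    return loss_a + loss_b

# Toy usage: random arrays stand in for street-view image batches,
# and identity functions stand in for trained generators, so the
# round-trip reconstruction error is exactly zero.
rng = np.random.default_rng(0)
imgs_a = rng.random((4, 64, 64, 3))  # batch from domain A
imgs_b = rng.random((4, 64, 64, 3))  # batch from domain B
identity = lambda x: x
print(cycle_consistency_loss(identity, identity, imgs_a, imgs_b))
```

In a full model this term is added to the adversarial losses of two discriminators (one per domain); the cycle penalty is what lets training proceed without paired before/after images of the same street.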

Original language: English
Article number: 101602
Number of pages: 12
Journal: Sustainable Cities and Society
Volume: 49
DOIs
Publication status: Published - Aug 2019
Externally published: Yes

Keywords

  • Design
  • Generative adversarial network
  • Health
  • Street view
  • Style transfer
  • Wellbeing
