We combine a machine vision system that recognises emotions with a non-photorealistic rendering (NPR) system to automatically produce portraits that heighten the emotion of the sitter. To do this, the vision system analyses a short video clip of a person expressing an emotion: it tracks the movement of facial features and uses this tracking data to determine which emotion was expressed and what the temporal dynamics of the expression were. The image in which the emotion is most strongly expressed, the locations of the facial features in that image, and a keyword describing the detected emotion are passed to the NPR software. This keyword is used to choose appropriate (simulated) art materials, colour palettes, abstraction methods and painting styles, so that the rendered image may heighten the emotion being expressed. We describe the vision and rendering systems and their combination, and provide examples of portraits produced in this emotionally aware fashion.
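The handoff between the two systems can be sketched in code. The following is a minimal, hypothetical illustration of the pipeline described above, not the authors' implementation: all names (`FrameAnalysis`, `STYLE_TABLE`, `handoff_to_npr`) and the particular emotion-to-style mappings are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical per-frame result produced by the vision system's tracker.
@dataclass
class FrameAnalysis:
    index: int                 # frame number in the video clip
    emotion: str               # emotion keyword, e.g. "happiness", "anger"
    intensity: float           # strength of the expression in [0, 1]
    landmarks: list            # (x, y) facial feature locations

# Illustrative keyword -> rendering-style table; the real mapping of
# emotions to art materials, palettes and abstraction is far richer.
STYLE_TABLE = {
    "happiness": {"palette": "warm", "medium": "pastel", "abstraction": "low"},
    "anger":     {"palette": "red-black", "medium": "oil", "abstraction": "high"},
    "sadness":   {"palette": "cool blues", "medium": "watercolour", "abstraction": "medium"},
}

def select_peak_frame(frames):
    """Return the frame in which the expression is strongest."""
    return max(frames, key=lambda f: f.intensity)

def handoff_to_npr(frames):
    """Package the peak image, its feature locations and the emotion
    keyword (with a chosen style) for the NPR renderer."""
    peak = select_peak_frame(frames)
    style = STYLE_TABLE.get(peak.emotion,
                            {"palette": "neutral", "medium": "pencil",
                             "abstraction": "low"})
    return {"frame_index": peak.index,
            "landmarks": peak.landmarks,
            "emotion": peak.emotion,
            "style": style}

# Usage: three analysed frames of a "happiness" clip; frame 1 is the peak.
frames = [FrameAnalysis(0, "happiness", 0.2, [(10, 20)]),
          FrameAnalysis(1, "happiness", 0.9, [(11, 21)]),
          FrameAnalysis(2, "happiness", 0.5, [(12, 22)])]
result = handoff_to_npr(frames)
```

Here `result` carries exactly the three items the abstract names: the strongest-expression image (by index), the facial feature locations in it, and the emotion keyword, augmented with the style parameters the keyword selects.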