janhenning.raff@gmail.com
@j__________r___ <— instagram
Prof. of visual communication~~media-university.de
¬ Berlin, Germany
Dear reader~viewer, I am interested in how “AI” can help us understand human visual intelligence, especially in relation to design. This leads me to conceive tools for visual communication, e.g. for layout. Here are some experiments.
Here is a composition generator that takes embodied spatial categories as input. Participants were asked to evaluate random compositions in an online survey; from their answers, I derived some interactive generators:
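As a rough illustration (my own sketch, not the original tool), the random-composition step could look like this, assuming a composition is a few black rectangles on a white canvas and a category such as heavy–light is mapped to shape size and vertical position:

```python
# Hypothetical sketch of the random-composition step (an illustration,
# not the original generator). Assumption: an embodied category such as
# "heavy" is approximated by making shapes larger and placing them lower.
import random
from PIL import Image, ImageDraw

def random_composition(weight=0.5, n_shapes=4, size=400, seed=None):
    """weight in [0, 1]: 0 = light (small, high up), 1 = heavy (large, low)."""
    rng = random.Random(seed)
    img = Image.new("L", (size, size), 255)        # white canvas, grayscale
    draw = ImageDraw.Draw(img)
    for _ in range(n_shapes):
        w = int(size * (0.1 + 0.4 * weight) * rng.uniform(0.5, 1.0))
        h = int(size * (0.1 + 0.4 * weight) * rng.uniform(0.5, 1.0))
        x = rng.randint(0, size - w)
        y_min = int(size * 0.5 * weight)           # heavier = lower placement
        y = rng.randint(y_min, max(y_min, size - h))
        draw.rectangle([x, y, x + w, y + h], fill=0)
    return img

# e.g. random_composition(weight=0.9, seed=1).save("heavy.png")
```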
I also trained a convolutional neural network (CNN) with the data:
Only the responses of four edge filters serve as input to a fully connected network. The second visualization is an “activation map” representing the amount of horizontals and verticals in a predefined grid.
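A minimal sketch of such an architecture (a reconstruction in PyTorch, not the original code) could look like this: four fixed edge kernels, their responses pooled onto a coarse grid as the “activation map”, and a small fully connected classifier for a category pair such as heavy–light:

```python
# Sketch, not the original model: 4 hand-set edge filters (frozen),
# pooled onto a predefined grid, then a small fully connected classifier.
import torch
import torch.nn as nn

class EdgeFilterNet(nn.Module):
    def __init__(self, grid=8, n_classes=2):
        super().__init__()
        # four fixed 3x3 edge kernels: horizontal, vertical, two diagonals
        kernels = torch.tensor([
            [[-1, -1, -1], [0, 0, 0], [1, 1, 1]],   # horizontal edges
            [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],   # vertical edges
            [[0, 1, 1], [-1, 0, 1], [-1, -1, 0]],   # diagonal /
            [[1, 1, 0], [1, 0, -1], [0, -1, -1]],   # diagonal \
        ], dtype=torch.float32).unsqueeze(1)        # shape (4, 1, 3, 3)
        self.edges = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
        self.edges.weight = nn.Parameter(kernels, requires_grad=False)
        # pool filter responses onto the predefined grid ("activation map")
        self.pool = nn.AdaptiveAvgPool2d(grid)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(4 * grid * grid, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                           # x: (batch, 1, H, W) grayscale
        activation_map = self.pool(self.edges(x).abs())
        return self.fc(activation_map), activation_map

# e.g. logits, amap = EdgeFilterNet()(torch.rand(1, 1, 128, 128))
```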
Training data was scarce. This is one example with strong agreement between human and machine. We can observe that the embodied apprehension of a composition (e.g. “this looks heavy”) relies on just a few features.
But the results that do not fit are also of interest – see static-moving:
Read more about my research on spatial arrangement.
I use machine learning to analyze visual communication – I have talked and written about it: Machine Learning for Basic Visual Research in Graphic Design.
Here is an example: all posters from the competition “100 best Posters” from 2001–2023 sorted by main color:
Check the interactive 3D version: 100 beste Plakate 2001-2024 sorted by main color.
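A minimal sketch of the sorting idea (not the pipeline behind the project): assume the main color is the largest k-means cluster of each downscaled poster, and the posters are then ordered by the hue of that color; the directory and file pattern below are placeholders.

```python
# Sketch of "sort posters by main color": dominant color via k-means,
# then order by hue. Paths and file pattern are hypothetical.
import colorsys
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def main_color(path, k=5):
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = np.asarray(img).reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    # the cluster with the most pixels counts as the main color
    biggest = np.bincount(km.labels_).argmax()
    return km.cluster_centers_[biggest] / 255.0     # r, g, b in [0, 1]

def sort_by_hue(poster_dir):
    paths = sorted(Path(poster_dir).glob("*.jpg"))
    return sorted(paths, key=lambda p: colorsys.rgb_to_hsv(*main_color(p))[0])

# e.g. for p in sort_by_hue("posters/"): print(p.name)
```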
I also explore how vision is conceived in machine learning and deep learning within the MA program AI & societies at Media University.
Now feel free to browse design±research.
… or are you looking for noise?