A Neural Algorithm of Artistic Style, published in 2015 by Leon A. Gatys et al., describes a way to separate the content and the style of an image. With such a separation, it is possible to create a new image that combines the content of one image with the style of another.
Such a separation is possible through a class of deep neural networks known as Convolutional Neural Networks (CNNs). The layers of a CNN store information in a hierarchical order: lower layers learn low-level representations such as edges and line strokes, while higher layers learn more abstract, recognizable representations, such as the face of a person. As such, the higher layers of a CNN can be used to extract the content of an image.
However, to extract the style of an image, the paper defines a Gram matrix over the feature maps of a layer.
"The terms of the Gram matrix are proportional to the covariances of corresponding sets of features, and thus captures information about which features tend to activate together. By only capturing these aggregate statistics across the image, they are blind to the specific arrangement of objects inside the image." (https://harishnarayanan.org)
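To make this concrete, here is a minimal NumPy sketch of the Gram matrix computation. The shapes and the toy random features are illustrative assumptions; in the actual method the input would be the activations of one VGG layer.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's feature maps.

    features: array of shape (channels, height, width), the activations
    of a single CNN layer for one image.
    Returns a (channels, channels) matrix whose entry (i, j) is the inner
    product of the flattened i-th and j-th feature maps, i.e. a measure
    of how strongly those two features activate together.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T                   # (channels, channels)

# Toy example: 3 feature maps of size 4x4 (stand-in for real activations)
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4, 4))
g = gram_matrix(feats)
print(g.shape)  # (3, 3)
```

Note that the spatial dimensions are summed out, which is exactly why the Gram matrix keeps co-activation statistics but discards where in the image each feature fired.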
A white noise image, which will become the final image, is defined first. Then the style loss and the content loss are minimized by running gradient descent on this white noise image, using features from a VGG network, a CNN with 16 convolutional and 5 pooling layers. The resulting image has the content of one image and the style of the other.
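The optimization loop can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the "feature extractor" here is the identity map on a small array (so the gradients can be written by hand), whereas the real method takes content features and Gram matrices from VGG layers. The weights alpha and beta and the learning rate are arbitrary illustrative choices.

```python
import numpy as np

def gram(f):
    c = f.shape[0]
    m = f.reshape(c, -1)
    return m @ m.T

def losses_and_grad(x, content_f, style_g, alpha=1.0, beta=1e-3):
    """Total loss and its gradient w.r.t. the image x.

    Content term: 0.5 * ||x - content||^2, gradient (x - content).
    Style term:   ||G(x) - A||_F^2 with G = m m^T, gradient 4 (G - A) m.
    """
    m = x.reshape(x.shape[0], -1)
    g = m @ m.T
    content_grad = x - content_f
    content_loss = 0.5 * np.sum(content_grad ** 2)
    diff = g - style_g
    style_loss = np.sum(diff ** 2)
    style_grad = (4.0 * diff @ m).reshape(x.shape)
    total_loss = alpha * content_loss + beta * style_loss
    total_grad = alpha * content_grad + beta * style_grad
    return total_loss, total_grad

rng = np.random.default_rng(0)
content = rng.standard_normal((3, 8, 8))          # stand-in content features
style_g = gram(rng.standard_normal((3, 8, 8)))    # stand-in style Gram matrix
x = rng.standard_normal((3, 8, 8))                # the white noise image

lr = 1e-3
loss0, _ = losses_and_grad(x, content, style_g)
for _ in range(500):
    loss, grad = losses_and_grad(x, content, style_g)
    x -= lr * grad
print(loss < loss0)
```

The paper uses L-BFGS rather than plain gradient descent, but the structure is the same: a weighted sum of content and style losses, differentiated with respect to the pixels of the generated image.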
Code: Click here
Original paper: https://arxiv.org/abs/1508.06576
Background image: https://docs.neptune.ml/get-started/style-transfer/