ML Two
Lecture 06
🤗Style Transfer with CreateML😎
Welcome 👩‍🎤🧑‍🎤👨‍🎤
First of all, don't forget to confirm your attendance on the Seats App!
Style Transfer fun demo 01
Style Transfer fun demo 02
A little example of style transferring a video
It's a nice algorithm for human-AI co-creation 🧚
after today's lecture:
-- Neural Style Transfer: how does it work 🤖
-- Neural Style Transfer with CreateML 🦈
Neural Style Transfer
"Neural": it leverages deep learning technique (it uses neural networks for producing images)
Style transfer🤔
Style transfer 🪼 - level 1 understanding
- Input: two images
-- one is called the "content image", the other the "style image"
- Output: one image
-- an image that mirrors the content of the "content image" in the style of the "style image"
Style transfer 🪼 - level 2 understanding
- How does a style transfer system produce the blended output from two input images?
-- Step 1. It initialises a random noise image like this
-- Step 2. It gradually refines the noise image so that it becomes more similar to the "content image" in its content and more similar to the "style image" in its style.
Style transfer 🪼 - level 2 understanding
- How does the refinement process work?
-- It uses gradient descent to iteratively update the pixel values of the initialised image, minimising the computed content loss and style loss (see the sketch below).
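Here is a minimal sketch of that refinement loop in PyTorch, just to make the idea concrete (this is not what CreateML does internally). The content_loss() and style_loss() helpers are sketched in the level-3 slides further down, and content_img / style_img are assumed to be preloaded image tensors of shape (1, 3, H, W).

```python
# Sketch of NST refinement: optimise the pixels of a noise image directly.
# Assumes content_img / style_img tensors and content_loss() / style_loss()
# helpers (sketched later in this lecture) are already defined.
import torch

generated = torch.rand_like(content_img, requires_grad=True)  # Step 1: random noise image
optimizer = torch.optim.Adam([generated], lr=0.02)            # we optimise the pixels themselves

for step in range(500):                                        # Step 2: iterative refinement
    optimizer.zero_grad()
    loss = content_loss(generated, content_img) + style_loss(generated, style_img)
    loss.backward()                                            # gradients w.r.t. the pixel values
    optimizer.step()                                           # nudge pixels to reduce both losses
```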
Style transfer 🪼 - level 3 understanding
- What are the content loss and style loss?
(p.s. this is where the neural network comes into play!)
Convolutional neural network recap 👀
- It extracts hierarchical features: the early layers extract low-level features like vertical lines, while the deeper layers extract high-level features like more complex shapes (sketched below).
(relevant slides from ML One)
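As a rough illustration, this is how intermediate activations can be pulled out of a pre-trained CNN, here torchvision's VGG19. The layer indices are illustrative choices, not fixed by the algorithm.

```python
# Sketch: grab low- and high-level feature maps from a pre-trained VGG19.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def extract_features(img, layers=(1, 6, 11, 20, 29)):
    """Run img through the conv stack and keep activations at the chosen layer indices."""
    feats = {}
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats
```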
Style transfer 🪼 - level 3 understanding
- What are the content loss and style loss?
We'll use a pre-trained CNN to extract features of the initialised image, the content image, and the style image at different levels, and then calculate the distances between them.
Style transfer 🪼 - level 3 understanding
Content: Two images are similar in content if their high-level features as extracted by a pre-trained image recognition system are similar.
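A minimal sketch of a content loss along these lines, assuming the extract_features() helper from the CNN recap sketch above; picking layer 20 as the "high-level" layer is an illustrative choice.

```python
# Content loss sketch: mean-squared distance between high-level feature maps.
import torch.nn.functional as F

def content_loss(generated, content_img, layer=20):        # a deep layer, illustrative choice
    gen_feat = extract_features(generated)[layer]
    content_feat = extract_features(content_img)[layer]
    return F.mse_loss(gen_feat, content_feat)
```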
Style transfer 🪼 - level 3 understanding
Style: Two images are similar in style if their low-level features as extracted by a pre-trained image recognition system share the same spatial statistics.
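The "spatial statistics" are commonly captured with Gram matrices (correlations between feature channels), compared at several early layers. A minimal sketch, again assuming extract_features() from above and an illustrative choice of layers:

```python
# Style loss sketch: compare Gram matrices of low-level feature maps.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    b, c, h, w = feat.shape
    flat = feat.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)        # channel-by-channel correlations

def style_loss(generated, style_img, layers=(1, 6, 11)):    # early layers, illustrative choice
    loss = 0.0
    gen_feats = extract_features(generated)
    style_feats = extract_features(style_img)
    for l in layers:
        loss = loss + F.mse_loss(gram_matrix(gen_feats[l]), gram_matrix(style_feats[l]))
    return loss
```

Because the Gram matrix throws away where each feature occurs, it keeps the texture-like qualities (colours, strokes, patterns) without copying the layout of the style image.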
Style transfer 🪼 - level 3 understanding
But what is "a pre-trained image recognition system"?
- We have seen a few already!
- Check out all the image classification models here.
An illustration of NST
Style transfer 🪼 - Summary
How does it work?
- We pick a pre-trained image classification model.
- We specify an input Content image and an input Style image.
- The system initialises a random noise image.
- The system computes the content loss between the initialised image and the content image, and the style loss between the initialised image and the style image, using features from the image classification model.
- The system refines the initialised image iteratively using gradient descent to minimise the content and style losses.
An illustration of NST again
A detailed explanation video on YouTube
CreateML time!
your turn:
--1. summon CreateML
--2. select Style Transfer
--3. drop in your cool Training Style Image
--4. drop in your cool Content Images
--5. Train and see the result
--6. Play with the hyperparameters "Iterations", "Style Strength" and "Style Density" (re-train and see the result!)
🎉
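If you'd like to poke at the exported model outside Xcode, here is a heavily hedged sketch using coremltools (macOS only). The file name and the input/output names below are assumptions for illustration — print the model to see the real names for your export.

```python
# Sketch: run a CreateML-exported style transfer model from Python.
# "StyleTransfer.mlmodel", "image" and "stylizedImage" are assumed names.
import coremltools as ct
from PIL import Image

model = ct.models.MLModel("StyleTransfer.mlmodel")   # hypothetical export path
print(model)                                          # shows the actual input/output names

img = Image.open("content.jpg").resize((512, 512))    # resize to match the model's input size
out = model.predict({"image": img})                   # assumed input name
out["stylizedImage"].save("stylized.jpg")             # assumed output name
```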
👁️ References
- A Neural Algorithm of Artistic Style
- Exploring the structure of a real-time, arbitrary neural artistic stylization network
Maybe you'd like to have an iOS app running style transfer on camera input?
Here is a related project.
today we talked about:
-- Style transfer: what is it
--- input a content image and a style image, output a blended image
-- Style transfer: how does it work
--- Use a pre-trained image classifier to compute the content loss and the style loss
--- Refine a randomly initialised noise image via gradient descent to minimise the content and style losses
-- Style transfer with CreateML, done easily 🫡
We'll see you next week same time same place! 🫡