faceswap/plugins/Model_Original/Model.py
Othniel Cundangan 810bd0bce7
Update GAN64 to v2 (#217)
* Clearer requirements for each platform

* Refactoring of old plugins (Model_Original + Extract_Align) + Cleanups

* Adding GAN128

* Update GAN to v2

* Create instance_normalization.py

* Fix decoder output

* Revert "Fix decoder output"

This reverts commit 3a8ecb8957fe65e66282197455d775eb88455a77.

* Fix convert

* Enable all options except perceptual_loss by default

* Disable instance norm

* Update Model.py

* Update Trainer.py

* Match GAN128 to shaoanlu's latest v2

* Add first_order to GAN128

* Disable `use_perceptual_loss`

* Fix call to `self.first_order`

* Switch to average loss in output

* Constrain average to last 100 iterations

* Fix math, constrain average to intervals of 100

* Fix math averaging again

* Remove math and simplify this damn averaging

* Add gan128 conversion

* Update convert.py

* Use non-warped images in masked preview

* Add K.set_learning_phase(1) to gan64

* Add K.set_learning_phase(1) to gan128

* Add missing keras import

* Use non-warped images in masked preview for gan128

* Exclude deleted faces from conversion

* --input-aligned-dir defaults to "{input_dir}/aligned"

* Simplify map operation

* Port 'face_alignment' from PyTorch to Keras. It runs about 2x faster, but initialization takes ~20 seconds.

2DFAN-4.h5 and mmod_human_face_detector.dat are included in lib\FaceLandmarksExtractor.

Fixed the dlib vs. tensorflow conflict: dlib must run an op first, then the keras model is loaded; otherwise a CUDA OOM error occurs.

If the face location is not found by the CNN detector, it falls back to HOG (see the sketch after this log).

Removed this:
-        if face.landmarks == None:
-            print("Warning! landmarks not found. Switching to crop!")
-            return cv2.resize(face.image, (size, size))
because a DetectedFace always has landmarks.

* Enabled masked converter for GAN models

* Histogram matching, CLI option for perceptual loss

* Fix init() positional args error

* Add backwards compatibility for aligned filenames

* Fix masked converter

* Remove GAN converters
2018-03-09 19:43:24 -05:00
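
A minimal sketch of the CNN-first, HOG-fallback detection order described in the face_alignment commit above, assuming the bundled mmod_human_face_detector.dat and dlib's stock detectors (detect_faces is a hypothetical helper, not the plugin's actual API):

import dlib

# Hypothetical sketch: try the CNN (MMOD) detector first, fall back to HOG.
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
hog_detector = dlib.get_frontal_face_detector()

def detect_faces(image, upsample=1):
    # The CNN detector returns mmod_rectangles; .rect holds the bounding box
    faces = [d.rect for d in cnn_detector(image, upsample)]
    if not faces:
        # HOG fallback when the CNN finds nothing
        faces = list(hog_detector(image, upsample))
    return faces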


# Based on the original https://www.reddit.com/r/deepfakes/ code sample + contribs

from keras.models import Model as KerasModel
from keras.layers import Input, Dense, Flatten, Reshape
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import Conv2D
from keras.optimizers import Adam

from .AutoEncoder import AutoEncoder
from lib.PixelShuffler import PixelShuffler

IMAGE_SHAPE = (64, 64, 3)
ENCODER_DIM = 1024


class Model(AutoEncoder):
    def initModel(self):
        optimizer = Adam(lr=5e-5, beta_1=0.5, beta_2=0.999)
        x = Input(shape=IMAGE_SHAPE)

        # One shared encoder, one decoder per identity
        self.autoencoder_A = KerasModel(x, self.decoder_A(self.encoder(x)))
        self.autoencoder_B = KerasModel(x, self.decoder_B(self.encoder(x)))

        self.autoencoder_A.compile(optimizer=optimizer, loss='mean_absolute_error')
        self.autoencoder_B.compile(optimizer=optimizer, loss='mean_absolute_error')

    def converter(self, swap):
        autoencoder = self.autoencoder_B if not swap else self.autoencoder_A
        return lambda img: autoencoder.predict(img)

    def conv(self, filters):
        # Strided 5x5 convolution block: halves the spatial resolution
        def block(x):
            x = Conv2D(filters, kernel_size=5, strides=2, padding='same')(x)
            x = LeakyReLU(0.1)(x)
            return x
        return block

    def upscale(self, filters):
        # Sub-pixel upscaling: produce 4x the filters, then PixelShuffler
        # rearranges depth into space, doubling height and width
        def block(x):
            x = Conv2D(filters * 4, kernel_size=3, padding='same')(x)
            x = LeakyReLU(0.1)(x)
            x = PixelShuffler()(x)
            return x
        return block

    def Encoder(self):
        input_ = Input(shape=IMAGE_SHAPE)
        x = input_
        x = self.conv(128)(x)      # 64x64 -> 32x32
        x = self.conv(256)(x)      # 32x32 -> 16x16
        x = self.conv(512)(x)      # 16x16 -> 8x8
        x = self.conv(1024)(x)     # 8x8 -> 4x4
        x = Dense(ENCODER_DIM)(Flatten()(x))
        x = Dense(4 * 4 * 1024)(x)
        x = Reshape((4, 4, 1024))(x)
        x = self.upscale(512)(x)   # 4x4 -> 8x8
        return KerasModel(input_, x)

    def Decoder(self):
        input_ = Input(shape=(8, 8, 512))
        x = input_
        x = self.upscale(256)(x)   # 8x8 -> 16x16
        x = self.upscale(128)(x)   # 16x16 -> 32x32
        x = self.upscale(64)(x)    # 32x32 -> 64x64
        x = Conv2D(3, kernel_size=5, padding='same', activation='sigmoid')(x)
        return KerasModel(input_, x)
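
The self.encoder, self.decoder_A, and self.decoder_B attributes used in initModel are built by the AutoEncoder base class before initModel runs. A minimal sketch of that wiring, under the assumption that the real AutoEncoder.py (not shown here) also handles weight loading and saving:

class AutoEncoder:
    def __init__(self, model_dir):
        self.model_dir = model_dir
        # One shared encoder and two identity-specific decoders;
        # the weight load/save logic of the real base class is omitted.
        self.encoder = self.Encoder()
        self.decoder_A = self.Decoder()
        self.decoder_B = self.Decoder()
        self.initModel()

Sharing one encoder while training a separate decoder per identity is what enables the swap: converter(swap) routes an encoded face through the other identity's decoder.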