Deepfakes Software For All www.faceswap.dev

Notice: This repository is not operated or maintained by /u/deepfakes. Please read the explanation below for details.

deepfakes_faceswap

Faceswap is a tool that utilizes deep learning to recognize and swap faces in pictures and videos.


Manifesto

Faceswap is not porn.

When faceswapping using AI was first developed and published, the technology was groundbreaking: a huge step in AI development. It was also completely ignored outside of academia. The code was confusing and fragmentary; it required a thorough understanding of state-of-the-art AI techniques and a lot of effort to get anything out of it. One individual brought it together into one cohesive collection. It ran, it worked, and, as is so often the way with new technology emerging on the internet, it was immediately used to create porn. The problem was that this was the first AI code that anyone could download, run and learn from by experimentation without becoming a PhD candidate in math, computer theory, psychology, and more. Before "deepfakes", these techniques were like black magic, practiced only by those who could understand all of the inner workings as described in esoteric and endlessly complicated books and papers.

"Deepfakes" changed all that and anyone could participate in AI development. To us developers, the release of this code has opened up a fantastic learning opportunity. To build on ideas developed by others, to collaborate with coders with a huge variety of skills, to experiment with AI whilst learning new skills and ultimately contribute towards an emerging technology which will only see more mainstream use as it progresses.

Are there some out there doing horrible things with similar software? Yes. And because of this, the developers have been following strict ethical standards. Many of us don't even use it to create videos at all; we just tinker with the code to see what it all does. Sadly, the media concentrates only on the unethical uses of this software. That is unfortunately a consequence of how it was first exposed to the public, but it is not representative of why it was created, how we use it now, or what we see in its future. Like any technology, it can be used for good or it can be abused. It is our intention to develop faceswap in a way that minimizes its potential for abuse whilst maximizing its potential as a tool for learning, experimenting and, yes, legitimate faceswapping.

We are not trying to denigrate celebrities or to demean anyone. We are programmers, we are engineers, we are Hollywood VFX artists, we are activists, we are hobbyists, we are human beings. To this end, we feel that it's time to come out with a standard statement of what this software is and isn't as far as we developers are concerned.

  • Faceswap is not for creating porn
  • Faceswap is not for changing faces without consent or with the intent of hiding its use.
  • Faceswap is not for any illicit, unethical, or questionable purposes.
  • Faceswap exists to experiment with and discover AI techniques, for social or political commentary, for movies, and for any number of ethical and reasonable uses.

We are very troubled by the fact that faceswap can be used for unethical and disreputable things. However, we support the development of tools and techniques that can be used ethically, and we aim to provide education and hands-on experience in AI for anyone who wants to learn. We will take a zero-tolerance approach to anyone using this software for any unethical purposes and will actively discourage any such uses.

How to set up and run the project

Faceswap is a Python program that will run on multiple operating systems, including Windows, Linux and macOS.

See INSTALL.md for full installation instructions. You will need a modern GPU with CUDA support for best performance.
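
As a minimal sketch of a CPU-only setup (the repository URL is inferred from the project name, and your platform may need the extra steps described in INSTALL.md, particularly for GPU support):

    # clone the repository and install the Python dependencies
    git clone https://github.com/deepfakes/faceswap.git
    cd faceswap
    pip install -r requirements.txt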

Overview

The project has multiple entry points. You will have to:

  • Gather photos (or use the training data provided below)
  • Extract faces from your raw photos
  • Train a model on your photos (or use the model provided in the training data below)
  • Convert your sources with the model

Check out USAGE.md for more detailed instructions.

Extract

From your setup folder, run python faceswap.py extract. This will take photos from the src folder and extract faces into the extract folder.
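
For example, assuming your raw images are in a folder named src, an invocation with explicit input and output folders might look like the sketch below (the -i/-o flags reflect typical usage; run python faceswap.py extract -h to confirm the flags on your version):

    # extract faces from every image in src/ and save them to extract/
    python faceswap.py extract -i src -o extract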

Train

From your setup folder, run python faceswap.py train. This will take photos from two folders, each containing pictures of one of the faces, and train a model that will be saved inside the models folder.
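
An illustrative invocation, with placeholder folder names (as with extract, confirm the flags with python faceswap.py train -h):

    # train a model to swap between face set A and face set B
    python faceswap.py train -A faces/personA -B faces/personB -m models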

Convert

From your setup folder, run python faceswap.py convert. This will take photos from the original folder and apply the new faces into the modified folder.
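
A hypothetical end-to-end invocation, reusing the model folder from training (check python faceswap.py convert -h for the exact flags on your version):

    # swap the trained face onto every photo in original/ and write results to modified/
    python faceswap.py convert -i original -o modified -m models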

GUI

Alternatively, you can run the GUI by running python faceswap.py gui.

General notes:

  • All of the scripts mentioned have -h/--help options listing the arguments they accept. You're smart, you can figure out how this works, right?!

NB: There is a conversion tool for video, which can be accessed by running python tools.py effmpeg -h. Alternatively, you can use ffmpeg to convert a video into photos, process the images, and convert the images back into a video.
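
If you take the plain ffmpeg route, a typical round trip looks like the following (standard ffmpeg options; file names and the frame rate are placeholders):

    # split a video into numbered frames
    ffmpeg -i input.mp4 frames/video_%05d.png

    # reassemble processed frames into a video at 25 fps
    ffmpeg -framerate 25 -i frames/video_%05d.png -c:v libx264 -pix_fmt yuv420p output.mp4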

Some tips:

  • Reusing existing models will train much faster than starting from nothing.
  • If there is not enough training data, start with someone who looks similar, then switch the data.

Help I need support!

Discord Server

Your best bet is to join the Faceswap Discord server, where there are plenty of users willing to help. Please note that, like this repo, this is an SFW server!

Faceswap-Playground

Alternatively you can post questions in the Faceswap Playground. Please do not post general support questions in this repo.

How to contribute

For people interested in the generative models

  • Go to the 'faceswap-model' repo to discuss/suggest/commit alternatives to the current algorithm.

For devs

  • Read this README entirely
  • Fork the repo
  • Download the data with the link provided above
  • Play with it
  • Check issues with the 'dev' tag
  • For devs more interested in computer vision and OpenCV, look at issues with the 'opencv' tag. Also feel free to add your own alternatives/improvements

For non-dev advanced users

  • Read this README entirely
  • Clone the repo
  • Download the data with the link provided above
  • Play with it
  • Check issues with the 'advuser' tag
  • Also go to the 'faceswap-playground' repo and help others.

For end-users

  • Get the code here and play with it if you can
  • You can also go to the 'faceswap-playground' repo and help or get help from others.
  • Be patient. This is relatively new technology for developers as well. Much effort is already being put into making this program easy to use for the average user. It just takes time!
  • Notice: any issue related to running the code has to be opened in the 'faceswap-playground' project!

For haters

Sorry, no time for that.

About github.com/deepfakes

What is this repo?

It is a community repository for active users.

Why this repo?

The joshua-wu repo seems inactive. Simple bugs, like a missing http:// in front of URLs, have gone unfixed for days.

Why is it named 'deepfakes' if it is not /u/deepfakes?

  1. Because a typosquat would have happened sooner or later as the project grew
  2. Because we wanted to recognize the original author
  3. Because it better federates contributors and users

What if /u/deepfakes feels bad about that?

This is a friendly typosquat, and it is fully dedicated to the project. If /u/deepfakes wants to take over this repo/user and drive the project, he is welcome to do so (raise an issue, and he will be contacted on Reddit). Please do not send /u/deepfakes messages for help with the code you find here.

About machine learning

How does a computer know how to recognise and shape faces? How does machine learning work? What is a neural network?

It's complicated. Here's a good video that makes the process understandable: How Machines Learn

Here's a slightly more in-depth video that tries to explain the basic functioning of a neural network: How Machines Learn

tl;dr: training data + trial and error