Mirror of https://github.com/zebrajr/faceswap.git (synced 2025-12-06 12:20:27 +01:00)
* Core Updates
- Remove lib.utils.keras_backend_quiet and replace with get_backend() where relevant
- Document lib.gpu_stats and lib.sys_info
- Remove call to GPUStats.is_plaidml from convert and replace with get_backend()
- lib.gui.menu - typo fix
* Update Dependencies
Bump Tensorflow Version Check
* Port extraction to tf2
* Add custom import finder for loading Keras or tf.keras depending on backend (see the backend-selection sketch after this list)
* Add `tensorflow` to KerasFinder search path
* Basic TF2 training running
* model.initializers - docstring fix
* Fix and pass tests for tf2
* Replace Keras backend tests with faceswap backend tests
* Initial optimizers update
* Monkey patch tf.keras optimizer
* Remove custom Adam Optimizers and Memory Saving Gradients
* Remove multi-gpu option. Add Distribution to cli
* plugins.train.model._base: Add Mirror, Central and Default distribution strategies (see the distribution-strategy sketch after this list)
* Update tensorboard kwargs for tf2
* Penalized Loss - Fix for TF2 and AMD
* Fix syntax for tf2.1
* requirements typo fix
* Explicit None for clipnorm if using a distribution strategy
* Fix penalized loss for distribution strategies
* Update Dlight
* typo fix
* Pin to TF2.2
* setup.py - Install tensorflow from pip if not available in Conda
* Add reduction options and set default for mirrored distribution strategy
* Explicitly use default strategy rather than nullcontext
* lib.model.backup_restore documentation
* Remove mirrored strategy reduction method and default based on OS
* Initial restructure - training
* Remove PingPong
Start model.base refactor
* Model saving and resuming enabled
* More tidying up of model.base
* Enable backup and snapshotting
* Re-enable state file
Remove loss names from state file
Fix print loss function
Set snapshot iterations correctly
* Revert original model to Keras Model structure rather than custom layer
Output full model and sub model summary
Change NNBlocks to callables rather than custom keras layers
* Apply custom Conv2D layer
* Finalize NNBlock restructure
Update Dfaker blocks
* Fix reloading model under a different distribution strategy
* Pass command line arguments through to trainer
* Remove training_opts from model and reference params directly
* Tidy up model __init__
* Re-enable tensorboard logging
Suppress "Model Not Compiled" warning
* Fix timelapse
* lib.model.nnblocks - Bugfix residual block
Port dfaker
bugfix original
* dfl-h128 ported
* DFL SAE ported
* IAE Ported
* dlight ported
* port lightweight
* realface ported
* unbalanced ported
* villain ported
* lib.cli.args - Update Batchsize + move allow_growth to config
* Remove output shape definition
Get image sizes per side rather than globally
* Strip mask input from encoder
* Fix learn mask and output learned mask to preview
* Trigger Allow Growth prior to setting strategy (see the memory-growth sketch after this list)
* Fix GUI Graphing
* GUI - Display batchsize correctly + fix training graphs
* Fix penalized loss
* Enable mixed precision training (see the mixed-precision sketch after this list)
* Update analysis displayed batch to match input
* Penalized Loss - Multi-GPU Fix
* Fix all losses for TF2
* Fix Reflect Padding
* Allow different input size for each side of the model
* Fix conv-aware initialization on reload
* Switch allow_growth order
* Move mixed_precision to cli
* Remove distribution strategies
* Compile penalized loss sub-function into LossContainer
* Bump default save interval to 250
Generate preview on first iteration but don't save
Fix iterations to start at 1 instead of 0
Remove training deprecation warnings
Bump some scripts.train loglevels
* Add ability to refresh preview on demand on pop-up window
* Enable refresh of training preview from GUI
* Fix Convert
Debug logging in Initializers
* Fix Preview Tool
* Update Legacy TF1 weights to TF2
Catch stats error on loading stats with missing logs
* lib.gui.popup_configure - Make more responsive + document
* Multiple Outputs supported in trainer
Original Model - Mask output bugfix
* Make universal inference model for convert
Remove scaling from penalized mask loss (now handled at input to y_true)
* Fix inference model to work properly with all models
* Fix multi-scale output for convert
* Fix clipnorm issue with distribution strategies
Edit error message on OOM
* Update plaidml losses
* Add missing file
* Disable gmsd loss for plaidml
* PlaidML - Basic training working
* clipnorm rewriting for mixed-precision
* Inference model creation bugfixes
* Remove debug code
* Bugfix: Default clipnorm to 1.0
* Remove all mask inputs from training code
* Remove mask inputs from convert
* GUI - Analysis Tab - Docstrings
* Fix rate in totals row
* lib.gui - Only update display pages if they have focus
* Save the model on first iteration
* plaidml - Fix SSIM loss with penalized loss
* tools.alignments - Remove manual and fix jobs
* GUI - Remove case formatting on help text
* gui MultiSelect custom widget - Set default values on init
* vgg_face2 - Move to plugins.extract.recognition and use plugins._base base class
cli - Add global GPU Exclude Option
tools.sort - Use global GPU Exclude option for backend
lib.model.session - Exclude all GPUs when running in CPU mode (see the CPU-mode sketch after this list)
lib.cli.launcher - Set backend to CPU mode when all GPUs excluded
* Cascade excluded devices to GPU Stats
* Explicit GPU selection for Train and Convert
* Reduce Tensorflow Min GPU Multiprocessor Count to 4
* remove compat.v1 code from extract
* Force TF to skip mixed precision compatibility check if GPUs have been filtered
* Add notes to config for non-working AMD losses
* Raise error if forcing extract to CPU mode
* Fix loading of legacy dfl-sae weights + dfl-sae typo fix
* Remove unused requirements
Update sphinx requirements
Fix broken rst file locations
* docs: lib.gui.display
* clipnorm amd condition check
* documentation - gui.display_analysis
* Documentation - gui.popup_configure
* Documentation - lib.logger
* Documentation - lib.model.initializers
* Documentation - lib.model.layers
* Documentation - lib.model.losses
* Documentation - lib.model.nn_blocks
* Documentation - lib.model.normalization
* Documentation - lib.model.session
* Documentation - lib.plaidml_stats
* Documentation: lib.training_data
* Documentation: lib.utils
* Documentation: plugins.train.model._base
* GUI Stats: prevent stats from using GPU
* Documentation - Original Model
* Documentation: plugins.model.trainer._base
* linting
* unit tests: initializers + losses
* unit tests: nn_blocks
* bugfix - Exclude gpu devices in train, not include
* Enable Exclude-Gpus in Extract
* Enable exclude gpus in tools
* Disallow multiple plugin types in a single model folder
* Automatically add exclude_gpus argument in for cpu backends
* Cpu backend fixes
* Relax optimizer test threshold
* Default Train settings - Set mask to Extended
* Update Extractor cli help text
Update to Python 3.8
* Fix FAN to run on CPU
* lib.plaidml_tools - typo fix
* Linux installer - check for curl
* linux installer - typo fix
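The backend-selection sketch referenced above: a minimal, purely illustrative helper showing the idea behind the custom import finder, i.e. choosing standalone Keras for the PlaidML/AMD backend and tf.keras otherwise. The real code installs a KerasFinder import hook; import_keras below is an assumption for illustration only, relying on lib.utils.get_backend() returning the configured backend name.

from lib.utils import get_backend  # faceswap helper returning the configured backend name


def import_keras():
    """ Return the Keras implementation matching the faceswap backend (illustrative only). """
    if get_backend() == "amd":
        import keras  # standalone Keras running on top of PlaidML for AMD cards
    else:
        from tensorflow import keras  # tf.keras for the TensorFlow backends
    return keras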
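The distribution-strategy sketch referenced above: a rough illustration of mapping a distribution choice onto a tf.distribute strategy and building the model inside its scope, with the default strategy requested explicitly rather than via a nullcontext. The option names and the toy model are assumptions, not the project's exact CLI values.

import tensorflow as tf


def get_strategy(distribution):
    """ Map an assumed distribution choice to a tf.distribute strategy. """
    if distribution == "mirrored":
        return tf.distribute.MirroredStrategy()
    if distribution == "central-storage":
        return tf.distribute.experimental.CentralStorageStrategy()
    return tf.distribute.get_strategy()  # the explicit default (no-op) strategy


strategy = get_strategy("mirrored")
with strategy.scope():  # variables, layers and the optimizer must be created inside the scope
    inputs = tf.keras.Input(shape=(4,))
    model = tf.keras.Model(inputs, tf.keras.layers.Dense(2)(inputs))  # toy stand-in model
    model.compile(optimizer="adam", loss="mae")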
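The memory-growth sketch referenced above: allow-growth has to be applied to the physical GPUs before any distribution strategy or model initialises them, which is why the change list triggers it first.

import tensorflow as tf

# Must run before the first strategy or model touches the GPUs, otherwise TensorFlow
# raises an error because the devices have already been initialised.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)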
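The mixed-precision sketch referenced above, using the experimental API that shipped with the pinned TF 2.2 (newer releases expose tf.keras.mixed_precision.set_global_policy instead):

from tensorflow.keras.mixed_precision import experimental as mixed_precision

policy = mixed_precision.Policy("mixed_float16")  # float16 compute, float32 variables
mixed_precision.set_policy(policy)                # applies to all layers built afterwards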
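The CPU-mode sketch referenced above: one possible way (an assumption, not necessarily the project's mechanism) of hiding every GPU from TensorFlow so that CPU mode really does run on the CPU only.

import tensorflow as tf


def exclude_all_gpus():
    """ Make no GPUs visible to TensorFlow; subsequent ops are placed on the CPU. """
    tf.config.set_visible_devices([], "GPU")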
126 lines · 4.7 KiB · Python
#!/usr/bin/env python3
""" Utils imported from Keras as their location changes between Tensorflow Keras and standard
Keras. Also ensures testing consistency """

import inspect
import sys

import numpy as np


def generate_test_data(num_train=1000, num_test=500, input_shape=(10,),
                       output_shape=(2,),
                       classification=True, num_classes=2):
    """Generates test data to train a model on. classification=True overrides output_shape (i.e.
    output_shape is set to (1,)) and the output consists in integers in [0, num_classes-1].

    Otherwise: float output with shape output_shape.
    """
    samples = num_train + num_test
    if classification:
        var_y = np.random.randint(0, num_classes, size=(samples,))
        var_x = np.zeros((samples,) + input_shape, dtype=np.float32)
        for i in range(samples):
            var_x[i] = np.random.normal(loc=var_y[i], scale=0.7, size=input_shape)
    else:
        y_loc = np.random.random((samples,))
        var_x = np.zeros((samples,) + input_shape, dtype=np.float32)
        var_y = np.zeros((samples,) + output_shape, dtype=np.float32)
        for i in range(samples):
            var_x[i] = np.random.normal(loc=y_loc[i], scale=0.7, size=input_shape)
            var_y[i] = np.random.normal(loc=y_loc[i], scale=0.7, size=output_shape)

    return (var_x[:num_train], var_y[:num_train]), (var_x[num_train:], var_y[num_train:])


def to_categorical(var_y, num_classes=None, dtype='float32'):
    """Converts a class vector (integers) to binary class matrix.
    E.g. for use with categorical_crossentropy.

    Parameters
    ----------
    var_y: int
        Class vector to be converted into a matrix (integers from 0 to num_classes).
    num_classes: int
        Total number of classes.
    dtype: str
        The data type expected by the input, as a string (`float32`, `float64`, `int32`...)

    Returns
    -------
    tensor
        A binary matrix representation of the input. The classes axis is placed last.

    Example
    -------
    >>> # Consider an array of 5 labels out of a set of 3 classes {0, 1, 2}:
    >>> labels
    >>> array([0, 2, 1, 2, 0])
    >>> # `to_categorical` converts this into a matrix with as many columns as there are classes.
    >>> # The number of rows stays the same.
    >>> to_categorical(labels)
    >>> array([[ 1., 0., 0.],
    >>>        [ 0., 0., 1.],
    >>>        [ 0., 1., 0.],
    >>>        [ 0., 0., 1.],
    >>>        [ 1., 0., 0.]], dtype=float32)
    """
    var_y = np.array(var_y, dtype='int')
    input_shape = var_y.shape
    if input_shape and input_shape[-1] == 1 and len(input_shape) > 1:
        input_shape = tuple(input_shape[:-1])
    var_y = var_y.ravel()
    if not num_classes:
        num_classes = np.max(var_y) + 1
    var_n = var_y.shape[0]
    categorical = np.zeros((var_n, num_classes), dtype=dtype)
    categorical[np.arange(var_n), var_y] = 1
    output_shape = input_shape + (num_classes,)
    categorical = np.reshape(categorical, output_shape)
    return categorical


def has_arg(func, name, accept_all=False):
    """Checks if a callable accepts a given keyword argument.

    For Python 2, checks if there is an argument with the given name.
    For Python 3, checks if there is an argument with the given name, and also whether this
    argument can be called with a keyword (i.e. if it is not a positional-only argument).

    Parameters
    ----------
    func: object
        Callable to inspect.
    name: str
        Check if `func` can be called with `name` as a keyword argument.
    accept_all: bool, optional
        What to return if there is no parameter called `name` but the function accepts a
        `**kwargs` argument. Default: ``False``

    Returns
    -------
    bool
        Whether `func` accepts a `name` keyword argument.
    """
    if sys.version_info < (3,):
        arg_spec = inspect.getargspec(func)
        if accept_all and arg_spec.keywords is not None:
            return True
        return (name in arg_spec.args)
    elif sys.version_info < (3, 3):
        arg_spec = inspect.getfullargspec(func)
        if accept_all and arg_spec.varkw is not None:
            return True
        return (name in arg_spec.args or
                name in arg_spec.kwonlyargs)
    else:
        signature = inspect.signature(func)
        parameter = signature.parameters.get(name)
        if parameter is None:
            if accept_all:
                for param in signature.parameters.values():
                    if param.kind == inspect.Parameter.VAR_KEYWORD:
                        return True
            return False
        return (parameter.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD,
                                   inspect.Parameter.KEYWORD_ONLY))
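A short usage sketch for the helpers above, as they might appear in a unit test (the shapes and class count are arbitrary):

(x_train, y_train), (x_test, y_test) = generate_test_data(num_train=100, num_test=50,
                                                          input_shape=(4,), num_classes=3)
onehot = to_categorical(y_train, num_classes=3)
assert x_train.shape == (100, 4) and onehot.shape == (100, 3)
assert has_arg(to_categorical, "num_classes") and not has_arg(to_categorical, "missing")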