| Package | Description |
|---|---|
| org.deeplearning4j.parallelism | |
| org.deeplearning4j.parallelism.factory | |
| org.deeplearning4j.parallelism.trainer | |
| Modifier and Type | Field and Description |
|---|---|
| protected Trainer[] | ParallelWrapper.zoo |
| Modifier and Type | Method and Description |
|---|---|
| Trainer | DefaultTrainerContext.create(int threadId, org.deeplearning4j.nn.api.Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, org.deeplearning4j.nn.conf.WorkspaceMode mode, int averagingFrequency)<br>Create a Trainer based on the given parameters |
| Trainer | SymmetricTrainerContext.create(int threadId, org.deeplearning4j.nn.api.Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, org.deeplearning4j.nn.conf.WorkspaceMode mode, int averagingFrequency)<br>Create a Trainer based on the given parameters |
| Trainer | TrainerContext.create(int threadId, org.deeplearning4j.nn.api.Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, org.deeplearning4j.nn.conf.WorkspaceMode workspaceMode, int averagingFrequency)<br>Create a Trainer based on the given parameters |
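TrainerContext is a factory SPI that ParallelWrapper normally invokes internally when it spins up its worker threads, so application code rarely calls create(...) directly. As a hedged sketch of the signature in the table above (the package locations of DefaultTrainerContext and Trainer are assumed from the packages list at the top of this page, and the argument values are illustrative, not recommendations):

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.nn.conf.WorkspaceMode;
import org.deeplearning4j.parallelism.ParallelWrapper;
import org.deeplearning4j.parallelism.factory.DefaultTrainerContext;
import org.deeplearning4j.parallelism.trainer.Trainer;

public class TrainerContextSketch {
    // Builds one worker-thread Trainer the way ParallelWrapper would.
    static Trainer newWorker(Model model, ParallelWrapper wrapper) {
        DefaultTrainerContext context = new DefaultTrainerContext();
        return context.create(
                0,                     // threadId: index of this worker thread
                model,                 // model instance this worker trains
                0,                     // rootDevice: device holding the master parameters
                false,                 // useMDS: true when feeding MultiDataSet iterators
                wrapper,               // owning ParallelWrapper instance
                WorkspaceMode.ENABLED, // workspace (memory) mode for the worker
                5);                    // averagingFrequency: minibatches between averages
    }
}
```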
| Modifier and Type | Interface and Description |
|---|---|
| interface | CommunicativeTrainer |
| Modifier and Type | Class and Description |
|---|---|
| class | DefaultTrainer<br>Trains networks using a standard in-memory parameter averaging technique. |
| class | SymmetricTrainer<br>This trainer implementation does parallel training via gradient broadcasts. |
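Users typically reach these trainer implementations through ParallelWrapper rather than instantiating them directly. A minimal sketch of the parameter-averaging path (DefaultTrainer), assuming the ParallelWrapper.Builder API from org.deeplearning4j.parallelism; builder option names can vary between DL4J versions:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.parallelism.ParallelWrapper;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class ParallelTrainingSketch {
    static void train(MultiLayerNetwork model, DataSetIterator data) {
        // Each worker trains its own copy of the model; parameters are
        // averaged across workers every averagingFrequency minibatches.
        ParallelWrapper wrapper = new ParallelWrapper.Builder<>(model)
                .workers(4)            // number of concurrent training threads
                .averagingFrequency(5) // minibatches between parameter averages
                .prefetchBuffer(8)     // minibatches prefetched per worker
                .build();
        wrapper.fit(data);  // runs the parallel training loop
        wrapper.shutdown(); // release worker threads and buffers
    }
}
```

SymmetricTrainer is selected instead when the wrapper is configured for gradient sharing rather than parameter averaging; workers then broadcast gradient updates to each other instead of periodically averaging full parameter vectors.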
Copyright © 2017. All rights reserved.