Commit Graph

93 Commits

Author SHA1 Message Date
Andrey Malevich
03711e9ab8 Handle bools correctly in net.Const
Summary: As desc.

Reviewed By: volkhin

Differential Revision: D5745310

fbshipit-source-id: 66c3da37a42cf98bae05cead58f3f694eae19e0d
2017-08-31 12:02:58 -07:00
Jiyan Yang
33ef5f38a0 Fixed CUDA loss op
Summary:
Currently the loss ops still don't run on the GPU even though the ALL strategy is selected.
This diff enables that.

Reviewed By: xianjiec

Differential Revision: D5671255

fbshipit-source-id: 033863f171e1f89c8d75430d3af6a1e6d0d2eff2
2017-08-30 17:02:23 -07:00
Ilia Cherniavskii
a0204331a8 Control flow operators
Summary:
This diff adds control flow operators in Caffe2 (starting with If, While):
 - Added an If operator that executes a then/else subnet
 - The branch subnet is executed in a separate isolated workspace, with some of the blobs transparently forwarded from the outer workspace
 - Added a new NetBuilder subclass to construct nets using the new operator
 - The NetBuilder also keeps track of outer blob names and automatically sets up blob bindings between the outer and inner workspace, implementing a generic convention for handling local/global variables in blocks (see the sketch below)
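
For orientation, a minimal sketch of what building a conditional net through the new NetBuilder could look like; the IfNet/Else entry points and exact signatures are assumptions based on this description, and turning the builder into an execution step/plan is omitted:

  from caffe2.python.net_builder import NetBuilder, ops

  with NetBuilder() as nb:
      one = ops.Const(1.0)
      zero = ops.Const(0.0)
      # Condition blob computed in the outer workspace.
      cond = ops.GT([one, zero])
      with ops.IfNet(cond):
          ops.Copy(one, "result")   # then-branch subnet
      with ops.Else():
          ops.Copy(zero, "result")  # else-branch subnet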

Reviewed By: volkhin

Differential Revision: D5720644

fbshipit-source-id: a674cde0c789f6a6ffdcd9d80159d1e42e49133f
2017-08-28 20:04:43 -07:00
Artem Volkhin
d3c8e68004 Revert D5641588: [caffe2] Control flow operators
Summary:
This reverts commit f9e04429961c3da7da4ebca3e8163bfcc2a09ec9

bypass-lint

Differential Revision: D5641588

fbshipit-source-id: bb23b213d08e9c3ea509216fce9367625943d007
2017-08-26 00:07:58 -07:00
Lei Chen
432cba6c05 Set up run_every_ms when constructing ExecutionStep
Summary: same as title.

Differential Revision: D5709274

fbshipit-source-id: f88b1325f3e6b948b836cc90f4d9c38a27be28ab
2017-08-25 15:58:29 -07:00
Alisson Gusatti Azzolini
ae0c4c8e66 Respect inplace blobs in InjectCrossDeviceCopies
Summary:
Before this diff, we were not respecting in-place blobs. E.g. if we had:

  with DeviceOption(CPU):
      blob = net.MyOpA([])
  with DeviceOption(CUDA):
      net.MyOpB([blob], [blob])

After InjectCrossDeviceCopies we would have:

  blob = net.MyOpA([], device=CPU)
  blob_cuda0 = net.Copy([blob], [blob_cuda0], device=CUDA)
  net.MyOpB([blob_cuda0], [blob], device=CUDA)

After this diff, the in-place blob is preserved.
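
For reference, a minimal sketch of exercising this behavior, assuming InjectCrossDeviceCopies returns the rewritten net plus a blob-to-device map and using ConstantFill/Relu as stand-ins for MyOpA/MyOpB:

  from caffe2.python import core
  from caffe2.proto import caffe2_pb2

  net = core.Net("inplace_example")
  with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU)):
      blob = net.ConstantFill([], ["blob"], shape=[4], value=1.0)
  with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
      net.Relu([blob], [blob])  # in-place op on CUDA

  new_net, blob_to_device = core.InjectCrossDeviceCopies(net)
  # Expect a single Copy into blob_cuda_0 that is then used in place,
  # rather than a copy back into the CPU blob.
  print(new_net.Proto())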

Reviewed By: harouwu

Differential Revision: D5671867

fbshipit-source-id: 6ad68c612dae19d7e1f45f4988d929644100b4d5
2017-08-25 14:57:58 -07:00
Ilia Cherniavskii
86cc7ace93 Control flow operators
Summary:
This diff adds control flow operators in Caffe2 (starting with If, While):
 - Added an If operator that executes a then/else subnet
 - The branch subnet is executed in a separate isolated workspace, with some of
   the blobs transparently forwarded from the outer workspace
 - Added a new NetBuilder subclass to construct nets using the new operator
 - The NetBuilder also keeps track of outer blob names and automatically sets up
   blob bindings between the outer and inner workspace, implementing a generic
   convention for handling local/global variables in blocks

Reviewed By: azzolini

Differential Revision: D5641588

fbshipit-source-id: f9e04429961c3da7da4ebca3e8163bfcc2a09ec9
2017-08-25 12:31:14 -07:00
Lei Chen
14950a9082 Support session in distributed realtime trainer
Summary:
Convert a PlanDef protobuf into a Python Plan object by recursively creating
Nets and ExecutionSteps.

Also support running a Plan object directly in a Session.

Reviewed By: azzolini

Differential Revision: D5608393

fbshipit-source-id: c0ae3b6da743a759af6db3b614a5a3935fe0b34c
2017-08-16 10:28:55 -07:00
Junjie Bai
1ce95090ca Add support for specifying engine preferences
Reviewed By: Yangqing

Differential Revision: D5460994

fbshipit-source-id: 08a8af699eebec37defc070389a8415b3e81ac16
2017-08-09 00:47:18 -07:00
Thomas Dudziak
676bedd298 Fixes for Python 3 in caffe2/caffe2/fb/data
Summary: As title

Reviewed By: MisterTea

Differential Revision: D5532387

fbshipit-source-id: 0a51ca40b93cc2eb5371f0b86f2800354cd1939c
2017-08-01 15:22:55 -07:00
Szymon Piechowicz
3324db447f Caffe2: allow nets that don't use all inputs in net.ClonePartial
Summary: Caffe2: allow nets that don't use all inputs in net.ClonePartial

Differential Revision: D5535564

fbshipit-source-id: 0ec8fb3ade4d7d6cd4a702c9c265d9c77f27a627
2017-08-01 10:05:46 -07:00
Yiming Wu
b51e0ec0c2 quick fix inplace blob bug
Summary: Fixes the case where the init net initializes the same blob twice. I made an exception to allow in-place blobs among ops as long as the blob stays on the same device. This should fix the problem in a generalized way, since most of our training is CPU-only for now.

Reviewed By: dzhulgakov

Differential Revision: D5450564

fbshipit-source-id: 525c4c9a2e5216a70dbd1229da2d9f8a58b89e47
2017-07-23 02:18:16 -07:00
Yiming Wu
4a256dfc97 save/load/run nets and params with device info correctly
Summary:
===Update log 7/10===

We are currently blocked by a connection problem. Will post an update if it is not fixed in 2 hrs.

===Update 7/6===

Luke is experimenting with the convergence of this diff. Hopefully he can present results next week.

Right now this does not affect our original CPU training pipeline, because the loading op is still correct in the CPU-only case.

I will need a final test to make sure, but that is currently blocked by the log device issue t19952135.

I will handle saving CPU/GPU nets in a separate diff.

===Update before 7/4===
It's actually working! Local run screenshot included:
{F67959016}

dogscience

Reviewed By: dzhulgakov

Differential Revision: D5307058

fbshipit-source-id: cad5d9324c239419530f4b120392ec2ccbb72280
2017-07-23 02:18:15 -07:00
Alisson Gusatti Azzolini
8e80ef7e6d s/CopyGPUToGPU/Copy
Summary: CopyGPUToGPU does not exist; Copy seems to do the trick. I didn't go into the details of how Copy works, so I'm not sure whether it ends up triggering UVA.

Reviewed By: akyrola

Differential Revision: D5471014

fbshipit-source-id: d8bc1aed9b19070c92f3ffc76f5617bdd0054563
2017-07-21 13:51:11 -07:00
Aapo Kyrola
cbb85545ec warn about orphan StopGradient output
Summary: A quite common source of confusion is how to use StopGradient, and a typical bug is forgetting to specify input=output. This adds a sanity check to the gradient builder that flags orphaned StopGradient outputs.
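
A short illustration of the pattern the check targets (the FC op is just a stand-in):

  from caffe2.python import core

  net = core.Net("stopgrad_example")
  net.FC(["data", "w", "b"], ["hidden"])
  # Correct usage: input == output, so gradient flow through "hidden" is cut.
  net.StopGradient(["hidden"], ["hidden"])
  # Typical bug: a fresh output blob that nothing consumes. The op then has no
  # effect on gradients, and the new sanity check flags it as orphaned:
  # net.StopGradient(["hidden"], ["hidden_stopped"])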

Reviewed By: dzhulgakov

Differential Revision: D5458341

fbshipit-source-id: 056fef4f0ee53eb10e66e9be0ecb55b55f9cc3d7
2017-07-20 21:41:41 -07:00
Tao Wu
78c4c4f885 handle RecurrentNetwork operator when cloning a net
Summary: Added support for passing remap_funcs to clone_and_bind_net so that it can forward them to the clone method. Added other utils to ensure the RecurrentNetwork operator is correctly cloned based on remap_blob. RecurrentNetwork needs this special treatment because its arguments contain protos and blobs.

Reviewed By: kittipatv

Differential Revision: D5421532

fbshipit-source-id: 5de68365ce97df2de483f02ad260d78c8d35eead
2017-07-17 17:33:21 -07:00
Dmytro Dzhulgakov
b6c1c0ac4e Fix communication_schema decoding
Summary: Allows overriding the input/output record as long as the field blobs are the same.

Reviewed By: yangyangyyy

Differential Revision: D5362132

fbshipit-source-id: 3ac2ac22802902b7eed5c226b00a7e1971ad264c
2017-07-02 13:04:20 -07:00
Aapo Kyrola
ab0fe0a5f4 add debug information when there is blob version mismatch
Summary:
It is a quite common question when users get some variant of "blob has version 2 but gradient expects version 1" in their backward pass, and the error message is completely unhelpful.
To remedy this, I added proper debug information that tells the user how the version number of a blob was incremented over time, i.e. which ops caused the version to go up. This should help users understand the issue.
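
As a toy illustration of how versions get bumped (not code from this diff):

  from caffe2.python import core

  net = core.Net("version_example")
  net.ConstantFill([], ["x"], shape=[1], value=1.0)  # first write to "x"
  net.Relu(["x"], ["x"])  # each in-place write bumps x's SSA version
  net.Relu(["x"], ["x"])
  # A gradient generated against an earlier version of "x" triggers the
  # mismatch error; the new debug output lists the ops above as the ones
  # that incremented the version.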

Reviewed By: dzhulgakov

Differential Revision: D5358227

fbshipit-source-id: bc09d048ac33200c35d56460e44e86c2f2888f3f
2017-06-30 16:22:46 -07:00
Thomas Dudziak
5355634dac Dict fixes/improvements and unittest targets for Python 3 in caffe2 core
Summary: As title

Reviewed By: salexspb

Differential Revision: D5316104

fbshipit-source-id: aee43819d817842e5ce6ba3d045a55b1a2491c30
2017-06-29 17:05:41 -07:00
Yiming Wu
1fce3eac4e single trainer hybrid device
Summary:
First try of single trainer hybrid device training for sparsenn

Comparison results with CPU training:
https://our.intern.facebook.com/intern/fblearner/run/compare/?compare_to[0]=20016969&compare_to[1]=19660293&baseline_run=19660293&all_runs[0]=20016969&all_runs[1]=19660293

Reviewed By: dzhulgakov

Differential Revision: D5205723

fbshipit-source-id: 4a024324ac2efc3248dd470d4c533cf2ecec2e92
2017-06-27 22:06:30 -07:00
Alexander Sidorov
c8410859d9 Operator python stacktraces, attempt 2
Summary:
Last time I used a uuid filled into OperatorDef, and operator_tracebacks was populated using traceback.extract_stack. There were several issues with this approach:

1. A random field in OperatorDef breaks workflows relying on memoization, i.e. when computation is skipped based on a result that was already computed before.
2. Adding one more field revealed that RNNs are not forward compatible w.r.t. new fields in there. The prototxt format seems not to allow forward compatibility (thanks jamesr66a for the investigation!). For RNNs we need to switch to a more resilient approach; azzolini's proposed change to OperatorDef / NetDef would allow that by nesting NetDef directly inside OperatorDef without the need for extra serialization.
3. traceback.extract_stack is very slow when the executable is on a remote filesystem: it does one or more os.stat calls for each frame on the stack. In some cases this added up to 15 extra minutes of model construction time.

In this diff I use a different approach which should fix all of the problems above.

1 and 2 are solved by not adding a new field at all. Instead I record an operator's index w.r.t. the net it runs in. Thanks akyrola and dzhulgakov for the idea. The downside is that operator-list manipulation breaks the logic, and separately created ops are not covered at all.
3 is solved by operating on raw frames, without the traceback and inspect modules, which end up making a lot of filesystem calls. See the function extract_stacktrace in core.py with additional comments.
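
A simplified sketch of the raw-frame approach from point 3 (the real version lives in core.py):

  import sys

  def extract_stacktrace():
      # Walk raw frame objects directly. Unlike traceback/inspect, this never
      # touches source files, so there are no per-frame os.stat calls (which
      # are slow on remote filesystems).
      frame = sys._getframe(1)
      result = []
      while frame is not None:
          code = frame.f_code
          result.append((code.co_filename, frame.f_lineno, code.co_name))
          frame = frame.f_back
      return result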

Reviewed By: dzhulgakov

Differential Revision: D5286285

fbshipit-source-id: 626dd0f5f6b8b1d86bd6bf519078b122f43ddcaa
2017-06-25 19:32:58 -07:00
Thomas Dudziak
342de07231 Core unit test fixes for Python 3
Summary: As title

Differential Revision: D5291327

fbshipit-source-id: 7dd9279c53ba55d3422c31973ffcec5705787fdf
2017-06-23 13:22:16 -07:00
Alisson Gusatti Azzolini
7d482742fd Allow tasks/execution_steps to be cloned at runtime
Summary:
Advantages of cloning the tasks/execution_steps at runtime:
- Less complexity on the Python side: no need to clone nets and add prefixes to blob names.
- Faster start-up: we had cases of complex plans that took up to 30 min to be created.
- Better isolation: each task cloned at runtime has its own child workspace, preventing false sharing of blobs.
- Opens up the possibility of dynamic scheduling: the number of threads per task can be increased on the fly, at runtime.

Reviewed By: dzhulgakov

Differential Revision: D5100730

fbshipit-source-id: 71b83193b135da4e6eaf2536d8fc266528e1fdcc
2017-06-20 22:32:07 -07:00
Alexander Sidorov
83e6a0bec8 Revert uuid change to OperatorDef protobuf
Summary:
a few issues:

1. Randomization hurts memoization.
2. Even if we make it non-random, we can get key collisions when loading it back.
3. RNNs use prototxt for the step net, and apparently it is not forward compatible the way normal protobuf is.

I am thinking of a better, less invasive solution now.

Reviewed By: jamesr66a

Differential Revision: D5272118

fbshipit-source-id: ab577fad04fbfc632e1fceffa923377a0d3da1be
2017-06-19 16:47:31 -07:00
Dmytro Dzhulgakov
12094b5114 Add random shuffle through the data to the benchmark workflow
Reviewed By: kdub0

Differential Revision: D5171727

fbshipit-source-id: 1d9182bb820224b479682fc0ca5014f909ba19d5
2017-06-16 13:22:46 -07:00
Aapo Kyrola
7ffd76db51 check operator schema before calling gradient creator
Summary: Hard-to-debug problems arise when a gradient creator fails because the forward op itself is incorrect. Add checking of the schema before calling the creator. Also clarify the error messages.

Reviewed By: Yangqing

Differential Revision: D5256016

fbshipit-source-id: 78550f7e2ce5b88e26b69fdae4be0eece52edfea
2017-06-15 13:04:58 -07:00
Alisson Gusatti Azzolini
d03ffb211c Remove WORKER_INIT_CALLS
Summary: This was only needed in order to initialize stateful PythonOps. Now PythonOp has support for initialization at Op creation time, so this is not used anymore.

Reviewed By: dzhulgakov

Differential Revision: D5242908

fbshipit-source-id: dbaa249466dd0f37f25d204d387b1f99c6dd4fed
2017-06-13 20:18:48 -07:00
Alexander Sidorov
eebda50b79 Operator python traceback
Summary: This shows a Python Caffe2 user where a failed operator was created. The motivation for keeping this information out of the protobuf itself is to avoid making it too verbose and to keep a net's protobuf readable after a simple print() call.

Reviewed By: jamesr66a

Differential Revision: D5226047

fbshipit-source-id: 7edfe850e05a2ec209577142aa3368664a57a108
2017-06-13 18:50:02 -07:00
Alisson Gusatti Azzolini
d3ec6e8f55 Run python op builder at op creation time
Summary:
This allows constructing a Python op by passing a pickled "builder function call" as an argument to the op.
The builder function is called at PythonOp construction time and returns the function that will be called when the op is run.

This way we can drop the dependency on 'tokens', which didn't work properly for protobufs that get distributed to other processes. Now the PythonOp definition is self-contained: as long as the build dependencies are right, shipping the protobuf is enough to execute the net remotely.

Reviewed By: dzhulgakov

Differential Revision: D5080833

fbshipit-source-id: a5deaca5d3143024cdb121519689224e9dbec5ce
2017-06-13 16:29:22 -07:00
Thomas Dudziak
b877d4b5f8 Misc fixes for Python 3
Summary: As title

Differential Revision: D5216942

fbshipit-source-id: def5563f1b259efefab3a829d8a78d8d3297ffc7
2017-06-13 12:18:43 -07:00
Alexander Sidorov
7f1385e70c Improve gradient accumulation of the framework: 1.5x - 2x
Summary:
We waste extra memory by creating two autosplit gradient blobs and then
accumulating them into the main one. Sometimes, when Sum / Sub ops are
involved, we can avoid wasting the extra memory at all.

Ideally we would not waste any memory and would make ops add into the same
blob rather than calculating separate results and then merging them. But
that would require a substantial change to the framework and rewriting a
lot of operators.
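
A toy repro of the pattern (the autosplit blob names in the comment are illustrative; exact generated names may differ):

  from caffe2.python import core

  net = core.Net("accum_example")
  net.FC(["x", "w", "b"], ["h1"])
  net.FC(["x", "w", "b"], ["h2"])  # "w" used twice -> two gradient contributions
  net.Sum(["h1", "h2"], ["loss"])
  net.AddGradientOperators(["loss"])
  # Before this diff: two split blobs (e.g. w_grad_autosplit_0/_1) are written
  # and then accumulated into w_grad. With the optimization, the existing Sum
  # is reused so the extra blobs can be avoided.
  print(net.Proto())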

Reviewed By: dzhulgakov

Differential Revision: D5157667

fbshipit-source-id: 8293824d6cdd971d8853ae90aee68e4a6d1e132b
2017-06-11 22:02:30 -07:00
Dmytro Dzhulgakov
638fe804dc Implement recover_input_schema_by_prefix
Summary:
It's very useful for simple cases like benchmarking nets, where we want to encode the input/output record in the net and don't want to go through the hurdles of storing the input/output record in MetaNetDef.

For those cases I propose remapping the input/output record before saving to 'input_record/{field_name}'. Then we can recover the input/output record just from the names of the blobs.

Differential Revision: D5170473

fbshipit-source-id: ac5daa60051605ed93022aec1377a49f08f15663
2017-06-11 15:37:12 -07:00
Alexander Sidorov
df72826ead Static RNN
Summary:
Static RNN allows unrolling an RNN into a Caffe2 graph using all the existing cell abstractions. In this diff I introduce several new tests that already caught a few bugs in our RecurrentNetworkOp gradient accumulation logic, by comparing it against an unrolled version.

Another use case is perf: potentially we can run an unrolled net faster because DAGNet will have access to the whole graph. The same goes for memonger. But that work is not part of this diff.

Reviewed By: akyrola

Differential Revision: D5200943

fbshipit-source-id: 20f16fc1b2ca500d06ccc60c4cec6e81839149dc
2017-06-08 17:48:48 -07:00
Alexander Sidorov
264f75fdd0 ZeroGradient op
Summary:
When building a multi-layer static RNN, the last timestep of the first
layer (and of the other layers except the last one) doesn't get a gradient
for the cell state, since the user normally consumes results only from the
last layer and the cell state doesn't flow upward either.

ZeroGradient provides a general solution for injecting zero-gradient
blobs. It is somewhat similar to the StopGradient operator, which is
also special-cased.

Reviewed By: bwasti

Differential Revision: D5198375

fbshipit-source-id: a21d0cfb3676a77fac72e5897a200d0bd25fc6de
2017-06-08 16:02:38 -07:00
Yiming Wu
4fefff0bbb Auto injecting device copy for single net and several nets
Summary:
This diff attacks the problem where we want to just annotate device options on operators and let Caffe2 inject the cross-device copy functions for us. This feature is useful for mixed-device training and for multi-device training with several nets, where previously we did the heavy lifting of adding copy functions ourselves.

Ideally, this feature will work like this:

      //construct your nets first
      core.InjectDeviceCopyAmongNets([train_init, train_net, ...])

My ideas are written in comments. I will update them here as well later.

Reviewed By: dzhulgakov

Differential Revision: D5134103

fbshipit-source-id: 173f7da9d1773d1c50ccdc27f1b5cd3067b04af5
2017-06-07 20:03:18 -07:00
Thomas Dudziak
60c78d6160 Fixes range/xrange for Python 3
Summary: As title.
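
The usual compatibility idiom for this kind of fix (a generic sketch, not necessarily the exact change made here):

  # Python 2's range() builds a full list, and xrange() is gone in Python 3.
  # Either portable spelling gives lazy behavior under both versions:
  from six.moves import range  # or: from builtins import range (future package)

  total = 0
  for i in range(10):
      total += i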

Differential Revision: D5151894

fbshipit-source-id: 7badce5d3122e8f2526a7170fbdcf0d0b66e2638
2017-06-07 00:04:26 -07:00
Yiming Wu
8cd208ad6f Infer input and output device from OperatorDef through OperatorSchema
Summary: Infer input and output devices from an OperatorDef through its OperatorSchema. This is inspired by shape inference. With this feature, we can easily analyze device information for all blobs in the net in a generic way. It is really helpful for automatic cross-device execution.
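
A sketch of querying the inference, assuming the helper is exposed as core.InferOpBlobDevices (the name used in later Caffe2 releases):

  from caffe2.python import core
  from caffe2.proto import caffe2_pb2

  op = core.CreateOperator(
      "CopyCPUToGPU", ["x_cpu"], ["x_gpu"],
      device_option=core.DeviceOption(caffe2_pb2.CUDA, 0),
  )
  input_devs, output_devs = core.InferOpBlobDevices(op)
  # The CopyCPUToGPU schema pins the input to CPU and the output to CUDA,
  # regardless of the op's own device option.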

Reviewed By: akyrola, dzhulgakov

Differential Revision: D5161065

fbshipit-source-id: ee656123112171a4ca00f2fb3f6940f32ddf3135
2017-06-05 23:47:33 -07:00
Ross Girshick
8e99824ce7 Allow subsets of gradient outputs / inputs in Python ops
Summary:
I'm using Python ops in a project and need corresponding Python gradient ops. For my use case, only a subset of the forward op outputs have gradients and only a subset of forward op inputs have gradients. However the current implementation of `GetPythonGradient` forces all grad inputs and outputs to exist. This diff allows one to specify that only a subset of grad inputs / outputs are used when constructing the Python op.

I'm not sure if this is up to caffe2 standards, so please push back on style and content as needed.
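
A hypothetical sketch of the resulting API; the grad_output_indices/grad_input_indices keyword names are an assumption based on this description:

  from caffe2.python import core

  def f(inputs, outputs):
      outputs[0].reshape(inputs[0].shape)
      outputs[0].data[...] = inputs[0].data * 2.0

  def grad_f(inputs, outputs):
      # Receives only the selected gradient outputs and writes only the
      # selected gradient inputs.
      outputs[0].reshape(inputs[-1].shape)
      outputs[0].data[...] = inputs[-1].data * 2.0

  net = core.Net("pyop_grad_subset")
  # Only output 0 has a gradient, and only input 0 receives one:
  net.Python(f, grad_f, grad_output_indices=[0], grad_input_indices=[0])(
      ["x", "aux"], ["y", "stats"])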

Reviewed By: dzhulgakov

Differential Revision: D4897004

fbshipit-source-id: 96fffe8634c51a49b6bce7339a46c6235f7d4bbd
2017-06-05 12:52:01 -07:00
Yiming Wu
8871ef029b quick fix future issue with brew/core/schema/workspace/scope/utils.py
Summary:
Fixes a missing future-package issue.

Recently we found that some of our users do not have the future module available, so we might need a try/except wrapper around every past import.
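
The guarded import would look something like this (illustrative pattern only):

  # Fall back gracefully when the python-future package is unavailable.
  try:
      from past.builtins import basestring
  except ImportError:
      basestring = str  # Python 3 fallback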

Reviewed By: Yangqing

Differential Revision: D5183547

fbshipit-source-id: 262fdf2940ee1be4454bf0b0abb9e6a0f1a0ee82
2017-06-05 12:01:48 -07:00
Alexander Sidorov
846240a340 Caffe2 gradient generator bug fix
Summary:
Bug repro is in a test. Generally speaking, accumulation was not happening
when len(ys) >= 2 (ys being the list of blobs we compute gradients from)
and some blob in the net was both in the ys list and also received a
gradient propagated from another element of ys.

Reviewed By: akyrola

Differential Revision: D5121695

fbshipit-source-id: 282d88f2f4f6e27dadae311964f40246a2739130
2017-05-30 18:47:08 -07:00
Thomas Dudziak
47e921ba49 Remove map() and filter() in favor of comprehensions
Summary: These return lazy iterators in Python 3, which means many of the current usages in Caffe2 would not do anything. This diff removes (almost) all usages of the two in Caffe2 and its subprojects in favor of comprehensions, which are also easier to read and understand.
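
The semantic difference at a glance:

  nums = [1, 2, 3]
  # Python 2: returns a list right away. Python 3: returns a lazy iterator,
  # so nothing happens until the result is consumed.
  squares = map(lambda n: n * n, nums)
  # The comprehensions behave identically under both versions and read better:
  squares = [n * n for n in nums]
  evens = [n for n in nums if n % 2 == 0]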

Reviewed By: akyrola

Differential Revision: D5142049

fbshipit-source-id: e800631d2df7d0823fed698cae46c486038007dc
2017-05-30 15:32:58 -07:00
Aapo Kyrola
44257ea5ed automatically infer device scope for param
Summary:
hankun is using the optimizer, but with a mixed set of GPU and CPU operators. Currently this won't work with the optimizer, since it adds optimizers for all parameters in the current device scope. But we can actually infer the device a param belongs to by looking at the device option in the param_init_net.

Added a test as well.
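
A sketch of the inference idea (a hypothetical helper, not the actual diff code):

  def infer_param_device(param_name, param_init_net):
      # The op in param_init_net that produces the param carries the device
      # option the param was created under.
      for op in param_init_net.Proto().op:
          if param_name in op.output:
              return op.device_option
      return None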

Reviewed By: salexspb

Differential Revision: D5133652

fbshipit-source-id: ad8689d75ac1f5c78981bae1b6978fe91e40ef0f
2017-05-30 12:02:19 -07:00
Thomas Dudziak
3ccbf23132 String-related fixes for Python 3
Summary: This diff is one step towards enabling the Python 3 build by making the code more diligent in its handling of strings.

Reviewed By: salexspb

Differential Revision: D4893083

fbshipit-source-id: 28b8adf3280e8d1f0a7dc9b0fee5ad53f2fada57
2017-05-26 16:04:32 -07:00
Alisson Gusatti Azzolini
75bc9f5e77 Relax requirement on token uniqueness
Summary: Relax requirement on token uniqueness since a few use cases broke after the uniqueness requirement was added in a previous diff.

Reviewed By: kittipatv

Differential Revision: D5034132

fbshipit-source-id: 327eb065923e6ea152a360324316f81b7fb9564b
2017-05-09 19:36:00 -07:00
Alisson Gusatti Azzolini
bd8ed6641c Stabilize PythonOp token name
Summary: For distributed jobs, we were relying on the order the PythonOps were registered, which was very fragile.

Reviewed By: dzhulgakov

Differential Revision: D5016847

fbshipit-source-id: f5601467c5b0569d5e8a0efdd76abad0d703c5f5
2017-05-09 11:19:44 -07:00
Aapo Kyrola
711ea1d4ac fix external inputs handling in AppendNet v2
Summary: External inputs must be computed before updating the _ops_output structure; otherwise, if the net being appended outputs the external input, it is not added correctly.

Differential Revision: D5013496

fbshipit-source-id: 6a83d0a6f1c63ef8ae7bec4d862c0ac2a690d47b
2017-05-05 21:50:57 -07:00
Eider Moore
0c6099ce25 Add __dir__ so autocomplete in IPython works.
Summary: It is good practice to provide __dir__ whenever __getattr__ is defined, so that tooling can work intelligently. In particular, it is hard to explore the available methods in IPython without tab completion.
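
The general pattern being applied, as a self-contained toy version (not the Caffe2 code itself):

  OP_TYPES = ["FC", "Relu", "Sum"]  # stand-in for the registered operator list

  class Net(object):
      def __getattr__(self, op_type):
          # Dynamic attributes: net.Relu(...), net.FC(...), etc.
          if op_type not in OP_TYPES:
              raise AttributeError(op_type)
          return lambda inputs, outputs: (op_type, inputs, outputs)

      def __dir__(self):
          # Without this, tab completion only sees statically defined names.
          return sorted(set(dir(type(self)) + list(self.__dict__) + OP_TYPES))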

Reviewed By: dzhulgakov

Differential Revision: D5006545

fbshipit-source-id: 1a150d91d54637d80b292764513943ff70d971b4
2017-05-05 11:32:06 -07:00
Kittipat Virochsiri
22d4eaeb9e JoinContext
Summary:
A layer that allows a model to follow different paths for each instantiation context and join them later. Together with the tagging-system cleanup (a separate issue), this should reduce the need to write a layer just to differentiate between contexts.

Re: the tagging-system cleanup, we should make exclusion more explicit: EXCLUDE_FROM_<CONTEXT>. This would simplify the instantiation code. TRAIN_ONLY should become the set of all EXCLUDE_FROM_*, except EXCLUDE_FROM_TRAIN.

Reviewed By: kennyhorror

Differential Revision: D4964949

fbshipit-source-id: ba6453b0deb92d1989404efb9d86e1ed25297202
2017-05-02 17:32:26 -07:00
Kittipat Virochsiri
e8e36945cf make debug message more explicit & verbose
Summary: I ran into this earlier and the debug messages were not helpful enough.

Reviewed By: kennyhorror

Differential Revision: D4985754

fbshipit-source-id: b3d12b5e2cfa1b54fca9126768c84c902664ef28
2017-05-02 12:39:14 -07:00
Krishna Vudata
1f3c7f8080 Handle net.external_inputs correctly in AppendNet
Summary:
When appending net A to net B, an external input of net A should not be added as
an external input of net B if net B is outputting that blob.
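
A minimal sketch of the fixed behavior (ConstantFill/Relu are stand-in ops):

  from caffe2.python import core

  net_b = core.Net("B")
  net_b.ConstantFill([], ["x"], shape=[1], value=1.0)  # net B outputs "x"

  net_a = core.Net("A")
  net_a.Relu(["x"], ["y"])  # "x" is an external input of net A

  net_b.AppendNet(net_a)
  # After this fix, "x" no longer appears in net_b's external_input,
  # since net_b itself produces the blob.
  print(net_b.Proto().external_input)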

Reviewed By: dzhulgakov

Differential Revision: D4975921

fbshipit-source-id: a5c0ada7b96d851e57d345244d322dd93c7be8e4
2017-05-02 11:20:26 -07:00