diff --git "a/\354\233\220\353\263\270/\354\233\220\353\263\270\355\216\230\354\235\264\354\247\200.md" "b/\354\233\220\353\263\270/\354\233\220\353\263\270\355\216\230\354\235\264\354\247\200.md"
index bebf001f3b33a3d9f73fc41d5b0d28b855c2f7f7..cdbbc5bb5443f3ddb2c26ad25a778c58af0400e7 100644
--- "a/\354\233\220\353\263\270/\354\233\220\353\263\270\355\216\230\354\235\264\354\247\200.md"
+++ "b/\354\233\220\353\263\270/\354\233\220\353\263\270\355\216\230\354\235\264\354\247\200.md"
@@ -274,4 +274,77 @@ test_evaluator = dict(
     metric=['bbox', 'segm'],  # Metrics to be evaluated
     format_only=True,  # Only format and save the results to coco json file
     outfile_prefix='./work_dirs/coco_detection/test')  # The prefix of output json files
-```
\ No newline at end of file
+```
+
+## Training and testing config
+
+MMEngine's runner uses `Loop` to control the training, validation, and testing processes. Users can set the maximum number of training epochs and the validation interval with the fields below.
+
+```
+train_cfg = dict(
+    type='EpochBasedTrainLoop',  # The training loop type. Refer to https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py
+    max_epochs=12,  # Maximum training epochs
+    val_interval=1)  # Validation intervals. Run validation every epoch.
+val_cfg = dict(type='ValLoop')  # The validation loop type
+test_cfg = dict(type='TestLoop')  # The testing loop type
+```
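+
+Schedules can also be counted in iterations rather than epochs. As a sketch (assuming MMEngine's `IterBasedTrainLoop`, which takes `max_iters` instead of `max_epochs`; the values are illustrative), the same fields might look like this:
+
+```
+# A sketch of an iteration-based schedule, assuming MMEngine's
+# IterBasedTrainLoop (which counts progress in iterations, not epochs).
+train_cfg = dict(
+    type='IterBasedTrainLoop',  # Count training progress in iterations
+    max_iters=90000,  # Maximum training iterations (illustrative value)
+    val_interval=10000)  # Run validation every 10000 iterations
+val_cfg = dict(type='ValLoop')
+test_cfg = dict(type='TestLoop')
+```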
+
+## Optimization config
+
+`optim_wrapper` is the field used to configure optimization-related settings. The optimizer wrapper not only provides the functions of the optimizer, but also supports features such as gradient clipping and mixed-precision training. Find more in the optimizer wrapper tutorial.
+
+```
+optim_wrapper = dict(  # Optimizer wrapper config
+    type='OptimWrapper',  # Optimizer wrapper type, switch to AmpOptimWrapper to enable mixed precision training.
+    optimizer=dict(  # Optimizer config. Support all kinds of optimizers in PyTorch. Refer to https://pytorch.org/docs/stable/optim.html#algorithms
+        type='SGD',  # Stochastic gradient descent optimizer
+        lr=0.02,  # The base learning rate
+        momentum=0.9,  # Stochastic gradient descent with momentum
+        weight_decay=0.0001),  # Weight decay of SGD
+    clip_grad=None)  # Gradient clip option. Set None to disable gradient clipping. Find usage in https://mmengine.readthedocs.io/en/latest/tutorials/optimizer.html
+```
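+
+The comments above mention mixed-precision training and gradient clipping. A minimal sketch combining both (assuming MMEngine's `AmpOptimWrapper`, and that the `clip_grad` keys are forwarded to `torch.nn.utils.clip_grad_norm_`):
+
+```
+# Sketch: enable automatic mixed precision and clip gradients by norm.
+# Assumes AmpOptimWrapper is available and that clip_grad keys are
+# passed through to torch.nn.utils.clip_grad_norm_.
+optim_wrapper = dict(
+    type='AmpOptimWrapper',  # Wrap the optimizer with mixed-precision support
+    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001),
+    clip_grad=dict(max_norm=35, norm_type=2))  # Clip the gradient L2 norm at 35
+```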
+
+`param_scheduler` is the field that configures how optimization hyper-parameters such as learning rate and momentum are adjusted. Users can combine multiple schedulers to build the desired parameter adjustment strategy. Find more in the parameter scheduler tutorial and the parameter scheduler API documentation.
+
+```
+param_scheduler = [
+    # Linear learning rate warm-up scheduler
+    dict(
+        type='LinearLR',  # Use linear policy to warmup learning rate
+        start_factor=0.001, # The ratio of the starting learning rate used for warmup
+        by_epoch=False,  # The warmup learning rate is updated by iteration
+        begin=0,  # Start from the first iteration
+        end=500),  # End the warmup at the 500th iteration
+    # The main LRScheduler
+    dict(
+        type='MultiStepLR',  # Use multi-step learning rate policy during training
+        by_epoch=True,  # The learning rate is updated by epoch
+        begin=0,   # Start from the first epoch
+        end=12,  # End at the 12th epoch
+        milestones=[8, 11],  # Epochs to decay the learning rate
+        gamma=0.1)  # The learning rate decay ratio
+]
+```
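+
+With the config above, the learning rate warms up linearly from `0.02 * 0.001 = 2e-5` to `0.02` over the first 500 iterations, then is multiplied by `0.1` at epochs 8 and 11, ending at `0.0002`. If a smooth decay is preferred instead, the `MultiStepLR` stage could be swapped for cosine annealing; a sketch assuming MMEngine's `CosineAnnealingLR` (parameter values are illustrative):
+
+```
+# Sketch: keep the linear warm-up, replace the step decay with a
+# cosine curve. Assumes MMEngine's CosineAnnealingLR scheduler.
+param_scheduler = [
+    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
+    dict(
+        type='CosineAnnealingLR',  # Decay the learning rate along a cosine curve
+        by_epoch=True,
+        begin=0,
+        end=12,
+        T_max=12,  # Period of the cosine decay, in epochs
+        eta_min=0.0002)  # Final learning rate (illustrative value)
+]
+```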
+
+## Hook config
+
+Users can attach hooks to the training, validation, and testing loops to insert operations while they run. There are two hook fields: `default_hooks` and `custom_hooks`.
+
+`default_hooks` is a dict of hook configs for the hooks that must be present at runtime. They have default priorities, which should not be modified. If not set, the runner will use the default values. To disable a default hook, users can set its config to `None`. Find more in HOOK.
+
+```
+default_hooks = dict(
+    timer=dict(type='IterTimerHook'),  # Update the time spent during iteration into message hub
+    logger=dict(type='LoggerHook', interval=50),  # Collect logs from different components of Runner and write them to the terminal, a JSON file, TensorBoard, wandb, etc.
+    param_scheduler=dict(type='ParamSchedulerHook'),  # Update some hyper-parameters of the optimizer, e.g. the learning rate
+    checkpoint=dict(type='CheckpointHook', interval=1), # Save checkpoints periodically
+    sampler_seed=dict(type='DistSamplerSeedHook'),  # Ensure distributed Sampler shuffle is active
+    visualization=dict(type='DetVisualizationHook'))  # Detection visualization hook. Used to visualize prediction results during validation and testing
+```
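+
+For instance, a config might keep only the latest few checkpoints and disable visualization entirely. A sketch, assuming `CheckpointHook` accepts a `max_keep_ckpts` argument (disabling a hook via `None` follows the rule stated above):
+
+```
+# Sketch: override two default hooks. Assumes CheckpointHook supports
+# max_keep_ckpts; setting a hook's config to None disables it.
+default_hooks = dict(
+    checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3),  # Keep only the 3 latest checkpoints
+    visualization=None)  # Disable the detection visualization hook
+```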
+
+`custom_hooks` is a list of all other hook configs. Users can develop their own hooks and insert them in this field, as sketched after the block below.
+
+```
+custom_hooks = []
+```
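+
+As a minimal sketch of what such a hook might look like (the class name, method body, and log message are hypothetical; it assumes MMEngine's `Hook` base class and `HOOKS` registry):
+
+```
+# Hypothetical custom hook, registered so the config can refer to it
+# by its type name. Assumes mmengine.hooks.Hook and the HOOKS registry.
+from mmengine.hooks import Hook
+from mmengine.registry import HOOKS
+
+
+@HOOKS.register_module()
+class MyHook(Hook):  # hypothetical name
+    def after_train_epoch(self, runner):
+        # Runs once at the end of every training epoch.
+        runner.logger.info(f'Finished epoch {runner.epoch}')
+
+
+# It can then be enabled from the config:
+custom_hooks = [dict(type='MyHook')]
+```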