swanlab: Waiting for the swanlab cloud response.
swanlab: swanlab version 0.6.4 is available! Upgrade: `pip install -U swanlab`
swanlab: Getting project...
swanlab: Creating experiment...
swanlab: Tracking run with swanlab version 0.6.3
swanlab: Run data will be saved locally in /home/rwkv/RWKV-TS/RETRO_TEST/Minimind/swanlog/run-20250702_123051-d30a286e
swanlab: 👋 Hi Garylu, welcome to swanlab!
swanlab: Syncing run MiniMind-TripleExtraction-Epoch-4-BatchSize-192-LearningRate-0.0002 to the cloud
swanlab: 🏠 View project at https://swanlab.cn/@Garylu/MiniMind-TripleExtraction
swanlab: 🚀 View run at https://swanlab.cn/@Garylu/MiniMind-TripleExtraction/runs/pgnn4um8pb74vf4bpden3
[2025-07-02 12:30:52] tokens_per_iter: 98304
[2025-07-02 12:30:52] Configuration:
[2025-07-02 12:30:52] out_dir: out
[2025-07-02 12:30:52] epochs: 4
[2025-07-02 12:30:52] embedding_epoch: 2
[2025-07-02 12:30:52] batch_size: 192
[2025-07-02 12:30:52] learning_rate: 0.0002
[2025-07-02 12:30:52] dtype: bfloat16
[2025-07-02 12:30:52] use_swanlab: True
[2025-07-02 12:30:52] swanlab_project: MiniMind-TripleExtraction
[2025-07-02 12:30:52] num_workers: 1
[2025-07-02 12:30:52] accumulation_steps: 32
[2025-07-02 12:30:52] grad_clip: 1.0
[2025-07-02 12:30:52] warmup_iters: 0
[2025-07-02 12:30:52] log_interval: 50
[2025-07-02 12:30:52] save_interval: 10000
[2025-07-02 12:30:52] dim: 512
[2025-07-02 12:30:52] n_layers: 8
[2025-07-02 12:30:52] max_seq_len: 512
[2025-07-02 12:30:52] use_moe: False
[2025-07-02 12:30:52] disable_db: False
[2025-07-02 12:30:52] data_path: /home/rwkv/RWKV-TS/RETRO_TEST/extract/processed_trex_data.json
[2025-07-02 12:30:52] pretrained_embedding_path: None
[2025-07-02 12:30:52] profile: True
[2025-07-02 12:30:52] profile_interval: 10
[2025-07-02 12:30:52] use_flash_attn: True
[2025-07-02 12:30:52] knowledge_num: 960400
[2025-07-02 12:30:52] knowledge_length: 32
[2025-07-02 12:30:52] database_init_path: ./dataset/combined_prepare.json
[2025-07-02 12:30:52] fast_clustering: True
[2025-07-02 12:30:52] cluster_cache_path: ./cache/cluster_tokens_single.pt
[2025-07-02 12:30:52] recompute_clusters: False
[2025-07-02 12:30:52] memory_monitor: False
[2025-07-02 12:30:52] memory_monitor_interval: 10
[2025-07-02 12:30:52] max_targets: 5
[2025-07-02 12:30:52] temperature: 1.0
[2025-07-02 12:30:52] detailed_timing: True
[2025-07-02 12:30:52] save_dir: out
[2025-07-02 12:30:52] swanlab_run_name: MiniMind-TripleExtraction-Epoch-4-BatchSize-192-LearningRate-0.0002
[2025-07-02 12:30:52] n_heads: 32
[2025-07-02 12:30:52] n_kv_heads: 8
[2025-07-02 12:30:52] vocab_size: 6400
[2025-07-02 12:30:52] hidden_dim: None
[2025-07-02 12:30:52] multiple_of: 64
[2025-07-02 12:30:52] norm_eps: 1e-05
[2025-07-02 12:30:52] rope_theta: 1000000.0
[2025-07-02 12:30:52] dropout: 0.0
[2025-07-02 12:30:52] flash_attn: True
[2025-07-02 12:30:52] embeddings_epoch: 2
[2025-07-02 12:30:52] num_experts_per_tok: 2
[2025-07-02 12:30:52] n_routed_experts: 4
[2025-07-02 12:30:52] n_shared_experts: True
[2025-07-02 12:30:52] scoring_func: softmax
[2025-07-02 12:30:52] aux_loss_alpha: 0.1
[2025-07-02 12:30:52] seq_aux: True
[2025-07-02 12:30:52] norm_topk_prob: True
[2025-07-02 12:30:52] knowledge_dim: 128
[2025-07-02 12:30:52] max_subject_len: 8
[2025-07-02 12:30:52] max_predicate_len: 4
[2025-07-02 12:30:52] max_object_len: 8
[2025-07-02 12:30:52] return_dict: True
[2025-07-02 12:30:52] output_hidden_states: False
[2025-07-02 12:30:52] output_attentions: False
[2025-07-02 12:30:52] torchscript: False
[2025-07-02 12:30:52] torch_dtype: None
[2025-07-02 12:30:52] use_bfloat16: False
[2025-07-02 12:30:52] tf_legacy_loss: False
[2025-07-02 12:30:52] pruned_heads: {}
[2025-07-02 12:30:52] tie_word_embeddings: True
[2025-07-02 12:30:52] chunk_size_feed_forward: 0
[2025-07-02 12:30:52] is_encoder_decoder: False
[2025-07-02 12:30:52] is_decoder: False
[2025-07-02 12:30:52] cross_attention_hidden_size: None
[2025-07-02 12:30:52] add_cross_attention: False
[2025-07-02 12:30:52] tie_encoder_decoder: False
[2025-07-02 12:30:52] max_length: 20
[2025-07-02 12:30:52] min_length: 0
[2025-07-02 12:30:52] do_sample: False
[2025-07-02 12:30:52] early_stopping: False
[2025-07-02 12:30:52] num_beams: 1
[2025-07-02 12:30:52] num_beam_groups: 1
[2025-07-02 12:30:52] diversity_penalty: 0.0
[2025-07-02 12:30:52] top_k: 50
[2025-07-02 12:30:52] top_p: 1.0
[2025-07-02 12:30:52] typical_p: 1.0
[2025-07-02 12:30:52] repetition_penalty: 1.0
[2025-07-02 12:30:52] length_penalty: 1.0
[2025-07-02 12:30:52] no_repeat_ngram_size: 0
[2025-07-02 12:30:52] encoder_no_repeat_ngram_size: 0
[2025-07-02 12:30:52] bad_words_ids: None
[2025-07-02 12:30:52] num_return_sequences: 1
[2025-07-02 12:30:52] output_scores: False
[2025-07-02 12:30:52] return_dict_in_generate: False
[2025-07-02 12:30:52] forced_bos_token_id: None
[2025-07-02 12:30:52] forced_eos_token_id: None
[2025-07-02 12:30:52] remove_invalid_values: False
[2025-07-02 12:30:52] exponential_decay_length_penalty: None
[2025-07-02 12:30:52] suppress_tokens: None
[2025-07-02 12:30:52] begin_suppress_tokens: None
[2025-07-02 12:30:52] architectures: None
[2025-07-02 12:30:52] finetuning_task: None
[2025-07-02 12:30:52] id2label: {0: 'LABEL_0', 1: 'LABEL_1'}
[2025-07-02 12:30:52] label2id: {'LABEL_0': 0, 'LABEL_1': 1}
[2025-07-02 12:30:52] tokenizer_class: None
[2025-07-02 12:30:52] prefix: None
[2025-07-02 12:30:52] bos_token_id: None
[2025-07-02 12:30:52] pad_token_id: None
[2025-07-02 12:30:52] eos_token_id: None
[2025-07-02 12:30:52] sep_token_id: None
[2025-07-02 12:30:52] decoder_start_token_id: None
[2025-07-02 12:30:52] task_specific_params: None
[2025-07-02 12:30:52] problem_type: None
[2025-07-02 12:30:52] _name_or_path: 
[2025-07-02 12:30:52] _commit_hash: None
[2025-07-02 12:30:52] _attn_implementation_internal: None
[2025-07-02 12:30:52] _attn_implementation_autoset: False
[2025-07-02 12:30:52] transformers_version: None
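For reference, a minimal sketch of how a run like this is typically registered with swanlab, assuming its wandb-style init API; `args` is a hypothetical argparse namespace standing in for the training script's actual CLI options (values taken from the dump above). Note the consistency check: tokens_per_iter = batch_size * max_seq_len = 192 * 512 = 98304, which matches the first log line.

import argparse
import swanlab

# Hypothetical stand-in for the script's parsed arguments.
args = argparse.Namespace(
    epochs=4, batch_size=192, learning_rate=0.0002,
    max_seq_len=512, accumulation_steps=32, dtype="bfloat16",
)

# 192 * 512 = 98304, as reported by "tokens_per_iter" above.
tokens_per_iter = args.batch_size * args.max_seq_len

run = swanlab.init(
    project="MiniMind-TripleExtraction",
    experiment_name=f"MiniMind-TripleExtraction-Epoch-{args.epochs}"
                    f"-BatchSize-{args.batch_size}-LearningRate-{args.learning_rate}",
    config=vars(args),  # the "Configuration:" dump corresponds to this config dict
)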
Triple extraction task head configuration:
- Max subject length: 8
- Max predicate length: 4
- Max object length: 8
The weights of the following components have been frozen:
- tok_embeddings
- knowledge_dataset
- layers (all transformer layers)
- output
- pos_cis
Note: triple_extraction_head remains trainable
[2025-07-02 12:30:53] Loading pretrained weights from /home/rwkv/RWKV-TS/RETRO_TEST/extract/Experiment_1_2_2_pretrain_512.pth
[2025-07-02 12:30:53] Successfully loaded pretrained state_dict with 143 parameters
[2025-07-02 12:30:53] Loaded 143 parameters from pretrained weights
[2025-07-02 12:30:53] Skipped 0 parameters
[2025-07-02 12:30:53] Key loaded parameters:
[2025-07-02 12:30:53] ✅ tok_embeddings.weight
[2025-07-02 12:30:53] ✅ knowledge_dataset.keys
[2025-07-02 12:30:53] ✅ knowledge_dataset.knowledge_dataset
[2025-07-02 12:30:53] ✅ knowledge_dataset.tok_embeddings.weight
[2025-07-02 12:30:53] ✅ knowledge_dataset.to_queries.0.weight
[2025-07-02 12:30:53] ... and 61 more
[2025-07-02 12:30:53] Database embeddings and sentences stored in model
[2025-07-02 12:30:53] Total LLM parameters: 14.486 million
[2025-07-02 12:30:53] Model initialization complete
[2025-07-02 12:30:53] Detected complex tensor pos_cis; setting it to not participate in distributed training
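The load-then-freeze sequence above can be reproduced with standard PyTorch calls. A hedged sketch follows; `model` stands for the already-constructed MiniMind module (not shown in the log), and the head/component names come from the log itself:

import torch
from torch.nn.parallel import DistributedDataParallel

CKPT = "/home/rwkv/RWKV-TS/RETRO_TEST/extract/Experiment_1_2_2_pretrain_512.pth"

# Load the pretrained state_dict; strict=False tolerates keys present on only
# one side (the log reports 143 parameters loaded, 0 skipped).
state_dict = torch.load(CKPT, map_location="cpu")
missing, unexpected = model.load_state_dict(state_dict, strict=False)

# Freeze everything except the triple-extraction head, matching the
# frozen-component list printed above.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("triple_extraction_head")

# pos_cis is complex-valued and NCCL cannot all-reduce complex dtypes; one
# workaround (a private PyTorch API, assumed here) is to have DDP ignore it:
DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(model, ["pos_cis"])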
[2025-07-02 12:30:53] Triple extraction training: using TriplePretrainDataset
🚀 Starting to load and preprocess triple data...
📂 Loading raw data...
📊 Raw data size: 3459987 samples
🔍 Validating data format and selecting a single target...
Validating data format:   0%|          | 0/3459987 [00:00
[2025-07-02 12:39:26,868] [INFO] [logging.py:107:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 2 optimizer
[2025-07-02 12:39:26,869] [INFO] [stage_1_and_2.py:150:__init__] Reduce bucket size 500000000
[2025-07-02 12:39:26,869] [INFO] [stage_1_and_2.py:151:__init__] Allgather bucket size 500000000
[2025-07-02 12:39:26,869] [INFO] [stage_1_and_2.py:152:__init__] CPU Offload: False
[2025-07-02 12:39:26,869] [INFO] [stage_1_and_2.py:153:__init__] Round robin gradient partitioning: False
[2025-07-02 12:39:31,187] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
[2025-07-02 12:39:31,188] [INFO] [utils.py:782:see_memory_usage] MA 0.38 GB Max_MA 0.41 GB CA 0.43 GB Max_CA 0 GB
[2025-07-02 12:39:31,191] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 60.17 GB, percent = 27.3%
[2025-07-02 12:39:33,966] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
[2025-07-02 12:39:33,967] [INFO] [utils.py:782:see_memory_usage] MA 0.38 GB Max_MA 0.44 GB CA 0.49 GB Max_CA 0 GB
[2025-07-02 12:39:33,967] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 60.21 GB, percent = 27.3%
[2025-07-02 12:39:33,968] [INFO] [stage_1_and_2.py:571:__init__] optimizer state initialized
[2025-07-02 12:39:36,078] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
[2025-07-02 12:39:36,079] [INFO] [utils.py:782:see_memory_usage] MA 0.38 GB Max_MA 0.38 GB CA 0.49 GB Max_CA 0 GB
[2025-07-02 12:39:36,080] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 60.21 GB, percent = 27.3%
[2025-07-02 12:39:36,082] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer
[2025-07-02 12:39:36,082] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None
[2025-07-02 12:39:36,082] [INFO] [logging.py:107:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
[2025-07-02 12:39:36,083] [INFO] [logging.py:107:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0], mom=[(0.9, 0.999)]
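The "MA / Max_MA / CA / Max_CA" lines are produced by DeepSpeed's see_memory_usage helper: MA and Max_MA correspond to torch.cuda.memory_allocated() and max_memory_allocated(), CA and Max_CA to memory_reserved() and max_memory_reserved(), all reported in GB, plus CPU virtual memory from psutil. A sketch of emitting the same checkpoint manually:

from deepspeed.runtime.utils import see_memory_usage

# force=True prints even when memory breakdown logging is otherwise disabled;
# this produces the same [utils.py:...:see_memory_usage] lines as above.
see_memory_usage("Before initializing optimizer states", force=True)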
[2025-07-02 12:39:36,083] [INFO] [config.py:1014:print] DeepSpeedEngine configuration:
[2025-07-02 12:39:36,084] [INFO] [config.py:1018:print] activation_checkpointing_config { "partition_activations": false, "contiguous_memory_optimization": false, "cpu_checkpointing": false, "number_checkpoints": null, "synchronize_checkpoint_boundary": false, "profile": false }
[2025-07-02 12:39:36,084] [INFO] [config.py:1018:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'intra_op_parallelism': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-07-02 12:39:36,084] [INFO] [config.py:1018:print] amp_enabled .................. False
[2025-07-02 12:39:36,084] [INFO] [config.py:1018:print] amp_params ................... False
[2025-07-02 12:39:36,085] [INFO] [config.py:1018:print] autotuning_config ............ { "enabled": false, "start_step": null, "end_step": null, "metric_path": null, "arg_mappings": null, "metric": "throughput", "model_info": null, "results_dir": "autotuning_results", "exps_dir": "autotuning_exps", "overwrite": true, "fast": true, "start_profile_step": 3, "end_profile_step": 5, "tuner_type": "gridsearch", "tuner_early_stopping": 5, "tuner_num_trials": 50, "model_info_path": null, "mp_size": 1, "max_train_batch_size": null, "min_train_batch_size": 1, "max_train_micro_batch_size_per_gpu": 1.024000e+03, "min_train_micro_batch_size_per_gpu": 1, "num_tuning_micro_batch_sizes": 3 }
[2025-07-02 12:39:36,085] [INFO] [config.py:1018:print] bfloat16_enabled ............. True
[2025-07-02 12:39:36,085] [INFO] [config.py:1018:print] bfloat16_immediate_grad_update True
[2025-07-02 12:39:36,085] [INFO] [config.py:1018:print] checkpoint_parallel_write_pipeline False
[2025-07-02 12:39:36,085] [INFO] [config.py:1018:print] checkpoint_tag_validation_enabled True
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] checkpoint_tag_validation_fail False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] comms_config .................
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] communication_data_type ...... None
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] compile_config ............... deepcompile=False free_activation=False offload_activation=False offload_opt_states=False double_buffer=True symmetric_memory=False debug_log=False offload_parameters=False sync_before_reduce=False sync_after_reduce=False sync_before_allgather=False sync_after_allgather=False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] curriculum_enabled_legacy .... False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] curriculum_params_legacy ..... False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'pin_memory': False, 'curriculum_learning': {'enabled': False}, 'dynamic_batching': {'enabled': False, 'lr_scaling_method': 'linear', 'min_batch_size': 1, 'max_batch_size': None, 'sequence_picking_order': 'dataloader', 'verbose': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] data_efficiency_enabled ...... False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] dataloader_drop_last ......... False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] disable_allgather ............ False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] dump_state ................... False
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] dynamic_loss_scale_args ...... None
[2025-07-02 12:39:36,086] [INFO] [config.py:1018:print] eigenvalue_enabled ........... False
[2025-07-02 12:39:36,087] [INFO] [config.py:1018:print] eigenvalue_gas_boundary_resolution 1
[2025-07-02 12:39:36,087] [INFO] [config.py:1018:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-07-02 12:39:36,087] [INFO] [config.py:1018:print] eigenvalue_layer_num ......... 0
[2025-07-02 12:39:36,087] [INFO] [config.py:1018:print] eigenvalue_max_iter .......... 100
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] eigenvalue_stability ......... 1e-06
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] eigenvalue_tol ............... 0.01
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] eigenvalue_verbose ........... False
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] elasticity_enabled ........... False
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] flops_profiler_config ........ { "enabled": false, "recompute_fwd_factor": 0.0, "profile_step": 1, "module_depth": -1, "top_modules": 1, "detailed": true, "output_file": null }
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] fp16_auto_cast ............... None
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] fp16_enabled ................. False
[2025-07-02 12:39:36,088] [INFO] [config.py:1018:print] fp16_master_weights_and_gradients False
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] global_rank .................. 0
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] grad_accum_dtype ............. None
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] gradient_accumulation_steps .. 32
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] gradient_clipping ............ 1.0
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] gradient_predivide_factor .... 1.0
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] graph_harvesting ............. False
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-07-02 12:39:36,089] [INFO] [config.py:1018:print] initial_dynamic_scale ........ 1
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] load_universal_checkpoint .... False
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] loss_scale ................... 1.0
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] memory_breakdown ............. False
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] mics_hierarchial_params_gather False
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] mics_shard_size .............. -1
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] nebula_config ................ { "enabled": false, "persistent_storage_path": null, "persistent_time_interval": 100, "num_of_version_in_retention": 2, "enable_nebula_load": true, "load_path": null }
[2025-07-02 12:39:36,090] [INFO] [config.py:1018:print] optimizer_legacy_fusion ...... False
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] optimizer_name ............... None
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] optimizer_params ............. None
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] pld_enabled .................. False
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] pld_params ................... False
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] prescale_gradients ........... False
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] scheduler_name ............... None
[2025-07-02 12:39:36,091] [INFO] [config.py:1018:print] scheduler_params ............. None
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] seq_parallel_communication_data_type torch.float32
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] sparse_attention ............. None
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] sparse_gradients_enabled ..... False
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] steps_per_print .............. inf
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] tensor_parallel_config ....... dtype=torch.float16 autotp_size=0 tp_overlap_comm=False tensor_parallel=TPConfig(tp_size=1, tp_grain_size=1, mpu=None, tp_group=None) injection_policy_tuple=None keep_module_on_host=False replace_with_kernel_inject=False
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] timers_config ................ enabled=True synchronized=True
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] train_batch_size ............. 6144
[2025-07-02 12:39:36,092] [INFO] [config.py:1018:print] train_micro_batch_size_per_gpu 192
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] use_data_before_expert_parallel_ False
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] use_node_local_storage ....... False
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] wall_clock_breakdown ......... False
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] weight_quantization_config ... None
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] world_size ................... 1
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] zero_allow_untested_optimizer True
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True log_trace_cache_warnings=False
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] zero_enabled ................. True
[2025-07-02 12:39:36,093] [INFO] [config.py:1018:print] zero_force_ds_cpu_optimizer .. True
[2025-07-02 12:39:36,094] [INFO] [config.py:1018:print] zero_optimization_stage ...... 2
[2025-07-02 12:39:36,094] [INFO] [config.py:1004:print_user_config] json = {
    "train_batch_size": 6.144000e+03,
    "train_micro_batch_size_per_gpu": 192,
    "gradient_accumulation_steps": 32,
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "none",
            "nvme_path": null
        },
        "offload_param": {
            "device": "none",
            "nvme_path": null
        },
        "stage3_gather_16bit_weights_on_model_save": false
    },
    "gradient_clipping": 1.0,
    "steps_per_print": inf,
    "bf16": {
        "enabled": true
    },
    "fp16": {
        "enabled": false
    },
    "zero_allow_untested_optimizer": true
}
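The print_user_config JSON above is what the training script hands to deepspeed.initialize. A hedged reconstruction follows; `model` and `optimizer` are the frozen-except-head module and its client-supplied optimizer from the earlier steps (zero_allow_untested_optimizer=True and the Adam-style mom=[(0.9, 0.999)] line suggest the optimizer is passed in rather than built by DeepSpeed). Note the consistency constraint the engine enforces: train_batch_size = micro_batch * grad_accum * world_size, i.e. 6144 = 192 * 32 * 1, matching the dump.

import deepspeed

# Mirror of the user config printed above.
ds_config = {
    "train_batch_size": 6144,
    "train_micro_batch_size_per_gpu": 192,
    "gradient_accumulation_steps": 32,
    "gradient_clipping": 1.0,
    "steps_per_print": float("inf"),
    "bf16": {"enabled": True},
    "fp16": {"enabled": False},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "none"},
        "offload_param": {"device": "none"},
        "stage3_gather_16bit_weights_on_model_save": False,
    },
    "zero_allow_untested_optimizer": True,
}

# Wrap model and optimizer in a ZeRO stage 2 engine, reproducing the
# "Creating torch.bfloat16 ZeRO stage 2 optimizer" sequence above.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    optimizer=optimizer,
    config=ds_config,
)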