Guanhua Chen
Assistant Professor
chengh3@sustech.edu.cn

Guanhua Chen is a tenure-track Assistant Professor and doctoral supervisor in the Department of Statistics and Data Science at the Southern University of Science and Technology (SUSTech), and an adjunct Assistant Professor in the Department of Computer Science and Engineering and the Institute of Advanced Research on Artificial Intelligence. He was selected for a distinguished position under the Shenzhen Pengcheng Peacock Plan and for the 2025 Microsoft Research Asia StarTrack Scholars Program. He received his PhD from the Department of Computer Science at the University of Hong Kong in 2022, after earning his bachelor's and master's degrees at Tsinghua University. He previously held research internships at Microsoft Research Asia and Huawei Noah's Ark Lab, and was a visiting student in the Natural Language Processing Group at Tsinghua University. He has published more than 20 papers as first or corresponding author at international conferences such as ACL, EMNLP, NeurIPS, and NAACL. He currently leads a National Natural Science Foundation of China Young Scientists Fund project and a Guangdong Natural Science Foundation General Program project, and is a core member of a Young Scientist project under the Ministry of Science and Technology's National Key R&D Program. He is a member of the Large Language Model and Generation Committee and the Youth Working Committee of the Chinese Information Processing Society of China, serves as an area chair for ACL and EMNLP, and reviews for top international venues including ACL, EMNLP, NeurIPS, ICML, and TASLP. For more information, please visit his homepage: https://ghchen.me.


Recruiting Master's/PhD Students, PostDocs, and Visiting Students

I am looking for self-motivated PostDoc/PhD/Master's/visiting students to join our lab. If you are interested, please send me an email with your CV. For your reference, here is the latest information on this year's PhD (https://stat-ds.sustech.edu.cn/notice/402) and Master's (https://stat-ds.sustech.edu.cn/notice/388) application processes. Currently, we have 16 RTX 4090 GPUs (24GB), 16 NVIDIA L40 GPUs (48GB), and 4 A100 GPUs (40GB) available for students. Several cloud A800/H800 GPU servers are also available for resource-intensive research. Sufficient API access to open-source and proprietary LLMs is also provided for students.


Research Interests

Natural language processing and machine learning applications, with a focus on key large language model (LLM) techniques for low-resource scenarios, including reasoning LLMs, LLM-based multi-agent systems, multimodal LLMs, and intelligent healthcare.


Publications as First or Corresponding Author


  1. [AAAI'26] Enhancing Uncertainty Estimation in LLMs with Expectation of Aggregated Internal Belief.
  2. [NeurIPS'25] Beyond the Surface: Enhancing LLM-as-a-Judge Alignment with Human via Internal Representations.
  3. [EMNLP'25] G2: Guided Generation for Enhanced Output Diversity in LLMs.
  4. [EMNLP'25 Findings] Pi-SQL: Enhancing Text-to-SQL with Fine-Grained Guidance from Pivot Programming Languages.
  5. [ACL'25] ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs.
  6. [ACL'25 (Industry Track)] PlanGPT: Enhancing Urban Planning with Tailored Language Model and Efficient Retrieval.
  7. [ACL'25 Findings] Fanno: Augmenting High-Quality Instruction Data with Open-Sourced LLMs Only.
  8. [ACL'25 Findings] Tag-Instruct: Controlled Instruction Complexity Enhancement through Structure-based Augmentation.
  9. [NAACL'25] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning.
  10. [NAACL'25] SeqAR: Jailbreak LLMs with Sequential Auto-Generated Characters.
  11. [NAACL'25] Self-DC: When to Reason and When to Act? Self Divide-and-Conquer for Compositional Unknown Questions.
  12. [NAACL'25 Findings] LayAlign: Enhancing Multilingual Reasoning in LLMs via Layer-Wise Adaptive Fusion and Alignment Strategy.
  13. [NeurIPS'24] SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation.
  14. [EMNLP'24] Distract Large Language Models for Automatic Jailbreak Attack.
  15. [ACL'24 Findings] PACIT: Unlocking the Power of Examples for Better In-Context Instruction Tuning.
  16. [ACL'23] mCLIP: Multilingual CLIP via Cross-lingual Transfer.
  17. [EMNLP'22 Findings] Multilingual Sentence Transformer as A Multilingual Word Aligner.
  18. [EMNLP'22] XLM-D: Decorate Cross-lingual Pre-training Model as Non-Autoregressive Neural Machine Translation.
  19. [ACL'22] Towards Making the Most of Cross-Lingual Transfer for Zero-Shot Neural Machine Translation.
  20. [EMNLP'21] Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders.
  21. [AAAI'21] Lexically Constrained Neural Machine Translation with Explicit Alignment Guidance.
  22. [IJCAI'20] Lexical-Constraint-Aware Neural Machine Translation via Data Augmentation.

(Last updated December 2025)