GLM-4.7-Flash: 30B MoE model achieves 59.2% on SWE-bench, runs on 24GB GPUs

1 point | by czmilo 11 hours ago

1 comment
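
For readers wondering how a ~30B-parameter MoE model could fit on a 24GB GPU, below is a minimal sketch using 4-bit weight quantization via Hugging Face transformers and bitsandbytes. The post does not say how the 24GB figure is reached, so quantized inference is an assumption here, and the `MODEL_ID` string is a placeholder, not a confirmed repository name.

```python
# Hypothetical sketch: fitting a ~30B MoE checkpoint on a single 24 GB GPU
# with 4-bit weight quantization. MODEL_ID is a placeholder, not a confirmed
# Hugging Face repo name for GLM-4.7-Flash.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "GLM-4.7-Flash"  # placeholder; substitute the actual hub repo name

# 4-bit NF4 quantization: ~30B params * ~0.5 bytes/param ≈ 15 GB of weights,
# leaving headroom for activations and KV cache on a 24 GB card.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",  # place all layers on the single available GPU
    trust_remote_code=True,
)

prompt = "Write a unit test for a function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```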