cross-posted from: https://lemmy.sdf.org/post/29607342

Archived

Here is the data at Hugging Face.

A team of international researchers from leading academic institutions and tech companies upended the AI reasoning landscape on Wednesday with a new model that matched—and occasionally surpassed—one of China’s most sophisticated AI systems: DeepSeek.

OpenThinker-32B, developed by the Open Thoughts consortium, achieved a 90.6% accuracy score on the MATH500 benchmark, edging past DeepSeek’s 89.4%.

The model also outperformed DeepSeek on general problem-solving tasks, scoring 61.6 on the GPQA-Diamond benchmark compared to DeepSeek’s 57.6. On the LCBv2 benchmark, it hit a solid 68.9, showing strong performance across diverse testing scenarios.

  • naeap · 5 days ago

    Had the same problem, and someone pointed me to the Hugging Face documentation/tutorials.
    They’re quite nice for getting a local model up and running, playing around with it, fine-tuning it, and connecting it with agents.

    Haven’t tried much yet, but the articles were exactly what I was looking for.
    Hope they help you as well.