Technology & Gadgets

OpenAI Releases Two Open-Source AI Models That Perform on Par With o3, o3-Mini

By Jane Austen · August 7, 2025 · 3 Mins Read


OpenAI released two open-source artificial intelligence (AI) models on Tuesday. This marks the San Francisco-based AI firm’s first contribution to the open community since 2019, when GPT-2 was open-sourced. The two new models, dubbed gpt-oss-120b and gpt-oss-20b, are said to offer performance comparable to the o3 and o3-mini models. The company says these AI models, built on the mixture-of-experts (MoE) architecture, have undergone rigorous safety training and evaluation. The open weights of both models are available to download via Hugging Face.

OpenAI’s Open-Source AI Models Support Native Reasoning

In a post on X (formerly Twitter), OpenAI CEO Sam Altman announced the release of these models, highlighting that “gpt-oss-120b performs about as well as o3 on challenging health issues.” Notably, both models are currently hosted on OpenAI’s Hugging Face listing, and interested individuals can download the open weights and run them locally.
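As a concrete illustration of that download step, below is a minimal sketch using the huggingface_hub client. The repo id "openai/gpt-oss-20b" is an assumption based on the article’s description of OpenAI’s Hugging Face listing, not a detail confirmed by the release notes.

    # Minimal sketch: fetching the open weights from Hugging Face.
    # The repo id is assumed from the article; adjust it to the actual listing.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(repo_id="openai/gpt-oss-20b")
    print(f"Weights downloaded to: {local_dir}")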

On its website, OpenAI explains that these models are compatible with the company’s Responses application programming interface (API) and can work in agentic workflows. The models also support tool use, such as web search or Python code execution. With native reasoning, they display a transparent chain-of-thought (CoT), which can be adjusted to focus on either high-quality responses or low-latency outputs.
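To make the local-inference path concrete, here is a minimal sketch that loads the smaller model through the Hugging Face transformers text-generation pipeline. The repo id and the system-prompt hint for reasoning effort are illustrative assumptions, not OpenAI’s documented interface.

    # Minimal sketch: running the smaller open-weight model locally with transformers.
    # Assumptions: the repo id "openai/gpt-oss-20b" and the "Reasoning: low" system
    # hint as a way to trade response quality for latency.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="openai/gpt-oss-20b",   # assumed Hugging Face listing
        torch_dtype="auto",
        device_map="auto",            # spread layers across available devices
    )

    messages = [
        {"role": "system", "content": "Reasoning: low"},  # assumed effort knob
        {"role": "user", "content": "Explain mixture-of-experts in two sentences."},
    ]

    result = generator(messages, max_new_tokens=256)
    print(result[0]["generated_text"][-1]["content"])  # assistant's reply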

Coming to the architecture, both models use a mixture-of-experts design to reduce the number of active parameters per token for processing efficiency. gpt-oss-120b activates 5.1 billion parameters per token out of 117 billion in total, while gpt-oss-20b activates 3.6 billion out of 21 billion. Both models support a context length of 128,000 tokens.
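The gap between total and active parameters follows from top-k expert routing: each token is sent to only a few experts, so only their weights participate in that token’s forward pass. The toy layer below illustrates the idea; the sizes are made up and do not reflect gpt-oss internals.

    # Toy mixture-of-experts layer: each token activates only top_k of n_experts,
    # so the parameters "active per token" are far fewer than the total.
    import torch
    import torch.nn as nn

    class ToyMoE(nn.Module):
        def __init__(self, d_model=64, n_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)
            self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
            self.top_k = top_k

        def forward(self, x):                          # x: (tokens, d_model)
            weights, idx = self.router(x).topk(self.top_k, dim=-1)
            weights = weights.softmax(dim=-1)
            out = torch.zeros_like(x)
            for k in range(self.top_k):                # only top_k experts run per token
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
            return out

    moe = ToyMoE()
    total = sum(p.numel() for p in moe.parameters())
    active = moe.top_k * sum(p.numel() for p in moe.experts[0].parameters()) \
             + sum(p.numel() for p in moe.router.parameters())
    print(f"total parameters: {total}, active per token (approx): {active}")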

These open-source AI models were trained on a mostly English-language text dataset. The company focused on Science, Technology, Engineering, and Mathematics (STEM) fields, coding, and general knowledge. In the post-training stage, OpenAI used reinforcement learning (RL)-based fine-tuning.

Benchmark performance of the open-source OpenAI models (Photo Credit: OpenAI)

Based on the company’s internal testing, gpt-oss-120b outperforms o3-mini on competition coding (Codeforces), general problem solving (MMLU and Humanity’s Last Exam), and tool calling (TauBench). In general, however, the models fall marginally short of o3 and o3-mini on other benchmarks such as GPQA Diamond.

OpenAI highlights that these models have undergone intensive safety training. In the pre-training stage, the company filtered out harmful data relating to chemical, biological, radiological, and nuclear (CBRN) threats. The AI firm also said that it used specific techniques to ensure the models refuse unsafe prompts and are protected against prompt injections.

Despite the models being open-source, OpenAI claims they have been trained in a way that prevents bad actors from fine-tuning them to produce harmful outputs.



