Confirm this is a feature request for the Python library and not the underlying OpenAI API.
Describe the feature or improvement you're requesting
Feature request
When building applications that support multiple OpenAI models (e.g., gpt-4.1-mini, gpt-5, gpt-5.1, gpt-5.4), developers need to know which parameters each model supports. Currently this requires maintaining hardcoded model lists or prefix checks, which break as new models are released.
The problem
Different model families have different parameter constraints:
- `temperature`: supported by gpt-4.x, but rejected by gpt-5.x reasoning models when `reasoning_effort != none`
- `reasoning`: only supported by gpt-5 family models, not gpt-4.x
- `reasoning.effort` values: vary by model (gpt-5 supports `minimal`/`low`/`medium`/`high`; gpt-5.1+ adds `none`; gpt-5.4+ adds `xhigh`)
Today, developers must write code like:
```python
def is_reasoning_model(model: str) -> bool:
    return model.startswith("gpt-5")


def get_reasoning_effort_options(model: str) -> list[str]:
    if not is_reasoning_model(model):
        return []
    if model.startswith("gpt-5."):
        # Parse the minor version, e.g. "gpt-5.4-mini" -> 4
        minor = int(model[len("gpt-5."):].split("-")[0])
        options = ["none", "low", "medium", "high"]
        if minor >= 4:
            options.append("xhigh")
        return options
    return ["minimal", "low", "medium", "high"]
```
This is fragile: it breaks whenever a new model family is released, and the version-based logic for effort levels is reverse-engineered from documentation rather than provided by the SDK.
Proposed solution
Expose model capability metadata through the SDK, for example:
```python
# Check if a parameter is supported
openai.models.supports(model="gpt-5.4", parameter="temperature")  # False
openai.models.supports(model="gpt-5.4", parameter="reasoning")    # True

# Get valid values for an enum parameter
openai.models.valid_values(model="gpt-5.4", parameter="reasoning.effort")
# ["none", "low", "medium", "high", "xhigh"]
openai.models.valid_values(model="gpt-5", parameter="reasoning.effort")
# ["minimal", "low", "medium", "high"]
```
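For illustration, a minimal sketch of how these two functions could be backed by a metadata table shipped with the SDK. All names and capability values below are assumptions for the sake of the example, not actual SDK behavior:

```python
# Hypothetical capability table; in the real SDK this data would be
# generated from the API spec rather than hand-maintained.
_CAPABILITIES: dict[str, dict[str, object]] = {
    "gpt-4.1-mini": {"temperature": True, "reasoning": False},
    "gpt-5": {
        "temperature": False,
        "reasoning": True,
        "reasoning.effort": ["minimal", "low", "medium", "high"],
    },
    "gpt-5.4": {
        "temperature": False,
        "reasoning": True,
        "reasoning.effort": ["none", "low", "medium", "high", "xhigh"],
    },
}


def supports(model: str, parameter: str) -> bool:
    """True if the model accepts the parameter at all."""
    return bool(_CAPABILITIES.get(model, {}).get(parameter))


def valid_values(model: str, parameter: str) -> list[str]:
    """Enum values the model accepts for the parameter, or [] if none."""
    value = _CAPABILITIES.get(model, {}).get(parameter)
    return list(value) if isinstance(value, list) else []
```

A data-driven table like this is the point of the request: updating capabilities becomes a data change in the SDK release, not a code change in every downstream app.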
Or alternatively, expose it as model metadata:
```python
from openai.models import get_model_capabilities

caps = get_model_capabilities("gpt-5.4")
caps.supports_temperature      # False
caps.supports_reasoning        # True
caps.reasoning_effort_options  # ["none", "low", "medium", "high", "xhigh"]
```
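The metadata-object shape could be as simple as a frozen dataclass backed by a registry. Again a hedged sketch, with hypothetical names and illustrative capability values:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ModelCapabilities:
    supports_temperature: bool
    supports_reasoning: bool
    reasoning_effort_options: list[str] = field(default_factory=list)


# Hypothetical registry; real data would come from the SDK/API spec.
_REGISTRY = {
    "gpt-4.1-mini": ModelCapabilities(True, False),
    "gpt-5": ModelCapabilities(False, True, ["minimal", "low", "medium", "high"]),
    "gpt-5.4": ModelCapabilities(False, True, ["none", "low", "medium", "high", "xhigh"]),
}


def get_model_capabilities(model: str) -> ModelCapabilities:
    try:
        return _REGISTRY[model]
    except KeyError:
        # Unknown models raise rather than silently guessing capabilities.
        raise ValueError(f"Unknown model: {model}")
```

An object-per-model API also gives apps one value to cache and pass around, instead of repeated per-parameter lookups.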
Use case
This came up while migrating azure-search-openai-demo from Chat Completions to the Responses API. The app supports multiple model deployments and needs to dynamically adjust which parameters it sends and presents in the UI for users. I imagine there are many similar apps that provide users with model options and parameters.
Additional context
No response