What is model-optimization?

68/100
Trust Score (C)
⚠️ Use Caution

model-optimization is an AI tool: a toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning. It has a Nerq Trust Score of 68/100 (C) and 1.6K GitHub stars. Published by Unknown. Last analyzed March 2026.

Why This Score

Trust & Safety Overview

68
TRUST SCORE
C
GRADE
1.6K
STARS
0
DOWNLOADS

What model-optimization Does

model-optimization is a tool in the AI tool category: a toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning. It is published by an independent developer and has no specified license. With 1.6K GitHub stars and no recorded download count, it has an active community of users and contributors.
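To make the two headline techniques concrete, here is a minimal, toolkit-independent NumPy sketch of magnitude pruning, the idea the toolkit's pruning support automates: the smallest-magnitude weights are zeroed until a target sparsity is reached. The function name and numbers below are illustrative, not the toolkit's API.

```python
import numpy as np

def prune_low_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity`
    fraction of the weights are zero (conceptual magnitude pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([0.9, -0.05, 0.4, 0.01, -0.7, 0.002])
pruned = prune_low_magnitude(w, 0.5)
# half of the entries are now exactly zero; the large weights survive
```

Sparse weight matrices like this compress well and can be executed faster on hardware and runtimes that exploit sparsity, which is why pruning is a standard deployment optimization.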

Who Should Use model-optimization

model-optimization is suitable for evaluation and non-critical use. Review the trust score breakdown before using in production.

Details

Author: Unknown
Category: AI tool
License: Not specified
Type: tool
Source: View on GitHub
Security Score: 0/100
Activity Score: 0/100

How to Get Started

Check the trust score before installing:

curl nerq.ai/v1/preflight?target=tensorflow-model-optimization
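The other headline technique, quantization, maps float weights onto small integers so models shrink and run faster. As a hedged, toolkit-independent sketch (not the toolkit's API), affine uint8 quantization looks like this:

```python
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Affine (asymmetric) quantization: map the float range
    [min, max] onto the integers 0..255."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = round(-lo / scale)   # integer that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_uint8(x)
x_hat = dequantize(q, scale, zp)
# reconstruction error stays within about one quantization step
```

Storing uint8 instead of float32 cuts weight size by 4x, at the cost of the small rounding error shown above; toolkits like this one additionally support quantization-aware training to recover accuracy.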

Setup guide · Full safety report · Production review · Is it safe?

Safer Alternatives

Tool · Trust · Stars
openclaw · 84 · 218.2K
stable-diffusion-webui · 69 · 160.7K
prompts.chat · 69 · 145.8K
generative-ai-for-beginners · 72 · 106.7K
ComfyUI · 72 · 103.7K

Frequently Asked Questions

What is model-optimization used for?
model-optimization is an AI tool: a toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning.
Is model-optimization free?
License: not specified; check the project page. model-optimization has 1.6K GitHub stars.
Is model-optimization safe?
model-optimization has a Nerq Trust Score of 68/100 (C). Use with caution.
What are alternatives to model-optimization?
Top alternatives: openclaw, stable-diffusion-webui, prompts.chat. See full comparison.

Last updated March 2026. Trust scores based on automated analysis of public data.