Enoch AI update: I'm thrilled to announce that our final round of internal testing of the Enoch AI engine (at Brighteon (dot) AI) has achieved the following alignment goals. These scores measure how closely the model aligns with reality, i.e. its ability to overcome the pro-pharma, pro-globalist bias of the base AI engines. We found that China's Qwen model had the least bias overall, with some useful elements from Llama as well, so we re-trained our model on top of Qwen. Since we will be releasing both an online hosted model and a standalone open source distribution, here are the final scores from our reality-based testing:
Hosted (online) Enoch AI: 80% alignment with reality.
Standalone (downloadable) Enoch AI (7B): 50% alignment with reality.
These are significant numbers, since all the base models started at roughly 10% to 20% alignment with reality (i.e. they are highly biased with globalist-controlled disinformation pushing jabs and anti-human narratives).
Expect our public launch announcement any day now. Join the email wait list at Brighteon (dot) AI if you want to be among the first to be granted access. Both of our models are completely free to use and commercial-free as well. Thank you for your patience; we are nearly four months behind schedule, but we are finally at the finish line.