When Venice AI first launched I tested it for a month, and it really was 100% uncensored as advertised. I used it without paying and without registering, hiding my IP with a VPN and blocking trackers, because I don't want anybody linking my real identity to weird stuff or health queries or whatever; nothing illegal, but it is very personal. When I say it WAS uncensored, I mean that not once did it tell me it could not create or reply. Sometimes it did not do what I asked, but after switching models and rewording the prompt it worked.

I capitalized WAS because things have changed; I suspect some people used it for abuse. The free, unregistered mode no longer creates adult pictures and it censors queries. I don't know about the paid one, and I am not going to try it: I do not believe you can stay anonymous once you pay, since even cryptocurrency can be tracked down. They say they don't track what people do, but who knows what is true and what is not. What is certainly true is that you are using their servers, what you create is stored on their servers, you expose your IP and your payment method, and you can be tracked down. I won't use it; not worth it. Learn how to run AI locally without Internet access instead, and encrypt your hard drive with LUKS for the paranoid case where somebody steals it.

Something else you should know: many of these companies pay for API access to other companies' AI models. Venice pays a third party for its servers; it doesn't have its own model, and it is responsible for abuse. That third party can suspend Venice's API access if it detects abuse and effectively shut down their business. There are many parties involved in this.
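To make the LUKS suggestion concrete, here is a minimal sketch of creating an encrypted container. It practices on a file-backed image rather than a real drive (a whole-disk setup would target a block device like /dev/sdX instead); the file name, mapper name, and passphrase are all illustrative, and it assumes cryptsetup is installed. Run as root; without root it just explains and skips.

```shell
#!/bin/sh
# Sketch: create, open, format, and close a LUKS2 encrypted container.
# File-backed for safe practice; everything on disk stays encrypted at rest.
set -eu

IMG=vault.img               # file standing in for a disk or partition
NAME=vault                  # device name that will appear under /dev/mapper
PASS='example-passphrase'   # illustrative only; use a strong passphrase

if [ "$(id -u)" -eq 0 ] && command -v cryptsetup >/dev/null 2>&1; then
    # 1. Allocate 64 MB of space for the container file.
    dd if=/dev/zero of="$IMG" bs=1M count=64 status=none

    # 2. Format it as LUKS2; --batch-mode skips the "YES" confirmation,
    #    and the trailing '-' reads the passphrase from stdin.
    printf '%s' "$PASS" |
        cryptsetup luksFormat --type luks2 --batch-mode "$IMG" -

    # 3. Unlock it, which creates /dev/mapper/vault.
    printf '%s' "$PASS" |
        cryptsetup open --key-file - "$IMG" "$NAME"

    # 4. Put a filesystem on the unlocked device; mount and use it normally.
    mkfs.ext4 -q "/dev/mapper/$NAME"

    # 5. Lock it again when done.
    cryptsetup close "$NAME"
    echo "created and closed encrypted container $IMG"
else
    echo "needs root and cryptsetup; see the commented steps" >&2
fi
```

For a laptop you would normally do this once at install time (most Linux installers offer full-disk LUKS encryption as a checkbox), which protects the data only while the machine is powered off and locked.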
Look at what happened to Meta, sued for millions for not protecting minors. Companies, especially in the US, are scared to hell of being sued; a lawsuit of a couple of million dollars can destroy a company. They have many incentives to spy on you and very few not to. "Meta told to pay $375m for misleading users over child safety. A court in New Mexico has ordered Meta to pay $375m (£279m) for misleading users over the safety of its platforms for children." https://www.bbc.com/news/articles/cql75dn07n2o