Author: Detective Anon

  • Do not fall for the Oceanviews.ai scam.

    Dillon Moses

    This is Dillon Moses. Together with Vepa Durdiyev, he runs a scam company called Oceanviews.ai. Just like Vepa Durdiyev, he maintains a carefully crafted avatar on LinkedIn and other social networks: a mid-30s digital strategist with a pedigree that includes a (fictional) stint at a prestigious consulting firm, a bank, and an MBA from a reputable university. His profile oozes plausible vagueness, littered with buzzwords and FOMO tactics, and he regularly posts slick, AI-generated infographics about market trends.

    They attempt to ride the AI hype to trick businesses into doling out money by making fairy-tale claims such as:

    Being able to see into the future.

    Oceanviews can use predictive analytics to forecast trends and customer behavior, helping businesses plan ahead.

    Promising a magical minimum of $100K in extra profit for signing a contract with them, which grants them access to your entire business: emails, reports, clients, employees.

    Guaranteed to find $100K in missed revenue or your pilot is free.

    Even though they are just a fraudulent, brand-new start-up, they pretend to be “Trusted Partners” of Microsoft, PayPal, Amazon, Facebook, Oracle, Salesforce, etc.

    oceanviews.ai fraud

    They use big, outrageous percentages that are supposed to show improvements and create a sense of Fear Of Missing Out in the potential victim, accompanied by a ChatGPT-generated use-case PDF (Link):

    • 30% increase in commissions
    • 25% boost in policies submitted
    • 35% reduction in marketing costs
    • 40% faster lead conversion

    Every page is loaded with frivolous marketing buzzwords and telltale signs that all of the text was generated by an AI chatbot:

    • Real-Time Customer Upgrade Signals: Instantly identify customers showing intent to upgrade—so you can engage at the right time with the right offer.
    • Behavioral Intelligence that Drives Results: Understand which behaviors actually lead to conversion and retention—so you can scale what works.
    • Cross-Platform Data Unification: Eliminate silos and connect insights across Salesforce, HubSpot, DocuSign, and more to surface missed revenue opportunities.
    • Forecasts Backed by Market Context: Confidently plan ahead with AI models that account for economic shifts, seasonal demand, and customer behavior trends.

    Oceanviews.ai: another fraudulent company.

  • The Digital Mirage: How Vepa Durdiyev Built an Empire on Empty Promises

    Vepa Durdiyev

    In the sprawling, interconnected world of freelance marketplaces and professional networking, trust is the currency that fuels collaboration. We rely on profiles adorned with skill badges, endorsements from colleagues, and the seamless interface of platforms like Upwork and LinkedIn to assure us we are in capable hands. It was within this very ecosystem of trust that Vepa Durdiyev engineered a phantom empire, becoming a virus that exploited the system’s virtues for his own fraudulent ends.

    Vepa Durdiyev runs a carefully crafted avatar, a digital marionette operated from a non-extradition country. The persona, however, was impeccable. On LinkedIn, “Vepa” was a suave, mid-30s digital strategist with a pedigree that included a (fictional) stint at a prestigious consulting firm and an MBA from a reputable university. His profile was a masterclass in plausible vagueness, littered with buzzwords like “synergy,” “apex-developer,” and “growth hacking.” He regularly posted slick, AI-generated infographics about market trends and engaged thoughtfully in the comments of other influencers, building a network of hundreds of connections. He was not just a user; he was a member of the community.

    His hunting ground was Upwork. Here, Vepa’s profile was a thing of beauty. He had a 100% job success score, bolstered by a series of small, initial projects for which he delivered exceptional, almost suspiciously perfect work. These were the bait. He would take on a simple logo design or a minor website copy edit for a pittance, over-deliver dramatically, and secure a glowing five-star review. Each review was a brick in the formidable wall of his credibility. He understood the platform’s algorithm better than its engineers, knowing that a high response rate, a perfect score, and a history of completed contracts would place him at the top of every search.

    The scam, let’s call it “Project Sisyphus,” was elegant in its cruelty. He would use LinkedIn’s advanced search to identify his ideal victims: founders of early-stage tech startups, particularly those in the frantic pre-seed or seed funding stage. These were individuals under immense pressure, often with more ambition than experience, and crucially, a pressing need to build a minimum viable product (MVP) or a killer investor deck.

    After connecting on LinkedIn with a personalized note praising their vision, Vepa would wait. A few days later, he would message them about a “groundbreaking” but time-sensitive opportunity. He’d seen their post about seeking a developer and had a top-tier team, his team, with a two-week window before they started a major project. He presented a package: a full-stack development team, a UI/UX designer, and a project manager (all him, using different voices on Slack) for a flat, surprisingly reasonable fee. The catch was a 50% upfront payment to secure the “elite team.”

    The pressure and the polished presentation worked. Flattered by the attention of such a “reputable” professional and desperate to seize the moment, founders would agree. The Upwork contract provided a final layer of false security, its escrow system feeling like a guarantee. The moment the substantial upfront payment was released, Vepa Durdiyev began to fade.

    The first month was a flurry of activity. The “team” would be highly communicative, delivering beautiful wireframes, detailed project plans, and enthusiastic updates. They were building the dream. Then, communication would slow. Excuses would surface: a key developer was ill, a server issue, a family emergency, a sudden marriage. The deliverables became less substantial. The mounting list of issues from the client would be met with calm, reassuring messages from Vepa, promising to “get the project back on track.” And then, silence, leaving the victim with a half-finished Figma board, a development plan turned fairy tale, and a bank account tens of thousands of dollars lighter.

    Estimates suggest “Project Sisyphus” netted Vepa Durdiyev over $1 million before the pattern was finally pieced together by a forum of angry victims comparing notes.

    The story of Vepa Durdiyev is a stark cautionary tale for the digital age. He wasn’t a hacker breaking down firewalls; he was a social engineer manipulating the very foundations of professional trust. He exposed the soft underbelly of our gig economy: the fact that a five-star rating can be manufactured, a LinkedIn profile can be a work of fiction, and the pressure to succeed can blind us to the too-good-to-be-true.

    His legacy is a lingering sense of unease. He reminds us that in a world where we are encouraged to build our professional lives online, we must also learn to look for the cracks in the digital façade. For every genuine connection made, a Vepa Durdiyev is waiting, not in a shadowy alley, but in your LinkedIn and Upwork inbox, offering the world and leaving nothing but an empty promise and a lesson in the price of digital trust.

    Check out one of his most prominent ongoing scams, Oceanviews.ai, where he has teamed up with another fraudster, Dillon Moses.

  • Tadrus Capital AI fraud by Mina Tadrus

    Mina Tadrus of Tadrus Capital AI fraud

    Mina Tadrus is the founder and Chief Executive Officer of Tadrus Capital LLC, a financial technology company that was marketed as a hedge fund utilizing artificial intelligence-driven, high-frequency trading strategies to generate high returns for investors. According to court documents and regulatory filings, Tadrus falsely claimed that his fund would deliver guaranteed annual returns of 18% to 30% by employing AI-based quantitative models. He raised over $5 million from at least 31 investors, primarily from the Egyptian-American Coptic Christian community, by promoting the fund as “recession-proof” and leveraging the excitement around AI technology.

    However, Tadrus was charged with investment adviser fraud in September 2023 by the U.S. Department of Justice, which alleged that he operated a Ponzi scheme. The Securities and Exchange Commission (SEC) filed a civil complaint in November 2023, accusing Tadrus of misrepresenting the fund’s investment strategy and using investor funds to pay earlier investors, cover personal expenses, and purchase luxury items, rather than investing them as promised. In February 2025, Tadrus pled guilty to the charges in a federal court in Brooklyn, New York, before Judge Hector Gonzalez. He was sentenced to 30 months in prison on August 19, 2025, and ordered to pay $4,224,850 in restitution.

    “While Tadrus sold a dream of high profits to his investors, the only return they saw was the negative result of being swindled by someone they trusted. Today’s sentence and imposed restitution sees that Tadrus will spend real time behind bars and pay for his crimes. This new reality is not AI generated,” stated IRS-CI New York Special Agent in Charge Chavis.

    As set forth in court filings, Tadrus, a former stockbroker registered with the Financial Industry Regulatory Authority (FINRA) and derivatives consultant for a global financial institution, founded Tadrus Capital LLC in June 2020. Tadrus claimed to operate “the world’s first private high-yielding and fixed-income quantitative hedge fund” powered by artificial intelligence (AI) high-frequency trading models to guarantee investors up to 30% returns annually. In reality, Tadrus used no AI-based algorithmic trading. Tadrus also falsely claimed that Tadrus Capital was “recession-proof” and maintained liquidity with access to $5.5 billion in purchasing power.

    Prior to founding Tadrus Capital, Tadrus worked as a derivatives consultant at JPMorgan Chase and later as a trader and supervisor at T3 Trading Group. He holds a Master of Arts in Intelligence Studies from American Military University (AMU) and a Juris Doctorate from the University of Dayton School of Law. Despite claims in earlier media coverage that he managed institutional capital using algorithmic trading and founded a quantitative alternative asset management firm, the legal proceedings confirm that the investment activities he described were not genuine, and the fund did not engage in the AI-powered trading it advertised.

  • $20M Fraud Scheme by Babatunde Francis Ayeni

    Babatunde Francis Ayeni

    MOBILE, AL – A Nigerian national was sentenced to ten years in federal prison for his role in a massive cyber fraud conspiracy that victimized over 400 people across the United States, resulting in a collective loss of nearly $20 million.

    According to court documents and testimony, 33-year-old Babatunde Francis Ayeni, a citizen of Nigeria living in the United Kingdom at the time of his arrest, was involved in a sophisticated business e-mail compromise scheme targeting real estate transactions in the United States. Ayeni pleaded guilty to conspiracy to commit wire fraud in April of 2024.

    The conspiracy was carried out by individuals operating out of Nigeria and the United Arab Emirates. To carry out this scheme, conspirators sent phishing e-mails containing attachments and links embedded with malicious code to title companies, real estate agents, and real estate attorneys across the United States. If an employee at a targeted real estate business clicked on the malicious link or attachment, they were prompted to enter their e-mail account login information. The employee’s login credentials were captured and sent to e-mail accounts controlled by Ayeni and other co-conspirators. The conspirators then logged into the employee’s e-mail and monitored the account for transactions where a buyer was scheduled to make a payment as part of a real estate transaction. Ayeni and other conspirators then sent e-mails to the purchaser from the compromised e-mail account. These e-mails contained wiring instructions. When the purchaser wired the funds as instructed in the e-mail, the money was deposited into bank accounts associated with the criminals instead of the legitimate real estate transaction. Ayeni fraudulently obtained the e-mail credentials of a real estate title company in Gulf Shores, Alabama, allowing him and co-conspirators to defraud victims in the Southern District of Alabama and elsewhere.

    Over 400 people across the United States were victims of the conspiracy. Of these, 231 victims were unable to reverse the wire transactions in time and lost their entire transaction. The collective loss of these 231 victims was $19,599,969.46.

    During the multi-day sentencing hearing, United States District Judge Terry Moorer heard the impact of this crime from nearly twenty victims. In addition to those who spoke in court, numerous victims provided victim impact statements about how the crime affected them, noting that in addition to losing all of the money they saved for the purchase of a new home, they felt significant shame, despair, and depression due to being victimized the way they were.

    United States Attorney Sean P. Costello said, “Cyber-enabled crimes can cause substantial and lasting harm to victims in an instant. Criminals across the world may believe that they are causing no harm to their victims and that they are safe behind their keyboards, but this case proves otherwise. With our law enforcement partners, we will continue to aggressively investigate, pursue, and hold accountable the crooks who perpetrate frauds online, wherever they are.”

    Paul Brown, Special Agent in Charge of the Mobile Division of the FBI, said, “This type of behavior will not be tolerated in Alabama. After listening to our citizens speak about how the loss of funds impacted their lives, and the subsequent loss of what they thought was down payments for their future homes, I am pleased to see Ayeni receive a substantial sentence for these crimes. FBI Mobile will continue to educate the public about the potential dangers of online activity. If you believe you have been the victim of online fraud, please visit IC3.gov to file an official report.”

    Co-defendants Feyisayo Ogunsanwo and Yusuf Lasisi remain at large and are believed to be outside the United States. The United States continues to actively seek their arrest and extradition to face justice in this case.

    To learn more about business email compromise scams, please visit www.fbi.gov/how-we-can-help-you/scams-and-safety/common-scams-and-crimes/business-email-compromise and www.ic3.gov/Media/Y2023/PSA230609. Anyone who has been the victim of an internet-based crime should contact the Internet Crime Complaint Center (IC3) at www.ic3.gov.

    This case was investigated by the Federal Bureau of Investigation with assistance from law enforcement partners in the United Kingdom and elsewhere.

    Assistant U.S. Attorney Christopher Bodnar prosecuted the case on behalf of the United States. Substantial assistance was also provided by Amanda Chadwick and Rachel Yasser with the Department of Justice Office of International Affairs.

  • Top 10 types of AI fraud

    AI fraud encompasses a range of deceptive practices leveraging artificial intelligence to impersonate individuals, generate fraudulent content, and automate large-scale attacks. Key types of AI fraud include:

    1. Deepfake video scams: AI-generated videos that convincingly impersonate individuals, often used in CEO/CFO fraud or celebrity endorsement scams. For example, deepfakes of Elon Musk have been used in investment scams, and similar videos have featured celebrities like Gordon Ramsay and Taylor Swift promoting fake products. The number of deepfakes online is doubling every six months, with an estimated 8 million expected to be shared in 2025.
    2. Voice cloning: AI is used to create synthetic voice messages that mimic real individuals, commonly employed in grandparent scams, extortion attempts, and impersonation of executives. Research indicates that 28% of UK adults believe they have been targeted by such scams, and 37% of organizations globally reported being targeted by deepfake voice attempts.
    3. Synthetic identity fraud: Fraudsters combine real stolen data (e.g., Social Security numbers) with AI-generated personal details to create fake identities. These synthetic identities are used to open bank accounts, apply for loans, and commit financial fraud. This is the fastest-growing financial crime in the U.S., with projected losses reaching $23 billion by 2030.
    4. Advanced financial malware: AI-powered malware can adapt and evolve in real time, evading traditional antivirus software. It can alter its behavior based on the security environment, making detection difficult. Reports suggest tools like OpenAI’s ChatGPT have been used to generate new strains of such malware.
    5. AI-enhanced phishing: Large language models (LLMs) are used to craft highly convincing phishing emails and websites that mimic trusted brands. These messages lack common red flags like grammatical errors and can bypass spam filters. AI-powered phishing can achieve success rates comparable to human-crafted messages, with one study showing 60% of participants fell victim to AI-automated phishing.
    6. Fraud-as-a-Service (FaaS): Criminals use ready-to-use AI toolkits sold on dark web forums or Telegram channels. These kits include tools like WormGPT, Agent Zero, FraudGPT, and DarkBard, which are designed for phishing, identity spoofing, and generating malicious content. Some tools even offer customer support and subscription models.
    7. Automated vishing (voice phishing): Tools like ViKing, developed by researchers, demonstrate how AI can run entire phone scams without human intervention, using voice cloning and real-time conversation adaptation. In trials, it successfully deceived 52% of participants, rising to 77% among those unaware of the threat.
    8. Document fraud: Services like OnlyFake allow fraudsters to generate realistic digital IDs, passports, and invoices for as little as $15, bypassing Know Your Customer (KYC) checks.
    9. Business email compromise (BEC): AI tools are used to craft urgent, personalized payment requests that mimic corporate tone and context, often incorporating details from public sources like LinkedIn or financial filings to increase credibility.
    10. Invoice swapping: Tools intercept legitimate invoice emails and replace payment details with fraudulent accounts before the payment is processed, often going unnoticed until the real vendor follows up.

    These fraud methods are increasingly difficult to detect due to their scalability, personalization, and use of advanced AI, requiring proactive detection and public awareness.
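
    One practical countermeasure to the invoice-swapping and BEC patterns above (items 9 and 10) is to treat any change in a vendor’s payment details as suspicious until it has been verified over a channel the attacker does not control. The sketch below is a minimal, hypothetical illustration of that check, not a production system; the vendor names, account numbers, and record layout are invented for the example.

    # Minimal sketch (hypothetical): flag invoices whose bank details differ from
    # the details previously verified for that vendor - a common symptom of
    # invoice swapping or a compromised vendor mailbox.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        vendor: str
        amount: float
        iban: str  # account the invoice asks you to pay into

    # Payment details previously confirmed out-of-band (e.g. by phone); example data only.
    KNOWN_VENDORS = {
        "Acme Hosting Ltd": "DE44500105175407324931",
        "Blue Paper Co": "GB29NWBK60161331926819",
    }

    def review_invoice(inv: Invoice) -> str:
        known = KNOWN_VENDORS.get(inv.vendor)
        if known is None:
            return "HOLD: unknown vendor - verify payment details before paying"
        if inv.iban != known:
            return "HOLD: bank details changed - confirm with the vendor by phone"
        return "OK: details match the verified record"

    if __name__ == "__main__":
        # The second invoice simulates a swapped account number.
        for inv in (
            Invoice("Acme Hosting Ltd", 1200.0, "DE44500105175407324931"),
            Invoice("Acme Hosting Ltd", 1200.0, "DE02120300000000202051"),
        ):
            print(inv.vendor, "->", review_invoice(inv))

    The point is not the code itself but the workflow it encodes: new or changed payment details trigger a human verification step outside e-mail before any money moves.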

  • Builder.ai fraudster Sachin Dev Duggal

    Sachin Dev Duggal

    What was the Builder.ai fraud?

    1. The “AI-Washing” and Technological Misrepresentation

    The Claim:
    Builder.ai marketed itself as a “no-code” AI platform that could build custom apps “as easy as ordering pizza.” Its flagship feature was an AI assistant named “Natasha,” which was presented as an intelligent project manager and coder that could automate a significant portion of the software development process. The company claimed its AI could build 80% of an app automatically.

    The Alleged Reality:
    According to numerous reports and a lawsuit from a former executive:

    • “Natasha” was largely a facade. Internal company slang reportedly referred to Natasha as “A Guy In India,” highlighting the reliance on human labor.
    • The platform did not generate functional code on its own. Instead, it often produced basic, non-functional templates or outlines.
    • The bulk of the actual coding, debugging, and integration work was performed by an army of over 700 human engineers based in India, working for an associated company called Assembly Lines.
    • The “AI” was allegedly used for more basic tasks like breaking down a project into components and assigning tickets to human engineers, rather than the sophisticated, autonomous code generation it was marketed as.

    Why This is a Problem:
    This is a classic case of “AI-washing”—exaggerating the capabilities of AI to attract investment in a hot market. Investors like Microsoft, Insight Partners, and the Qatar Investment Authority poured in over $450 million based on the promise of a scalable, AI-driven platform, not a traditional software outsourcing firm with a high human labor cost.

    2. Financial Misconduct and Revenue Inflation

    The Claim:
    In 2022, Builder.ai secured a $45 million debt financing round from Israeli lender Viola Credit. To obtain a loan of this kind, a company must demonstrate strong financial health and revenue.

    The Alleged Reality:
    In May 2024, The Wall Street Journal reported that Viola Credit had sued Builder.ai, alleging it discovered the company had provided “materially misleading” financial statements.

    • Builder.ai reportedly told Viola it had achieved $22.8 million in revenue for the first half of 2022.
    • However, an investigation by Viola allegedly found the true figure was closer to $123,000—an inflation of over 18,000% (see the quick check after this list).
    • This alleged inflation was a key factor in Viola’s decision to lend the money. Upon discovering the discrepancy, Viola moved to seize control of Builder.ai’s accounts.
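
    As a quick, back-of-the-envelope check of the two figures reported above (and only those figures), the alleged overstatement works out to about 185 times the claimed-true number, i.e. an inflation on the order of 18,000%:

    # Rough check of the inflation percentage, using only the two figures reported above.
    claimed_revenue = 22_800_000   # revenue Builder.ai reportedly told Viola it earned (H1 2022)
    alleged_actual = 123_000       # figure Viola's investigation allegedly found

    overstatement = claimed_revenue / alleged_actual                        # ~185x
    inflation_pct = (claimed_revenue - alleged_actual) / alleged_actual * 100
    print(f"~{overstatement:.0f}x overstated, about {inflation_pct:,.0f}% inflation")
    # -> ~185x overstated, about 18,437% inflation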

    The Aftermath and Fallout

    The revelations triggered a cascade of problems for the company:

    1. Investor and Lender Crisis: The relationship with its major lender broke down completely, creating a severe cash flow crisis.
    2. Leadership Changes: In February 2024, co-founder Sachin Dev Duggal moved from the CEO role to Chief Wizard (a titular role), and the company appointed a new CEO.
    3. Mass Layoffs and Operational Cuts: The company underwent significant layoffs and scaled back its operations drastically to conserve cash.
    4. Intense Scrutiny: The case became a poster child for the dangers of AI hype and the lack of due diligence during the peak of the tech investment boom. It drew comparisons to other high-profile startup frauds like Theranos and WeWork, though on a different scale.

    Builder.ai’s Defense

    Builder.ai has fought back against these allegations. The company:

    • States that its AI is real and is used to automate the “scut work” of software development, making human engineers more efficient.
    • Argues that the reliance on human experts for complex tasks is a feature, not a bug, and that this was never hidden from sophisticated investors who conducted due diligence.
    • Claims the lawsuit with Viola Credit is a contractual dispute and that the lender acted in “bad faith.”
    • Maintains that its financial reporting was accurate and complied with standards.

    Conclusion

    The “Builder.ai fraud” refers to the sweeping allegations that the company systematically misled investors and clients about the capabilities of its AI technology and the state of its finances. While the company denies any wrongdoing, the scandal has severely damaged its reputation, led to a financial crisis, and serves as a cautionary tale about the potential for deception in the highly competitive and richly funded world of AI startups. The outcome of its legal battles with Viola Credit and others will ultimately determine the final verdict on these allegations.