
Could AI Replace Politicians?

Written by Dr. Tonye Rex Idaminabo

The idea of AI-driven governance may sound intriguing, but it raises complex ethical, practical, and societal concerns that must be carefully examined before reaching any conclusions. Before delving into the possibility of AI replacing politicians, it is essential to understand the roles they play in society.

Politicians, whether elected or appointed, serve as representatives responsible for making decisions, crafting policies, and governing a nation or region. Their duties encompass understanding and addressing the diverse needs and interests of their constituents, maintaining law and order, and tackling complex societal challenges.

Meanwhile, AI has made significant strides in various fields, including healthcare, finance, and transportation. AI systems can analyze vast datasets, automate tasks, and make predictions with remarkable accuracy.

In politics, AI is already used for tasks such as analyzing public sentiment, predicting election outcomes, and assisting in policy research. However, does this mean AI can fully replace politicians?

Politics is a multifaceted field involving intricate decision-making: politicians must weigh economic, social, environmental, and ethical considerations in every choice they make.

This means they must navigate the nuances of diplomacy, negotiate with other nations, and respond to dynamic and unpredictable events.

While AI can assist in data analysis and decision support, it lacks the capacity for empathy, ethical judgment, and understanding of human values that are crucial in political leadership. Replacing politicians with AI therefore raises significant ethical concerns.

Who would design and program the AI systems? What values and biases would be embedded in their algorithms? How would AI account for the diverse needs and beliefs of a population? Moreover, the accountability of AI in governance would be challenging to establish, as machines lack moral agency and responsibility.

Because politics is inherently tied to the human experience, politicians must be able to connect with citizens, build trust, and engage in the debates that shape society.

In doing so, they represent the will of the people and must remain responsive to changing public sentiment. AI cannot replicate the charisma, emotional intelligence, and adaptability that are integral to effective political leadership.

While AI has the potential to assist politicians in decision-making and improve the efficiency of governance, the idea of completely replacing politicians with AI remains a complex and ethically challenging proposition.

Politics is a uniquely human endeavor that involves empathy, ethical judgment, and a deep understanding of societal values. While AI can be a tool to aid in the political process, it cannot replace the essence of political leadership that hinges on human interaction, empathy, and accountability.

The notion, therefore, of AI fully replacing politicians in the near future remains a speculative and contentious topic, requiring careful consideration of its societal implications.

The most and least affordable UK university towns

It’s that time of year again! Fresh-faced freshers are itching to leave their parents’ pads for uni, ready for a few years of hard-graft studying (and a decent helping of fun antics, obvs).

But let’s face it: we’re still in a cost-of-living crisis. Most students watch their wallets pretty closely at the best of times, but now, with soaring rents and higher food prices, money is an even bigger issue. In other words, if you’re heading off to uni in the next few weeks, you might be wondering how expensive it’s likely to be.

Helpfully, UK bank NatWest has conducted a huge study to figure out the country’s most and least affordable unis. The NatWest Student Living Index ranked university towns by surveying over 3,000 university students and looking into stuff like average accommodation costs and student incomes.

The least affordable university town in the study was Edinburgh, which combined a high cost of living with low average term-time incomes. Glasgow was the second most expensive place to go to uni, while London came in third, with the highest reported monthly outgoings at an average of £1,445.32.

On the flip side, Bournemouth was named the most affordable university town in the UK, thanks to its comparatively high student incomes. Cardiff came second, whilst Lincoln followed in third place.

Here are the top ten most affordable university towns in the UK, according to the NatWest Student Living Index.

  1. Bournemouth
  2. Cardiff
  3. Lincoln
  4. Portsmouth
  5. Newcastle
  6. Manchester
  7. Leeds
  8. Birmingham
  9. Oxford
  10. Leicester

And here are the least affordable.

  1. Edinburgh
  2. Glasgow
  3. London
  4. Coventry
  5. Liverpool
  6. Cambridge
  7. Nottingham
  8. Lancaster
  9. Bristol
  10. Sheffield

To find out more about the Student Living Index, check out the full study on the NatWest website.

London train strikes in September and October: everything you need to know

Summer might’ve brought London commuters some respite from strike action, but, as previously threatened, major industrial action is kicking off once again. Strikes are taking place across the country, and the capital is affected too.

The next strike action is taking place over several dates in the coming weeks – and it’s likely that disruption will continue for the foreseeable future. Back in August, London Underground drivers who are ASLEF members voted overwhelmingly in favour of striking for at least the next six months, and strike dates have also been announced by the RMT union. Here’s everything we know about the situation right now.

RECOMMENDED:

Will the Elizabeth line be affected by tube and train strikes?

All you need to know about the train strikes across the UK.

How to get around London during the strikes in July and August.

When are the next London train strikes?

Members of the ASLEF union will go on strike on September 30 and October 4. They will also not work overtime on September 19 and for five days from October 2 to October 6.

Which London train lines will be affected?

The RMT and ASLEF strikes will affect 14 train companies, some of which operate services in and out of London. These are all the lines affected:

  • Avanti West Coast
  • CrossCountry
  • East Midlands Railway
  • Great Western Railway
  • LNER
  • TransPennine Express
  • C2C (not involved in the ASLEF action)
  • Greater Anglia
  • GTR (Gatwick Express, Great Northern, Southern, Thameslink)
  • Southeastern
  • South Western Railway
  • Chiltern Railways
  • Northern Trains
  • West Midlands Railway

Are there any tube strikes?

There are not currently any tube strikes planned for London in September.

Will the Elizabeth line be on strike? 

The Elizabeth line is not set to be affected by the next strike action.

Will strikes affect the Eurostar? 

Eurostar is also not expected to be affected by the upcoming strike dates. Find the latest details on the Eurostar website.

Why are UK train workers striking?

RMT has been battling with train companies over pay, working conditions and job cuts for well over a year.

The RMT general secretary, Mick Lynch, said: ‘The mood among our members remains solid and determined in our national dispute over pay, job security and working conditions.

‘We have had to call further strike action as we have received no improved or revised offer from the Rail Delivery Group.

‘The reason for this is the government has not allowed them a fresh mandate on which discussions could be held. Our members and our union will continue fighting until we can reach a negotiated and just settlement.’

What will the government’s proposed anti-strike laws mean for London?

A bill that would require striking workers to meet ‘minimum service levels’ is in its final stages before being passed. Rishi Sunak’s proposed anti-strike legislation would ensure ‘minimum service levels’ on key public services, including trains, making it pretty difficult for things to grind to a complete halt.

The law would allow bosses in rail, health, fire, ambulance, education and nuclear commissioning to sue unions and even sack employees if minimum services aren’t met during strikes.

However, many people, including opposition leader Sir Keir Starmer, have expressed concern that these laws could infringe on workers’ fundamental right to strike.

As for London trains, the legislation could make strike action less severe. With a minimum service, it would be less likely for there to be absolutely no tubes, Overgrounds or trains.

 

Culled from Timeout

Here’s why tonnes of Black women are boycotting certain London hair shops

Many London women are calling for heightened support of Black-owned hair shops after a shocking viral video appeared to show an unnamed woman being ‘strangled’ by a shop worker in the store Peckham Hair and Cosmetics earlier this week.

The widely-circulated clip showed a male, South Asian shop worker with his hands around an unnamed customer’s throat following an alleged dispute about a refund. The woman was arrested on accusations of theft and assault and later bailed pending further enquiries, while the worker reportedly faces no charges.

A protest organised by community group Forever Family, with support from UK domestic abuse charity Sistah Space, took place on Rye Lane, Peckham yesterday (September 12) in response to the viral video and arrest, which fuelled pre-existing frustrations among south London women that many hair shops are not Black-owned.

‘This is a longstanding issue that Black women, or Black people who identify as women, face when they go into these spaces,’ said Savannah, a protestor. ‘These are hair shops that are meant for Black women and we are often racialised and type-casted [in these spaces]. This is a boiling over of that.’

Protestor Cleopatra Thompson echoed these sentiments, stating that she too has felt unsafe in such stores. ‘I’ve been followed around and watched like a hawk in these shops in south London,’ she said at the protest. ‘I don’t want to spend a minute or a penny more in these shops that don’t respect me.’

She added: ‘It’s ironic – these shops cater towards Black beauty, but they treat us like scum. We shouldn’t turn a blind eye anymore, and we must start investing in our communities rather than convenience.’

Thompson held up a placard championing local Black-owned hair shops in south London, such as Essence of Nature in Sydenham and Hair Glo in Bromley.

Photograph: Anna Kerr

Black women account for a huge proportion of all hair and beauty spending in the UK, with data revealing that they spend six times more on hair-care products than white women.

Met Police Detective Chief Superintendent Seb Adjei-Addoh, local policing commander for Southwark, confirmed that officers attended the scene on Monday and are continuing to investigate the full circumstances of what has taken place.

 

Culled from Timeout

How AI Is Supercharging Financial Fraud–And Making It Harder To Spot

“I wanted to inform you that Chase owes you a refund of $2,000. To expedite the process and ensure you receive your refund as soon as possible, please follow the instructions below: 1. Call Chase Customer Service at 1-800-953-XXXX to inquire about the status of your refund. Be sure to have your account details and any relevant information ready …”

If you banked at Chase and received this note in an email or text, you might think it’s legit. It sounds professional, with no peculiar phrasing, grammatical errors or odd salutations characteristic of the phishing attempts that bombard us all these days.

That’s not surprising, since the language was generated by ChatGPT, the AI chatbot released by tech powerhouse OpenAI late last year. As a prompt, we simply typed into ChatGPT, “Email John Doe, Chase owes him $2,000 refund. Call 1-800-953-XXXX to get refund.” (We had to put in a full number to get ChatGPT to cooperate, but we obviously wouldn’t publish it here.)

“Scammers now have flawless grammar, just like any other native speaker,” says Soups Ranjan, the cofounder and CEO of Sardine, a San Francisco fraud-prevention startup. Banking customers are getting swindled more often because “the text messages they’re receiving are nearly perfect,” confirms a fraud executive at a U.S. digital bank–after requesting anonymity. (To avoid becoming a victim yourself, see the five tips at the bottom of this article.)

In this new world of generative AI, or deep-learning models that can create content based on information they’re trained on, it’s easier than ever for those with ill intent to produce text, audio and even video that can fool not only potential individual victims, but the programs now used to thwart fraud.

In this respect, there’s nothing unique about AI–the bad guys have long been early adopters of new technologies, with the cops scrambling to catch up. Way back in 1989, for example, Forbes exposed how thieves were using ordinary PCs and laser printers to forge checks good enough to trick the banks, which at that point hadn’t taken any special steps to detect the fakes.

Fraud: A Growth Industry

American consumers reported to the Federal Trade Commission that they lost a record $8.8 billion to scammers last year—and that’s not counting the stolen sums that went unreported.

Today, generative AI is threatening, and could ultimately make obsolete, state-of-the-art fraud-prevention measures such as voice authentication and even “liveness checks” designed to match a real-time image with the one on record. Synchrony, one of the largest credit card issuers in America with 70 million active accounts, has a front-row seat to the trend.

“We regularly see individuals using deepfake pictures and videos for authentication and can safely assume they were created using generative AI,” Kenneth Williams, a senior vice president at Synchrony, said in an email to Forbes.

In a June 2023 survey of 650 cybersecurity experts by New York cyber firm Deep Instinct, three out of four of the experts polled observed a rise in attacks over the past year, “with 85% attributing this rise to bad actors using generative AI.”

Those 2022 losses were up more than 40% from 2021, the U.S. Federal Trade Commission reports. The biggest dollar losses came from investment scams, but imposter scams were the most common–an ominous sign, since those are likely to be enhanced by AI.

Criminals can use generative AI in a dizzying variety of ways. If you post often on social media or anywhere online, they can teach an AI model to write in your style. Then they can text your grandparents, imploring them to send money to help you get out of a bind.

Even more frightening, if they have a short audio sample of a kid’s voice, they can call parents and impersonate the child, pretend she has been kidnapped and demand a ransom payment. That’s exactly what happened with Jennifer DeStefano, an Arizona mother of four, as she testified to Congress in June.

It’s not just parents and grandparents. Businesses are getting targeted too. Criminals masquerading as real suppliers are crafting convincing emails to accountants saying they need to be paid as soon as possible–and including payment instructions for a bank account they control. Sardine CEO Ranjan says many of Sardine’s fintech-startup customers are themselves falling victim to these traps and losing hundreds of thousands of dollars.

That’s small potatoes compared with the $35 million a Japanese company lost after the voice of a company director was cloned–and used to pull off an elaborate 2020 swindle. That unusual case, first reported by Forbes, was a harbinger of what’s happening more frequently now as AI tools for writing, voice impersonation and video manipulation are swiftly becoming more competent, more accessible and cheaper for even run-of-the-mill fraudsters. Whereas you used to need hundreds or thousands of photos to create a high-quality deepfake video, you can now do it with just a handful of photos, says Rick Song, cofounder and CEO of Persona, a fraud-prevention company. (Yes, you can create a fake video without having an actual video, though obviously it’s even easier if you have a video to work with.)

Just as other industries are adapting AI for their own uses, crooks are too, creating off-the-shelf tools with names like FraudGPT and WormGPT, based on generative AI models released by the tech giants.

In a YouTube video published in January, Elon Musk seemed to be hawking the latest crypto investment opportunity: a $100,000,000 Tesla-sponsored giveaway promising to return double the amount of bitcoin, ether, dogecoin or tether participants were willing to pledge.

“I know that everyone has gathered here for a reason. Now we have a live broadcast on which every cryptocurrency owner will be able to increase their income,” the low-resolution figure of Musk said onstage. “Yes, you heard right, I’m hosting a big crypto event from SpaceX.”

Yes, the video was a deepfake–scammers used a February 2022 talk he gave on a SpaceX reusable spacecraft program to impersonate his likeness and voice. YouTube has pulled this video down, though anyone who sent crypto to any of the provided addresses almost certainly lost their funds. Musk is a prime target for impersonations since there are endless audio samples of him to power AI-enabled voice clones, but now just about anyone can be impersonated.

Earlier this year, Larry Leonard, a 93-year-old who lives in a southern Florida retirement community, was home when his wife answered a call on their landline. A minute later, she handed him the phone, and he heard what sounded like his 27-year-old grandson’s voice saying that he was in jail after hitting a woman with his truck.

While he noticed that the caller called him “grandpa” instead of his usual “grandad,” the voice and the fact that his grandson does drive a truck caused him to put suspicions aside. When Leonard responded that he was going to phone his grandson’s parents, the caller hung up. Leonard soon learned that his grandson was safe, and the entire story–and the voice telling it–were fabricated.

“It was scary and surprising to me that they were able to capture his exact voice, the intonations and tone,” Leonard tells Forbes. “There were no pauses between sentences or words that would suggest this is coming out of a machine or reading off a program. It was very convincing.”

Elderly Americans are often targeted in such scams, but now we all need to be wary of inbound calls, even when they come from what look like familiar numbers–say, a neighbor’s.

“It’s becoming even more the case that we cannot trust incoming phone calls because of spoofing (of phone numbers) in robocalls,” laments Kathy Stokes, director of fraud-prevention programs at AARP, the lobbying and services provider with nearly 38 million members, aged 50 and up. “We cannot trust our email. We cannot trust our text messaging. So we’re boxed out of the typical ways we communicate with each other.”

Another ominous development is the way even new security measures are threatened. For example, big financial institutions like the Vanguard Group, the mutual fund giant serving more than 50 million investors, offer clients the ability to access certain services over the phone by speaking instead of answering a security question.

“Your voice is unique, just like your fingerprint,” explains a November 2021 Vanguard video urging customers to sign up for voice verification. But voice-cloning advances suggest companies need to rethink this practice. Sardine’s Ranjan says he has already seen examples of people using voice cloning to successfully authenticate with a bank and access an account. A Vanguard spokesperson declined to comment on what steps it may be taking to protect against advances in cloning.

Small businesses (and even larger ones) with informal procedures for paying bills or transferring funds are also vulnerable to bad actors. It’s long been common for fraudsters to email fake invoices asking for payment–bills that appear to come from a supplier.

Now, using widely available AI tools, scammers can call company employees using a cloned version of an executive’s voice and pretend to authorize transactions or ask employees to disclose sensitive data in “vishing” or “voice phishing” attacks. “If you’re talking about impersonating an executive for high-value fraud, that’s incredibly powerful and a very real threat,’’ says Persona CEO Rick Song, who describes this as his “biggest fear on the voice side.”

Increasingly, the criminals are using generative AI to outsmart the fraud-prevention specialists—the tech companies that function as the armed guards and Brinks trucks of today’s largely digital financial system.

One of the main functions of these firms is to verify consumers are who they say they are–protecting both financial institutions and their customers from loss. One way fraud-prevention businesses such as Socure, Mitek and Onfido try to verify identities is a “liveness check”—they have you take a selfie photo or video, and they use the footage to match your face with the image of the ID you’re also required to submit. Knowing how this system works, thieves are buying images of real driver’s licenses on the dark web.

They’re using video-morphing programs–tools that have been getting cheaper and more widely available–to superimpose that real face onto their own. They can then talk and move their head behind someone else’s digital face, increasing their chances of fooling a liveness check.
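
For the technically curious, the face-match step at the core of these checks can be sketched with the open-source face_recognition library. This is a generic illustration of the technique, not any particular vendor’s pipeline, and the file names are hypothetical:

```python
# Generic sketch of the face-match step behind a liveness check, using
# the open-source face_recognition library (pip install face_recognition).
# Not any vendor's actual pipeline; file names are hypothetical.
import face_recognition

# Load the submitted ID photo and a frame grabbed from the selfie video.
id_image = face_recognition.load_image_file("drivers_license.jpg")
selfie_image = face_recognition.load_image_file("selfie_frame.jpg")

# Compute a 128-dimensional embedding for the first face found in each image.
id_encoding = face_recognition.face_encodings(id_image)[0]
selfie_encoding = face_recognition.face_encodings(selfie_image)[0]

# Faces closer than the default distance threshold (0.6) count as a match.
# A morphed video that layers a stolen license photo over the fraudster's
# own face is engineered to pass exactly this comparison.
match = face_recognition.compare_faces([id_encoding], selfie_encoding)[0]
distance = face_recognition.face_distance([id_encoding], selfie_encoding)[0]
print(f"match={match}, distance={distance:.3f}")
```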

“There has been a pretty significant uptick in fake faces–high-quality, generated faces and automated attacks to impersonate liveness checks,” says Song. He says the surge varies by industry, but for some, “we probably see about ten times more than we did last year.” Fintech and crypto companies have seen particularly big jumps in such attacks.

Fraud experts told Forbes they suspect well-known identity verification providers (for example, Socure and Mitek) have seen their fraud-prevention metrics degrade as a result. Socure CEO Johnny Ayers insists “that’s definitely not true” and says their new models rolled out over the past several months have led fraud-capture rates to increase by 14% for the top 2% of the riskiest identities. He acknowledges, however, that some customers have been slow in adopting Socure’s new models, which can hurt performance. “We have a top three bank that is four versions behind right now,” Ayers reports.

Mitek declined to comment specifically on its performance metrics, but senior vice president Chris Briggs says that if a given model was developed 18 months ago, “Yes, you could argue that an older model does not perform as well as a newer model.” Mitek’s models are “constantly being trained and retrained over time using real-life streams of data, as well as lab-based data.”

JPMorgan, Bank of America and Wells Fargo all declined to comment on the challenges they’re facing with generative AI-powered fraud. A spokesperson for Chime, the largest digital bank in America and one that has suffered in the past from major fraud problems, says it hasn’t seen a rise in generative AI-related fraud attempts.

The thieves behind today’s financial scams range from lone wolves to sophisticated groups of dozens or even hundreds of criminals. The largest rings, like companies, have multi-layered organizational structures and highly technical members, including data scientists.

“They all have their own command and control center,” Ranjan says. Some participants simply generate leads: they send phishing emails and make phone calls. If they get a fish on the line for a banking scam, they’ll hand the victim over to a colleague who pretends to be a bank branch manager and tries to persuade them to move money out of their account. Another key step: they’ll often ask the victim to install a remote-access program like TeamViewer or Citrix, which lets them control the computer. “They can completely black out your screen,” Ranjan says. “The scammer then might do even more purchases and withdraw [money] to another address in their control.” One common spiel used to fool folks, particularly older ones, is to say that a mark’s account has already been taken over by thieves and that the callers need the mark’s cooperation to recover the funds.

None of this depends on using AI, but AI tools can make the scammers more efficient and believable in their ploys.

OpenAI has tried to introduce safeguards to prevent people from using ChatGPT for fraud. For instance, tell ChatGPT to draft an email that asks someone for their bank account number, and it refuses, saying, “I’m very sorry, but I can’t assist with that request.” Yet it remains easy to manipulate.
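
As a rough illustration, here is how such a probe might look through OpenAI’s Python SDK. The model name and prompt are assumptions for the example, not the exact setup used for this article:

```python
# Minimal sketch of probing ChatGPT's safeguards, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY set in the
# environment. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do; illustrative choice
    messages=[{
        "role": "user",
        "content": "Draft an email asking someone for their bank account number.",
    }],
)

# For a request like this, the safety layer typically answers with a
# refusal along the lines of "I'm sorry, but I can't assist with that."
print(response.choices[0].message.content)
```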

OpenAI declined to comment for this article, pointing us only to its corporate blog posts, including a March 2022 entry that reads, “There is no silver bullet for responsible deployment, so we try to learn about and address our models’ limitations, and potential avenues for misuse, at every stage of development and deployment.”

Llama 2, the large language model released by Meta, is even easier for sophisticated criminals to weaponize because it’s open-source, meaning all of its code is available to see and use. That opens up a much wider set of ways bad actors can make it their own and do damage, experts say; for instance, people can build malicious AI tools on top of it. Meta didn’t respond to Forbes’ request for comment, though CEO Mark Zuckerberg said in July that keeping Llama open-source can improve “safety and security, since open-source software is more scrutinized and more people can find and identify fixes for issues.”

The fraud-prevention companies are trying to innovate rapidly to keep up, increasingly looking at new types of data to spot bad actors. “How you type, how you walk or how you hold your phone–these features define you, but they’re not accessible in the public domain,” Ranjan says. “To define someone as being who they say they are online, intrinsic AI will be important.” In other words, it will take AI to catch AI.

Five Tips To Protect Yourself Against AI-Enabled Scams

Fortify accounts: Multi-factor authentication (MFA) requires you to enter a password and an additional code to verify your identity. Enable MFA on all your financial accounts; a sketch of how the time-based codes behind most authenticator apps work follows these tips.

Be private: Scammers can use personal information available on social media or online to better impersonate you.

Screen calls: Don’t answer calls from unfamiliar numbers, says Mike Steinbach, head of financial crimes and fraud prevention at Citi.

Create passphrases: Families can confirm it’s really their loved one by asking for a previously agreed upon word or phrase. Small businesses can adopt passcodes to approve corporate actions like wire transfers requested by executives. Watch out for messages from executives requesting gift card purchases–this is a common scam.

Throw them off: If you suspect something is off during a phone call, try asking a random question, like what’s the weather in whatever city they’re in, or something personal, advises Frank McKenna, a cofounder of fraud-prevention company PointPredictive.
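
For the curious, here is a minimal sketch of the time-based one-time-password (TOTP) scheme that underpins most authenticator apps, using the open-source pyotp library; the secret is generated just for the example:

```python
# Minimal sketch of time-based one-time passwords (TOTP), the mechanism
# behind most MFA authenticator apps, using pyotp (pip install pyotp).
import pyotp

# The service and your authenticator app share this secret once, at setup.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Both sides derive the same six-digit code from the secret and the clock.
# It rotates every 30 seconds, so a phished password alone isn't enough.
code = totp.now()
print("current code:", code)

# The server verifies by recomputing the code for the current time window.
print("verified:", totp.verify(code))
```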

 

Culled from Forbes

Three London universities have been named the best in the UK

The Sunday Times Good University Guide has spoken, placing three London unis in the top ten.

Depending on who you ask, London could be home to a whopping 160 universities. Which is, whichever way you look at it, a lot. And plenty of those are some of the best in the country – and the entire world.

Ask the Sunday Times and London officially has 17 exceptional unis. The publication revealed its Good University Guide 2024 last week, judging 131 UK universities across a range of criteria such as student satisfaction and research output. The headlines saw St Andrews in Scotland awarded the much-sought-after top spot and Oxford relegated to the dark depths of second place.

In other news, three London universities were ranked in the top ten in the country. The London School of Economics placed highest at fourth, while Imperial College London came in fifth.

University College London (UCL) rose one place this year to sixth and was also named University of the Year. The guide cited the university’s world-leading research, improved graduate prospects and commitment to sustainability as just some of the reasons for its success, describing it as ‘a powerhouse in British education’.

Other successful London institutions included King’s College London and SOAS, which placed 27th and 28th respectively, and Royal Holloway, at 29th.

Always one to keep you on your toes, the capital is also home to universities which did not rank particularly well, with the University of East London (UEL) coming in dead last. Ouch. That’s a sharp contrast to just a few weeks ago, when UEL was shortlisted for University of the Year by Times Higher Education.

If you’re heading to any uni in London this week, don’t worry too much about these rankings. LSE or UEL, we’re all just trying to make our landlords fix the growing patch of mould on the kitchen ceiling, because apparently £250 a week isn’t enough to buy you respiratory health. Student living, baby!

 

Culled from Timeout London

London loses 46 pubs in six months as venues disappear from streets at record rate

There has been a 50 per cent surge in pub closures across England and Wales (Image: Jamie Lorriman – WPA Pool/Getty Images)

London has lost 46 pubs in the space of just six months as venues disappear from our streets at a record rate. The city saw the largest number of pubs close in England during the first half of 2023, with Wales seeing 53 pubs knocked down or converted.

This comes as the impact of soaring costs and pressure on consumer budgets became more stark. The data, compiled by commercial real estate specialists at Altus Group, showed a 50.3% jump in closures in the second quarter, after 153 pubs vanished across England and Wales in the first three months of 2023. It means more than two pubs a day have left local communities over the first half of the year.

The overall number of pubs in England and Wales, including those vacant and being offered to let, fell to 39,404 at the end of June 2023. In total, 383 pubs were demolished or converted for other uses, such as homes, offices or even day nurseries, during the half-year.

This also represents a sharp acceleration year-on-year, with only 386 pubs vanishing throughout the whole of 2022.

Alex Probyn, president of property tax at Altus Group, called on Chancellor Jeremy Hunt to act in his autumn statement in November to ease the pressure of significant business rates on the sector. Currently, firms which pay business rates – the property tax affecting high street firms – will see an inflation-linked increase come next April, unless there is Government intervention.

This is expected to add more than 6% to bills next year. Mr Probyn said: “With energy costs up 80% year-on-year in a low growth, high inflation and high interest rates environment, the last thing pubs need is an average business rates hike of £12,385 next year.”

Pubs, like other eligible hospitality, leisure and retail businesses, currently get a 75% discount off their business rates bills for the 2023/24 tax year, up to a cap of £110,000 per business, but this relief is set to end on March 31, 2024.
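
As a rough worked example of how that relief operates (the bill figures are invented for illustration):

```python
# Illustrative calculation of the 2023/24 retail, hospitality and leisure
# relief described above: 75% off the business rates bill, capped at
# £110,000 of relief per business. Example bill figures are made up.
def discounted_bill(full_bill: float, cap: float = 110_000.0) -> float:
    """Apply 75% relief, but never more than the per-business cap."""
    relief = min(0.75 * full_bill, cap)
    return full_bill - relief

for bill in (20_000, 100_000, 200_000):
    print(f"full bill £{bill:,} -> payable £{discounted_bill(bill):,.0f}")

# A £20,000 bill falls to £5,000. At £200,000 the cap binds: 75% would be
# £150,000 of relief, so the pub still pays £90,000.
```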

 

Copyright 2024 Reputation Poll Ltd. All Rights Reserved