When your building super is an AI bot
The new maintenance coordinator at a Dallas apartment complex has been earning praise from tenants and coworkers for good work and late-night assistance. Previously, the building’s eight-person staff, who managed its 814 apartments and townhomes, were overworked, putting in longer hours than they wanted.
Unlike the rest of the staff, the complex’s new employee at the District at Cypress Waters is available 24/7 to schedule repair requests and never takes a day off.
That’s because the maintenance coordinator is an artificial intelligence bot that property manager Jason Busboom started using last year. The bot, which sends text messages under the name Matt, takes requests and manages appointments.
The team also has Lisa, a leasing bot that answers questions from potential tenants, and Hunter, a bot that reminds people to pay rent. Mr. Busboom chose the personality he wanted for each AI assistant: Lisa is professional and informative; Matt is friendly and helpful; and Hunter is stern, sounding authoritative when reminding tenants to pay rent.
The technology has freed up valuable time for Mr. Busboom’s human staff, he said, and everyone is now much happier in his or her job. Previously, “when someone went on vacation, it was very stressful,” he added.
Chatbots — and other AI tools that can track common area usage and monitor energy consumption, support construction management, and perform other tasks — are becoming increasingly common in property management. The money and time saved by the new technologies could generate $110 billion or more in value for the real estate industry, according to a report released in 2023 by McKinsey Global Institute. But the advancement of AI and its slingshot into public consciousness have also raised questions about whether tenants should be informed when interacting with an AI bot.
Ray Weng, a software programmer, discovered he was dealing with AI leasing agents last year while looking for an apartment in New York. The agents at two different buildings used the same name and gave identical answers to his questions.
“I’d rather deal with someone,” he said. “It’s a big commitment to sign a lease.”
Some of his apartment tours were self-guided, Mr. Weng said, “and when it’s all automated, it feels like they don’t care enough to have a real person talk to me.”
EliseAI, a New York-based software company whose virtual assistants are used by owners of nearly 2.5 million apartments in the United States, including some managed by property management company Greystar, is focused on making its assistants as human as possible, said Minna Song, EliseAI’s CEO. In addition to being available via chat, text and email, the bots can communicate with tenants via voice and can have multiple accents.
The virtual assistants that help with maintenance requests can ask follow-up questions, such as verifying which sink needs repair in case the tenant is unavailable when the repair is made, Ms. Song said, and some are starting to help tenants troubleshoot maintenance issues themselves. Tenants with a leaky toilet, for example, could receive a message with a video showing where the water shutoff valve is and how to use it while they wait for a plumber.
The technology is so good at holding a conversation and asking follow-up questions that tenants often mistake the AI assistant for a human. “People come to the leasing office and ask for Elise by name,” Ms. Song said, adding that tenants have texted the chatbot to invite it to coffee, told managers that Elise deserved a raise and even dropped off gift certificates for the chatbot.
Not telling customers that they have communicated with a bot is risky. Duri Long, an assistant professor of communications studies at Northwestern University, said it could cause some people to lose trust in the company using the technology.
Alex John London, a professor of ethics and computational technologies at Carnegie Mellon University, said people may view the deception as disrespectful.
“All things considered, it’s better to have your bot announce at the beginning that it’s a computer assistant,” Dr. London said.
Ms. Song said it’s up to each company to keep an eye on evolving regulatory standards and think carefully about what it tells consumers. A vast majority of states don’t have laws requiring companies to disclose when an AI bot is interacting with a human, and the laws that do exist mostly cover voting and sales, so a bot used for maintenance scheduling or rent reminders generally wouldn’t have to be disclosed to customers. (The District at Cypress Waters doesn’t tell renters and potential renters that they’re interacting with an AI bot.)
Another risk is the information the AI generates. Milena Petrova, an associate professor of real estate and corporate finance at Syracuse University, said humans would need to be involved “to critically analyze all the results,” especially for any interactions beyond the most basic and routine ones.
Sandeep Dave, chief digital and technology officer at CBRE, a real estate services company, said it didn’t help that the AI “appears very confident, so people will tend to believe it.”
Marshal Davis, who manages real estate and runs a real estate technology consulting firm, oversees the AI system he created to help his two office workers field the 30 to 50 calls they receive daily at a 160-apartment complex in Houston. The chatbot is good at answering simple questions, such as those about rent payment procedures or details about available apartments, Mr. Davis said. But on more complicated issues, the system can “answer the way it thinks it should and not necessarily the way you want it to,” he said.
Mr. Davis records most conversations, runs them through another AI tool to summarize them, and then listens to the conversations that seem problematic — such as “when the AI says, ‘Customer has expressed frustration,’” he said — to understand how the system can be improved.
Some renters aren’t entirely convinced. Jillian Pendergast interacted with bots last year while looking for an apartment in San Diego. “They’re great for booking appointments,” she said, but dealing with AI assistants instead of humans can become frustrating when they start repeating responses.
“I see the potential, but I feel like they are still in the trial-and-error phase,” Ms. Pendergast said.