Cybersecurity for Real Estate Organizations
- Published
- Oct 9, 2024
Watch this on-demand webinar where Michael Richmond, from EisnerAmper's cybersecurity team, discusses the latest threats facing the real estate industry and provides actionable steps to safeguard your organization.
Transcript
Michael Richmond: Thanks, Bella. So, good morning, everybody, and thank you for joining us today. As Bella mentioned, I'm the Partner in our Outsourced IT Group in charge of the cybersecurity and digital forensics practice area, which is fitting for this webinar today as we are in Cybersecurity Awareness Month. And really, I just want to cover a few things from an agenda standpoint, what we're going to be covering in the PowerPoint and webinar today.
Really, I always lead off with defining what cybersecurity is. I think it's a good baseline for everybody; it gets us all on the same page so we're speaking the same language from a taxonomy standpoint. And then we'll look at some of the cyber attack drivers that are out there, and you're going to recognize these. There are three that we're going to cover. And then we'll look at the threat landscape. What's really out there? What's driving threat actors today? What do we need to be concerned with? And then we'll wrap it all up with incident response planning. How do we deal with these as we go through and look at our organizations and what our risk profile looks like? How do we actually come up with a plan and develop that? What are some key areas to focus on? And then we'll wrap that all up with some Q&A to hopefully answer any questions that we don't cover in the webinar. So, let's get started.
I always use this slide to define what cybersecurity is; I've been using it for probably three or four years. A three-legged stool is the perfect analogy, because what we're building on is the confidentiality, integrity, and availability of information systems and the supporting ecosystems around them in the face of attacks, and how we mitigate and recover from those. Confidentiality, integrity, and availability are three components that are ever present. Now, they may have a different focus depending on the type of information and assets we're looking to protect, but they're always there. Right? So if we're talking about health information for a particular person, confidentiality may be a higher priority. If we're talking about the financials for our organization, integrity is certainly up there. We don't want that data changing from day to day. The data we put in there, we want it to be the same today as it was yesterday, so it's consistent and the integrity is maintained. But it also needs to be available. Right? You can't act on it if it's not available to us to use.
So looking at that, the driver for a lot of what we're trying to do organizationally to be successful and competitive in the marketplace is using data as a resource. That's really what all organizations do, regardless of your vertical, whether it's multi-tenant property you're managing, construction on the build side, or using real estate as your investment vehicle and managing those assets, whether it's commercial, residential, what have you. Right? We're using data as a resource, and we need it to come out of what I consider the unstructured, inaccessible, or siloed models that we have. We have disparate systems. How do we pull that all together, gain visibility, and give ourselves things to act on?
And really, it needs to take on all of the aspects you see on the right side of the slide, because we want it to be aligned with our data governance model and have access to that. It needs to be timely and accurate. Our end users who are going to be consuming that data and the reporting that comes off of it need to have quick access to it, and it needs to be easy to understand. But at the end of the day, the one thing we want to do is go from that knee-jerk gut reaction to making an informed decision. Right? It's data-driven. This is what we need to be competitive in our marketplace, more efficient, able to move into new markets, better utilize the resources we have, understand our client base better, all of those things. Right? And that needs to be data-driven. You see that across the board for multiple organizations. We are a data-driven society, so this becomes more and more prevalent and more and more important.
And so we have challenges with that. As a cybersecurity practitioner, we are always playing catch-up on pretty much everything we're doing in the information technology space. Historically, the systems and programs that are built to secure the assets for organizations tend to have a narrow focus, looking at a specific set of issues. They really can't adapt that well. So as new threats or new challenges emerge, we are not in a position where the technology and the systems we're leveraging for our day-to-day operational cybersecurity can adapt to that.
A perfect example of where there are security shortcomings: we use vulnerability scanners and other tools in our pen testing engagements and similar work. A scanner is an excellent tool and provides a lot of visibility into issues across organizations when you're trying to analyze cybersecurity shortcomings. What it doesn't do is tell you there's a misconfiguration. It's just looking for defined vulnerabilities across an enterprise, but it can't tell you that Joe in IT made a misconfiguration on the firewall and now you have an open port that a threat actor can leverage. So we're always playing catch-up with this. And then new technologies come out, AI being leveraged across different systems, third-party service providers, and we'll cover some of these things coming up. They're just new things we have to deal with and incorporate into our processes to understand where our information is flowing and how it's being leveraged across the enterprise, so our data governance aligns with that, and then our cybersecurity operations can align with it as well.
So let's look at what these cyber attack drivers are; I think you're going to recognize a lot of these. There are three of them we're going to focus on, and they have been consistent over the last several years.
The first one is, and I will be adamant about this, having worked in IT consulting for almost 30 years and the last 25 strictly in cybersecurity: people are the problem. IT and IT support would be easy if it weren't for the people and the end users. But that's why we're here, right? We're trying to empower the people to work. When you look at the data that's out there, especially Verizon's most recent 2024 Data Breach Investigations Report, regardless of the attacker's method, the core tactic is the same. Threat actors exploit human nature, because people have an innate willingness to trust and be helpful, and attackers leverage that for their own gain.
And so if you look at the data, on the left are the top actions within social engineering incidents, so pretexting, phishing, extortion; those are the methods from a threat actor. And then if you look at the top vectors in social engineering, email is 90% of it. So email is always going to be a big focus. Everybody in an organization typically has an email account, and it's typically easy to determine what that email address is, or it's already published.
And so when you think about it, 68% of all breaches right now involve a non-malicious human element. Either a person fell victim to a social engineering attack or they committed some type of error. Right? It wasn't a malicious error, but they made an error. We're talking about two-thirds of all breaches encompassing this human element. So if we can do something about that through security awareness and training, and not just "hey, click on this training video, this is what you need to catch a fraudulent email," but truly aligning it with what the risks are in our organization, we can make some headway into dropping that number down.
What I really want to highlight, though, is the speed at which this happens. When you look at the data, the median time for a user to fall for a phishing email is less than 60 seconds. Let's say our email filtering solution didn't catch it and it lands in a user's inbox. It takes about 21 seconds on average for them to click on the email and open it. It gets worse after that, because if we don't have any controls and they click on the link or the malicious document that has some exfiltration scripts or other things within it, it takes about 28 seconds for them to actually enter their credentials, and then we've got compromised credentials.
So that's under 60 seconds, and realistically, if you look at your organization's operational cybersecurity capabilities, can we compete with that timeframe? Can we do something in under 60 seconds to mitigate that? And do we have the capabilities to detect it if it's happening in real time? For a lot of organizations that aren't really prepared for this and don't have a solid cybersecurity program in place, the short answer is no.
The next driver for breaches is stolen credentials. Looking at this historically, stolen or compromised credentials have accounted for about a third of all breaches over the last 10 years. So between the two, the end-user side with social engineering and security awareness, and then passwords and stolen credentials, roughly 90% of breaches are in there.
So we're talking about bad password hygiene and password management across the organization from our users, and it ties directly back into security awareness and training to a certain extent. There are technical pieces that we can use to offset this, but password reuse, you hear about it all the time. I'm using the same password for my Netflix account and it got compromised, or LinkedIn was compromised. We're reusing passwords for both personal and corporate use. We're using the same password or a variation of a simple password: it's summer.123, and when it comes time to change it, it becomes summer.1234. We're using variations or the same passwords across all these systems. At that point, when threat actors are amassing these credentials in large databases, it's trivial to do brute-force and password-spraying attacks and other techniques against your systems that are available externally, anything with an external login box, your VPN, your Office 365, especially if we're not protecting it with multifactor authentication, and other things along those lines. This is really where we have exposure.
And we can add technology into this by leveraging these compromised password databases ourselves and integrating them into our password change process, where technology can go through those databases and say, okay, as we change our password, does it exist over there? It doesn't have to be tied to the user per se, but does that breached password already exist out in the wild? If so, we can't use it, or we need to increase the complexity, and we layer other technology on top of that, such as making sure MFA is enabled for everything.
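To make that idea concrete, here is a minimal sketch of a breached-password check. It assumes the Have I Been Pwned "Pwned Passwords" range API as the breach corpus, which the webinar does not name; the k-anonymity design means only the first five characters of the password's SHA-1 hash ever leave your environment.

```python
# Minimal sketch, assuming the Have I Been Pwned "Pwned Passwords" range API
# as the breach corpus (not named in the webinar). The k-anonymity design:
# we send only the first 5 hex chars of the SHA-1 hash and compare the
# returned suffixes locally, so the password never leaves us.
import hashlib
import urllib.request

def password_is_breached(candidate: str) -> bool:
    # SHA-1 hash the candidate, uppercase hex, split into prefix/suffix.
    digest = hashlib.sha1(candidate.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]

    # Ask the service only for the 5-character prefix.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    req = urllib.request.Request(url, headers={"User-Agent": "password-hygiene-check"})
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")

    # Each response line is "<hash-suffix>:<times seen in breaches>".
    for line in body.splitlines():
        seen_suffix, _, _count = line.partition(":")
        if seen_suffix == suffix:
            return True  # reject this password in the change workflow
    return False

if __name__ == "__main__":
    for pw in ("summer.123", "summer.1234"):
        print(pw, "breached:", password_is_breached(pw))
```

A check like this would sit inside the password change workflow, rejecting candidates that already appear in breach corpora before they are ever set.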
So the next driver, and all of these are going to sound familiar if you're managing cyber risk in your organization: vulnerabilities. These are the top three, and there's a reason they're the top three: they're effective from a threat actor standpoint. If social engineering isn't the vector and there are no compromised passwords to work with, threat actors will look for vulnerabilities across the systems your organization is using. So vulnerabilities are still a key initial access method for threat actors.
We saw a big increase in that; it grew by roughly 180% last year. A lot of that was attributed, I don't know if you're aware, to the MOVEit vulnerability and breach. I personally was affected; the entire database and user account information for the Departments of Motor Vehicles in multiple states were compromised, and a lot of higher ed and governmental institutions were using MOVEit. That drove a lot of the huge increase last year. But the challenge is the resources available to organizations. It takes around 55 days for organizations to remediate about half of their critical vulnerabilities after the patch is available. So we're two months in, we've only got about half of it done, and guess what? In those two months, others have come out. It's a constant battle, there's a constant backlog, and we're having trouble keeping up.
Another issue related to that from a vulnerability management standpoint is that some organizations don't patch comprehensively. There's patching third-party applications: Adobe, Java, and three different browsers, Microsoft Edge, Safari, whatever you're using, and you have to patch all of those. It's difficult for organizations and systems to do that from a technical perspective. We can patch operating systems; Windows Update and some other things are there to handle that, and most organizations are on top of it. Depending on your complexity, though, you may not have a policy in place that says: this is the list of applications we allow in the organization, we've got technical controls in place to only allow those to be installed, and those are the ones we've got to patch. That's realistically how you manage that issue.
The other thing, tying back into only getting to 50% of our vulnerabilities: we need to look at what a vulnerability means to us as an organization. Vulnerabilities are typically ranked one through 10 for criticality using the Common Vulnerability Scoring System (CVSS); the more critical, the higher the number. But that's just a baseline. It doesn't mean it's critical to your organization per se, because we also need to align it with the asset, the IT system or application, how we're using it, and how business operations would be affected if something happens to it based on this vulnerability. It's about making it personal to us.
And then you incorporate information from the threat landscape that's out there. A lot of good information is available, but CISA, the federal cybersecurity agency under Homeland Security, maintains a Known Exploited Vulnerabilities catalog. Basically, what that is: we may have a vulnerability, but that doesn't necessarily mean there's an exploit a threat actor can use. CISA puts this list together and says, "Okay, we've got a vulnerability, there's an exploit, and here's the information related to it." They update it on a regular basis, almost daily, adding new vulnerabilities that have known exploits. So we can align those, and for those critical vulnerabilities that we're patching in that 55-day window after the patches are available, we can prioritize. We focus on the ones that affect our business operation systems, where we know there are exploits out there and we know the criticality associated with them, and those are the ones we attack.
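As an illustration of that prioritization step, here is a minimal sketch that cross-references scanner findings with the KEV catalog and sorts known-exploited, high-CVSS items to the top. The feed URL shown is the commonly published location of CISA's JSON feed and should be verified against CISA's site, and the sample findings are assumptions for the example: the first CVE is the MOVEit Transfer flaw mentioned earlier, while the other two are placeholders.

```python
# Minimal sketch: prioritize scanner findings using the CISA Known
# Exploited Vulnerabilities (KEV) catalog. Feed URL and the sample
# findings below are illustrative assumptions; adapt to your scanner.
import json
import urllib.request

KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")

def load_kev_cve_ids() -> set[str]:
    # The catalog is a JSON document with a "vulnerabilities" list,
    # each entry carrying a "cveID" field.
    with urllib.request.urlopen(KEV_FEED) as resp:
        catalog = json.load(resp)
    return {v["cveID"] for v in catalog.get("vulnerabilities", [])}

def prioritize(findings: list[dict], kev_ids: set[str]) -> list[dict]:
    # Known-exploited first, then CVSS descending, so the fixed
    # remediation window is spent on what attackers actually use.
    return sorted(
        findings,
        key=lambda f: (f["cve"] in kev_ids, f["cvss"]),
        reverse=True,
    )

if __name__ == "__main__":
    # Hypothetical scanner export: CVE ID, CVSS score, affected asset.
    # CVE-2023-34362 is the MOVEit Transfer flaw; the others are made up.
    findings = [
        {"cve": "CVE-2023-34362", "cvss": 9.8, "asset": "file-transfer-01"},
        {"cve": "CVE-2024-00001", "cvss": 9.9, "asset": "lab-server"},
        {"cve": "CVE-2022-12345", "cvss": 7.5, "asset": "print-server"},
    ]
    kev = load_kev_cve_ids()
    for f in prioritize(findings, kev):
        flag = "KEV" if f["cve"] in kev else "   "
        print(f"{flag}  {f['cve']}  CVSS {f['cvss']}  {f['asset']}")
```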
Because when we look at what's really at stake for all of these, we're talking about the majority of ransomware and breaches. When a threat actor is coming after you, it's financially driven, and it's not personal. Right? Some of these are just attacks based on numbers. You're the next email address in the list they're blasting out to in a phishing campaign, or you just happen to have compromised credentials that match. Those are the three primary ways we covered, but it's about the money.
When you look at it, the median loss for a business email compromise, say your email address and credentials are compromised, is about $50,000. When you include ransomware and some of the extortion-type events we've seen, it's in the same range; $46,000 is the median. I can tell you the actual range is much wider and the exposure is much greater, and it will vary depending on the data that's exfiltrated and whether there are third-party implications and damage. I can also tell you that if threat actors do gain access to your systems, they do a very good job of evaluating how much money they can take. So let me break down an incident response project that we worked on; we were doing the incident response, remediation, and some other things.
This was a construction company on the West Coast, and they got hit with ransomware. The main driver was: it locked them out of their systems, and they had to get back to work. The group that was working with them to negotiate with the threat actor said, "Okay, here's our initial offer to get the decryption keys." And the threat actor said, "No, that's not enough. We know you have a $20 million line of credit at this bank and you're only using two of it. We'll take 15, and you still have some buffer so it doesn't max out your line of credit." Those are the tactics they take and work through. They're very well-informed and very motivated. And that gives you an idea: yes, $50,000 is the median, but it can be much more than that. And if we are not prepared, don't have the benefit of cyber liability insurance and coverage for these types of things, and don't have the necessary controls, it can be very expensive.
Just to cover one of the questions on a previous slide about business email compromise: Sophia was asking, what is pretexting from an email standpoint in social engineering? It's where the attacker sets the stage so they have context for the emails they send. They may be in the environment for a little while, and they're able to create and leverage a pretext. A perfect example of that is wire transfer changes. Say it's a title company, and we've worked on a couple of issues with that, and some others with real estate investment firms, where they're exchanging large sums of money. The attacker sits in the environment and uses previous emails from a compromised account to establish legitimacy in the email chain. That's what pretexting is.
So as we look at that, we've got these drivers for our threat actors. What are the threats we're facing, what are we looking at, and what do we have to work with to prevent them? I like to establish a baseline understanding. We did it for defining cybersecurity, and I want to define cyberspace here as well. I like to use the US Army's definition of cyberspace.
They break it down into three areas, what they call dimensions. The physical dimension is obviously the cabling, the actual hardware infrastructure that makes up the networks and interconnections for the environment. Then you skip over informational to the cognitive dimension; that's where the socioeconomic pieces influence all of this. It's the knowledge, values, beliefs, and intentions of the individuals and groups working with this information in cyberspace. And then you come back down to the informational dimension, and that's just all of it. Right? That's our files, our online videos on Netflix, our shopping data on Amazon or any other retailer. It's that informational piece within there.
I like to point out the US Army's definition because it's about maintaining that environment for information exploitation. And in this context, information exploitation is not a negative thing. Going back to what we're trying to do with all this information, be agile, make better-informed decisions, the things we mentioned earlier, that's what this is. Now, you certainly can flip that to where it's not beneficial to organizations and exploitation carries a negative connotation. But that's really it. It's information driven. That's where the threats are going to evolve; what changes is the systems that information sits in and how that information is leveraged to build new models, new technologies, and the like.
One of the bigger things, from a personal standpoint, is our digital identity. Even back in 2017, The Economist declared that data had become the world's most valuable resource, surpassing oil. And if you look at the corporations with the largest market caps, the top traded companies across the world, they're data driven. Meta; Apple may be selling hardware, but data's a huge component of that; Amazon and AWS; Microsoft; all of these are data driven. Google, for what they really do, is an advertising firm. They may have YouTube and their search engine and everything else, but advertising is still where they make the majority of their money, and they need data about individuals to do that.
And that digital identity is made up of more and more components as you look at it: our usernames and passwords, our health data, our location data, and, like I said, from a personal standpoint, our buying and entertainment preferences, even biometric data, our fingerprints, our iris maps, and then client and account identifiers across different systems. The systems collecting and storing these pieces keep growing and growing, and that becomes an issue for us. We have to be aware of it and manage it.
So we have to have information protection strategies in place. At a high level, that's solid data governance: how we collect it, how we use it, what systems are storing it, and having that map of knowing where it is. We have to know our data and understand the landscape across the environment in the organization and how we're using it. And then we've got to protect it. We've got to be very flexible in how we do that, because the data may be within a system that we fully maintain, support, and own, or we may be leveraging third parties. A lot of software-as-a-service applications we can easily sign up for with a credit card, but there's data going into them, and we'll talk a little more about that further down. We've got to be cognizant of the whole data flow through the data lifecycle and how we govern it: gathering it, managing it, and then deleting it, cradle to grave. And we need to be able to detect risky behavior throughout that.
I was asked a question probably two months ago; we were having a conversation with the security team and it came up, a really interesting conversation among a lot of technology guys: what's the first step from a cybersecurity standpoint? If you haven't done anything and don't know anything, what would you tell someone? A lot of esoteric conversations came out of that, but to distill it down, the easy thing to start with is identifying your assets. If you don't know what you're trying to protect and leverage across your organization, you can't do anything significant to enhance security around those assets. So you've got to start from an identification standpoint. The same thing goes for data governance and data management.
And so as you start to mature and you've identified those assets, you get into data controls that you can leverage across them. Like I said, good data governance, but also technology solutions to support and aid you, right? So data loss prevention systems, where we're tagging data at different sensitivity levels. It's very straightforward in a lot of the online platforms, like Office 365, Google Cloud, and AWS, to tag that data and alert us when something happens with it. We can extend that down into other systems as well. And then we wrap the appropriate activity monitoring and access control around that, so we know when somebody's accessing it, what they're doing with it, and how that data is moving around the environment.
And then we look at solutions to minimize our risk across that, right? So data redaction and anonymization, so we don't have issues with sensitive data or data that's under a compliance umbrella: credit cards, PII (personally identifiable information) for individuals, or other compliance-driven information we're leveraging in our organizations. That way we don't necessarily have issues if we're trying to roll that data up into advanced reporting, business intelligence, and other things.
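As a simple illustration of the redaction idea, here is a minimal sketch that masks a few common sensitive patterns before data rolls up into reporting. The regular expressions are illustrative assumptions only; production DLP in platforms like Office 365 or the cloud providers uses far richer classifiers.

```python
# Minimal sketch of the redaction idea: detect a few common sensitive
# patterns (email, US SSN, card-like numbers) and mask them before data
# rolls up into reporting. Patterns are illustrative, not production DLP.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> tuple[str, dict]:
    """Return the redacted text plus a count of what was masked."""
    counts = {}
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED-{label.upper()}]", text)
        counts[label] = n
    return text, counts

if __name__ == "__main__":
    sample = ("Tenant jane.doe@example.com paid with card "
              "4111 1111 1111 1111; SSN on file 123-45-6789.")
    clean, counts = redact(sample)
    print(clean)
    print("masked:", counts)
```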
There are ways to minimize the risk associated with that by implementing technologies, both to support it and to deliver solutions. But we need to be cognizant that it's got to be auditable, and it needs to be transparent from a management standpoint and reportable, so we can stay on top of it and understand what the impact would be to the organization. We want to identify risks within that information profile and have the appropriate policies and audit frameworks to support that, whether it's an internal control review making sure everything's working effectively, or a third party that comes in on a regular basis. Right? We want to be able to demonstrate that we are managing our IT assets, and the information within them, effectively.
One thing I think is critical, and we've seen this from a real estate perspective, it's always there and it's kind of a double-edged sword, is physical security. Cybersecurity and physical security go hand in hand. They always have, right? Historically, secure areas for sensitive equipment and sensitive systems have always been there: wiring, HVAC, power generation and regulation systems, you name it. All of those physical plant items have always needed and required physical security. But as we look to modernize and leverage IoT devices and new systems and new capabilities, whether it's security cameras or IoT devices for environmental monitoring and management, all of these things have data and are typically connected to the network, so they're just another device that needs to be incorporated into our physical security management process.
So when you look at it, we need to protect the digital assets just the same, and evaluate those devices from a risk standpoint. If you look at physical security and some of the solutions out there, especially in the federal government space, maybe we're subleasing to the federal government, a three-letter agency, the Army, or somebody else like that within our commercial or shared office space. We really have to understand what the impact would be, because some of those suppliers are actually prohibited in those environments. A lot of this lower-cost technology comes out of China, and we need to fully understand what these devices are doing and who supports them, and some of them are on a prohibited list from a federal government standpoint.
Or we've got sensitive areas or sensitive clientele for our commercial space. Are we actually securing that in the proper fashion while providing these services in multi-tenant shared commercial space? To give you an example, we were doing a wireless audit for a newly built commercial space to understand whether they had adequate wireless coverage and whether users could roam between access points. Within that, they had shared meeting rooms with AV devices that tenants could use for meetings, displaying content, and other things like that. And that's one of the issues from a physical security standpoint with new devices that go in there: who's responsible for them? We discovered that the US Army was subleasing some space, had a special project going on, and was using these meeting rooms, but nobody was patching these devices. You could essentially sit on the street, connect to them, and view the content being projected in these shared conference rooms.
The same thing goes for all these other devices from a physical security standpoint. They can introduce unknowns, and there's a gray area of who's responsible. Is IT responsible? Is it plant services? Who's actually managing it? Do we have a third-party vendor that's supposed to be managing it? Is that clearly stated in the contract? The reference model for the worst case is the Target breach. They had an HVAC contractor managing the HVAC for their stores. The bit that lands on Target is that they didn't properly segregate the HVAC network from their credit card processing network, and poor controls on the HVAC contractor's side led to that credit card data being exfiltrated through that connection. I will tell you, the moral of that story is that nobody remembers the HVAC company's name unless you're intimately familiar with it. Everybody remembers Target. That's really the takeaway.
And then the other thing with all these new devices we're adding, adding capabilities: we don't really incorporate them into our incident response planning. What happens if those systems are unavailable or under attack, those third-party vendors are involved, and the responsibilities aren't clearly defined?
So that brings us to a natural segue: third-party risk. The Target one's a perfect example of that. The myriad risks associated with third parties are growing, and I think this is becoming more and more of a focus, especially for small organizations that don't necessarily have large IT teams or the resources to do it themselves. We're leveraging third parties across the board: software platforms like Salesforce and Office 365, the multiple accounting packages like NetSuite, QuickBooks Online, Yardi, RealPage, all of those that feed into our systems, and maybe investment portals we're using for our real estate portfolio. We're intending for somebody else to manage that, but the risks are still there. We haven't necessarily taken the risk from ourselves and pushed it off to somebody else; we've only done that to a certain level, right?
And that's where the shared security model comes into it. This is typically driven from the cloud. You'll see these published by Amazon for their AWS services, by Google for Google Cloud, and by Microsoft for Azure and Office 365. It helps demonstrate that the consumer's responsibility level is inverse to the service provider's, depending on where you are in the technology stack. If you're just accessing your Office 365 email or your Gmail account, you're way up at the application level, so more of the responsibility for the security, backup, and availability of that system is pushed over to the service provider. The further down the stack you go, the more you own. That's really how you have to approach these models when they publish them.
I will tell you, this is the Microsoft shared security model, and the problem is there's not a lot of detail within it. You have to read through a ton more contractual obligations and other fine print, because this is about as detailed as it gets when they tell you, "We're going to do this," and that's it. So we have to fill in the blanks as organizations when we leverage these platforms and ask, "Okay, what do we need to do? What are the risks associated with that?"
So we've got to really manage this, and you'll hear that everything I approach is going to be a risk-based approach. As we go through the continuum of evaluating third parties, it starts with a risk assessment and the risk profile. We have to establish clear security requirements for our third-party vendors, and that could be any of them, whether they're IT or not. I think it's a good practice for any third-party vendor to establish those security requirements for how you're bringing them into whatever operational role they're filling.
And really, the only way to do that is contractually. If we can modify the language, we want specific security measures included in vendor contracts, and there may be a service level agreement supporting that contract. If we can negotiate it, we want it clearly defined: let's say there is a breach at the third party we're leveraging, what are their notification requirements? What are their response requirements to us through that service level agreement? What are they going to do to protect our data? Are we going to require them to meet data protection standards, so our vendors are complying with GDPR, HIPAA, the California Consumer Privacy Act, or any of the others that are evolving and coming out? Can we get those included in the contractual pieces? And then really defining how they're going to handle incident response protocols, who gets notified of what and when.
And as we go through this, I think you have to think of it much the way we talk about the data governance lifecycle, cradle to grave, from when we take that data in until we get rid of it. It's the same thing, and a lot of people really don't focus on this for third-party vendors. The thing you have to think about is: okay, we are bringing them on, and there's always that honeymoon effect when you first bring them in. They do a great job, they sell you on their functions and features. Hey, this is going to answer so many of our business challenges and problems. Let's get them in, let's do a proof of concept. They come in, everything's great.
Then that honeymoon period's over. Maybe they're not innovating, maybe a competitor comes in with something better, maybe we bought on price, whatever the driver is. How do we get out of that? How do we get our data out of that third-party provider to move to another platform? Or maybe we're growing, right? We outgrew that platform, we need to move to something else, we're bringing it in-house, we're merging, what have you. Do we have that documented early, from a contractual standpoint? Do we understand what the third-party service provider is going to do to help us transition through that, or whether they're going to help us transition at all? It's that whole lifecycle of third-party vendors.
And really, you're starting to see this emerge from an IT risk perspective, because third parties are just everywhere across ecosystems. There's always some new third-party vendor coming out, some new technology. And part of it's business driven, right? The recurring business, the recurring revenue, that recurring monthly or annual subscription model, is much better suited to a software-as-a-service or service provider model, as opposed to your traditional on-prem licensing. So there are business reasons driving these as well, and that's why there's been that holistic shift over the last three to five years to the service provider model.
One other thing that we see, and this has really taken precedence in probably the last year and a half, is governance of cyber risk. There are two aspects I want to cover here. There's the internal aspect: how are we governing cybersecurity and escalating that up to decision makers within our organization, and do we have a focused strategy and reporting mechanism so we're making informed decisions and guiding the operations of the entity in the right way? But it's also how, if we are doing that well, we're able to demonstrate it to interested parties, whether that's investors looking at the organization or others.
When you break it down like that: if we can have a discussion with someone within an organization and they can really evidence their cyber proficiency, they've got a strong sense of the risks that are prevalent in the organization, they're willing to discuss those topics, and they can describe at a high level what they've done from a governance and response standpoint to manage them, along with the reporting and training and other things like that, it really strengthens the confidence level of investors, business partners, and others in what you're doing as an organization. They feel that the risk is lower as they look to partner with you. If you get a vague or kind of standard response, it may indicate that we're a little less prepared for threats and we may lag our peers in the industry. The takeaway might be that we're more vulnerable to attacks. Right?
If you get to the point where you're having conversations with investors or others and they're able to ask, "Okay, what is your budget for cyber?", that's going to give them insight into your strategy and your actions. And if we can be transparent on our spending for cyber insurance, our resourcing, and the vendors we use, it helps build that complete picture of what we're doing. We don't have to build it all in-house; like I said, third-party vendors are out there, but are we leveraging the right vendors providing the right services? It's really about instilling confidence in our organization and what we're able to do, to ensure we're not going to have issues down the road.
I'd be remiss if I didn't talk about AI. It's obviously the buzzword on everything. We just had the Nobel Prize awarded for using AI to discover the building blocks of basically all proteins, and there were a couple of other AI elements in the Nobel Prizes awarded this week. Intelligence is just the ability to acquire and apply knowledge and skills, and we can take that and extrapolate it to computers. I'm not going to dig into these too much, but there are really four levels of artificial intelligence. Reactive machines and limited memory are the two we operate in today; the last two are still theoretical. I will say the issues we run into are being thought about, and some of those are what you see on here. This is really just informational, so you understand what's happening at a high level and who's thinking about this.
On one side of the list is the White House's proposed AI Bill of Rights, a non-binding framework they would like to see implemented, and on the other side are AI ethics concerns from IBM. So you have government and technology companies looking at this and saying, "Okay, these are the ethical elements and a blueprint we should follow." And you'll see they're closely aligned, right? The big things to take away from both are safety, fairness, and being able to explain the models and how they're used; those are cornerstones of both. The one thing I would point out in the White House model is the opt-out piece: we should have the ability to opt out of such systems where it's appropriate, so if we feel we're being treated unfairly, we can actually get out of it. We're not stuck with AI making decisions; we can get out to a human to help us.
Because there are some risks with AI. Here are some, starting with data poisoning: manipulating the models and the training data that's used in order to get a specific outcome, which happens early on in the lifecycle. They can also steal our model and use it for other things, and there are obviously privacy risks in there. I think the key one we need to focus on from this list is operational risk. If we're putting something in, we've got AI, and maybe we're not in tune enough with how it works, what decisions it's coming up with, and how it's making them within our systems, and it's ingrained as an operational element in our organization without sufficient oversight, then I think we've got problems. And obviously there are compliance and bias issues that exist across the board when we don't understand how the models work.
So, some examples of how AI goes wrong. This one's a little dated, it's from 2021, but it gives you an idea of what happens even when it's not necessarily the model's fault. In the Zillow Offers program, Zillow made cash offers based off of the Zestimate, which used AI, a machine learning algorithm specifically, to come up with the number, with the idea that they would renovate the properties and flip them quickly. They were making offers on on-market homes as well as off-market homes. The median error rate for the model was about 1.9%, but for off-market homes it could be as high as 6.9%. We're not talking about really big margins in flipping homes to begin with, so when you introduce that much error, it was catastrophic for them. They ended up having to cut 25% of their workforce, which equated to about 2,000 people, and had to take a $304 million inventory write-down, because they bought about 27,000 homes and only sold about 17,000 over the roughly three years the program ran.
It's a good example, though, of the black swan events that influenced this, which had nothing to do with how the technology or the model worked. You had COVID-19, and then there was a home renovation labor shortage. The accuracy of the model could not account for those outside factors, and it led to this outcome.
Looking at a current one that we're talking about right now: United States v. RealPage. The DOJ has filed a complaint alleging that RealPage takes in data like current rents, vacancy rates, and lease expiration dates daily and uses artificial intelligence to recommend what landlords should charge for rent. So basically, they're accusing RealPage of price fixing. On the surface, it looks like the age-old dilemma of innovation outpacing legislation, because algorithms are leveraged everywhere. They were a core building block for Google and its search engine. They serve up our social media content on a daily basis, drive our online shopping, power dating apps; they're everywhere. They've been making decisions for us in healthcare. I think the RealPage case points to a larger issue, though: the algorithms are creating a new legal frontier. This is ongoing right now.
And so I think we're going to see more enforcement-type issues arise because we've outpaced legislation from a technology standpoint. And we're starting to see reactions. In early September, San Francisco became the first city to prohibit landlords from using rental pricing software, with some civil penalties associated with that. Earlier in the year, two US Senators introduced legislation, I don't think it made it out of committee, that would make it illegal for landlords to use software to coordinate rental prices nationwide. From that standpoint, it may alter the model for RealPage.
But when you look at it, RealPage is filling a need. Separately within the complaint, the DOJ is alleging that RealPage has a monopoly with an 80% market share for commercial revenue management, and they're having heartburn over that. I don't have a problem with that part. I think it demonstrates that they have a good product that's filling a need, giving businesses what they need through what we talked about: data, insights, and leveraging that data to make business decisions. They're doing something right.
My cautionary tale within that particular piece is that it echoes very closely what happened to multiple car dealers earlier in the year with the CDK platform. It's eerily similar in how much dependency and market share is concentrated there. CDK is a platform for car dealerships that handles new car sales, loan documents, service, inventory, service scheduling, time and billing, all these things; a truly vertically integrated platform for car dealerships. The problem was, it's a software-as-a-service, third-party vendor that completely failed due to a ransomware event and was inaccessible for multiple weeks. So you had car dealerships that couldn't sell cars, couldn't service them, couldn't sell parts. Because they were vertically integrated and relied solely on this platform to operate their business, they didn't have the necessary secondary operational processes to deliver service and sell cars.
I can tell you, my wife is a nurse. When the EMR goes down, they go back to paper, right? They have alternate processing methods, and it's defined: the system has to be down for a certain amount of time, and then they're able to incorporate that data back in. That's the level of detail and focus we need when we look at managing the risks associated with third parties and evaluating the risks of AI. There are some resources we can leverage as well.
The AI Risk Repository just came out this year from MIT and is publicly available. If you're into some light reading, you can go through it; it lists the risks AI may present to your organization. MITRE ATLAS is another framework; it presents tactics and techniques that threat actors may use against AI systems, so if we need to prevent those, they're defined within there. And then NIST, the National Institute of Standards and Technology, has its AI Risk Management Framework, which they're developing and which is currently in public comment. These are tools we can use to become more informed and establish AI as part of our evaluation process as we look at the risk profile across our organizations.
So, how do we build our incident response strategy to support this? We've got day-to-day operational cybersecurity that we're working through; how do we build a strong IR plan? I will say again: lean on the frameworks. You don't have to create it from scratch; a lot of smart people have spent time on this. I will also say that using a framework alone doesn't make it your own. It's a framework, right? You have to fill in a lot of the details for your organization. You'll notice that with either NIST or SANS. SANS is a little more prescriptive, NIST focuses more on preparation, but they're similar across the board. Leverage the frameworks that are out there.
Key pieces within that: you're defining your purpose and scope. The incident response team, and who you put on it, is critical. Clearly defining the risks and incidents that you're going to respond to and include in the plan is of utmost importance, so you can address those directly. And as you run through this, it's a continuum; it should be an ongoing process, and you need to devote the resources to make that happen. I would say training and improvement is probably one of the key pieces as well. We go through tabletop exercises with our clients, running through these and making sure they have the capabilities, have thought through all of the different nuances that come up, and have the appropriate key building blocks in place.
And some of those key building blocks are supporting elements. Right? We want the supporting components, like our business continuity plan and our disaster recovery plan, to be fully fleshed out so they support IR. If IR needs to lean on those processes, we can do that. They go back and forth and are very intertwined, because you can't really spin up an incident response process on a moment's notice. You've got to be able to prevent and also respond to these events. So that fully functioning IR plan is going to reference those other key IT processes you've got.
Speaking from personal experience, having worked through Katrina and multiple hurricanes, I live on the Gulf Coast: don't forget the psychological aspect of this, the employee piece. You have to look at it for the incidents you're planning for, and incorporate the people into it. That is one of the critical elements, along with really testing the plan. And having communication templates: as you work through your IR plan, create those communication templates not only for external audiences, the public relations aspect of the incidents you're planning for, but also internal. What are we going to tell our people? How are we going to tell them? What methods do we have for disseminating that information, based on the events we're planning for? It will make all the difference in the world in the success of your incident response capabilities.
And then, going right back to it: maintain the right infrastructure and tools within that. Continuous improvement is really the strong piece. You've got to have those tabletops running through different scenarios that challenge the teams, because maybe somebody's out on maternity leave when the event happens, we can't control that, or they're out on PTO, and that main person on your incident response team is no longer available. Running through scenarios like that can really help strengthen the plans.
And then as you mature, you can develop the capabilities to leverage threat intelligence. That helps us look at other data sources across environments and make decisions with threat intelligence data in our incident response planning, because we've added context. Leveraging cyber threat intelligence allows us to do a couple of other things with our incident response plan. We can start identifying avenues of attack and address those. We can start hunting for breaches and anomalies within our network. And from an industry standpoint, we can ask whether there are shifts by threat actors in their methods of attack. Is phishing more prevalent? Is there a vulnerability in a specific piece of software that's being exploited right now that we need to handle? We can leverage that information within our environment and be proactive in our incident response efforts.
So with that, just to recap: a comprehensive cybersecurity awareness training program that's focused on the risks in our environment, like we've talked about, and you're going to hear these from any cybersecurity professional; effective patch management that includes third parties; comprehensive, strong access controls; data governance; and third-party vendor security and evaluation, managed through the risk profile for the organization. And AI is not going away, it's only going to be more prevalent; we need to start thinking about how we incorporate the evaluation of AI into our processes, and really develop that strong IR plan that encompasses everything we mentioned.
So with that, we'll open it up for any additional questions that we have. I don't see any in the Q&A that were submitted, but if anybody that has anything they'd like to ask, please feel free.
Well, with that I will say thank you for attending. I appreciate your attentiveness. And Bella, I will turn it back over to you.
Transcribed by Rev.com