

INTERNATIONAL FOUNDATION FOR
CULTURAL PROPERTY PROTECTION


News


  • September 18, 2023 6:36 AM | Anonymous

    Reposted from Securitas Technology

    The global AI security market is expected to reach $14.18 billion by 2026, up from $5.08 billion in 2020. This significant increase in market interest is possible thanks to tremendous innovations in the security industry over the past decade. AI has enhanced health and safety measures through advanced monitoring and driven competitive advantage by providing actionable insights for operational improvement.
     
    The business world is catching on. Next year, we anticipate security leaders will adopt AI-enabled technologies at a rate that rivals the research and development for such capabilities.
     
    That’s because edge technologies like advanced video and audio analytics – formerly a “nice to have” – have become mission critical. A wave of renewed security concerns has driven this development, alongside shifting expectations toward workplace security and efficiency. Security leaders increasingly seek technologies that (1) empower human technicians to work at the top of their abilities and (2) integrate with existing systems.
     
    Let’s discuss which technologies will receive the most attention and highest adoption rates in the coming year and beyond.

    AI and the security industry as it stands today

    We already see attitudes around AI changing as the concept becomes ubiquitous in the business world. Globally, 34% of enterprises have deployed AI-based solutions, and an additional 42% of organizations are exploring doing so, according to the IBM Global AI Adoption Index.
     
    Business leaders are even more keen to explore and implement AI to support their security measures. In fact, 66% of surveyed leaders reported that they would consider using AI, machine learning (ML) and advanced analytics to protect their people; 66% would consider these technologies to safeguard their assets; and 54% would consider leveraging them to harden their security network.
     
    And as the internet of things (IoT) expands rapidly alongside continued device adoption, we anticipate these numbers will increase. Why? Interconnected neural networks and IoT expansion will usher in an era of unprecedented data. At this point, AI and ML will become paramount to understanding business operations.
     
    Today, security leaders use AI technologies to parse complex data streams, often through AI-enabled video surveillance that traces anomalous movements and uses collected information to pre-process threats. For example, AI-enabled video surveillance systems can identify loitering or otherwise unusual behavior that suggests danger. The system flags such events to human technicians, who can expediently and proactively address the threat. Without AI, these concerning events might go unnoticed or take days to isolate among various video streams.
     
    This technology is also used to mitigate false alarm occurrences. When a system identifies a possible threat, it can call upon pre-programmed stimuli or ML to assess the likelihood of actual danger. Say the system identifies a loitering person, but the individual in question is in a public thoroughfare that frequently sees slow-moving traffic. The system can vet the threat and determine it’s likely a non-issue. We see this capability adopted often as a tool to reduce time-to-response in the case of actual threats and to help eliminate unnecessary emergency mobilization.
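    As a rough illustration of this kind of contextual vetting, the sketch below scores a loitering detection against per-zone dwell-time thresholds. The zone names, thresholds, and tiered responses are assumptions made for illustration, not any vendor's actual logic.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Detection:
        zone: str           # camera zone where the person is being tracked
        dwell_seconds: int  # how long the track has lingered in roughly one spot

    # Hypothetical zone context: public thoroughfares tolerate longer dwell times.
    ZONE_DWELL_THRESHOLDS = {
        "loading_dock": 60,          # sensitive area: review after 1 minute
        "public_thoroughfare": 300,  # busy walkway: review only after 5 minutes
    }

    def vet_loitering_alert(d: Detection) -> str:
        """Return 'dismiss', 'review', or 'dispatch' for a loitering detection."""
        threshold = ZONE_DWELL_THRESHOLDS.get(d.zone, 120)
        if d.dwell_seconds < threshold:
            return "dismiss"              # consistent with ordinary foot traffic
        if d.dwell_seconds < 2 * threshold:
            return "review"               # surface to a human technician
        return "dispatch"                 # sustained loitering for that context

    print(vet_loitering_alert(Detection("public_thoroughfare", 200)))  # dismiss
    print(vet_loitering_alert(Detection("loading_dock", 200)))         # dispatch
    ```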
     
    Other AI use cases parallel larger trends in the security industry. As access control solutions evolve to become more mobile, many security leaders have adopted AI to verify tripped alarms. In other cases, AI has intelligently monitored the movement and status of top-priority assets, including vaccines. The implications of these technologies go beyond immediate cost savings.

    Cutting-edge technology use cases in security

    Many security leaders remain unaware that their existing technologies hide a wealth of deep security- and operations-related insights. However, over the past year, we’ve seen several organizations adopt bleeding-edge AI technologies that harness the power of latent insights to drive operational efficiency.
     
    Technologies that monitor and analyze a comprehensive security ecosystem can learn a lot about their operating environment. AI tech learns about standard facility exit and entry times via access control; employee and consumer behavior through video surveillance; and asset movement through real-time location systems (RTLS). Using this information, AI tech can understand what works for a facility and what may need improvement.
     
    For instance, in a retail environment, AI technologies can assess optimal hours of operation based on when a store sees the heaviest entry traffic. They can even identify which departments require more or less staffing based on consumer movement. In a corporate environment, these same observations can be used to assess when energy-intensive systems, including lighting and temperature control, need to be adjusted. Business leaders can use information about optimal operating conditions to curb costs and more efficiently distribute limited labor resources.
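    A minimal sketch of that kind of analysis, assuming a hypothetical list of access-control entry timestamps; a production system would pull these events from the access-control or people-counting platform rather than a hard-coded list.

    ```python
    from collections import Counter
    from datetime import datetime

    # Hypothetical entry events from access control or door counters (ISO timestamps).
    entries = [
        "2023-09-11T09:05:00", "2023-09-11T09:40:00", "2023-09-11T12:15:00",
        "2023-09-11T12:20:00", "2023-09-11T12:45:00", "2023-09-11T17:10:00",
    ]

    # Count entries per hour of day to surface peak and low-demand periods.
    per_hour = Counter(datetime.fromisoformat(ts).hour for ts in entries)
    peak_hour, peak_count = per_hour.most_common(1)[0]
    print(f"Peak entry hour: {peak_hour}:00 with {peak_count} entries")

    # Flag hours with less than half of peak traffic as candidates for lighter
    # staffing or reduced lighting and climate-control schedules.
    low_demand = sorted(h for h, c in per_hour.items() if c < 0.5 * peak_count)
    print("Low-demand hours:", low_demand)
    ```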
     
    Of course, data-based security insights are also imperative for strengthening health and safety protections. One promising example of this involves another technology with emerging security use cases: drones. Several factors influence whether drone activity should be considered suspicious, such as the drone’s make, model and weight limit. Other environmental factors of interest include the drone’s location and how long the drone has remained in the area. While collecting this information, AI technologies can synthesize a risk factor and present human technicians with a threat probability. 
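    The weighting below is purely illustrative (the attribute names, weights, and thresholds are assumptions, not any particular system's model), but it shows how several drone risk factors might be synthesized into a single probability for a human technician to review.

    ```python
    # Hypothetical blend of the drone attributes mentioned above into a single
    # threat probability presented to a human technician.
    def drone_threat_probability(payload_capacity_kg: float,
                                 distance_to_perimeter_m: float,
                                 dwell_minutes: float) -> float:
        """Combine simple risk factors into a 0-1 score (illustrative weights only)."""
        payload_risk = min(payload_capacity_kg / 5.0, 1.0)        # heavier lift = riskier
        proximity_risk = max(0.0, 1.0 - distance_to_perimeter_m / 500.0)
        dwell_risk = min(dwell_minutes / 10.0, 1.0)                # longer dwell = riskier
        # Weighted blend; a real system would learn these weights from labeled events.
        return round(0.4 * payload_risk + 0.35 * proximity_risk + 0.25 * dwell_risk, 2)

    print(drone_threat_probability(payload_capacity_kg=2.0,
                                   distance_to_perimeter_m=100,
                                   dwell_minutes=8))  # 0.64
    ```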
     
    The cohesive collection and presentation of data are critical here. Although AI security tech can make incredibly informed decisions about possible dangers, it’s also capable of presenting complex data about threats simply, allowing human technicians to double-verify or make their own calls as needed. These data dashboards are a compelling selling point for leaders who thrive on data and may explain why AI security tech has taken off in recent years.

    The future of AI and security tech

    Security insights no longer live in silos. As enterprises digitally transform their processes, they invite a cohesive technological ecosystem that latently learns and presents information in an incredibly powerful way. And as IoT 2.0 approaches, those insights will become necessary to keep up with magnified data needs. 
     
    Both trends explain why AI security technologies will accelerate in adoption in the years to come. For security leaders across all industries, the question is: When will “cutting-edge” technologies become necessary functions? And what providers will I trust to transform operation-wide processes to become more secure and intelligent?

    See Original Post


  • September 18, 2023 6:24 AM | Anonymous

    Reposted from National Cybersecurity Alliance

    While the idea of getting held up for cryptocurrency in the digital wild west is scary, there are some steps you can take to significantly reduce the likelihood of an attack. We’ll also explain what to do if your data is currently being held for ransom.  

    Essentially, ransomware is malicious software that encrypts a victim’s data, and the criminals who infected the victim’s device demand a ransom for the data’s release. Typically, ransomware (like other forms of malware) infiltrates systems through deceptive emails (i.e., phishing attacks) or software vulnerabilities, causing devastating consequences for individuals and businesses. Individuals can face emotional distress and data loss, while businesses suffer operational disruptions, financial damages, and reputational harm.  

    Ransomware attacks are, unfortunately, common news headlines. Huge corporations, large school districts, and governments have dealt with sickeningly effective ransomware operations. The WannaCry ransomware attack in 2017 infected an estimated 200,000 computers around the world and ended up costing a total of $4 billion, according to recent analysis. According to Verizon’s 2023 Data Breach Investigations Report, ransomware is now the second-most common type of cybersecurity incident, present in almost 16% of all incidents (Verizon found the most common to be denial-of-service attacks). 

    We’ll explain how you can mitigate your risk by adopting some simple-to-learn cybersecurity behaviors. Early detection through antivirus and intrusion systems is vital, and you can back up your data effectively to facilitate recovery without paying any ransom. Following these guidelines strengthens defenses and safeguards against the dire consequences of ransomware. 

    How to prevent ransomware attacks 

    As you might suspect, preventing a ransomware attack is easier than dealing with the frustrating fallout after it has happened. By practicing some good cyber hygiene behaviors, you exponentially increase your chances of staying off the ransomware radar.  

    • Lock down your login with strong passwords, a password manager, and multi-factor authentication 
    • Back up your data regularly to the cloud or an external drive (or both!)  
    • Antivirus software is worth it 
    • Update your software regularly (turning on automatic updates is easiest!)  
    • Avoid the phishing bait 
      • Most ransomware attacks start as a phishing message, which is when a cybercriminal sends you an email, message, social media post, or text that includes a malicious download or link. Here are some common signs of a phishing message:  
        • Does it contain an offer that’s too good to be true?    
        • Does it include language that’s urgent, alarming, or threatening?    
        • Is it poorly crafted writing riddled with misspellings and bad grammar?   
        • Is the greeting ambiguous or very generic?    
        • Does it include requests to send personal information?   
        • Does it stress an urgency to click on unfamiliar hyperlinks or attachments?   
        • Is it a strange or abrupt business request?   
        • Does the sender’s email address match the company it’s coming from? Look for little misspellings like pavpal.com or anazon.com (a quick automated check for lookalike domains is sketched after this list).  
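    A minimal sketch of that lookalike-domain check using Python's standard difflib; the trusted-domain list and similarity cutoff are assumptions chosen for illustration.

    ```python
    import difflib

    TRUSTED_DOMAINS = ["paypal.com", "amazon.com", "microsoft.com"]

    def looks_like_spoof(sender_domain: str, cutoff: float = 0.8) -> bool:
        """Flag domains that closely resemble, but do not match, a trusted domain."""
        if sender_domain in TRUSTED_DOMAINS:
            return False
        near_miss = difflib.get_close_matches(sender_domain, TRUSTED_DOMAINS,
                                              n=1, cutoff=cutoff)
        return bool(near_miss)  # a near-match to a trusted name is suspicious

    print(looks_like_spoof("pavpal.com"))   # True  - one letter off from paypal.com
    print(looks_like_spoof("paypal.com"))   # False - exact trusted match
    print(looks_like_spoof("example.org"))  # False - not similar to any trusted domain
    ```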

    How to detect ransomware 

    Generally, the people behind a ransomware attack want to get your attention, but ransomware might not be so obvious at first. Look out for: 

    • A ransom note or message on your screen demanding payment to unlock your data or device 
    • An inability to access your files, folders, software, or apps 
    • A change in the file extensions or names of your encrypted files (a simple way to spot this is sketched after this list) 
    • A noticeable slowdown or malfunction of your device or network 
    • An increase in network traffic or CPU usage 
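    One rough way to watch for the file-extension sign above is to scan for recently modified files carrying extensions associated with ransomware. The extension list and folder below are assumptions for illustration; real tooling would rely on threat-intelligence feeds and your antivirus or EDR product.

    ```python
    import os
    import time

    # Hypothetical extensions sometimes appended by ransomware; a real list would
    # come from threat-intelligence feeds or your security tooling.
    SUSPICIOUS_EXTENSIONS = {".locked", ".encrypted", ".crypt"}

    def recently_encrypted_files(folder, window_seconds=3600):
        """Return files with suspicious extensions modified within the last hour."""
        cutoff = time.time() - window_seconds
        hits = []
        for root, _dirs, files in os.walk(folder):
            for name in files:
                path = os.path.join(root, name)
                ext = os.path.splitext(name)[1].lower()
                if ext in SUSPICIOUS_EXTENSIONS and os.path.getmtime(path) > cutoff:
                    hits.append(path)
        return hits

    suspects = recently_encrypted_files(os.path.expanduser("~/Documents"))
    if suspects:
        print(f"Possible ransomware activity: {len(suspects)} recently renamed files")
    ```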

    How to recover from a ransomware attack 

    If you suspect a device is infected with ransomware, you want to act fast but remain collected. Don’t start talking to the digital hostage-takers, but reach out for help from cybersecurity experts, law enforcement, and others, like your employer’s security team. Here are some techniques to take on ransomware and get your data back.    

    1. Stay calm and focused. Hackers want to send you into a state of panic – don’t let them! By maintaining your cool, you can make more informed decisions. Even if the situation is dire, a calm approach will ensure you are taking stock of all your options.   
    2. Take a photo of the ransomware message for evidence. 
    3. Quarantine your device by disconnecting from Wi-Fi and unplugging any ethernet cables. Remove any external hard drives or thumb drives ASAP because many ransomware programs will try to corrupt your backups.   
    4. Check your antivirus software to see if it has decryption tools to remove the ransomware. Depending on the malware, your antivirus software might be able to decrypt your data without requiring you to pay a ransom to anyone. Even if you can’t undo the encryption, the software might be able to identify the strain of ransomware which will help with the investigation.   
    5. Wipe your hard drive and reinstall your operating system. Ideally, you will have backed up your files on the cloud or an external hard drive. Wiping your hard drive will eliminate everything you saved on your computer, but it might also eliminate the ransomware program, too.   
    6. Report the ransomware attack to your local police department, the FBI, CISA, and the U.S. Secret Service.
    7. Should you pay the ransom? We recommend never paying out during a ransomware attack because it only fuels more cybercrime. If you have exhausted every option and you believe the files being held hostage are worth the ransom, consider that there is no guarantee that the cybercriminals will decrypt your files even if you pay. Consult with law enforcement, cybersecurity professionals, and legal advisors to assess the situation and make an informed decision.  
    8. Once you have control of your device again, change all your passwords because the hackers could’ve looked through passwords saved on your web browser or elsewhere. 

    You have the power to prevent & beat ransomware 

    While ransomware can seem like one of the scariest things that can happen to you online, you can work to prevent it with some simple cybersecurity habits. Now that you know what to do, you can work fast to mitigate any attack if ransomware turns its ugly eye your way. Most importantly, remember that you are not alone when dealing with an attack – reach out to experts and law enforcement.   

    See Original Post


  • September 04, 2023 4:35 PM | Anonymous

    Reposted from Artnet News

    Climate activists have once again targeted a famous work of art, with a member of the group On2Ottawa throwing pink paint on Tom Thomson’s Northern River (1915) at the National Gallery of Canada in Ottawa and affixing himself to the museum floor on Tuesday.

    “Fortunately, the artwork was not harmed during the incident,” the institution said in a statement. “The work was displayed in a protective glazed panel and has been taken out from display for further evaluation. We expect it will be rehung shortly.”

    The protestor, who has been identified as 28-year-old Kaleb Suedfeld, smeared the paint across the glass with his palm before applying glue to his hand, sitting down, and reading a prepared speech.

    “Fossil fuel industries are destroying the work of art that is our planet and our government is firmly in their grip, doing nothing to stop their crimes,” Suedfeld said. “We are shocked that the governments around the world, including our own, are allowing our beautiful planet, this work of art, to be gutted and burned to fuel the pockets of fossil fuel plutocrats.”

    The museum called Ottawa Police Service to the scene, and they arrested Suedfeld.

    On2Ottawa describes itself as “a non-violent civil disobedience campaign” aimed at prompting government officials “to take urgent and meaningful action on the climate crisis.” In response to Canada’s record-setting wildfires, which since March have affected all 13 provinces and territories, it has staged numerous protests in recent weeks blocking traffic in Ottawa.

    Targeting works of art is a tactic denounced by many art-world authorities, including the Association of Art Museum Directors, which in November insisted that “attacks on works of art cannot be justified, whether the motivations are political, religious, or cultural… Such protests are misdirected, and the ends do not justify the means.”

    But some activists maintain that such disruptive actions are necessary because they attract widespread media attention in a way that petitions or direct outreach to public officials do not.

    “That does not get the coverage that we absolutely need to succeed as a project,” On2Ottawa spokesperson Laura Sullivan told ARTnews, noting that the pink paint tossed at the Thomson painting was washable. To date, 12 members of the activist group have been arrested at protests, which are set to continue over the next week and a half.

    The National Gallery called the incident “unfortunate,” but declined to comment further due to the ongoing police investigation.

    The first art museum climate protest was at the Louvre in Paris in May 2022, where a man smeared cake on the glass protecting Leonardo da Vinci’s Mona Lisa. A campaign launched by Just Stop Oil roughly a month later saw activists target high-profile paintings including works by Vincent van Gogh and J.M.W. Turner at a quartet of U.K. museums.

    From there, the floodgates opened, triggering copycat actions at institutions across Europe and beyond that continue to this day, despite concerns about potential damage to the works and widespread criticism of the trend.

    See Original Post

  • September 04, 2023 4:26 PM | Anonymous

    Reposted from ASIS

    A good mentor can be invaluable. Mentors serve as sounding boards for new ideas, guides for future career moves, and emotional support during times of turmoil and indecision. But not every mentor is a good fit for every mentee, and mentors themselves can always improve their approach and outreach.

    Find the Right Fit

    “If the relationship is a good fit, mentees can gain a lot—guidance, insight, perspective, options, lessons learned, confidence, and sometimes even a lifelong friend,” says Jennifer Holcomb, CPP, PSP, vice president and security solution lead for Anser Advisory. “The pairing should support not only what the mentee is seeking immediately, but also their long-term career path.”

    While the best kind of mentor is an available mentor, notes Miguel Merino, former head of security for SEAT, S.A., and a member of the ASIS Mentoring Program, “it’s important that a mentee’s values align with your own. Otherwise, this will cause friction between the two of us.”

    This mismatch in values or expectations could undermine the value that a mentee receives from the relationship, and it can diminish the advice that the mentor provides.

    “The mentorship relationship is so valuable because both people learn from the experience,” Merino says. “Mentees receive mainly encouragement, motivation, guidance, and professional advice for improvement of new skills, deeper industry knowledge, and a wealth of contacts the mentee needs.”

    In addition, the mentor has the opportunity to glean unexpected knowledge from the relationship, provided they are willing to listen. “Someone who is not willing to learn can never lead or teach,” he notes. “Usually, a mentee is a person with less experience or skills in his or her professional activities than the mentor, but nonetheless—and quite often—mentees have knowledge and skills that mentors don’t.”

    “The great thing about being a mentor is gaining friends and meeting interesting people,” says Alan Greggo, CPP, regional operations manager for Pinkerton Consulting and Investigations and a member of the ASIS Professional Development Steering Committee. “As a mentor, I have learned patience and empathy for what other professionals faced within their professional lives. Sometimes that spills over into their personal lives; it’s inevitable that personal topics will come up in a relationship like this. Mentors must be keenly aware that it is not always evident what their mentee has going on in their lives. Be patient and be a good listener when you don’t have the answer.”

    Tailor Your Guidance

    Common objectives among ASIS Mentoring Program participants include professional development (70 percent), pursuing certification (60 percent), networking (60 percent), career path development (50 percent), management and leadership (40 percent), and career transitioning (10 percent), Greggo says. Those varying priorities make a one-size-fits-all mentoring approach impractical and less valuable.

    Greggo suggests applying situational leadership principles to mentorship, leveraging awareness of the mentee’s abilities and strengths to determine if the mentor needs to apply a hands-on, micromanaging approach or more of a consulting or informal role. This can be tailored based on the mentee’s personality as well as their current or prospective roles.

    “When mentoring people at different levels of their career, it’s important to understand what skills they have already and what they are seeking to learn,” Holcomb says. “Then you base your approach given their starting point. For example, if a mentee has successfully managed a project, you could tailor your mentorship to guide them on how to run a program. But if the mentee is still learning how to manage their day-to-day, I would guide them on managing tasks and time.

    “It is important to note the difference in how each mentee learns and understands feedback,” she continues. “I can improve my impact on a mentee and their ultimate success if I adapt my approach to connect with the mentee and build on their experience.”

    For instance, she says, managers are often challenged with determining how to motivate individual employees and how to incorporate those strategies into a cohesive and authentic management style. Mentees across the board often struggle with technical writing skills, Holcomb notes. In response, she provides context, feedback, guidance, and continued opportunities to write and improve.

    “To me, the mentoring approach to a frontline employee is much different than it is to a long-term security professional,” Greggo says. “But to understand what part of the situational leadership spectrum is needed, every mentoring engagement needs to start with a meeting to determine the goals and objectives the mentee wants to achieve from that engagement. Use the mentee’s resume to understand their experience and engagement level. This is a good tool to use to form questions for those first few meetings. If the two participants don’t take time for understanding and agreement on [those objectives], the engagement won’t be successful. It’s likely to be an unorganized, sporadic, and messy experience.”

    Melissa Mack, CPP, agrees that setting clear priorities is essential for an effective mentorship. “I encourage employees to identify where they want to be in three, five, and 10 years,” says Mack, who is managing director at Pinkerton and a mentor within the ASIS Mentoring Program. “Conduct a personal skill sets, traits, and qualifications gap analysis and put strategies in place to close those gaps in order to reach their professional development goals. The most effective development approach is to identify and be able to articulate what your professional value proposition is.”

    In addition, be wary of being too prescriptive in your mentorship. Successful mentors don’t push directives but offer perspectives instead, says Herbert Clay, CPP, director of corporate security at Sony Electronics. This suggestion-based approach can foster more creative thinking and develop the mentee’s reflexive and adaptive management style.

    “Through that dialogue, through that sounding board, it really helped me develop the reflexive nature of being able to think through a problem based on the catalog of information I developed from my mentors and then with my own experience,” Clay says.

    And after those thought-provoking conversations, don’t forget to follow up to see what conclusions the mentee reached, what the results were, and how to improve for next time, he adds.

    Rethink Etiquette

    After the COVID-19 pandemic began and organizations shifted broadly to remote or hybrid work, longstanding workplace norms and etiquette fell by the wayside. Many companies relaxed rules around work wardrobes, retiring neckties and embracing blue jeans and more casual wear. Work hours have remained flexible for many people, enabling them to get work done around family obligations. However, as many organizations push for more in-person work and office attendance, some of those workplace etiquette norms are making a resurgence… and they are meeting some resistance, both from new workers and longtime employees who wholeheartedly embraced the different ways of working.

    “Setting the example of what’s appropriate is the best way to get that point across,” Greggo says. “Leaders don’t ask employees to complete tasks that they would not do themselves. In a like manner, leaders don’t lecture employees on appropriate behavior in the workplace without themselves being appropriate every day. If there is a problem, I work one-on-one with the individual in a respectful and empathetic manner to first understand where the employee is coming from, what’s causing the inappropriate behavior, and then explain what is expected. There has to be an agreement as to what appropriate looks like, and the discussion must lead to change.”

    “As a general rule, an employee must be congratulated in public and reprimanded in private,” Merino says. “Take into consideration that our intention is to obtain an improvement or a change with a positive reaction from the employee about what’s appropriate in the workplace. We must treat the subject objectively and gather factual information in advance, before having an individual interview.”

    Private interviews to correct behavior from an employee or a mentee should follow a four-step model, Merino says:

    1. Reinforce the employee’s/mentee’s strengths.
    2. Describe the facts with firm arguments.
    3. Describe the consequences or issues with the person’s action.
    4. Agree with the employee/mentee about his or her adherence to what’s appropriate in the workplace.

    Also, especially because workplace norms keep shifting, mentors and managers should emphasize empathy when course-correcting on behavior.

    “The pandemic crippled the opportunity for collaboration that advances the team as a whole,” Mack says. “It did, however, add an opportunity for workers to get to know each other personally in the sense that now we were looking into someone’s life—home, family, pets, etc.”

    When a change is necessary, though, mentors and managers should adapt their approach toward more of a coaching perspective, she says. “It’s more about providing guidance for the employee to buy into improvement of their personal behavior because they personally internalize the what and why versus the mentorship approach sharing their own experiences which the employee may feel doesn’t apply or resonate.”

    Reach Out First

    “I learned at a very early stage in my career that if I wanted my employees to be successful and advance, it was important to have a strong and effective training program,” Greggo says. “Untrained employees lose interest quickly and lose motivation because of the stress of not knowing how to do the job. Dedicating a job-specific trainer for the employees was helpful, but a formal, documented training program with progress reports and testing was necessary.”


    If managers take the first step to push employees toward mentorships (internal or external to the organization), education, or training, it demonstrates a level of personal investment in the employee’s potential.

    “New and young employees starting out in their careers are not always thinking in terms of professional development,” Greggo notes. “Some organizations argue that employees are responsible for their own professional development and don’t really have a lot to offer for employees to pick from in terms of development. The employees’ manager should be working to introduce them to professional development options if their company has them.”

    Merino agrees: “Mentoring new hires so they have someone to talk to other than their manager or creating an individualized career growth plan, including soft skills, can be a good starting point. In my opinion, employees are very grateful that their manager maintains good communication lines with them; offers them effective support and mentoring; promotes their training and participates; encourages them to assume new responsibilities; and above all, that the manager is an example for them.”

     See Original Post

  • September 04, 2023 4:12 PM | Anonymous

    Reposted from DHS, S&T

    The Department of Homeland Security (DHS) Science and Technology Directorate (S&T) has released an Operational Field Assessment (OFA) of a gunshot detection system developed for first responders.

    First used by the military to detect incoming fire, gunshot detection systems use multiple sensor units to detect and triangulate the precise location of firearm discharges.
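    In two dimensions, that triangulation can be framed as solving for the point whose predicted arrival-time differences best match the measured ones. The sketch below assumes noise-free timestamps, a constant speed of sound, and made-up sensor coordinates; real systems must contend with noise, echoes, and elevation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    SPEED_OF_SOUND = 343.0  # m/s, assumed constant

    # Hypothetical sensor positions in metres.
    sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])

    def arrival_deltas(source):
        """Arrival-time differences relative to the first sensor for a given source."""
        dists = np.linalg.norm(sensors - source, axis=1)
        return (dists[1:] - dists[0]) / SPEED_OF_SOUND

    # Simulate measurements from a known shot location, then recover it.
    true_source = np.array([30.0, 40.0])
    measured_deltas = arrival_deltas(true_source)

    def residuals(candidate):
        return arrival_deltas(candidate) - measured_deltas

    estimate = least_squares(residuals, x0=np.array([50.0, 50.0])).x
    print("Estimated shot location:", np.round(estimate, 1))  # approx. [30. 40.]
    ```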

    Over the last decade, law enforcement agencies in many large- and medium-sized cities have implemented gunshot detection systems. Most agencies employ fixed systems, where the sensors are installed indoors or at fixed outdoor locations to provide gunshot detection over a large, pre-defined area (often many square miles) to an accuracy of within a few feet.

    An October 2022 Department of Justice-funded report analyzed how agencies across the country are currently using gunshot detection systems. The report concluded that more research is still needed to show whether gunshot detection systems are effective at deterring gun violence or reducing crime, but that there are proven benefits of these systems for first responders. Benefits to responders include significant reductions in response times and better situational awareness.

    Gunshot detection systems are typically integrated into computer-aided dispatch (CAD) systems, enabling real-time alerting. Since many gunshots are never reported to 911, this capability enables responders to immediately dispatch to the scene regardless of whether 911 was called. Some systems will bypass the 911 system entirely and provide alerts directly to officers. This reduces response times by several vital minutes, which gives responders a better chance at neutralizing the threat and reducing casualties.

    Gunshot detection systems can also provide critical situational awareness information to first responders before arrival on scene, such as precise locations where gunshots were fired and whether multiple types of gunshots were detected, suggesting multiple shooters.

    The gunshot detection system evaluated by S&T in the new OFA improves on the technology currently available on the market in several ways.

    First, the system is designed to be portable. While there are portable systems currently on the market, S&T’s system prioritized the ease with which the technology could be installed, moved, and set up by responders without requiring more than two officers or technical expertise.

    Second, most current systems use acoustic technology to detect the sound of gunshots, but this system uses both light and sound. The system can detect the unique flash of light produced when a bullet is fired. This added light detection makes the system more accurate than systems that rely on sound alone. It is less likely to generate false positives from gun-like sounds such as a vehicle backfiring or fireworks.
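    As a rough illustration of that fusion logic, the sketch below confirms a gunshot only when an optical (muzzle-flash) detection and an acoustic detection occur within a short time window. The event structure and window length are assumptions for illustration, not the evaluated system's design.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SensorEvent:
        kind: str        # "acoustic" or "optical"
        timestamp: float # seconds since some shared clock

    def confirmed_gunshot(events, window: float = 0.5) -> bool:
        """Require a muzzle-flash detection close in time to an acoustic detection."""
        flashes = [e.timestamp for e in events if e.kind == "optical"]
        bangs = [e.timestamp for e in events if e.kind == "acoustic"]
        return any(abs(f - b) <= window for f in flashes for b in bangs)

    # A car backfire produces a bang but no flash, so it is not confirmed.
    backfire = [SensorEvent("acoustic", 100.0)]
    gunshot = [SensorEvent("optical", 200.0), SensorEvent("acoustic", 200.02)]
    print(confirmed_gunshot(backfire))  # False
    print(confirmed_gunshot(gunshot))   # True
    ```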

    Six law enforcement officers from Iowa, New Hampshire, and New York served as evaluators to test and provide feedback on the gunshot detection system. These officers set up the outdoor sensors, overlaid maps and sensors using the situational awareness software, observed gunshot detection notifications on a PC and mobile device, and participated in a debrief with S&T’s National Urban Security Technology Laboratory (NUSTL) to gather feedback.

    S&T released a Tech Speak Minisode featuring interviews with the evaluators about their feedback on the system earlier this year. The full results of the operational field assessment are available in the report.

  • September 04, 2023 3:42 PM | Anonymous

    Reposted from CISA/DHS

    Today, the Department of Homeland Security announced the availability of $374.9 million in grant funding for the Fiscal Year (FY) 2023 State and Local Cybersecurity Grant Program (SLCGP). State and local governments face increasingly sophisticated cyber threats to their critical infrastructure and public safety. Now in its second year, the SLCGP is a first-of-its-kind cybersecurity grant program specifically for state, local, and territorial (SLT) governments across the country to help them strengthen their cyber resilience. Established by the State and Local Cybersecurity Improvement Act, and part of the Bipartisan Infrastructure Law, the SLCGP provides $1 billion in funding over four years to support SLT governments as they develop capabilities to detect, protect against, and respond to cyber threats. This year’s funding allotment represents a significant increase from the $185 million allotted in FY22, demonstrating the Administration and Congress’s commitment to help improve the cybersecurity of communities across the nation. 

    SLCGP is jointly administered by the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA). CISA provides expertise and guidance on cybersecurity issues while FEMA manages the grant award and allocation process. Award recipients may use funding for a wide range of cybersecurity improvements and capabilities, including cybersecurity planning and exercising, hiring cyber personnel, and improving the services that citizens rely on daily.

    State and local governments have until October 6 to apply for this FY23 grant opportunity.

    For more information and helpful resources on the State and Local Cybersecurity Grant Program, visit CISA’s webpage: cisa.gov/cybergrants

  • September 04, 2023 3:35 PM | Anonymous

    Reposted from CISA

    As valued community members, we want to share an exciting development. We are thrilled to announce the launch of our brand-new Commercial Facilities Sector Management Team (SMT) Update, Arenas to Zoos!

    Arenas to Zoos is a publication designed to bring you the latest updates, exclusive insights, and engaging content in one place. Whether you're a long-time sector partner, a new subscriber, or someone interested in Cybersecurity and Infrastructure Security Agency (CISA) bulletins, this update is designed to keep you informed.

    What can you expect from Arenas to Zoos?

    Our update will be sent monthly, ensuring you receive valuable content without feeling overwhelmed. Please share this news with colleagues who might also find Arenas to Zoos helpful.

    Your feedback and suggestions are always appreciated as we strive to make this update an enriching experience for you. Should you have any questions or need assistance, please don't hesitate to contact us at CommercialFacilitiesSector@cisa.dhs.gov


  • September 04, 2023 3:27 PM | Anonymous

    Reposted from CISA

    Discussions of artificial intelligence (AI) often swirl with mysticism regarding how an AI system functions. The reality is far simpler: AI is a type of software system.

    And like any software system, AI must be Secure by Design. This means that manufacturers of AI systems must consider the security of the customers as a core business requirement, not just a technical feature, and prioritize security throughout the whole lifecycle of the product, from inception of the idea to planning for the system’s end-of-life. It also means that AI systems must be secure to use out of the box, with little to no configuration changes or additional cost.

    AI is powerful software

    The specific ways to make AI systems Secure by Design can differ from other types of software, and some best practices for safety and security are still being fully defined. Additionally, the manner in which adversaries may choose to use (or misuse) AI software systems will undoubtedly continue to evolve – issues that we will explore in a future blog post. However, fundamental security practices still apply to AI software.

    AI is software that does fancy data processing. It generates predictions, recommendations, or decisions based on statistical reasoning (more precisely, this is true of machine learning types of AI). Evidence-based statistical policy making or statistical reasoning is a powerful tool for improving human lives. Evidence-based medicine understands this well. If AI software automates aspects of the human process of science, that makes it very powerful, but it remains software all the same.

    Software should be built with security in mind

    CEOs, policymakers, and academics are grappling with how to design safe and fair AI systems, and how to establish guardrails for the most powerful AI systems. Whatever the outcome of these conversations, AI software must be Secure by Design.

    AI software design, AI software development, AI data management, AI software deployment, AI system integration, AI software testing, AI vulnerability management, AI incident management, AI product security, and AI end-of-life management – for example – all should apply existing community-expected security practices and policies for broader software design, software development, etc. AI engineering teams continue to take on too much technical debt where they have avoided applying these practices. As the pressure to adopt AI software systems increases, developers will be pressured to take on technical debt rather than implement Secure by Design principles. Since AI is the “high interest credit card” of technical debt, it is particularly dangerous to choose shortcuts rather than Secure by Design.

    Some aspects of AI, such as data management, have important operational differences with expected practices for other software types. Some security practices will need to be augmented to account for AI considerations. The AI engineering community should start by applying existing security best practices. Secure by Design practices are a foundation on which other guardrails and safety principles depend. Therefore, the AI engineering community should be encouraged to integrate or apply these Secure-by-Design practices starting today.

    AI community risk management 

    Secure by Design “means that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure.” Secure by Design software is designed securely from inception to end-of-life. System development life cycle risk management and defense in depth certainly apply to AI software. The larger discussions about AI often lose sight of the workaday shortcomings in AI engineering as related to cybersecurity operations and existing cybersecurity policy. For example, systems processing AI model file formats should protect against untrusted code execution attempts and should use memory-safe languages. The AI engineering community must institute vulnerability identifiers like Common Vulnerabilities and Exposures (CVE) IDs. Since AI is software, AI models – and their dependencies, including data – should be captured in software bills of materials. The AI system should also respect fundamental privacy principles by default.
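    As a loose illustration of that last point, the snippet below sketches what a software-bill-of-materials-style record for an AI model and its data dependency might capture. The field names are hypothetical; real inventories would follow an established SBOM format such as SPDX or CycloneDX.

    ```python
    import json

    # Illustrative only: these field names are invented for this sketch, not a
    # formal SBOM schema.
    model_sbom_entry = {
        "component": "incident-classifier",
        "version": "2.1.0",
        "type": "machine-learning-model",
        "dependencies": [
            {"name": "training-dataset", "version": "2023-06-snapshot",
             "sha256": "<dataset digest>"},
            {"name": "numpy", "version": "1.26.0"},
        ],
        "known_vulnerabilities": [],  # populated from CVE lookups during builds
    }

    print(json.dumps(model_sbom_entry, indent=2))
    ```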

    CISA understands that once these standard engineering, Secure-by-Design and security operations practices are integrated into AI engineering, there are still remaining AI-specific assurance issues. For example, adversarial inputs that force misclassification can cause cars to misbehave on road courses or hide objects from security camera software. These adversarial inputs that force misclassifications are practically different from standard input validation or security detection bypass, even if they’re conceptually similar. The security community maintains a taxonomy of common weaknesses and their mitigations – for example, improper input validation is CWE-20. Security detection bypass through evasion is a common issue for network defenses such as intrusion detection system (IDS) evasion.
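    A toy example of why adversarial inputs differ from ordinary malformed input: for a simple linear scoring model, a perturbation far too small to trip input validation can still flip the decision. The weights and input values below are invented purely for illustration.

    ```python
    import numpy as np

    # Toy linear "threat classifier": score = w . x + b, flagged when score > 0.
    w = np.array([0.8, -0.5, 0.3])
    b = -0.2
    x = np.array([0.4, 0.3, 0.2])                  # an input just over the decision boundary

    score = w @ x + b
    print("original score:", round(score, 3))       # > 0: classified as a threat

    # The smallest L2 perturbation that flips a linear decision moves along -w.
    epsilon = (score + 0.01) / np.linalg.norm(w)    # just enough to cross the boundary
    x_adv = x - epsilon * w / np.linalg.norm(w)

    print("adversarial score:", round(w @ x_adv + b, 3))       # <= 0: now "benign"
    print("perturbation size:", round(np.linalg.norm(x_adv - x), 3))  # tiny change
    ```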

    See Original Post

  • September 04, 2023 3:21 PM | Anonymous

    Reposted from CISA

    Over the last decade, unmanned aircraft systems (UAS or “drones”) have become a regular feature of American life.  We use them for recreation, research, and commerce, and we look forward to realizing the benefits of future drone innovation.  But the proliferation of this new technology has also introduced new risks to public safety, privacy, and homeland security.  Malicious actors increasingly use UAS domestically to commit and enable crimes, conduct illegal surveillance and industrial espionage, and thwart law enforcement efforts at the local, state and Federal level.

    To meet this evolving threat, the Biden Administration has released the attached 2023 updates to its counter-UAS legislative proposal from last year.  This comprehensive proposal strengthens existing authorities to address the current and future threat while protecting the airspace, the communications spectrum, and the privacy, civil rights, and civil liberties of the American people.  Teams of security professionals from the Departments of Homeland Security, Justice, Defense, Energy, and State, as well as the Intelligence Community and regulatory professionals from the Federal Aviation Administration, Federal Communications Commission, and National Telecommunications and Information Administration, collaborated on this proposal.  Through this proposal and the Administration’s Domestic Counter-Unmanned Aircraft Systems National Action Plan, we are working to expand where we can protect against nefarious UAS activity, who is authorized to take action, and how it can be accomplished lawfully.  We seek measured expansions of authority while safeguarding the airspace, communications spectrums, individual privacy, civil rights, and civil liberties.  To promote all of these ends, we urge Congress to adopt legislation to close critical gaps in existing law and policy that currently impede government and law enforcement from protecting the American people and our vital security interests.

    With respect to the authorities requested for the Department of Homeland Security and Department of Justice, the Administration’s 2023 Legislative Proposal is nearly identical in substance to S. 1631, championed by Senators Peters, Johnson, Sinema, and Hoeven.  Both call for a measured expansion of Department of Homeland Security and Department of Justice counter-UAS authorities.  Built into the architecture of both are critical First and Fourth Amendment protections designed to harness the good applications of drones while guarding against misuse.

    We fully support S. 1631 and applaud the leadership of its sponsors.  However, the Administration’s comprehensive legislative proposal highlights additional counter-UAS needs across other federal departments and agencies.  Please let us know if you have any questions or feedback on the proposal, and thank you for your continued support.

    CISA will coordinate updates on the national plan and legislative proposal at future partnership engagements to allow for direct discussions. In the interim, if you have any questions, please reach out to the CISA sUAS Security Branch at sUASsecurity@cisa.dhs.gov.


  • September 04, 2023 3:11 PM | Anonymous

    Reposted from CISA

    Today, the Cybersecurity and Infrastructure Security Agency (CISA) released the FY2024-2026 Cybersecurity Strategic Plan, which guides CISA’s efforts through 2026 and outlines a new vision for cybersecurity, a vision grounded in collaboration, in innovation, and in accountability.  

    Aligned with the National Cybersecurity Strategy and nested under CISA’s 2023–2025 Strategic Plan, the Cybersecurity Strategic Plan provides a blueprint for how the agency will pursue a future in which damaging cyber intrusions are a shocking anomaly, organizations are secure and resilient, and technology products are safe and secure by design. To this end, the Strategic Plan outlines three enduring goals: 

    • Address Immediate Threats. We will make it increasingly difficult for our adversaries to achieve their goals by targeting American and allied networks. We will work with partners to gain visibility into the breadth of intrusions targeting our country, enable the disruption of threat actor campaigns, ensure that adversaries are rapidly evicted when intrusions occur, and accelerate mitigation of the exploitable conditions that adversaries recurringly target. 
    • Harden the Terrain. We will catalyze, support, and measure adoption of strong practices for security and resilience that measurably reduce the likelihood of damaging intrusions. We will provide actionable and usable guidance and direction that helps organizations prioritize the most effective security investments first and leverage scalable assessments to evaluate progress by organizations, sectors, and the nation.  
    • Drive Security at Scale. We will drive prioritization of cybersecurity as a fundamental safety issue and ask more of technology providers to build security into products throughout their lifecycle, ship products with secure defaults, and foster radical transparency into their security practices so that customers clearly understand the risks they are accepting by using each product. Even as we confront the challenge of unsafe technology products, we must ensure that the future is more secure than the present – including by looking ahead to reduce the risks posed by artificial intelligence and the advance of quantum-relevant computing. Recognizing that a secure future is dependent first on our people, we will do our part to build a national cybersecurity workforce that can address the threats of tomorrow and reflects the diversity of our country. 

    Learn more about CISA’s Cybersecurity Strategic Plan at https://www.cisa.gov/cybersecurity-strategic-plan 

  
 
