Social Engineering Research
A paper detailing relevant background information on social engineering.
For my first independent study course in the human interaction program, I decided to research social engineering. Human elements in security are the least consistent, yet most consistently vulnerable, aspect of any computer system. Because of this, social engineering is a key consideration in any security focus. To produce the paper, I identified eighteen articles, web pages, and other sources of information that could be useful.
A number of these articles focused on the ethical considerations of social engineering, which I would argue are even more important to keep in mind when exploiting human behavior than when exploiting software (though ethical considerations should always be part of any security testing and research). Because this research was purely theoretical, no human subjects committee needed to be consulted. In any case, the paper leads with the ethical considerations before reviewing prior work and classifying social engineering attack types.
Following the ethical considerations, various types of phishing are examined, along with some defensive techniques to mitigate these attacks. I split the techniques into three classifications: purely physical techniques, such as information gathering, penetration, and research; technological techniques, such as phishing; and combined physical and technological techniques, which include my favorite technique, USB baiting.
In the final section, I briefly explore the psychological explanations behind these attacks, both to understand what defensive mitigations should focus on and to inform the design of new attacks and techniques.
​
As with any research, the works cited section closes the paper. All eighteen sources referenced are included, with unused pieces labeled as such.
​
If you would like a copy of the paper, the download button below contains the final document I produced for the semester.
​
Simulated Phishing
Simulated phishing campaigns, and my role as Security Awareness Trainer.
As a member of the security team at Raymour & Flanigan, my primary role is Security Awareness Trainer. This role has two major functions: to deliver training to over 4,000 associates, and to test the efficacy of that training. We use a Security Awareness Training (SAT) platform that provides training modules, including videos and interactive text-based trainings, mechanisms for conducting simulated phishing campaigns, and tools to track various KPIs such as participation in the program, pass / fail rates for phishing campaigns, user vulnerability, and more. The program was launched when I was promoted into the position, and I have the privilege of using these tools to build it and instill a security culture.

Our program operates on a monthly cycle, as research has shown that regular micro-trainings result in a more educated and secure user base. Trainings are assigned at the beginning of the month and are based on current threats across all of retail, specific threats we are facing, or the results of prior simulated phishing campaigns. Certain optional "free play" modules are also rotated in and out around this time, often based on topics associates have shown interest in. Sometime between the beginning of the month and its last week, the simulated phish is delivered over the course of a week. Simulated phishing campaigns go through multiple rounds of review and are crafted with both ethical and practical concerns in mind. Threats that our SAT vendor has flagged as "sensitive" are never delivered as phishing campaigns, and those topics, though they would likely be very successful, are never selected for other campaigns either. Campaigns run for a predetermined length of time, selected based on baseline testing but subject to change. Users who interact with a phishing email are assigned additional training. Borrowing from my experience with HCI and UX, I designed a procedure wherein random users who fail the phish are invited to give feedback on their experience in an interview. Interviews try to understand why a user failed a particular phish, find out whether they learned from the mistake, and gauge whether the corrective training felt relevant and helpful. Associates are also given an opportunity to offer further feedback during these interviews. The same procedure can be used for post-incident investigations involving a successful social engineering attack. Lastly, after all the data has been collected, a report is produced for management and delivered before the monthly security report.

Initial buy-in for the program was challenging, and I had to work with not only my own team but multiple other teams within the company to convey the program's value to our associates and leadership. Initial KPIs for the program were shaky, but after only three months we began to meet our goals. As KPIs improve, I adjust the difficulty of my various phishing emails; since the goal is to harden our defenses as much as possible, it is important to keep challenging our team. A number of simulated campaigns have resulted in our end users scoring in the top percentiles of the retail industry as a whole. I am happy to provide redacted and dissected samples of my work to prospective employers; due to the sensitive nature of quality phishing emails, this material will not be made publicly available at this time.
​

Enterprise-Level Identity and Access Management Projects
A curated set of IAM projects I designed and deployed.
1. Created Password Blocklist Automation
Challenge:
When I joined the security team at my organization, we were implementing a password blocklist solution. Our security engineer had a script that collected the NTLM hashes of all the accounts in the organization and compared them against the rockyou dictionary. I was tasked with creating the script that took action on the results. A second version was created with a 15GB word list, and a third version was created using an 84GB word list.
Solution:
Understanding the input to my script was the first step for the first version. I was given a CSV containing a number of common fields for users whose password appeared in the dictionary file. Of these fields, I identified EmailAddress, SamAccountName, and Enabled as the useful data. When pulling the user objects from AD, I included the properties LastLogonDate and ChangePasswordAtLogon.
Having all the data I needed, I created a script that checks for a new input file, reads its data, and, for each user, verifies that the account is enabled and that the password has not been changed since the file was created*. Eventually, a bypass list was created for service accounts and other sensitive accounts, since later steps would disrupt business operations. For accounts that met these conditions, the script acted as follows: if the account was not already required to change its password at next logon, it sent a warning email Monday through Saturday, or forced a password change on Sunday; if the password already had to be changed at next logon and the account had not logged in for 21 or more days, it reset the password to a long, random string.
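To illustrate the branching described above, here is a minimal Python sketch of the per-account decision logic. The production script is PowerShell and pulls these values from AD; the CSV field names match those described above, while the function name, bypass-list handling, and example values are illustrative assumptions, and the "password already changed since the report" check is omitted for brevity.

```python
# Illustrative sketch only; the production version is a PowerShell script.
from datetime import datetime, timedelta

STALE_DAYS = 21  # accounts already flagged and idle this long get a random reset

def choose_action(row, must_change_at_logon, last_logon, bypass_list, now=None):
    """Return the action to take for one account from the weak-password report.

    row: a dict parsed from the report CSV (SamAccountName, Enabled, EmailAddress).
    must_change_at_logon / last_logon: values of ChangePasswordAtLogon / LastLogonDate.
    """
    now = now or datetime.now()
    if row["SamAccountName"] in bypass_list or row["Enabled"] != "True":
        return "skip"                       # service/sensitive accounts, disabled accounts
    if not must_change_at_logon:
        # Sunday (weekday 6) forces the change; every other day only warns.
        return "force-change" if now.weekday() == 6 else "warn-email"
    if last_logon is None or last_logon < now - timedelta(days=STALE_DAYS):
        return "reset-random-password"      # already flagged and idle 21+ days
    return "skip"

# Example: a still-enabled account, not yet flagged, evaluated on a Sunday.
row = {"SamAccountName": "jdoe", "Enabled": "True", "EmailAddress": "jdoe@example.com"}
print(choose_action(row, False, datetime(2024, 1, 1), bypass_list=set(),
                    now=datetime(2024, 1, 7)))   # -> "force-change"
```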
Between the first and second iterations, I resolved a minor implementation bug that caused accounts’ passwords to be changed too soon: passwords were being reset as soon as an account flagged to change its password at next logon hit the reset logic. I discovered this while making changes to include accounts with the default password**, as the original script that was to handle them had been decommissioned.
The second iteration’s main challenge was the expanded word list. Our security engineer needed help managing the size of the dictionary, as PowerShell and text editing software could not hold the data in their buffers. To resolve this, I created a Python script to split the large file into chunks, much like how data is split for network transfer. I used a Large Language Model to create the base script, then reviewed and edited it for function and clarity. The script included a function with parameters to specify the input file and the output file size(s). I chose Python for its ability to process strings effectively, and because PowerShell, even on our server, still ran into memory issues due to the size of the file. This resolved the file size problem and vastly improved our weak password catches, with around 100 users with breached passwords identified on the next script execution.
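A minimal sketch of that splitter, assuming the word list is a plain-text file with one password per line and the chunk size is given in bytes; the actual script's interface differed, and the names and default size here are illustrative.

```python
# Illustrative sketch of the word-list splitter; names and defaults are assumptions.
import os

def split_wordlist(in_path, out_dir, chunk_bytes=512 * 1024 * 1024):
    """Split a huge word list into roughly chunk_bytes-sized files without loading it all."""
    os.makedirs(out_dir, exist_ok=True)
    part, written, out = 0, 0, None
    with open(in_path, "r", encoding="utf-8", errors="replace") as src:
        for line in src:                      # stream line by line; memory stays constant
            if out is None or written >= chunk_bytes:
                if out:
                    out.close()
                out = open(os.path.join(out_dir, f"chunk_{part:04d}.txt"),
                           "w", encoding="utf-8")
                part, written = part + 1, 0
            out.write(line)
            written += len(line.encode("utf-8"))
    if out:
        out.close()
    return part  # number of chunk files produced
```

Streaming one line at a time is the key design choice: memory use stays flat no matter how large the input file is, which is exactly what the buffer-bound tools could not do.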
Between the second and third iterations, a custom chunk file was created to house easy-to-guess passwords specific to our organization, should an attacker choose to password spray or crack any captured password hashes. This was prompted by end users circumventing the administratively controlled password complexity*** and appearing on the list multiple weeks in a row with predictable passwords and patterns.
The third iteration saw our dictionary expand to 84GB and billions of passwords, capturing over 500 more users with breached passwords. To facilitate this, I modified the Python script from the second iteration to be fully executable from the terminal for ease of use and repeatability, where originally it was less user-friendly. The file splitting was successful, but we needed to search the resulting files for the default password**, which was challenging due to the number of files created: 164. To this end, I created a Python script to search through the files. While this may have taken longer than a simple, manual binary search, the script has proven useful for faster searches during other queries. Knowing that the data was pre-sorted, I was able to optimize the script to complete searches among billions of passwords in a handful of minutes by skipping irrelevant chunks.
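A sketch of that search, under the assumption that the chunks are sorted in a byte-wise order and named sequentially (as the splitter sketch above would produce); the file pattern and helper names are illustrative.

```python
# Illustrative sketch of the chunk search; assumes byte-wise sorted, sequentially named chunks.
import glob
import os

def chunk_bounds(path):
    """Return the first and last entry of a sorted chunk file without reading it all."""
    with open(path, "rb") as f:
        first = f.readline().rstrip(b"\r\n")
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(size - 4096, 0))           # only the tail is needed for the last entry
        tail = f.read().splitlines()
        last = tail[-1] if tail else first
    return first, last

def find_password(target, chunk_dir):
    """Scan only the chunks whose [first, last] range could contain the target."""
    target_b = target.encode("utf-8")
    for path in sorted(glob.glob(os.path.join(chunk_dir, "chunk_*.txt"))):
        first, last = chunk_bounds(path)
        if not (first <= target_b <= last):
            continue                          # sorted data lets us skip this chunk entirely
        with open(path, "rb") as f:
            for line in f:
                if line.rstrip(b"\r\n") == target_b:
                    return path               # found: report which chunk held it
    return None
```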
*: I recognize that users may have still chosen a breached password, but this was an intentional design choice to prioritize the user experience, since our end users are very sensitive to IT processes. Losing that small percentage of possible weak passwords meant that nobody would complain about having to reset a password twice in one day and thereby jeopardize the project.
**: The default password was addressed in a later project.
***: Password complexity was eventually enforced on the domain after legacy restrictions had been removed.

2. The Active Directory Account Audit
Challenge:
While investigating a number of accounts in our AD environment that were likely shared by end users, I discovered a number of legacy accounts whose passwords did not expire. This prompted an audit of the entire environment and led to the discovery of over 500 accounts that needed investigation.
Solution:
To determine the fate of the accounts, I needed to pre-screen the list for any high-priority accounts, determine each account's ownership, active status, and interactive status, remediate as needed, and document the findings.
Before sifting through the hundreds of accounts, I first checked for any known VIP accounts. As some accounts were operated by important end users, I called those individuals directly to resolve their accounts’ passwords first. Since we had already implemented a password blocklist, this involved explaining to the end users that their accounts would have password rotation enabled and reminding them of the password complexity requirements.
To determine the ownership of the other accounts, I first coordinated with members of the IT department to find out whether accounts were primarily IT owned and operated, or primarily operated by end users in the field. This essentially split the task in two: accounts whose investigation into usage, interactivity, and remediation I could delegate to other teams in IT, and accounts I would need to follow up on in the field.
To determine the active status of an account, I used a combination of automated and manual processes. A PowerShell script emailed the accounts whose status and owner were unknown, up to three times, with the warning that the accounts could be disabled and deleted if no response was given. This helped split the accounts into more manageable chunks and helped prioritize certain accounts.
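The notification pass might look something like the sketch below; the production version was a PowerShell script using our internal mail relay, so the SMTP host, addresses, message text, and reminder limit here are illustrative assumptions.

```python
# Illustrative sketch of the reminder pass; host, addresses, and limits are assumptions.
import smtplib
from email.message import EmailMessage

MAX_REMINDERS = 3
WARNING = ("This account appears to be unused. Reply to claim ownership; "
           "otherwise it may be disabled and eventually deleted.")

def send_reminders(accounts, reminder_counts, smtp_host="smtp.example.internal"):
    """accounts: iterable of (sam_account_name, email); reminder_counts: dict of strikes."""
    with smtplib.SMTP(smtp_host) as smtp:
        for sam, address in accounts:
            if reminder_counts.get(sam, 0) >= MAX_REMINDERS:
                continue                      # three strikes: queue for disablement review
            msg = EmailMessage()
            msg["From"] = "infosec@example.internal"
            msg["To"] = address
            msg["Subject"] = f"Action required: unverified account {sam}"
            msg.set_content(WARNING)
            smtp.send_message(msg)
            reminder_counts[sam] = reminder_counts.get(sam, 0) + 1
```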
Once I had a list of likely inactive accounts, I performed lookups in AD for LastLogonDate on the user objects, in our SIEM for login events, and in our email gateway for any recent email activity, followed by one final email to any likely owners. If all of this indicated an account that was not in use, batches of accounts were submitted to the change control panel for review and turned off. A member of the Systems team and I monitored for any indication of process failures, and the Help Desk team was notified to keep an eye out for any of the disabled accounts and immediately escalate to me in those cases. Where ownership was determined, the account owners were instructed to contact me or the Help Desk if any processes suddenly stopped working when a change went through. Of the roughly 150 accounts that were disabled, only one needed to be turned back on over the course of this months-long phase of the process.
For accounts that were active, I determined how many people had access to the account, what the process was when members of the team left the organization, and how often the password was rotated. I also collected information to update the AD user object description data and updated internal documentation to reflect which team owned and operated the account, as well as which member of the owning team would be the best point of contact.
Remaining accounts were either entered into our privileged access management (PAM) solution, CyberArk, which ensures password rotation and that passwords will not be present in breach lists, or were recorded as security exceptions in our risk management portal.

3. Created AD Group Membership Access Delta Calculation Automation
Challenge:
My organization currently uses discretionary access control (DAC), as opposed to role-based access control (RBAC), access control lists (ACL), or attribute-based access control (ABAC). Because of this, it has often been difficult to observe when access creep is occurring, even though title changes wipe permissions. Other teams can add and remove permissions on accounts and do not log those changes, and the organization is not willing to restructure who can grant permissions. This was a gap I identified when I mapped out the user object life-cycle for our organization.
Solution:
Gaining visibility into which accounts have gained what access gives the security team hard data to present to leadership in support of more secure decisions: limiting who can grant permissions, moving away from DAC, tracking who has which permissions and when they change, or any combination of the above.
​
To get this data, I created a PowerShell script that collects all user objects in the domain. On the very first execution, the script generates a baseline CSV that details titles, groups, and how many people with each title belong to each group. This is done using a nested hash map: for each user in the list, their title is used as a key in the outer hash map, and each group the user belongs to serves as a key in the inner hash map. On subsequent runs, the script loads this file’s data into memory and formats it into a nested hash map, formats the current list of user objects the same way, and subtracts the previous totals from the current totals, with negative numbers indicating a loss of access. This delta calculation is added to an output CSV. Once a month, the monthly deltas, weekly deltas, and current title-group totals are delivered to the InfoSec and Systems teams.
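The core of the calculation can be sketched in a few lines of Python (the production script is PowerShell and reads the user objects from AD); the input shape and field names below are illustrative.

```python
# Illustrative sketch of the title/group counting and delta logic; input shape is assumed.
from collections import defaultdict

def title_group_counts(users):
    """users: iterable of dicts like {"Title": "Analyst", "Groups": ["VPN", ...]}."""
    counts = defaultdict(lambda: defaultdict(int))   # title -> group -> member count
    for user in users:
        for group in user["Groups"]:
            counts[user["Title"]][group] += 1
    return counts

def delta(previous, current):
    """Subtract previous counts from current; negative values indicate lost access."""
    out = {}
    for title in set(previous) | set(current):
        for group in set(previous.get(title, {})) | set(current.get(title, {})):
            out.setdefault(title, {})[group] = (current.get(title, {}).get(group, 0)
                                                - previous.get(title, {}).get(group, 0))
    return out

# Example: one Analyst gained "Finance-Share" since the last run.
prev = title_group_counts([{"Title": "Analyst", "Groups": ["VPN"]}])
curr = title_group_counts([{"Title": "Analyst", "Groups": ["VPN", "Finance-Share"]}])
print(delta(prev, curr))   # -> {'Analyst': {'VPN': 0, 'Finance-Share': 1}} (key order may vary)
```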
Because this data calculates and includes percentages of user objects belonging to a particular access group, the script could be further developed to design roles for an RBAC framework, or to automatically remove group access from users below a certain percentage threshold (e.g., if only 1% of users with title A have group X, remove group X from all users with title A). This could be supplemented with an ACL bypass list to facilitate sanctioned testing, one-offs, temporary access, or other necessary situations.

4. Password Spraying Detection and Prevention
Challenge:
During an external pentest exercise, our team did not detect the attackers performing a password spray attack against our users while they were on the domain.
Solution:
I designed an algorithm for the SIEM to use, which our security engineer implemented. Password spraying involves rotating users more frequently than passwords, if passwords are rotated at all. An attack of this nature evades account lockouts, which are tracked by our SIEM, and is traditionally stealthier than brute-force or dictionary attacks. Because the SIEM aggregates data and tracks machine information when observing logs, I proposed that we create an alert for cases where a given IP address generates a certain number of failed logon attempts, regardless of which account generates each attempt. The exact number of attempts was chosen and tweaked by the team’s security engineer.
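The detection idea can be sketched as follows; the real rule lives in the SIEM, so the threshold, time window, and event format here are illustrative assumptions.

```python
# Illustrative sketch of the spray-detection idea; threshold, window, and event shape are assumptions.
from collections import defaultdict
from datetime import timedelta

THRESHOLD = 20                      # failed logons per source IP, tuned by the engineer
WINDOW = timedelta(minutes=30)

def spray_suspects(failed_logons):
    """failed_logons: iterable of (timestamp: datetime, source_ip: str, account: str)."""
    by_ip = defaultdict(list)
    for ts, ip, account in failed_logons:
        by_ip[ip].append((ts, account))
    suspects = []
    for ip, events in by_ip.items():
        events.sort()
        # Slide a window over the failures: many failures across many accounts from
        # a single IP is the spray signature, even if no single account locks out.
        start = 0
        for end in range(len(events)):
            while events[end][0] - events[start][0] > WINDOW:
                start += 1
            window = events[start:end + 1]
            if len(window) >= THRESHOLD and len({a for _, a in window}) > 1:
                suspects.append(ip)
                break
    return suspects
```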
A second layer of defense against password spraying was to enforce password complexity across the domain. Because of the AD account audit, and the password blocklist automation, along with our PAM solution, CyberArk, we were able to identify and eliminate all legacy passwords that were preventing us from enforcing password complexity for service accounts.
Finally, user accounts were already under enforced password complexity, as we employ an IAM solution, Okta, that, for the purposes of this challenge, serves as a second factor / multi-factor authenticator and as the portal through which users change their passwords. Because this was already in place, even though the attackers were able to spray undetected and found working credentials (using passwords that were eliminated in the third iteration of the password blocklist), they did not gain access.
With this rule implemented, we will be able to increase our threat detection capabilities in the IAM domain.


Viral Posts In Social Media
A research project that explores what makes certain posts go viral.
For this project, I identified an area of research that interested me, conducted background research on the subject, formulated a research plan, submitted the plan to the institution's human subjects committee for ethical review, conducted the research, analyzed the results, and finally synthesized the results into a final deliverable document. The project received minimal guidance and was primarily a solo effort. The final product cites sixteen different sources of background research and contains sixteen pages of content and ninety pages of statistical results from the analysis of the data (for a total of ninety-eight pages with sources). 243 participants were sampled for this research and surveyed for their reactions to various real-world posts from Twitter.
Participants were surveyed for their demographic information, but due to time and manpower constraints, analysis was not conducted on these factors. Instead, the factors with the strongest evidence for determining the virality of a post were analyzed: happiness or amusement, sadness or anxiety, and anger or disgust, as well as perceived truth and perceived bias. While strong emotional arousal's relationship to the likelihood of a post being shared had already been established, the perception of truth and the perception of bias did not have as much background research. It is known that a post being false correlates with the post being shared, but a number of other factors could contribute to this, which is why the distinction was made here to survey for the perception of truth. Similarly, the participants' perception of bias might have been expected to follow a trend similar to their perception of truth; surprisingly, however, perceived bias showed no relationship with sharing a post. Perceived truth had a stronger positive correlation with a post not being viral, and sadness or anxiety showed the strongest positive relationship with a post's virality. Happiness or amusement and anger or disgust both had strong positive correlations with non-viral posts.
Mineswiffer
A GUI-focused programming design project.
This project was for the Graphical User Interface (GUI) course I took during the Human Computer Interaction program at SUNY Oswego. The GUI was developed for an adapted command-line version of Minesweeper I programmed in my spare time between classes one day during my undergraduate program at Oswego. The focus of the course was the GUI code, so any engine code that caused problems was, for the most part, ignored. That said, there was only one known issue by the end of the course that was thought to be linked to the back-end code. Unfortunately, in the second-to-last week of class that semester, my coursework was put on hold due to an act of arson committed against my immediate family. Because of this, small errors exist in a project that otherwise demonstrates my commitment to refined work. Regardless, the project is presented as-is, as an example of work that is exclusively mine, both front and back-end, as well as of my thought process during application design.

Food Follower
An application designed to help minimize food waste in the home.
The Design's Journey
In the Spring of 2018, a software development project for my computer science major saw the inception of a program intended to help households manage the food they have on hand, reduce food waste, and save money. Prior to the Spring of 2021, the program had no GUI, but desperately needed one. Throughout that Spring semester, the interface was envisioned, refined, and eventually a prototype was born.
- Food Follower: A food inventory management application for Android.
- Designed to be used by anyone to keep track of expiration dates, food on hand, and grocery lists.
- Clutter in the fridge and forgetting when your food was purchased can lead to tossing food or an upset stomach. This application aims to prevent these issues.
- My role was creator and designer of the application, working with classmates and research participants to obtain and implement feedback.

Personas
Initial research was performed to get a general idea of the potential average user for Food Follower. Based on the findings, Frankie Foodwaster came into existence. Throughout the process, they helped inform the overall feel and design of the application.
Key Points:
- The persona was one of the first steps taken on the design journey.
- Background research was performed in order to figure out what the average household might look like.
- The persona included information on their background, traits, needs, situations, and corresponding use-case scenarios.
- Frankie made sure the design was kept simple. A help page with combined visual and textual tutorials ensures that new users or those with little time to learn won't feel intimidated.
- The persona was a guiding star during development, and helped keep the design focus on a simple interface for managing long lists of food.
Sketches
After I explored the tools and concepts available for developing software, the first iterations of the application took shape as lo-fi sketches on paper.
Key Points:
- This set of screen sketches served as an initial brainstorm for the user interface. Additionally, classmates exchanged development suggestions based on the lo-fi sketches.
- Since the average user could have any level of experience, this design was meant to have clear navigational buttons always present, as well as give the user the most important information on the application home screen.
- An alternative version was sketched for a desktop GUI.
- Minor changes evolved during the transition from paper to digital, but the overall feel was maintained.
- One of the features of the design is that buttons were right-aligned (or left-aligned; not pictured) to allow for easy single-handed use.

Wireframes
After receiving feedback on the sketches, a digital wireframe was developed and shown to a small sample of potential users and classmates. Their feedback was recorded and developed into the prototype designs.
Key Points:
- This step helped highlight any potential hiccups that a user could encounter with the design.
- The free online tool diagrams.net was used to develop this wireframe.
- Four participants outside of the class group were shown the different pages of the wireframe and were asked how they thought they would complete certain tasks based on what they were seeing.
- The participants were also asked to point out any elements of the design that were difficult to understand and explain why.
- The participants primarily helped to evaluate the initial usability, learnability, and error recovery of the design.
- Shown below are: the initial design shown to users for feedback, the initial design with classmate feedback, and the final wireframe design with user and classmate feedback.



User Testing
Finally, a prototype design was developed in the web-based Framer application and shown to users for feedback once again. This version included color and working buttons, but still had some placeholder screens where actual data would have been.
Key Points:
- These tests were performed in order to evaluate the usability, learnability, and error recovery of the refined design.
- The final design was also evaluated for other, more specific traits such as the readability of the text, the size of the buttons and Fitts's Law, information layout / architecture, and other common UX traits.
- The test subjects were a convenience sample and covered a range of demographics, including one colorblind user.
- Moderated one-on-one testing was performed in which users were asked to complete three tasks within the application: adding a new food item, searching for a food item, and editing a food item.
- These tests showed which elements worked, and which elements needed improvement.
- Pictured below are: part 3 of the tutorial page, the home page, and the search page, with a link to the prototype directly below.
- Follow this link to see the prototype in action; you will need an account and approved access.



What I Learned
This was my first experience designing a GUI using the methods I learned from the HCI program. It gave me an in-depth look at the steps necessary to create something usable and easy for a user, as well as a good look at the research methods that are employed to get feedback and improve.
- Doing research and interviews during the design process was new to me, but I enjoyed the chance to explore unfamiliar territory.
- Being able to see the design grow and develop was rewarding, and I look forward to my next chance to be a part of another design.
While the final design might look too 2D, I think it properly conveys the idea of the app and keeps the application simple and easy to use. With an appropriate asset library, a less flat, more visually appealing version of this application could be created. As I have the back-end and a front-end prototype completed, this is a project I plan to develop into a published app in the future.