How Interview Cheating Tools Hide from Zoom
Interview Coder has been making waves on my X timeline. The tool promises to quietly deliver AI-generated answers for coding interview questions, evading the screen capture feed your interviewer uses to vibe-check you.
It advertises the following undetectability features:
- Invisible to all screen-sharing software and screenshots.
- Window placement via arrow-key hotkeys to keep the app on top of your interview window, thwarting any eye-tracking software that checks whether you are looking elsewhere.
- Toggling the visibility of the window will not trigger a cursor focus change event, which some platforms detect.
It sounds like this can be achieved in a couple of lines of code on both Windows and macOS. The techniques are simple and have been around for many years; I just hadn't come across them yet.
Windows
Windows offers multiple ways to capture screens. Modern tools like Zoom and Teams typically use the Desktop Duplication API (part of DirectX) for efficient real-time screen sharing. Older methods like GDI's BitBlt copy screen bitmaps but are less common today due to performance limitations.
In the Old Times ™️, software using BitBlt captures often implemented custom diffing to optimize data transfer, comparing frames manually to send only changed regions. The Desktop Duplication API is more efficient because it handles diffing natively at the system level, reducing overhead and providing hardware-accelerated change detection, unlike BitBlt's manual, CPU-intensive approach.
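To make that older path concrete, here is a minimal sketch of a one-shot GDI BitBlt screen grab in Python via ctypes. It is Windows-only, skips error handling and pixel readout, and is only meant to illustrate the manual copy-and-diff workflow described above, not how Zoom or Teams actually capture the screen.

# Sketch: one full-screen grab via the legacy GDI BitBlt path (Windows only).
# SRCCOPY and the SM_* indices are standard Win32 constants; GetDIBits pixel
# readout and 64-bit handle hygiene are omitted for brevity.
import ctypes

user32 = ctypes.windll.user32
gdi32 = ctypes.windll.gdi32
SRCCOPY = 0x00CC0020

width = user32.GetSystemMetrics(0)   # SM_CXSCREEN
height = user32.GetSystemMetrics(1)  # SM_CYSCREEN

screen_dc = user32.GetDC(None)                # device context for the whole desktop
mem_dc = gdi32.CreateCompatibleDC(screen_dc)  # off-screen DC to copy into
bitmap = gdi32.CreateCompatibleBitmap(screen_dc, width, height)
gdi32.SelectObject(mem_dc, bitmap)

# One full-frame copy of the desktop into our bitmap; a capture tool doing this
# repeatedly has to diff frames itself to find changed regions.
gdi32.BitBlt(mem_dc, 0, 0, width, height, screen_dc, 0, 0, SRCCOPY)

gdi32.DeleteObject(bitmap)
gdi32.DeleteDC(mem_dc)
user32.ReleaseDC(None, screen_dc)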
To hide windows from these capture methods, it sounds like we can use the SetWindowDisplayAffinity API call. By applying the WDA_EXCLUDEFROMCAPTURE flag to a window, it will be removed from captured images.
- Since Windows 10 Version 2004 (build 10.0.19041), the SetWindowDisplayAffinity API has been expanded to include a flag called WDA_EXCLUDEFROMCAPTURE (0x00000011).
SetWindowDisplayAffinity(hwnd, 0x00000011); // WDA_EXCLUDEFROMCAPTURE
Microsoft mentions that one use for this flag is for windows that show video recording controls, excluding them from the capture. This makes sense, as applications like Zoom will hide their own UI controls when recording or sharing the screen.
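From Python, applying the flag is a couple of ctypes calls. The sketch below assumes you already have a top-level window handle (for Qt, int(widget.winId())); it sets the affinity and then reads it back with GetWindowDisplayAffinity to confirm the flag stuck.

# Sketch: exclude an existing window handle from capture and verify the result.
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
WDA_EXCLUDEFROMCAPTURE = 0x00000011

def exclude_from_capture(hwnd: int) -> bool:
    if not user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
        return False
    # Read the affinity back to confirm the flag was actually applied.
    affinity = wintypes.DWORD()
    user32.GetWindowDisplayAffinity(hwnd, ctypes.byref(affinity))
    return affinity.value == WDA_EXCLUDEFROMCAPTURE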
macOS
On macOS, screen capturing is typically done using APIs like CGWindowListCreateImage (now deprecated in favor of ScreenCaptureKit), which can capture the entire screen or individual windows.
To hide a window from screen captures, we can set the NSWindow's sharingType property to NSWindowSharingNone (.none in Swift).
window.sharingType = .none
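The equivalent from Python via PyObjC, assuming pyobjc is installed and an AppKit-backed toolkit (PyQt, Tkinter, etc.) has already created the windows, is to walk the app's NSWindow objects and flip their sharing type; NSWindowSharingNone is the PyObjC spelling of .none. This is essentially what the demo further down does.

# Sketch: mark every window of the current app as non-shareable (macOS, PyObjC).
from AppKit import NSApp
from Cocoa import NSWindowSharingNone

def hide_windows_from_capture():
    for nswindow in NSApp.windows():
        nswindow.setSharingType_(NSWindowSharingNone)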
Demonstration
I asked Sonnet 3.7 to create me a cross-platform demo in Python using PyQt5. After iterating on it a bit and toying with the way global hotkeys were implemented, I got a proof-of-concept going.
- You can move the application's transparent window around using CMD+arrow keys, and you can toggle its visibility using CMD+B
- The window is entirely hidden from Zoom and Microsoft Teams screen sharing, as well as screenshot capture software on macOS.
- Windows should be supported, but I have NOT tested it; the keybinds are mapped to ALT instead of CMD.
- On macOS, if you are running this from the terminal (e.g. python script.py), you'll have to add your terminal emulator to Accessibility under Privacy & Security to allow the global hotkeys to function.
$ pip install PyQt5 pynput pyobjc
You don't need to install pyobjc if you are on Windows.
import sys
import signal
from PyQt5 import QtCore, QtWidgets, QtGui
import ctypes

# Platform-specific screen capture protection
if sys.platform == 'win32':
    user32, kernel32 = ctypes.windll.user32, ctypes.windll.kernel32

    def set_protection(window):
        result = user32.SetWindowDisplayAffinity(int(window.winId()), 0x00000011)
        return (True, "Success") if result else (False, f"Error: {kernel32.GetLastError()}")
elif sys.platform == 'darwin':
    try:
        from Foundation import NSBundle
        from Cocoa import NSWindow, NSWindowSharingNone
        from AppKit import NSFloatingWindowLevel, NSApp

        def set_protection(window):
            try:
                success = False
                for nswindow in NSApp.windows():
                    try:
                        nswindow.setSharingType_(NSWindowSharingNone)
                        nswindow.setLevel_(NSFloatingWindowLevel)
                        success = True
                    except: pass
                return (True, "Success") if success else (False, "Failed to protect window")
            except Exception as e:
                return False, str(e)
    except ImportError:
        def set_protection(window):
            return False, "PyObjC not properly installed"

# Import pynput for global hotkeys
try:
    from pynput import keyboard
    pynput_available = True
except ImportError:
    pynput_available = False

# Thread-safe signal intermediary
class HotkeySignals(QtCore.QObject):
    move_signal = QtCore.pyqtSignal(str)
    toggle_signal = QtCore.pyqtSignal()

class MainWindow(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        # Setup window appearance
        self.setAttribute(QtCore.Qt.WA_TranslucentBackground)
        self.setStyleSheet("background-color: rgba(50, 50, 50, 180);")
        self.setWindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.WindowStaysOnTopHint)
        self.setAttribute(QtCore.Qt.WA_ShowWithoutActivating)
        self.setGeometry(100, 100, 300, 200)
        self.move_step = 10

        # Create UI
        layout = QtWidgets.QVBoxLayout()
        self.status_label = QtWidgets.QLabel("✓ Hidden from screen capture")
        self.status_label.setStyleSheet("color: white; font-size: 14px; font-weight: bold;")
        layout.addWidget(self.status_label)

        modifier_key = "Alt" if sys.platform == 'win32' else "⌘"
        self.info_label = QtWidgets.QLabel(f"{modifier_key}+Arrow keys to move • {modifier_key}+B to toggle visibility")
        self.info_label.setStyleSheet("color: white;")
        layout.addWidget(self.info_label)

        self.error_label = QtWidgets.QLabel()
        self.error_label.setStyleSheet("color: red; font-size: 12px;")
        self.error_label.setWordWrap(True)
        self.error_label.hide()
        layout.addWidget(self.error_label)
        self.setLayout(layout)

        # Enable mouse controls and setup hotkeys
        self.setMouseTracking(True)
        self.signals = HotkeySignals()
        self.signals.move_signal.connect(self.move_window)
        self.signals.toggle_signal.connect(self.toggle_visibility)
        self.setup_global_hotkeys()

    def setup_global_hotkeys(self):
        if not pynput_available:
            self.error_label.setText("Global hotkeys unavailable: install pynput package")
            self.error_label.show()
            return
        try:
            self.modifier_key = keyboard.Key.alt if sys.platform == 'win32' else keyboard.Key.cmd
            self.pressed_keys = set()
            self.keyboard_listener = keyboard.Listener(
                on_press=self.on_key_press,
                on_release=lambda key: self.pressed_keys.discard(key) or True,
                suppress=False
            )
            self.keyboard_listener.start()
        except: pass

    def mousePressEvent(self, event):
        if event.button() == QtCore.Qt.RightButton:
            self.toggle_visibility()
        elif event.button() == QtCore.Qt.LeftButton:
            self.drag_position = event.globalPos() - self.frameGeometry().topLeft()

    def mouseMoveEvent(self, event):
        if hasattr(self, 'drag_position') and event.buttons() == QtCore.Qt.LeftButton:
            self.move(event.globalPos() - self.drag_position)

    def on_key_press(self, key):
        try:
            if key is None: return True
            self.pressed_keys.add(key)
            if self.modifier_key in self.pressed_keys:
                if keyboard.Key.left in self.pressed_keys:
                    self.signals.move_signal.emit("left")
                elif keyboard.Key.right in self.pressed_keys:
                    self.signals.move_signal.emit("right")
                elif keyboard.Key.up in self.pressed_keys:
                    self.signals.move_signal.emit("up")
                elif keyboard.Key.down in self.pressed_keys:
                    self.signals.move_signal.emit("down")
                # Check for B key
                elif any((hasattr(k, 'char') and k.char == 'b') or str(k).lower() in ["'b'", "key.b"]
                         for k in self.pressed_keys):
                    self.signals.toggle_signal.emit()
        except: pass
        return True

    def move_window(self, direction):
        delta = {"left": (-self.move_step, 0),
                 "right": (self.move_step, 0),
                 "up": (0, -self.move_step),
                 "down": (0, self.move_step)}.get(direction, (0, 0))
        self.move(self.pos() + QtCore.QPoint(*delta))

    def toggle_visibility(self):
        if self.isVisible():
            self.hide()
        else:
            flags = self.windowFlags()
            self.setVisible(True)
            self.setWindowFlags(flags)
            self.show()
            self.raise_()

    def showEvent(self, event):
        success, message = set_protection(self)
        self.raise_()
        if success:
            self.status_label.setText("✓ Hidden from screen capture")
            self.status_label.setStyleSheet("color: white; font-size: 14px; font-weight: bold;")
            self.error_label.hide()
        else:
            self.status_label.setText("✗ Not hidden from screen capture")
            self.status_label.setStyleSheet("color: red; font-size: 14px; font-weight: bold;")
            self.error_label.setText(message)
            self.error_label.show()

    def closeEvent(self, event):
        if hasattr(self, 'keyboard_listener') and self.keyboard_listener:
            try: self.keyboard_listener.stop()
            except: pass
        super().closeEvent(event)

def signal_handler(sig, frame):
    """Handle Ctrl+C to gracefully quit the application"""
    if QtWidgets.QApplication.instance():
        QtWidgets.QApplication.instance().quit()

if __name__ == "__main__":
    # Set up signal handler for Ctrl+C
    signal.signal(signal.SIGINT, signal_handler)
    app = QtWidgets.QApplication(sys.argv)
    window = MainWindow()
    window.show()

    # This allows Ctrl+C processing in the main loop
    timer = QtCore.QTimer()
    timer.start(100)
    timer.timeout.connect(lambda: None)
    sys.exit(app.exec_())
Caveats & Ideas
For software such as Zoom, it seems important to enable the “Advanced capture with window filtering” option in the Screen Capture settings. This is also mentioned on the Interview Coder Help page.
Advanced capture with window filtering: This method will share your screen, include motion detection (when you move a window or play a movie), and will not show windows from the Zoom desktop app.
I’m not sure what this is doing behind the scenes, but it seems like it may be filtering based on the flag we set above. There is also the “Legacy/Previous operating systems” option which may fall back to older GDI-based methods that respect this flag.
- This isn’t foolproof. DirectX, kernel-level captures, or GPU frame grabs might still see you.
- I was able to capture the hidden window using QuickTime's screen recorder, which is how I made the demonstration video above.
- If capture software tries to evade this somehow, one might consider injecting code directly into the capture software’s process to see whether it ignores windows based on process filtering.
If you wanted to extend this into your own Interview Coder-style software, you would need to add the following (a rough sketch of the screenshot-to-LLM round trip follows the list):
- API support for OpenAI/Claude LLMs
- A hotkey to capture screenshots and add them to the context of the chat.
- A prompt template to ask the LLM of your choice to read the problem in the screenshot and output code in whatever language you want, along with "thoughts" you can read aloud to the interviewer and anything else you want included.
- Code to ship the images + prompt to the LLM and return the output in the transparent window.
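As a rough, hypothetical sketch of the last two items: grab the screen with Qt, attach it to a prompt as a base64 data URL, and send it to a vision-capable model with the OpenAI Python SDK (Anthropic's SDK works the same way structurally). The model name, prompt text, and the idea of dumping the answer into the overlay's label are all placeholders, not part of any real product.

# Hypothetical sketch: screenshot -> LLM -> overlay window. Model name and
# prompt are placeholders; requires `pip install openai` and OPENAI_API_KEY.
import base64
from PyQt5 import QtCore, QtWidgets
from openai import OpenAI

PROMPT = ("Read the coding problem in this screenshot. Reply with a working "
          "Python solution plus short 'thoughts' I can say out loud.")

def grab_screen_png() -> bytes:
    """Capture the primary screen as PNG bytes using Qt."""
    pixmap = QtWidgets.QApplication.primaryScreen().grabWindow(0)
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pixmap.save(buffer, "PNG")
    return bytes(buffer.data())

def ask_llm(png_bytes: bytes) -> str:
    """Ship the screenshot plus prompt to a vision-capable model."""
    image_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode()
    response = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# Wired to a hotkey in MainWindow, this could be as simple as:
#   self.info_label.setText(ask_llm(grab_screen_png()))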