Banner


A banner is a small rectangular image that appears at the top of the imageboard. Banners originated on 4chan because Moot had a hard time picking just one, and chose instead to use a JavaScript rotation script to cycle through all of them. This page lists all the website banners of soyjak.party.
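
The rotation itself is simple. Below is a minimal Python sketch of the idea (the actual 4chan implementation was client-side JavaScript, and the banner filenames here are made up):

import random

# Hypothetical banner list; on a real imageboard these would live under /static/banners/.
BANNERS = ["1.png", "2.gif", "3.png"]

def pick_banner() -> str:
    """Pick a banner at random for this page load instead of hand-selecting one."""
    return random.choice(BANNERS)

print(pick_banner())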

Directory of banners: https://soyjak.st/static/banners/

On October 20, 2024, a page was added to the Sharty where you can see all the banners: https://soyjak.st/banners.html [1][2]

Aside from these, there are also board banners.


(((banners.html))) should NOT be trusted; you WILL slightly strain the soyvers by spamming requests to b.php to find them all.

A Bash and Python script that can be used for this purpose can be found in this article's final section.
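
For the b.php route specifically, a rough Python sketch is given below. It only assumes that each request to b.php hands back a random banner image; the endpoint path, the stop threshold, and the file naming are assumptions, not documented behaviour:

#!/usr/bin/env python3
# Rough sketch: repeatedly request b.php and deduplicate responses by content hash,
# stopping after a long run of banners that have already been seen.
import hashlib
import time

import requests

B_URL = "https://soyjak.st/b.php"  # assumed endpoint, per the paragraph above
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:132.0) Gecko/20100101 Firefox/132.0"

seen: set[str] = set()
misses = 0

while misses < 200:  # arbitrary cutoff: 200 repeats in a row and we assume we have them all
    r = requests.get(B_URL, headers={"User-Agent": USER_AGENT}, timeout=30)
    r.raise_for_status()
    digest = hashlib.sha256(r.content).hexdigest()
    if digest in seen:
        misses += 1
    else:
        seen.add(digest)
        misses = 0
        with open(f"banner_{digest[:16]}.img", "wb") as fp:  # made-up naming scheme
            fp.write(r.content)
    time.sleep(0.5)  # keep the strain on the soyvers slight

print(f"Collected {len(seen)} unique banners")

Hashing the response body sidesteps any guessing about redirects or Content-Disposition headers, at the cost of losing the original filenames and extensions.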

As of September 9, 2025, there are 245 banners available on soyjak.party. They are sorted numerically by filename.

Gallery

Removed Banners

The following banners have been removed, likely due to administration changes:

'nner Wrangling Script

Bash

 
#!/bin/bash
# Usage: ./download_banners.sh [output_dir]

outDir="${1:-$PWD}"
url="https://www.soyjak.st/static/banners/"
userAgent='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:132.0) Gecko/20100101 Firefox/132.0'

mkdir -p "$outDir"
cd "$outDir" || exit 1

# Temporary files
indexFile=$(mktemp)
listFile=$(mktemp)

# Download index page and extract from it
echo "Fetching directory index..."
curl -A "$userAgent" "$url" > "$indexFile"

grep -oP '(?<=href=")[^"]+' "$indexFile" \
  | grep -vE '^/?\.\./?$' \
  > "$listFile"

echo "Found $(wc -l < "$listFile") files. Downloading..."

wget -nc --user-agent="$userAgent" -i "$listFile" --base="$url"

# Cleanup
rm -f "$indexFile" "$listFile"

echo "Done. Files saved to $outDir"

Python

#!/usr/bin/env python3
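# Python counterpart of the Bash script above: fetch the banner directory index,
# then download every linked file that is not already present in the output
# directory (optional first command-line argument, defaulting to the current one).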
import os
import re
import requests
import sys
import tempfile
import urllib.parse

from requests import Response
from typing import Dict, List

BASE_URL: str = "https://www.soyjak.st/static/banners/"
USER_AGENT: str = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:132.0) Gecko/20100101 Firefox/132.0"

if __name__ == "__main__":
    out_dir: str = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
    os.makedirs(out_dir, exist_ok=True)

    print(f"Saving files to: {out_dir}")

    print("Fetching directory index...")
    headers: Dict[str, str] = {"User-Agent": USER_AGENT}
    response: Response = requests.get(BASE_URL, headers=headers)
    response.raise_for_status()

    links: List[str] = re.findall(r'href="([^"]+)"', response.text)
    files: List[str] = [f for f in links if not re.match(r'^/?\.\./?$', f)]

    print(f"Found {len(files)} files. Downloading...")

    for f in files:
        file_url: str = urllib.parse.urljoin(BASE_URL, f)
        dest_path: str = os.path.join(out_dir, os.path.basename(f))
        if os.path.exists(dest_path):
            print(f"Skipping {f} (already exists)")
            continue

        try:
            print(f"Downloading {f}...")
            r: Response = requests.get(file_url, headers=headers, stream=True)
            r.raise_for_status()
            with open(dest_path, "wb") as fp:
                for chunk in r.iter_content(chunk_size=8192):
                    fp.write(chunk)
        except Exception as e:
            print(f"Failed to download {f}: {e}")

    print("Done.")

Citations