Aaron Fernando: We recently explored how blockchain is being used as a force for good and conducted a handful of interviews with practitioners in the industry. Yet the blockchain space is fast-moving and constantly brimming with new projects that could make the sharing economy increasingly accessible to all. This is a list of some other projects to look out for in this space:
Helbiz is a startup creating a marketplace that allows people to share not just cars, but any vehicle — bikes, boats, and other modes of transportation. Using both hardware and software, the idea is to turn entire vehicles into Internet-of-Things (IoT) devices that can be unlocked with a smartphone, tracked with GPS, and monitored for problems and maintenance. The Helbiz app will allow users to unlock vehicles and pay for vehicle usage in cryptocurrency. Helbiz is not alone, though: other organizations, including HireGo and Xain, are developing similar capabilities in the shared mobility space.
Though there are other decentralized marketplaces that use blockchain technology, OpenBazaar was the first of its kind and is still going strong. Users are able to pay for goods in over fifty cryptocurrencies. Since there is no company or entity running it, there are no fees or limits to what can be bought or sold. Though significant technical differences exist, for the casual user, up-and-coming challengers such as Swarm City, Public Market, and Blockmarket will offer similar services with different features.
Kenya has been one of the fastest countries in the world to wholeheartedly embrace mobile money and mobile payments, with two-thirds of the population using them on a daily basis. So, not surprisingly, it is also among the first countries where blockchain-enabled community currencies are being used by merchants who might otherwise be too cash-strapped to transact with each other. The organization Grassroots Economics has been operating multiple community currencies in Kenya and South Africa for the past few years to increase the buying and selling power of various communities. Recently, in partnership with the blockchain startup Bancor, Kenyan vendors have been signing up to accept cryptocurrency versions of the community currencies they already accept in paper form.
Another Kenya-based project making use of Kenyans’ high rate of mobile-banking adoption is Chamapesa, which is using blockchain technology to facilitate lending circles with smartphones. The organization has pointed out that it is not trying to change any core behavior, but rather is using blockchain to facilitate this type of community finance scheme — in one tweet, it said “We’re not changing Chamas behaviour except that instead of using paper books were going to be using smart phones.”
Although the proponents of Holochain proudly stand by the fact that it is not a blockchain, for the layperson Holochain has many similar use cases. One main difference is that it does not require the enormous amounts of energy consumed by “proof-of-work” blockchains like Bitcoin and Ethereum, yet it is still possible to run a blockchain exactly like Bitcoin on Holochain. Instead of having only one global agreement about what data is valid (as with a regular blockchain), individual users create their own intermeshed ledgers of valid data and personal histories.
Additionally, whenever we access a website on the Internet, we are really accessing information hosted on servers operated by some third party — usually in a location unknown to us, owned by a separate private company. What Holochain makes possible is a blockchain-like method and reward structure for storing and accessing data and applications between users themselves, without having to rely on these third parties. This makes it possible to create and run applications of any sort between users, allowing the operation of a truly peer-to-peer internet.
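The agent-centric model described above can be sketched in miniature: rather than one global ledger, each user keeps a personal, tamper-evident chain of entries that peers can independently verify. This is a toy illustration of that idea only, not Holochain's actual data structures or API:

```python
import hashlib
import json


class AgentChain:
    """A toy per-agent hash chain: each user keeps their own tamper-evident log."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.entries = []  # each record links back to the previous entry's hash

    def append(self, payload: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"agent": self.agent_id, "prev": prev_hash, "payload": payload}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        return digest

    def verify(self) -> bool:
        """Recompute every hash to confirm no entry was altered or reordered."""
        prev = "genesis"
        for rec in self.entries:
            body = {"agent": rec["agent"], "prev": rec["prev"], "payload": rec["payload"]}
            if rec["prev"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

Any peer holding a copy of another user's chain can call `verify()` to detect tampering, which is the basic trick that lets validation be distributed instead of global.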
Only about half of the world’s population has regular Internet access, which means billions still cannot take advantage of web-based peer-to-peer technologies. Right Mesh is working to change that by enabling people with limited internet access to create their own networks using mesh networking.
In a mesh network, information leaps between phones, computers, and other devices like frogs on lily pads until it gets to its destination. It does this by using these devices’ ability to connect directly with each other via Bluetooth or Wifi without being linked to the Internet itself. By encrypting the information — whether it’s a message, image, or payment — the network can ensure that only the desired recipient can understand and make use of the message, which opens up a host of mesh applications that can run on peer-to-peer devices in the absence of internet. Other mesh networks and mesh software already exist, but by using blockchain, this network will allow users to get paid for providing content to peers and for serving as part of the network’s infrastructure, offsetting the hardware and battery costs of doing so.
Just like Airbnb, the platform Beenest connects hosts with potential guests, and it leverages blockchain technology to keep costs down. When bookings and listings are paid for in its own cryptocurrency, Bee Token, the platform takes no fees or commissions; when other currencies or cryptocurrencies are used, it charges lower fees than Airbnb.
Possible is a Netherlands-based project intended to increase social capital by giving people incentives to do work that might not normally earn money, yet is crucial for healthy societies. Possible is a time bank, meaning that it enables individuals to offer up and use each other’s services denominated in hours of time rather than in some other unit of currency. Time banks have been around for well over a century, but they are often volunteer-run, and administering them with a centrally managed database can, at times, prove difficult. By using blockchain technology to decentralize who gets to update the database, projects like Possible could make it easier to operate time banks without having to rely on volunteers.
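At its simplest, the database a time bank must maintain is a mutual-credit ledger denominated in hours: every hour credited to one member is debited from another, so balances across the whole system always sum to zero. A minimal sketch of that bookkeeping (illustrative only, not Possible's actual design):

```python
from collections import defaultdict


class TimeBank:
    """Mutual-credit ledger: hours earned by one member are owed by another."""

    def __init__(self):
        self.balances = defaultdict(float)  # member name -> net hours

    def record_service(self, provider: str, recipient: str, hours: float):
        """The provider earns hours; the recipient spends them (and may go negative)."""
        self.balances[provider] += hours
        self.balances[recipient] -= hours

    def total(self) -> float:
        """Always zero: time credits are created and destroyed in matched pairs."""
        return sum(self.balances.values())
```

The hard part is not this arithmetic but deciding who is trusted to run `record_service` — which is exactly the update-permission question a blockchain-based approach tries to decentralize.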
ShareRing is developing an app that uses blockchain technology to let users search for nearby services and share almost anything with each other. Like a truly decentralized library of things — both physical and digital — ShareRing could maximize the usage people get out of physical objects by enabling people to share them easily and fairly. The idea is to have a truly global network, so that the ShareRing app can be used to find and pay for similar services no matter where in the world a person may be. The app will use two of its own cryptocurrencies: one that allows merchants to access the blockchain, and another that serves as the currency for paying for services on the platform.
Platforms like Amazon, Uber, and Airbnb offer useful services but continuously extract wealth from those who generate it through high fees, which are paid to these companies and their shareholders. DigitalTown [one of Shareable’s sponsors] aims to change this business model by linking users in specific geographic localities with the information, resources, and services they need; instead of extracting high fees, it will ensure that more of the profits generated locally stay local.
With DigitalTown’s blockchain-based solution, merchants and consumers are rewarded for engagement. Merchants receive a free storefront with low commission rates and everyone receives a free SmartWallet, which supports traditional and cryptocurrencies. Businesses pay just a 1% payment processing fee and everyone enjoys free peer-to-peer transfers.
DigitalTown’s tools make it easy for communities to share content, discussions, events, and projects. Users of the platform are rewarded with CommunityPoints. Merchants can use CommunityPoints for marketing campaigns on DigitalTown and consumers can use CommunityPoints with participating merchants.
diff --git a/pagenames.txt b/pagenames.txt
index b91e087..1f30858 100644
--- a/pagenames.txt
+++ b/pagenames.txt
@@ -5318,7 +5318,6 @@ VP8
BarCamps
Guerilla_Gardening
VODO
-http://blog.p2pfoundation.net/
Taylor,_Mark
Kevin_Kelly_on_the_Past,_Present,_and_Future_of_Publishing_and_Collaboration
Ralph_Nader_at_Occupy_Washington
diff --git a/src/api.py b/src/api.py
index 910167c..93f28e3 100644
--- a/src/api.py
+++ b/src/api.py
@@ -15,6 +15,7 @@ from .config import settings
from .embeddings import WikiVectorStore
from .rag import WikiRAG, RAGResponse
from .ingress import IngressPipeline, get_review_queue, approve_item, reject_item
+from .mediawiki import wiki_client
# Global instances
vector_store: Optional[WikiVectorStore] = None
@@ -107,6 +108,12 @@ class ReviewActionRequest(BaseModel):
action: str # "approve" or "reject"
+class DraftApproveRequest(BaseModel):
+ """Request to approve a draft article."""
+
+ title: str # e.g., "Draft:Article_Name" or just "Article_Name"
+
+
# --- API Endpoints ---
@@ -295,6 +302,65 @@ async def list_articles(limit: int = 100, offset: int = 0):
}
+# --- Wiki Draft Management Endpoints ---
+
+
+@app.get("/wiki/auth")
+async def wiki_auth_status():
+ """Check wiki authentication status."""
+ try:
+ auth_info = await wiki_client.check_auth()
+ return auth_info
+ except Exception as e:
+ return {
+ "authenticated": False,
+ "error": str(e)
+ }
+
+
+@app.get("/wiki/drafts")
+async def list_wiki_drafts():
+ """List draft articles pending review from the wiki."""
+ try:
+ drafts = await wiki_client.list_draft_articles()
+ return {
+ "count": len(drafts),
+ "drafts": drafts
+ }
+ except Exception as e:
+ raise HTTPException(status_code=500, detail=str(e))
+
+
+@app.post("/wiki/approve")
+async def approve_wiki_draft(request: DraftApproveRequest):
+ """
+ Approve a draft article - moves it from Draft: namespace to main namespace.
+
+ Requires authentication with 'move' permission via wiki cookies.
+ """
+ # Check authentication first
+ auth_info = await wiki_client.check_auth()
+ if not auth_info.get("authenticated"):
+ raise HTTPException(
+ status_code=401,
+ detail="Not authenticated. Please ensure wiki cookies are set up."
+ )
+
+ if not auth_info.get("can_move"):
+ raise HTTPException(
+ status_code=403,
+ detail=f"Move permission required. Current user: {auth_info.get('username')}"
+ )
+
+ # Approve the draft
+ result = await wiki_client.approve_draft(request.title)
+
+ if "error" in result:
+ raise HTTPException(status_code=400, detail=result["error"])
+
+ return result
+
+
# --- Static Files (Web UI) ---
web_dir = Path(__file__).parent.parent / "web"
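The `DraftApproveRequest.title` field accepts either `"Draft:Article_Name"` or a bare `"Article_Name"`, so somewhere behind `wiki_client.approve_draft` the prefix presumably gets normalized before the MediaWiki move call. A hedged sketch of that normalization — the helper name is hypothetical and not part of this diff:

```python
DRAFT_PREFIX = "Draft:"


def normalize_draft_title(title: str) -> tuple[str, str]:
    """Hypothetical helper: return (draft_title, main_title) for a user-supplied title.

    Accepts "Draft:Foo_Bar" or "Foo Bar" and produces the source title in the
    Draft: namespace and the destination title in the main namespace, using
    MediaWiki's underscore convention for spaces.
    """
    title = title.strip().replace(" ", "_")
    if title.startswith(DRAFT_PREFIX):
        main = title[len(DRAFT_PREFIX):]
    else:
        main = title
    return f"{DRAFT_PREFIX}{main}", main
```

Whatever the real implementation looks like, doing this once at the boundary keeps the `/wiki/approve` endpoint tolerant of both title forms.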
diff --git a/src/blog_parser.py b/src/blog_parser.py
new file mode 100644
index 0000000..d3a2685
--- /dev/null
+++ b/src/blog_parser.py
@@ -0,0 +1,272 @@
+"""Blog HTML parser - extracts blog posts from cached WordPress HTML files."""
+
+import json
+import re
+import zipfile
+from dataclasses import dataclass, field, asdict
+from pathlib import Path
+from typing import Iterator, Optional
+from bs4 import BeautifulSoup
+from rich.progress import Progress
+from rich.console import Console
+
+from .config import settings
+
+console = Console()
+
+
+@dataclass
+class BlogPost:
+ """Represents a parsed blog post."""
+
+ id: int
+ title: str
+ content: str # Raw HTML content
+ plain_text: str # Cleaned plain text for embedding
+ categories: list[str] = field(default_factory=list)
+ links: list[str] = field(default_factory=list) # Internal links
+ external_links: list[str] = field(default_factory=list)
+ timestamp: str = ""
+ contributor: str = ""
+ url: str = ""
+ description: str = ""
+
+ def to_dict(self) -> dict:
+ return asdict(self)
+
+
+def clean_html_text(soup_element) -> str:
+ """Extract clean text from HTML element."""
+ if not soup_element:
+ return ""
+
+ # Get text with space separators
+ text = soup_element.get_text(separator="\n", strip=True)
+
+ # Clean up whitespace
+ text = re.sub(r"\n{3,}", "\n\n", text)
+ text = re.sub(r" {2,}", " ", text)
+
+ return text.strip()
+
+
+def extract_links(soup_element) -> tuple[list[str], list[str]]:
+ """Extract internal and external links from HTML."""
+ internal = []
+ external = []
+
+ if not soup_element:
+ return internal, external
+
+ for a in soup_element.find_all("a", href=True):
+ href = a["href"]
+ if "blog.p2pfoundation.net" in href or "wiki.p2pfoundation.net" in href:
+ internal.append(href)
+ elif href.startswith("http"):
+ external.append(href)
+
+ return list(set(internal)), list(set(external))
+
+
+def parse_blog_html(html_content: str, post_id: int, slug: str) -> Optional[BlogPost]:
+ """Parse a single blog post HTML file."""
+ soup = BeautifulSoup(html_content, "html.parser")
+
+ # Extract title
+ title_tag = soup.find("title")
+ title = title_tag.text.replace(" | P2P Foundation", "").strip() if title_tag else slug.replace("-", " ").title()
+
+ # Extract description
+ desc_meta = soup.find("meta", attrs={"name": "description"})
+ description = desc_meta["content"] if desc_meta and desc_meta.get("content") else ""
+
+ # Extract published time
+ pub_meta = soup.find("meta", attrs={"property": "article:published_time"})
+ timestamp = pub_meta["content"] if pub_meta and pub_meta.get("content") else ""
+
+ # Extract author
+ author = ""
+ author_meta = soup.find("meta", attrs={"name": "author"})
+ if author_meta and author_meta.get("content"):
+ author = author_meta["content"]
+ else:
+ # Try to find in schema.org data
+ schema = soup.find("script", class_="yoast-schema-graph")
+ if schema:
+ try:
+ schema_data = json.loads(schema.string)
+ for item in schema_data.get("@graph", []):
+ if item.get("@type") == "Person":
+ author = item.get("name", "")
+ break
+ except (json.JSONDecodeError, TypeError):
+ pass
+
+ # Extract article content
+ article = soup.find("article")
+ if not article:
+ return None
+
+ # Get HTML content
+ content = str(article)
+
+ # Get plain text
+ plain_text = clean_html_text(article)
+
+ # Skip if too short (likely not a real post)
+ if len(plain_text) < 100:
+ return None
+
+ # Extract links
+ internal_links, external_links = extract_links(article)
+
+ # Extract categories from tags/categories section
+ categories = []
+ cat_links = soup.find_all("a", rel="category tag")
+ for cat in cat_links:
+ categories.append(cat.text.strip())
+
+ tag_links = soup.find_all("a", rel="tag")
+ for tag in tag_links:
+ if tag.text.strip() not in categories:
+ categories.append(tag.text.strip())
+
+ return BlogPost(
+ id=post_id,
+ title=title,
+ content=content,
+ plain_text=plain_text,
+ categories=categories,
+ links=internal_links,
+ external_links=external_links,
+ timestamp=timestamp,
+ contributor=author,
+ url=f"https://blog.p2pfoundation.net/{slug}/",
+ description=description,
+ )
+
+
+def parse_blog_zip(zip_path: Path, output_path: Optional[Path] = None) -> list[BlogPost]:
+ """Parse blog posts from a WordPress cache zip file."""
+ console.print(f"[cyan]Parsing blog posts from {zip_path}...[/cyan]")
+
+ posts = []
+ seen_slugs = set()
+ post_id = 100000 # Start high to avoid conflicts with wiki article IDs
+
+ # Pattern to match main blog post HTML files (not feeds, embeds, or date-specific)
+ post_pattern = re.compile(
+ r"blog\.p2pfoundation\.net/public_html/wp-content/cache/page_enhanced/blog\.p2pfoundation\.net/([^/]+)/_index_ssl\.html$"
+ )
+
+ with zipfile.ZipFile(zip_path, "r") as zf:
+ # Get all matching files
+ html_files = [
+ name for name in zf.namelist()
+ if post_pattern.search(name) and "/feed/" not in name and "/embed/" not in name
+ ]
+
+ console.print(f"[green]Found {len(html_files)} blog post files[/green]")
+
+ with Progress() as progress:
+ task = progress.add_task("[cyan]Parsing blog posts...", total=len(html_files))
+
+ for filepath in html_files:
+ match = post_pattern.search(filepath)
+ if not match:
+ progress.advance(task)
+ continue
+
+ slug = match.group(1)
+
+ # Skip duplicates and special pages
+ if slug in seen_slugs:
+ progress.advance(task)
+ continue
+
+ # Skip non-post pages
+ skip_patterns = ["page", "category", "tag", "author", "feed", "wp-", "uploads"]
+ if any(slug.startswith(p) for p in skip_patterns):
+ progress.advance(task)
+ continue
+
+ seen_slugs.add(slug)
+
+ try:
+ with zf.open(filepath) as f:
+ html_content = f.read().decode("utf-8", errors="replace")
+
+ post = parse_blog_html(html_content, post_id, slug)
+ if post:
+ posts.append(post)
+ post_id += 1
+ except Exception as e:
+ console.print(f"[yellow]Warning: Could not parse {slug}: {e}[/yellow]")
+
+ progress.advance(task)
+
+ console.print(f"[green]Parsed {len(posts)} blog posts[/green]")
+
+ if output_path:
+ console.print(f"[cyan]Saving to {output_path}...[/cyan]")
+ with open(output_path, "w", encoding="utf-8") as f:
+ json.dump([p.to_dict() for p in posts], f, ensure_ascii=False, indent=2)
+ console.print(f"[green]Saved {len(posts)} blog posts to {output_path}[/green]")
+
+ return posts
+
+
+def merge_with_wiki_articles(blog_posts: list[BlogPost], wiki_articles_path: Path, output_path: Path):
+ """Merge blog posts with existing wiki articles."""
+ console.print(f"[cyan]Loading existing wiki articles from {wiki_articles_path}...[/cyan]")
+
+ with open(wiki_articles_path, "r", encoding="utf-8") as f:
+ wiki_articles = json.load(f)
+
+ console.print(f"[green]Loaded {len(wiki_articles)} wiki articles[/green]")
+
+ # Convert blog posts to same format as wiki articles
+ for post in blog_posts:
+ wiki_articles.append({
+ "id": post.id,
+ "title": f"[Blog] {post.title}", # Prefix to distinguish from wiki articles
+ "content": post.content,
+ "plain_text": post.plain_text,
+ "categories": post.categories,
+ "links": post.links,
+ "external_links": post.external_links,
+ "timestamp": post.timestamp,
+ "contributor": post.contributor,
+ })
+
+ console.print(f"[cyan]Saving merged articles to {output_path}...[/cyan]")
+ with open(output_path, "w", encoding="utf-8") as f:
+ json.dump(wiki_articles, f, ensure_ascii=False, indent=2)
+
+ console.print(f"[green]Saved {len(wiki_articles)} total articles[/green]")
+
+
+def main():
+ """CLI entry point for parsing blog content."""
+ # Look for blog zip file
+ blog_zip = Path("/mnt/c/Users/jeffe/Downloads/blog.p2pfoundation.net.zip")
+
+ if not blog_zip.exists():
+ console.print(f"[red]Blog zip not found at {blog_zip}[/red]")
+ return
+
+ # Parse blog posts
+ blog_output = settings.data_dir / "blog_posts.json"
+ posts = parse_blog_zip(blog_zip, blog_output)
+
+ # Merge with wiki articles
+ wiki_articles = settings.data_dir / "articles.json"
+ if wiki_articles.exists():
+ merged_output = settings.data_dir / "articles_with_blog.json"
+ merge_with_wiki_articles(posts, wiki_articles, merged_output)
+ console.print(f"[green]Merged articles saved to {merged_output}[/green]")
+ console.print("[yellow]To use merged articles, rename articles_with_blog.json to articles.json and re-run embeddings[/yellow]")
+
+
+if __name__ == "__main__":
+ main()
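The title handling in `parse_blog_html` above (strip the " | P2P Foundation" suffix from the `<title>` tag, else derive a readable title from the URL slug) is easy to exercise in isolation. A standalone reimplementation of just that branch, for illustration:

```python
from typing import Optional

SITE_SUFFIX = " | P2P Foundation"


def derive_title(title_tag_text: Optional[str], slug: str) -> str:
    """Mirror of the fallback in parse_blog_html: prefer the <title> tag text,
    minus the site suffix; otherwise prettify the URL slug."""
    if title_tag_text:
        return title_tag_text.replace(SITE_SUFFIX, "").strip()
    return slug.replace("-", " ").title()
```

Note that `str.title()` capitalizes every word, so slug-derived titles like "Sharing Economy Projects" are a best-effort guess rather than the post's real headline.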
diff --git a/src/config.py b/src/config.py
index 042de5d..81c407c 100644
--- a/src/config.py
+++ b/src/config.py
@@ -29,7 +29,8 @@ class Settings(BaseSettings):
use_ollama_for_chat: bool = True # Use Ollama for simple Q&A
# MediaWiki
- mediawiki_api_url: str = "" # Set if you have a live wiki API
+ mediawiki_api_url: str = "https://wiki.p2pfoundation.net/api.php"
+ wiki_cookie_file: Path = Path("/tmp/wiki_cookies.txt")
# Server
host: str = "0.0.0.0"
diff --git a/src/llm.py b/src/llm.py
index d5028f7..0569bad 100644
--- a/src/llm.py
+++ b/src/llm.py
@@ -34,7 +34,7 @@ class LLMClient:
messages.append({"role": "system", "content": system})
messages.append({"role": "user", "content": prompt})
- async with httpx.AsyncClient(timeout=120.0) as client:
+ async with httpx.AsyncClient(timeout=300.0) as client: # 5 min for large content
response = await client.post(
f"{self.ollama_url}/api/chat",
json={
diff --git a/web/index.html b/web/index.html
index 166127f..4bdf7be 100644
--- a/web/index.html
+++ b/web/index.html
@@ -366,6 +366,271 @@
color: var(--text-secondary);
}
+ /* Wiki Drafts Panel */
+ .drafts-container {
+ background: var(--bg-secondary);
+ border-radius: 12px;
+ padding: 30px;
+ }
+
+ .auth-status {
+ display: flex;
+ align-items: center;
+ gap: 10px;
+ padding: 15px;
+ background: var(--bg-primary);
+ border-radius: 8px;
+ margin-bottom: 20px;
+ }
+
+ .auth-status.authenticated {
+ border-left: 4px solid var(--success);
+ }
+
+ .auth-status.not-authenticated {
+ border-left: 4px solid var(--accent);
+ }
+
+ .auth-badge {
+ padding: 4px 12px;
+ border-radius: 20px;
+ font-size: 0.85em;
+ font-weight: 600;
+ }
+
+ .auth-badge.admin {
+ background: var(--success);
+ color: var(--bg-primary);
+ }
+
+ .auth-badge.user {
+ background: var(--bg-tertiary);
+ color: var(--text-primary);
+ }
+
+ .draft-card {
+ background: var(--bg-primary);
+ border-radius: 8px;
+ padding: 20px;
+ margin-bottom: 15px;
+ border: 1px solid var(--border);
+ transition: border-color 0.2s;
+ }
+
+ .draft-card:hover {
+ border-color: var(--accent);
+ }
+
+ .draft-header {
+ display: flex;
+ justify-content: space-between;
+ align-items: flex-start;
+ margin-bottom: 10px;
+ }
+
+ .draft-title {
+ font-size: 1.2em;
+ font-weight: 600;
+ color: var(--text-primary);
+ }
+
+ .draft-title a {
+ color: inherit;
+ text-decoration: none;
+ }
+
+ .draft-title a:hover {
+ color: var(--accent);
+ }
+
+ .draft-meta {
+ color: var(--text-secondary);
+ font-size: 0.85em;
+ margin-bottom: 15px;
+ }
+
+ .draft-preview {
+ background: var(--bg-secondary);
+ border-radius: 6px;
+ padding: 15px;
+ max-height: 200px;
+ overflow-y: auto;
+ font-size: 0.9em;
+ margin-bottom: 15px;
+ display: none;
+ }
+
+ .draft-preview.show {
+ display: block;
+ }
+
+ .draft-actions {
+ display: flex;
+ gap: 10px;
+ align-items: center;
+ }
+
+ .btn-preview {
+ padding: 8px 16px;
+ background: var(--bg-tertiary);
+ border: 1px solid var(--border);
+ border-radius: 6px;
+ color: var(--text-primary);
+ cursor: pointer;
+ font-size: 0.9em;
+ transition: all 0.2s;
+ }
+
+ .btn-preview:hover {
+ background: var(--bg-secondary);
+ border-color: var(--accent);
+ }
+
+ .btn-approve-draft {
+ padding: 10px 24px;
+ background: var(--success);
+ border: none;
+ border-radius: 6px;
+ color: var(--bg-primary);
+ font-weight: 600;
+ cursor: pointer;
+ font-size: 0.95em;
+ transition: all 0.2s;
+ }
+
+ .btn-approve-draft:hover {
+ opacity: 0.9;
+ transform: translateY(-1px);
+ }
+
+ .btn-approve-draft:disabled {
+ opacity: 0.5;
+ cursor: not-allowed;
+ transform: none;
+ }
+
+ .btn-view-wiki {
+ padding: 8px 16px;
+ background: transparent;
+ border: 1px solid var(--accent);
+ border-radius: 6px;
+ color: var(--accent);
+ cursor: pointer;
+ font-size: 0.9em;
+ text-decoration: none;
+ transition: all 0.2s;
+ }
+
+ .btn-view-wiki:hover {
+ background: var(--accent);
+ color: white;
+ }
+
+ /* Confirmation Modal */
+ .modal-overlay {
+ position: fixed;
+ top: 0;
+ left: 0;
+ right: 0;
+ bottom: 0;
+ background: rgba(0, 0, 0, 0.7);
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ z-index: 1000;
+ opacity: 0;
+ visibility: hidden;
+ transition: all 0.2s;
+ }
+
+ .modal-overlay.show {
+ opacity: 1;
+ visibility: visible;
+ }
+
+ .modal {
+ background: var(--bg-secondary);
+ border-radius: 12px;
+ padding: 30px;
+ max-width: 500px;
+ width: 90%;
+ transform: scale(0.9);
+ transition: transform 0.2s;
+ }
+
+ .modal-overlay.show .modal {
+ transform: scale(1);
+ }
+
+ .modal h3 {
+ margin-bottom: 15px;
+ color: var(--text-primary);
+ }
+
+ .modal p {
+ color: var(--text-secondary);
+ margin-bottom: 20px;
+ line-height: 1.6;
+ }
+
+ .modal-actions {
+ display: flex;
+ gap: 10px;
+ justify-content: flex-end;
+ }
+
+ .btn-cancel {
+ padding: 10px 20px;
+ background: var(--bg-tertiary);
+ border: 1px solid var(--border);
+ border-radius: 6px;
+ color: var(--text-primary);
+ cursor: pointer;
+ }
+
+ .btn-confirm {
+ padding: 10px 20px;
+ background: var(--success);
+ border: none;
+ border-radius: 6px;
+ color: var(--bg-primary);
+ font-weight: 600;
+ cursor: pointer;
+ }
+
+ .refresh-btn {
+ padding: 8px 16px;
+ background: var(--bg-tertiary);
+ border: 1px solid var(--border);
+ border-radius: 6px;
+ color: var(--text-primary);
+ cursor: pointer;
+ margin-left: auto;
+ }
+
+ .refresh-btn:hover {
+ border-color: var(--accent);
+ }
+
+ .drafts-header {
+ display: flex;
+ align-items: center;
+ margin-bottom: 20px;
+ }
+
+ .drafts-header h2 {
+ margin-right: 20px;
+ }
+
+ .draft-count {
+ background: var(--accent);
+ color: white;
+ padding: 4px 12px;
+ border-radius: 20px;
+ font-size: 0.85em;
+ font-weight: 600;
+ }
+
/* Markdown-like formatting */
.message-content p { margin-bottom: 10px; }
.message-content ul, .message-content ol { margin-left: 20px; margin-bottom: 10px; }
@@ -381,6 +646,7 @@
             <button class="tab" data-tab="chat">Chat</button>
             <button class="tab" data-tab="ingress">Ingress</button>
             <button class="tab" data-tab="review">Review Queue</button>
+            <button class="tab" data-tab="drafts">Wiki Drafts</button>
@@ -430,6 +696,43 @@
+        <!-- Wiki Drafts Panel -->
+        <div id="drafts-panel" class="panel">
+            <div class="drafts-container">
+                <div class="drafts-header">
+                    <h2>Wiki Drafts</h2>
+                    <span class="draft-count" id="draft-count">0</span>
+                    <button class="refresh-btn" id="refresh-drafts">Refresh</button>
+                </div>
+
+                <div class="auth-status" id="auth-status">
+                    Checking authentication...
+                </div>
+
+                <p>
+                    Review and approve draft articles to publish them to the main wiki namespace.
+                </p>
+
+                <div id="drafts-list">
+                    Loading drafts from wiki...
+                </div>
+            </div>
+        </div>
+
+        <!-- Confirmation Modal -->
+        <div class="modal-overlay" id="confirm-modal">
+            <div class="modal">
+                <h3>Approve Draft Article?</h3>
+                <p>
+                    This will move the draft to the main wiki namespace and make it publicly visible.
+                </p>
+                <div class="modal-actions">
+                    <button class="btn-cancel">Cancel</button>
+                    <button class="btn-confirm">Approve</button>
+                </div>
+            </div>
+        </div>