Code-Graph-RAG

An accurate Retrieval-Augmented Generation (RAG) system that analyzes multi-language codebases using Tree-sitter, builds comprehensive knowledge graphs, and enables natural language querying of codebase structure and relationships, as well as editing capabilities.
Change github.com to gitcgr.com in any repo URL; that's it, only 3 letters! Get an interactive graph of the entire codebase structure. Try it now: gitcgr.com

| Language | Status | Extensions | Functions | Classes/Structs | Modules | Package Detection | Additional Features |
|---|---|---|---|---|---|---|---|
| C | Fully Supported | .c | ✅ | ✅ | ✅ | ✅ | Functions, structs, unions, enums, preprocessor includes |
| C++ | Fully Supported | .cpp, .h, .hpp, .cc, .cxx, .hxx, .hh, .ixx, .cppm, .ccm | ✅ | ✅ | ✅ | ✅ | Constructors, destructors, operator overloading, templates, lambdas, C++20 modules, namespaces |
| Java | Fully Supported | .java | ✅ | ✅ | ✅ | - | Generics, annotations, modern features (records/sealed classes), concurrency, reflection |
| JavaScript | Fully Supported | .js, .jsx | ✅ | ✅ | ✅ | - | ES6 modules, CommonJS, prototype methods, object methods, arrow functions |
| Lua | Fully Supported | .lua | ✅ | - | ✅ | - | Local/global functions, metatables, closures, coroutines |
| PHP | Fully Supported | .php | ✅ | ✅ | ✅ | - | Classes, interfaces, traits, enums, namespaces, PHP 8 attributes |
| Python | Fully Supported | .py | ✅ | ✅ | ✅ | ✅ | Type inference, decorators, nested functions |
| Rust | Fully Supported | .rs | ✅ | ✅ | ✅ | ✅ | impl blocks, associated functions |
| TypeScript | Fully Supported | .ts, .tsx | ✅ | ✅ | ✅ | - | Interfaces, type aliases, enums, namespaces, ES6/CommonJS modules |
| C# | In Development | .cs | ✅ | ✅ | ✅ | - | Classes, interfaces, generics (planned) |
| Go | In Development | .go | ✅ | ✅ | ✅ | - | Methods, type declarations |
| Scala | In Development | .scala, .sc | ✅ | ✅ | ✅ | - | Case classes, objects |
The parser also reads pyproject.toml to understand external dependencies.

The system consists of two main components; the RAG CLI (codebase_rag/) provides an interactive interface for querying the stored knowledge graph.

Prerequisites:

- cmake
- ripgrep (rg) (required for shell command text searching)
- uv package manager

On macOS:
brew install cmake ripgrep
On Linux (Ubuntu/Debian):
sudo apt-get update
sudo apt-get install cmake ripgrep
On Linux (CentOS/RHEL):
sudo yum install cmake
sudo dnf install ripgrep
# Note: ripgrep may need to be installed from EPEL or via cargo
git clone https://github.com/vitali87/code-graph-rag.git
cd code-graph-rag
For basic Python support:
uv sync
For full multi-language support:
uv sync --extra treesitter-full
For development (including tests and pre-commit hooks):
make dev
This installs all dependencies and sets up pre-commit hooks automatically.
This installs Tree-sitter grammars for all supported languages (see Multi-Language Support section).
cp .env.example .env
# Edit .env with your configuration (see options below)
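The `.env` file is a plain list of KEY=VALUE lines. As an illustration only (the project reads its configuration through its own settings layer, so this helper is hypothetical), such a file can be parsed like this:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# .env file
ORCHESTRATOR_PROVIDER=ollama
ORCHESTRATOR_MODEL=llama3.2
"""
config = parse_env(sample)
print(config["ORCHESTRATOR_PROVIDER"])  # ollama
```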
The new provider-explicit configuration supports mixing different providers for orchestrator and cypher models.
# .env file
ORCHESTRATOR_PROVIDER=ollama
ORCHESTRATOR_MODEL=llama3.2
ORCHESTRATOR_ENDPOINT=http://localhost:11434/v1
CYPHER_PROVIDER=ollama
CYPHER_MODEL=codellama
CYPHER_ENDPOINT=http://localhost:11434/v1
# .env file
ORCHESTRATOR_PROVIDER=openai
ORCHESTRATOR_MODEL=gpt-4o
ORCHESTRATOR_API_KEY=sk-your-openai-key
CYPHER_PROVIDER=openai
CYPHER_MODEL=gpt-4o-mini
CYPHER_API_KEY=sk-your-openai-key
# .env file
ORCHESTRATOR_PROVIDER=google
ORCHESTRATOR_MODEL=gemini-2.5-pro
ORCHESTRATOR_API_KEY=your-google-api-key
CYPHER_PROVIDER=google
CYPHER_MODEL=gemini-2.5-flash
CYPHER_API_KEY=your-google-api-key
# .env file - Google orchestrator + Ollama cypher
ORCHESTRATOR_PROVIDER=google
ORCHESTRATOR_MODEL=gemini-2.5-pro
ORCHESTRATOR_API_KEY=your-google-api-key
CYPHER_PROVIDER=ollama
CYPHER_MODEL=codellama
CYPHER_ENDPOINT=http://localhost:11434/v1
Get your Google API key from Google AI Studio.
Install and run Ollama:
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh
# Pull required models
ollama pull llama3.2
# Or try other models like:
# ollama pull llama3
# ollama pull mistral
# ollama pull codellama
# Ollama will automatically start serving on localhost:11434
Note: Local models provide privacy and no API costs, but may have lower accuracy compared to cloud models like Gemini.
docker compose up -d
# If installed from PyPI:
cgr --help
# If running from source:
uv run cgr --help
Note: When running from source (cloned repo), prefix all cgr commands below with uv run, e.g., uv run cgr start ...
Use the Makefile for common development tasks:
| Command | Description |
|---|---|
| make help | Show this help message |
| make all | Install everything for full development environment (deps, grammars, hooks, tests) |
| make install | Install project dependencies with full language support |
| make python | Install project dependencies for Python only |
| make dev | Setup development environment (install deps + pre-commit hooks) |
| make test | Run unit tests only (fast, no Docker) |
| make test-parallel | Run unit tests in parallel (fast, no Docker) |
| make test-integration | Run integration tests (requires Docker) |
| make test-all | Run all tests including integration and e2e (requires Docker) |
| make test-parallel-all | Run all tests in parallel including integration and e2e (requires Docker) |
| make clean | Clean up build artifacts and cache |
| make build-grammars | Build grammar submodules |
| make watch | Watch repository for changes and update graph in real-time |
| make readme | Regenerate README.md from codebase |
The Code-Graph-RAG system offers four main modes of operation:
Parse and ingest a multi-language repository into the knowledge graph:
For the first repository (clean start):
cgr start --repo-path /path/to/repo1 --update-graph --clean
For additional repositories (preserve existing data):
cgr start --repo-path /path/to/repo2 --update-graph
cgr start --repo-path /path/to/repo3 --update-graph
Control Memgraph batch flushing:
# Flush every 5,000 records instead of the default from settings
cgr start --repo-path /path/to/repo --update-graph \
--batch-size 5000
The system automatically detects and processes files for all supported languages (see Multi-Language Support section).
Start the interactive RAG CLI:
cgr start --repo-path /path/to/your/repo
For active development, you can keep your knowledge graph automatically synchronized with code changes using the realtime updater. This is particularly useful when you're actively modifying code and want the AI assistant to always work with the latest codebase structure.
What it does:
Ignored directories (.git, node_modules, etc.) are skipped automatically.

How to use:
Run the realtime updater in a separate terminal:
# Using Python directly
python realtime_updater.py /path/to/your/repo
# Or using the Makefile
make watch REPO_PATH=/path/to/your/repo
With custom Memgraph settings:
# Python
python realtime_updater.py /path/to/your/repo --host localhost --port 7687 --batch-size 1000
# Makefile
make watch REPO_PATH=/path/to/your/repo HOST=localhost PORT=7687 BATCH_SIZE=1000
Multi-terminal workflow:
# Terminal 1: Start the realtime updater
python realtime_updater.py ~/my-project
# Terminal 2: Run the AI assistant
cgr start --repo-path ~/my-project
Performance note: The updater currently recalculates all CALLS relationships on every file change to ensure consistency. This prevents "island" problems where changes in one file aren't reflected in relationships from other files, but may impact performance on very large codebases with frequent changes. Note: Optimization of this behavior is a work in progress.
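The `--batch-size` flushing behavior used by the updater can be pictured as a buffer that writes pending records to the database once enough accumulate. A minimal sketch, with a plain callback standing in for the real Memgraph writer (the class and callback here are illustrative, not the project's actual code):

```python
from typing import Callable

class ChangeBuffer:
    """Buffer change records and flush them in batches of batch_size."""

    def __init__(self, batch_size: int, flush: Callable[[list[str]], None]):
        self.batch_size = batch_size
        self.flush = flush
        self.pending: list[str] = []

    def add(self, record: str) -> None:
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.drain()

    def drain(self) -> None:
        """Flush whatever is pending (also called on shutdown)."""
        if self.pending:
            self.flush(self.pending)
            self.pending = []

flushed: list[list[str]] = []
buf = ChangeBuffer(batch_size=3, flush=flushed.append)
for path in ["a.py", "b.py", "c.py", "d.py"]:
    buf.add(path)
buf.drain()  # flush the remainder
print(flushed)  # [['a.py', 'b.py', 'c.py'], ['d.py']]
```

A larger batch size means fewer round-trips to Memgraph at the cost of more buffered state between flushes.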
CLI Arguments:
- repo_path (required): Path to repository to watch
- --host: Memgraph host (default: localhost)
- --port: Memgraph port (default: 7687)
- --batch-size: Number of buffered nodes/relationships before flushing to Memgraph

Specify Custom Models:
# Use specific local models
cgr start --repo-path /path/to/your/repo \
--orchestrator ollama:llama3.2 \
--cypher ollama:codellama
# Use specific Gemini models
cgr start --repo-path /path/to/your/repo \
--orchestrator google:gemini-2.0-flash-thinking-exp-01-21 \
--cypher google:gemini-2.5-flash-lite-preview-06-17
# Use mixed providers
cgr start --repo-path /path/to/your/repo \
--orchestrator google:gemini-2.0-flash-thinking-exp-01-21 \
--cypher ollama:codellama
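The `--orchestrator` and `--cypher` flags both take a provider:model string. Parsing that format is straightforward; the helper below is a hypothetical sketch, not the project's actual function:

```python
def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split a provider:model flag value, e.g. 'ollama:llama3.2'."""
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected provider:model, got {spec!r}")
    return provider, model

print(parse_model_spec("google:gemini-2.5-pro"))  # ('google', 'gemini-2.5-pro')
print(parse_model_spec("ollama:codellama"))       # ('ollama', 'codellama')
```

Splitting on the first colon keeps model IDs that themselves contain colons (e.g. Ollama tags like llama3.2:latest) intact.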
Example queries work across all supported languages.
For programmatic access and integration with other tools, you can export the entire knowledge graph to JSON:
Export during graph update:
cgr start --repo-path /path/to/repo --update-graph --clean -o my_graph.json
Export existing graph without updating:
cgr export -o my_graph.json
Optional: adjust Memgraph batching during export:
cgr export -o my_graph.json --batch-size 5000
Working with exported data:
from codebase_rag.graph_loader import load_graph
# Load the exported graph
graph = load_graph("my_graph.json")
# Get summary statistics
summary = graph.summary()
print(f"Total nodes: {summary['total_nodes']}")
print(f"Total relationships: {summary['total_relationships']}")
# Find specific node types
functions = graph.find_nodes_by_label("Function")
classes = graph.find_nodes_by_label("Class")
# Analyze relationships
for func in functions[:5]:
    relationships = graph.get_relationships_for_node(func.node_id)
    print(f"Function {func.properties['name']} has {len(relationships)} relationships")
Example analysis script:
python examples/graph_export_example.py my_graph.json
This provides a reliable, programmatic way to access your codebase structure without LLM restrictions.
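One common analysis on exported data is counting incoming CALLS edges to find heavily used functions. The sketch below operates on plain dictionaries; the exact field names ("type", "source", "target") and the qualified names are assumptions for illustration, not the export's guaranteed schema:

```python
from collections import Counter

# Minimal stand-in for exported relationships (illustrative data).
relationships = [
    {"type": "CALLS", "source": "app.main", "target": "app.utils.parse"},
    {"type": "CALLS", "source": "app.cli", "target": "app.utils.parse"},
    {"type": "CALLS", "source": "app.main", "target": "app.cli"},
    {"type": "DEFINES", "source": "app", "target": "app.main"},
]

# Count how many call sites target each function.
call_counts = Counter(
    rel["target"] for rel in relationships if rel["type"] == "CALLS"
)
for name, count in call_counts.most_common():
    print(f"{name}: called from {count} site(s)")
# app.utils.parse: called from 2 site(s)
# app.cli: called from 1 site(s)
```

The same pattern extends to fan-out analysis (group by "source") or dead-code candidates (functions with zero incoming CALLS edges).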
For AI-powered codebase optimization with best practices guidance:
Basic optimization for a specific language:
cgr optimize python --repo-path /path/to/your/repo
Optimization with reference documentation:
cgr optimize python \
--repo-path /path/to/your/repo \
--reference-document /path/to/best_practices.md
Using specific models for optimization:
cgr optimize javascript \
--repo-path /path/to/frontend \
--orchestrator google:gemini-2.0-flash-thinking-exp-01-21
# Optional: override Memgraph batch flushing during optimization
cgr optimize javascript --repo-path /path/to/frontend \
--batch-size 5000
Supported Languages for Optimization:
All supported languages: python, javascript, typescript, rust, go, java, scala, c, cpp
How It Works:
Example Optimization Session:
Starting python optimization session...

The agent will analyze your python codebase and propose specific
optimizations. You'll be asked to approve each suggestion before
implementation. Type 'exit' or 'quit' to end the session.

Analyzing codebase structure...
Found 23 Python modules with potential optimizations

Optimization Suggestion #1:
File: src/data_processor.py
Issue: Using list comprehension in a loop can be optimized
Suggestion: Replace with generator expression for memory efficiency

[y/n] Do you approve this optimization?
Reference Document Support: You can provide reference documentation (like coding standards, architectural guidelines, or best practices documents) to guide the optimization process:
# Use company coding standards
cgr optimize python \
--reference-document ./docs/coding_standards.md
# Use architectural guidelines
cgr optimize java \
--reference-document ./ARCHITECTURE.md
# Use performance best practices
cgr optimize rust \
--reference-document ./docs/performance_guide.md
The agent will incorporate the guidance from your reference documents when suggesting optimizations, ensuring they align with your project's standards and architectural decisions.
Common CLI Arguments:
- --orchestrator: Specify provider:model for main operations (e.g., google:gemini-2.0-flash-thinking-exp-01-21, ollama:llama3.2)
- --cypher: Specify provider:model for graph queries (e.g., google:gemini-2.5-flash-lite-preview-06-17, ollama:codellama)
- --repo-path: Path to repository (defaults to current directory)
- --batch-size: Override Memgraph flush batch size (defaults to MEMGRAPH_BATCH_SIZE in settings)
- --reference-document: Path to reference documentation (optimization only)

Code-Graph-RAG can run as an MCP (Model Context Protocol) server, enabling seamless integration with Claude Code and other MCP clients.
claude mcp add --transport stdio code-graph-rag \
--env TARGET_REPO_PATH=/absolute/path/to/your/project \
--env CYPHER_PROVIDER=openai \
--env CYPHER_MODEL=gpt-4 \
--env CYPHER_API_KEY=your-api-key \
-- uv run --directory /path/to/code-graph-rag code-graph-rag mcp-server
| Tool | Description |
|---|---|
| list_projects | List all indexed projects in the knowledge graph database. Returns a list of project names that have been indexed. |
| delete_project | Delete a specific project from the knowledge graph database. This removes all nodes associated with the project while preserving other projects. Use list_projects first to see available projects. |
| wipe_database | WARNING: Completely wipe the entire database, removing ALL indexed projects. This cannot be undone. Use delete_project for removing individual projects. |
| index_repository | WARNING: Clears all data for the current project including its embeddings. Parse and ingest the repository into the Memgraph knowledge graph. Use update_repository for incremental updates. Only use when explicitly requested. |
| update_repository | Update the repository in the Memgraph knowledge graph without clearing existing data. Use this for incremental updates. |
| query_code_graph | Query the codebase knowledge graph using natural language. Use semantic_search unless you know the exact names of classes/functions you are searching for. Ask questions like 'What functions call UserService.create_user?' or 'Show me all classes that implement the Repository interface'. |
| get_code_snippet | Retrieve source code for a function, class, or method by its qualified name. Returns the source code, file path, line numbers, and docstring. |
| surgical_replace_code | Surgically replace an exact code block in a file using diff-match-patch. Only modifies the exact target block, leaving the rest unchanged. |
| read_file | Read the contents of a file from the project. Supports pagination for large files. |
> Index this repository
> What functions call UserService.create_user?
> Update the login function to add rate limiting
For detailed setup, see Claude Code Setup Guide.
The knowledge graph uses the following node types and relationships:
| Label | Properties |
|---|---|
| Project | {name: string} |
| Package | {qualified_name: string, name: string, path: string, absolute_path: string} |
| Folder | {path: string, name: string, absolute_path: string} |
| File | {path: string, name: string, extension: string, absolute_path: string} |
| Module | {qualified_name: string, name: string, path: string, absolute_path: string} |
| Class | {qualified_name: string, name: string, decorators: list[string], path: string, absolute_path: string} |
| Function | {qualified_name: string, name: string, decorators: list[string], path: string, absolute_path: string} |
| Method | {qualified_name: string, name: string, decorators: list[string], path: string, absolute_path: string} |
| Interface | {qualified_name: string, name: string, path: string, absolute_path: string} |
| Enum | {qualified_name: string, name: string, path: string, absolute_path: string} |
| Type | {qualified_name: string, name: string} |
| Union | {qualified_name: string, name: string} |
| ModuleInterface | {qualified_name: string, name: string, path: string, absolute_path: string} |
Function, class, and module node types captured per language include:

- C: enum_specifier, function_definition, struct_specifier, union_specifier
- C++: class_specifier, declaration, enum_specifier, field_declaration, function_definition, lambda_expression, struct_specifier, template_declaration, union_specifier
- Java: annotation_type_declaration, class_declaration, constructor_declaration, enum_declaration, interface_declaration, method_declaration, record_declaration
- JavaScript: arrow_function, class, class_declaration, function_declaration, function_expression

| Source | Relationship | Target |
|---|---|---|
| Project, Package, Folder | CONTAINS_PACKAGE | Package |
| Project, Package, Folder | CONTAINS_FOLDER | Folder |
| Project, Package, Folder | CONTAINS_FILE | File |
| Project, Package, Folder | CONTAINS_MODULE | Module |
| Module | DEFINES | Class, Function |
| Class | DEFINES_METHOD | Method |
| Module | IMPORTS | Module |
| Module | EXPORTS | Class, Function |
| Module | EXPORTS_MODULE | ModuleInterface |
| Module | IMPLEMENTS_MODULE | ModuleImplementation |
| Class | INHERITS | Class |
| Class | IMPLEMENTS | Interface |
| Method | OVERRIDES | Method |
| ModuleImplementation | IMPLEMENTS | ModuleInterface |
| Project | DEPENDS_ON_EXTERNAL | ExternalPackage |
| Function, Method | CALLS | Function, Method |
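A subset of the relationship rules above can be encoded as a small lookup table for sanity-checking edges in your own tooling. This is a hypothetical sketch, not part of the project's API, and it covers only four of the relationship types from the table:

```python
# Allowed (source labels, target labels) per relationship type,
# transcribed from a subset of the schema table above.
SCHEMA: dict[str, tuple[set[str], set[str]]] = {
    "DEFINES": ({"Module"}, {"Class", "Function"}),
    "DEFINES_METHOD": ({"Class"}, {"Method"}),
    "INHERITS": ({"Class"}, {"Class"}),
    "CALLS": ({"Function", "Method"}, {"Function", "Method"}),
}

def is_valid(source: str, rel: str, target: str) -> bool:
    """Check whether a (source, relationship, target) triple fits the schema."""
    if rel not in SCHEMA:
        return False
    sources, targets = SCHEMA[rel]
    return source in sources and target in targets

print(is_valid("Module", "DEFINES", "Function"))  # True
print(is_valid("Class", "CALLS", "Function"))     # False
```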
Configuration is managed through environment variables in .env file:
- ORCHESTRATOR_PROVIDER: Provider name (google, openai, ollama)
- ORCHESTRATOR_MODEL: Model ID (e.g., gemini-2.5-pro, gpt-4o, llama3.2)
- ORCHESTRATOR_API_KEY: API key for the provider (if required)
- ORCHESTRATOR_ENDPOINT: Custom endpoint URL (if required)
- ORCHESTRATOR_PROJECT_ID: Google Cloud project ID (for Vertex AI)
- ORCHESTRATOR_REGION: Google Cloud region (default: us-central1)
- ORCHESTRATOR_PROVIDER_TYPE: Google provider type (gla or vertex)
- ORCHESTRATOR_THINKING_BUDGET: Thinking budget for reasoning models
- ORCHESTRATOR_SERVICE_ACCOUNT_FILE: Path to service account file (for Vertex AI)
- CYPHER_PROVIDER: Provider name (google, openai, ollama)
- CYPHER_MODEL: Model ID (e.g., gemini-2.5-flash, gpt-4o-mini, codellama)
- CYPHER_API_KEY: API key for the provider (if required)
- CYPHER_ENDPOINT: Custom endpoint URL (if required)
- CYPHER_PROJECT_ID: Google Cloud project ID (for Vertex AI)
- CYPHER_REGION: Google Cloud region (default: us-central1)
- CYPHER_PROVIDER_TYPE: Google provider type (gla or vertex)
- CYPHER_THINKING_BUDGET: Thinking budget for reasoning models
- CYPHER_SERVICE_ACCOUNT_FILE: Path to service account file (for Vertex AI)
- MEMGRAPH_HOST: Memgraph hostname (default: localhost)
- MEMGRAPH_PORT: Memgraph port (default: 7687)
- MEMGRAPH_HTTP_PORT: Memgraph HTTP port (default: 7444)
- LAB_PORT: Memgraph Lab port (default: 3000)
- MEMGRAPH_BATCH_SIZE: Batch size for Memgraph operations (default: 1000)
- TARGET_REPO_PATH: Default repository path (default: .)
- LOCAL_MODEL_ENDPOINT: Fallback endpoint for Ollama (default: http://localhost:11434/v1)

You can specify additional directories to exclude by creating a .cgrignore file in your repository root:
# Comments start with #
vendor
.custom_cache
my_build_output
Lines starting with # are comments. Entries in .cgrignore are merged with --exclude flags and auto-detected directories.

The agent is designed with a deliberate workflow to ensure it acts with context and precision, especially when modifying the file system.
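The merge of the three exclusion sources (.cgrignore entries, --exclude flags, and auto-detected directories) amounts to a set union. A minimal sketch, where the helper name and the auto-detected defaults are assumptions for illustration:

```python
def merged_excludes(cgrignore_text: str, cli_excludes: list[str],
                    auto_detected: set[str]) -> set[str]:
    """Union of .cgrignore entries, --exclude flags, and auto-detected dirs."""
    from_file = {
        line.strip()
        for line in cgrignore_text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    }
    return from_file | set(cli_excludes) | auto_detected

excludes = merged_excludes(
    "# Comments start with #\nvendor\n.custom_cache\n",
    cli_excludes=["my_build_output"],
    auto_detected={".git", "node_modules"},
)
print(sorted(excludes))
```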
The agent has access to a suite of tools to understand and interact with the codebase:
| Tool | Description |
|---|---|
| query_graph | Query the codebase knowledge graph using natural language questions. Ask in plain English about classes, functions, methods, dependencies, or code structure. Examples: 'Find all functions that call each other', 'What classes are in the user module', 'Show me functions with the longest call chains'. |
| read_file | Reads the content of text-based files. For documents like PDFs or images, use the 'analyze_document' tool instead. |
| create_file | Creates a new file with content. IMPORTANT: Check file existence first! Overwrites completely WITHOUT showing diff. Use only for new files, not existing file modifications. |
| replace_code | Surgically replaces specific code blocks in files. Requires exact target code and replacement. Only modifies the specified block, leaving rest of file unchanged. True surgical patching. |
| list_directory | Lists the contents of a directory to explore the codebase. |
| analyze_document | Analyzes documents (PDFs, images) to answer questions about their content. |
| execute_shell | Executes shell commands from allowlist. Read-only commands run without approval; write operations require user confirmation. |
| semantic_search | Performs a semantic search for functions based on a natural language query describing their purpose, returning a list of potential matches with similarity scores. |
| get_function_source | Retrieves the source code for a specific function or method using its internal node ID, typically obtained from a semantic search result. |
| get_code_snippet | Retrieves the source code for a specific function, class, or method using its full qualified name. |
The agent uses AST-based function targeting with Tree-sitter for precise code modifications.
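The "surgical" guarantee — the target block must match exactly, and only that block changes — can be illustrated with a simplified sketch. The real replace_code tool uses diff-match-patch and AST targeting; this string-based version is an assumption-laden stand-in that only demonstrates the exact-match-or-refuse contract:

```python
def surgical_replace(source: str, target: str, replacement: str) -> str:
    """Replace one exact occurrence of `target`; refuse ambiguous matches."""
    count = source.count(target)
    if count == 0:
        raise ValueError("target block not found")
    if count > 1:
        raise ValueError("target block is ambiguous (multiple matches)")
    return source.replace(target, replacement, 1)

code = "def greet():\n    print('hi')\n\ndef bye():\n    print('bye')\n"
patched = surgical_replace(code, "print('hi')", "print('hello')")
print("print('hello')" in patched)  # True
print("print('bye')" in patched)    # True (the rest of the file is untouched)
```

Refusing ambiguous matches is the important design choice: a blind replace-first could silently patch the wrong occurrence.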
Code-Graph-RAG makes it easy to add support for any language that has a Tree-sitter grammar. The system automatically handles grammar compilation and integration.
Recommendation: While you can add languages yourself, we recommend waiting for official full support to ensure optimal parsing quality, comprehensive feature coverage, and robust integration. The languages marked as "In Development" above will receive dedicated optimization and testing.

Request Support: If you want a specific language to be officially supported, please submit an issue with your language request.
Use the built-in language management tool to add any Tree-sitter supported language:
# Add a language using the standard tree-sitter repository
cgr language add-grammar <language-name>
# Examples:
cgr language add-grammar c-sharp
cgr language add-grammar php
cgr language add-grammar ruby
cgr language add-grammar kotlin
For languages hosted outside the standard tree-sitter organization:
# Add a language from a custom repository
cgr language add-grammar --grammar-url https://github.com/custom/tree-sitter-mylang
When you add a language, the tool automatically:
- Registers the grammar as a submodule and reads its tree-sitter.json metadata
- Auto-detects function node types (method_declaration, function_definition, etc.)
- Auto-detects class node types (class_declaration, struct_declaration, etc.)
- Auto-detects module node types (compilation_unit, source_file, etc.)
- Auto-detects call node types (call_expression, method_invocation, etc.)
- Updates codebase_rag/language_config.py

$ cgr language add-grammar c-sharp
Using default tree-sitter URL: https://github.com/tree-sitter/tree-sitter-c-sharp
Adding submodule from https://github.com/tree-sitter/tree-sitter-c-sharp...
✅ Successfully added submodule at grammars/tree-sitter-c-sharp
Auto-detected language: c-sharp
Auto-detected file extensions: ['cs']
Auto-detected node types:
  Functions: ['destructor_declaration', 'method_declaration', 'constructor_declaration']
  Classes: ['struct_declaration', 'enum_declaration', 'interface_declaration', 'class_declaration']
  Modules: ['compilation_unit', 'file_scoped_namespace_declaration', 'namespace_declaration']
  Calls: ['invocation_expression']
✅ Language 'c-sharp' has been added to the configuration!
Updated codebase_rag/language_config.py
# List all configured languages
cgr language list-languages
# Remove a language (this also removes the git submodule unless --keep-submodule is specified)
cgr language remove-language <language-name>
The system uses a configuration-driven approach for language support. Each language is defined in codebase_rag/language_config.py with the following structure:
"language-name": LanguageConfig(
name="language-name",
file_extensions=[".ext1", ".ext2"],
function_node_types=["function_declaration", "method_declaration"],
class_node_types=["class_declaration", "struct_declaration"],
module_node_types=["compilation_unit", "source_file"],
call_node_types=["call_expression", "method_invocation"],
),
Grammar not found: If the automatic URL doesn't work, use a custom URL:
cgr language add-grammar --grammar-url https://github.com/custom/tree-sitter-mylang
Version incompatibility: If you get "Incompatible Language version" errors, update your tree-sitter package:
uv add tree-sitter@latest
Missing node types: The tool automatically detects common node patterns, but you can manually adjust the configuration in language_config.py if needed.
You can build a binary of the application using the build_binary.py script. This script uses PyInstaller to package the application and its dependencies into a single executable.
python build_binary.py
The resulting binary will be located in the dist directory.
Check Memgraph connection:

docker-compose ps

View the database in Memgraph Lab (default: http://localhost:3000).

For local models:

- ollama list
- ollama pull llama3
- curl http://localhost:11434/v1/models
- ollama logs

Please see CONTRIBUTING.md for detailed contribution guidelines.
Good first PRs can be found among the TODO issues.
For issues or questions:
Code-Graph-RAG is open source and free to use. For organizations that need more, we offer fully managed cloud-hosted solutions and on-premise deployments:
We also offer custom development, integration consulting, technical support contracts, and team training.
View plans & pricing at code-graph-rag.com â
Additional Makefile targets:

| Command | Description |
|---|---|
| make lint | Run ruff check |
| make format | Run ruff format |
| make typecheck | Run type checking with ty |
| make check | Run all checks: lint, typecheck, test |
| make pre-commit | Run all pre-commit checks locally (comprehensive test before commit) |
Additional MCP tools:

| Tool | Description |
|---|---|
| write_file | Write content to a file, creating it if it doesn't exist. |
| list_directory | List contents of a directory in the project. |
| semantic_search | Performs a semantic search for functions based on a natural language query describing their purpose, returning a list of potential matches with similarity scores. Requires the 'semantic' extra to be installed. |
Additional node types:

| Label | Properties |
|---|---|
| ModuleImplementation | {qualified_name: string, name: string, path: string, absolute_path: string, implements_module: string} |
| ExternalPackage | {name: string, version_spec: string} |
Captured AST node types for the remaining languages:

- JavaScript (continued): generator_function_declaration, method_definition
- Lua: function_declaration, function_definition
- PHP: anonymous_function, arrow_function, class_declaration, enum_declaration, function_definition, interface_declaration, method_declaration, trait_declaration
- Python: class_definition, function_definition
- Rust: closure_expression, enum_item, function_item, function_signature_item, impl_item, struct_item, trait_item, type_item, union_item
- TypeScript: abstract_class_declaration, arrow_function, class, class_declaration, enum_declaration, function_declaration, function_expression, function_signature, generator_function_declaration, interface_declaration, internal_module, method_definition, type_alias_declaration
- C#: anonymous_method_expression, class_declaration, constructor_declaration, destructor_declaration, enum_declaration, function_pointer_type, interface_declaration, lambda_expression, local_function_statement, method_declaration, struct_declaration
- Go: function_declaration, method_declaration, type_declaration
- Scala: class_definition, function_declaration, function_definition, object_definition, trait_definition