Compare commits
No commits in common. "develop" and "v2.1.0" have entirely different histories.
33
.github/ISSUE_TEMPLATE/bug_report.md
vendored
33
.github/ISSUE_TEMPLATE/bug_report.md
vendored
@ -6,30 +6,15 @@ labels: bug
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
<!--
|
||||
|
||||
Are you in the right place?
|
||||
- If you are looking for support on how to get your upstream server forwarding, please consider asking the community on Reddit.
|
||||
- If you are writing code changes to contribute and need to ask about the internals of the software, Gitter is the best place to ask.
|
||||
- If you think you found a bug with NPM (not Nginx, or your upstream server or MySql) then you are in the *right place.*
|
||||
|
||||
-->
|
||||
|
||||
**Checklist**
|
||||
- Have you pulled and found the error with `jc21/nginx-proxy-manager:latest` docker image?
|
||||
- Yes / No
|
||||
- Are you sure you're not using someone else's docker image?
|
||||
- Yes / No
|
||||
- Have you searched for similar issues (both open and closed)?
|
||||
- Yes / No
|
||||
- If having problems with Lets Encrypt, have you made absolutely sure your site is accessible from outside of your network?
|
||||
|
||||
**Describe the bug**
|
||||
<!-- A clear and concise description of what the bug is. -->
|
||||
|
||||
|
||||
**Nginx Proxy Manager Version**
|
||||
<!-- What version of Nginx Proxy Manager is reported on the login page? -->
|
||||
|
||||
- A clear and concise description of what the bug is.
|
||||
- What version of Nginx Proxy Manager is reported on the login page?
|
||||
|
||||
**To Reproduce**
|
||||
Steps to reproduce the behavior:
|
||||
@ -38,18 +23,14 @@ Steps to reproduce the behavior:
|
||||
3. Scroll down to '....'
|
||||
4. See error
|
||||
|
||||
|
||||
**Expected behavior**
|
||||
<!-- A clear and concise description of what you expected to happen. -->
|
||||
|
||||
A clear and concise description of what you expected to happen.
|
||||
|
||||
**Screenshots**
|
||||
<!-- If applicable, add screenshots to help explain your problem. -->
|
||||
|
||||
If applicable, add screenshots to help explain your problem.
|
||||
|
||||
**Operating System**
|
||||
<!-- Please specify if using a Rpi, Mac, orchestration tool or any other setups that might affect the reproduction of this error. -->
|
||||
|
||||
- Please specify if using a Rpi, Mac, orchestration tool or any other setups that might affect the reproduction of this error.
|
||||
|
||||
**Additional context**
|
||||
<!-- Add any other context about the problem here, docker version, browser version, logs if applicable to the problem. Too much info is better than too little. -->
|
||||
Add any other context about the problem here, docker version, browser version if applicable to the problem. Too much info is better than too little.
|
||||
|
18
.github/ISSUE_TEMPLATE/dns_challenge_request.md
vendored
18
.github/ISSUE_TEMPLATE/dns_challenge_request.md
vendored
@ -1,18 +0,0 @@
|
||||
---
|
||||
name: DNS challenge provider request
|
||||
about: Suggest a new provider to be available for a certificate DNS challenge
|
||||
title: ''
|
||||
labels: dns provider request
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
**What provider would you like to see added to NPM?**
|
||||
<!-- What is this provider called? -->
|
||||
|
||||
|
||||
**Have you checked if a certbot plugin exists?**
|
||||
<!--
|
||||
Currently NPM only supports DNS challenge providers for which a certbot plugin exists.
|
||||
You can visit pypi.org, and search for a package with the name `certbot-dns-<privider>`.
|
||||
-->
|
20
.github/ISSUE_TEMPLATE/feature_request.md
vendored
20
.github/ISSUE_TEMPLATE/feature_request.md
vendored
@ -7,26 +7,14 @@ assignees: ''
|
||||
|
||||
---
|
||||
|
||||
<!--
|
||||
|
||||
Are you in the right place?
|
||||
- If you are looking for support on how to get your upstream server forwarding, please consider asking the community on Reddit.
|
||||
- If you are writing code changes to contribute and need to ask about the internals of the software, Gitter is the best place to ask.
|
||||
- If you think you found a bug with NPM (not Nginx, or your upstream server or MySql) then you are in the *right place.*
|
||||
|
||||
-->
|
||||
|
||||
**Is your feature request related to a problem? Please describe.**
|
||||
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
|
||||
|
||||
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
|
||||
|
||||
**Describe the solution you'd like**
|
||||
<!-- A clear and concise description of what you want to happen. -->
|
||||
|
||||
A clear and concise description of what you want to happen.
|
||||
|
||||
**Describe alternatives you've considered**
|
||||
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
|
||||
|
||||
A clear and concise description of any alternative solutions or features you've considered.
|
||||
|
||||
**Additional context**
|
||||
<!-- Add any other context or screenshots about the feature request here. -->
|
||||
Add any other context or screenshots about the feature request here.
|
||||
|
2
.gitignore
vendored
2
.gitignore
vendored
@ -2,4 +2,4 @@
|
||||
.idea
|
||||
._*
|
||||
.vscode
|
||||
certbot-help.txt
|
||||
|
||||
|
10
.jenkins/config.json
Normal file
10
.jenkins/config.json
Normal file
@ -0,0 +1,10 @@
|
||||
{
|
||||
"database": {
|
||||
"engine": "mysql",
|
||||
"host": "db",
|
||||
"name": "npm",
|
||||
"user": "npm",
|
||||
"password": "npm",
|
||||
"port": 3306
|
||||
}
|
||||
}
|
124
Jenkinsfile
vendored
124
Jenkinsfile
vendored
@ -5,7 +5,6 @@ pipeline {
|
||||
options {
|
||||
buildDiscarder(logRotator(numToKeepStr: '5'))
|
||||
disableConcurrentBuilds()
|
||||
ansiColor('xterm')
|
||||
}
|
||||
environment {
|
||||
IMAGE = "nginx-proxy-manager"
|
||||
@ -43,32 +42,24 @@ pipeline {
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Versions') {
|
||||
steps {
|
||||
sh 'cat frontend/package.json | jq --arg BUILD_VERSION "${BUILD_VERSION}" \'.version = $BUILD_VERSION\' | sponge frontend/package.json'
|
||||
sh 'echo -e "\\E[1;36mFrontend Version is:\\E[1;33m $(cat frontend/package.json | jq -r .version)\\E[0m"'
|
||||
sh 'cat backend/package.json | jq --arg BUILD_VERSION "${BUILD_VERSION}" \'.version = $BUILD_VERSION\' | sponge backend/package.json'
|
||||
sh 'echo -e "\\E[1;36mBackend Version is:\\E[1;33m $(cat backend/package.json | jq -r .version)\\E[0m"'
|
||||
sh 'sed -i -E "s/(version-)[0-9]+\\.[0-9]+\\.[0-9]+(-green)/\\1${BUILD_VERSION}\\2/" README.md'
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Frontend') {
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
sh './scripts/frontend-build'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Backend') {
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
echo 'Checking Syntax ...'
|
||||
sh 'docker pull nginxproxymanager/nginx-full:certbot-node'
|
||||
// See: https://github.com/yarnpkg/yarn/issues/3254
|
||||
sh '''docker run --rm \\
|
||||
-v "$(pwd)/backend:/app" \\
|
||||
-v "$(pwd)/global:/app/global" \\
|
||||
-w /app \\
|
||||
nginxproxymanager/nginx-full:certbot-node \\
|
||||
node:latest \\
|
||||
sh -c "yarn install && yarn eslint . && rm -rf node_modules"
|
||||
'''
|
||||
|
||||
@ -85,75 +76,32 @@ pipeline {
|
||||
'''
|
||||
}
|
||||
}
|
||||
stage('Integration Tests Sqlite') {
|
||||
}
|
||||
stage('Test') {
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// Bring up a stack
|
||||
sh 'docker-compose up -d fullstack-sqlite'
|
||||
sh './scripts/wait-healthy $(docker-compose ps -q fullstack-sqlite) 120'
|
||||
sh 'docker-compose up -d fullstack'
|
||||
sh './scripts/wait-healthy $(docker-compose ps -q fullstack) 120'
|
||||
|
||||
// Run tests
|
||||
sh 'rm -rf test/results'
|
||||
sh 'docker-compose up cypress-sqlite'
|
||||
sh 'docker-compose up cypress'
|
||||
// Get results
|
||||
sh 'docker cp -L "$(docker-compose ps -q cypress-sqlite):/test/results" test/'
|
||||
sh 'docker cp -L "$(docker-compose ps -q cypress):/results" test/'
|
||||
}
|
||||
}
|
||||
post {
|
||||
always {
|
||||
// Dumps to analyze later
|
||||
sh 'mkdir -p debug'
|
||||
sh 'docker-compose logs fullstack-sqlite | gzip > debug/docker_fullstack_sqlite.log.gz'
|
||||
sh 'docker-compose logs db | gzip > debug/docker_db.log.gz'
|
||||
junit 'test/results/junit/*'
|
||||
// Cypress videos and screenshot artifacts
|
||||
dir(path: 'test/results') {
|
||||
archiveArtifacts allowEmptyArchive: true, artifacts: '**/*', excludes: '**/*.xml'
|
||||
}
|
||||
junit 'test/results/junit/*'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Integration Tests Mysql') {
|
||||
steps {
|
||||
// Bring up a stack
|
||||
sh 'docker-compose up -d fullstack-mysql'
|
||||
sh './scripts/wait-healthy $(docker-compose ps -q fullstack-mysql) 120'
|
||||
|
||||
// Run tests
|
||||
sh 'rm -rf test/results'
|
||||
sh 'docker-compose up cypress-mysql'
|
||||
// Get results
|
||||
sh 'docker cp -L "$(docker-compose ps -q cypress-mysql):/test/results" test/'
|
||||
}
|
||||
post {
|
||||
always {
|
||||
// Dumps to analyze later
|
||||
sh 'mkdir -p debug'
|
||||
sh 'docker-compose logs fullstack-mysql | gzip > debug/docker_fullstack_mysql.log.gz'
|
||||
sh 'docker-compose logs db | gzip > debug/docker_db.log.gz'
|
||||
// Cypress videos and screenshot artifacts
|
||||
dir(path: 'test/results') {
|
||||
archiveArtifacts allowEmptyArchive: true, artifacts: '**/*', excludes: '**/*.xml'
|
||||
sh 'docker-compose logs fullstack | gzip > debug/docker_fullstack.log.gz'
|
||||
}
|
||||
junit 'test/results/junit/*'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Docs') {
|
||||
when {
|
||||
not {
|
||||
equals expected: 'UNSTABLE', actual: currentBuild.result
|
||||
}
|
||||
}
|
||||
steps {
|
||||
dir(path: 'docs') {
|
||||
sh 'yarn install'
|
||||
sh 'yarn build'
|
||||
}
|
||||
|
||||
dir(path: 'docs/.vuepress/dist') {
|
||||
sh 'tar -czf ../../docs.tgz *'
|
||||
}
|
||||
|
||||
archiveArtifacts(artifacts: 'docs/docs.tgz', allowEmptyArchive: false)
|
||||
}
|
||||
}
|
||||
stage('MultiArch Build') {
|
||||
@ -163,45 +111,14 @@ pipeline {
|
||||
}
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
// Docker Login
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
// Buildx with push from cache
|
||||
// Buildx with push
|
||||
sh "./scripts/buildx --push ${BUILDX_PUSH_TAGS}"
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Docs Deploy') {
|
||||
when {
|
||||
allOf {
|
||||
branch 'master'
|
||||
not {
|
||||
equals expected: 'UNSTABLE', actual: currentBuild.result
|
||||
}
|
||||
}
|
||||
}
|
||||
steps {
|
||||
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'npm-s3-docs', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
|
||||
sh """docker run --rm \\
|
||||
--name \${COMPOSE_PROJECT_NAME}-docs-upload \\
|
||||
-e S3_BUCKET=jc21-npm-site \\
|
||||
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \\
|
||||
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \\
|
||||
-v \$(pwd):/app \\
|
||||
-w /app \\
|
||||
jc21/ci-tools \\
|
||||
scripts/docs-upload /app/docs/.vuepress/dist/
|
||||
"""
|
||||
|
||||
sh """docker run --rm \\
|
||||
--name \${COMPOSE_PROJECT_NAME}-docs-invalidate \\
|
||||
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \\
|
||||
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \\
|
||||
jc21/ci-tools \\
|
||||
aws cloudfront create-invalidation --distribution-id EN1G6DEWZUTDT --paths '/*'
|
||||
"""
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('PR Comment') {
|
||||
when {
|
||||
@ -213,24 +130,25 @@ pipeline {
|
||||
}
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
script {
|
||||
def comment = pullRequest.comment("This is an automated message from CI:\n\nDocker Image for build ${BUILD_NUMBER} is available on [DockerHub](https://cloud.docker.com/repository/docker/jc21/${IMAGE}) as `jc21/${IMAGE}:github-${BRANCH_LOWER}`\n\n**Note:** ensure you backup your NPM instance before testing this PR image! Especially if this PR contains database changes.")
|
||||
def comment = pullRequest.comment("Docker Image for build ${BUILD_NUMBER} is available on [DockerHub](https://cloud.docker.com/repository/docker/jc21/${IMAGE}) as `jc21/${IMAGE}:github-${BRANCH_LOWER}`")
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
post {
|
||||
always {
|
||||
sh 'docker-compose down --remove-orphans --volumes -t 30'
|
||||
sh 'docker-compose down --rmi all --remove-orphans --volumes -t 30'
|
||||
sh 'echo Reverting ownership'
|
||||
sh 'docker run --rm -v $(pwd):/data jc21/ci-tools chown -R $(id -u):$(id -g) /data'
|
||||
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} chown -R $(id -u):$(id -g) /data'
|
||||
}
|
||||
success {
|
||||
juxtapose event: 'success'
|
||||
sh 'figlet "SUCCESS"'
|
||||
}
|
||||
failure {
|
||||
archiveArtifacts(artifacts: 'debug/**.*', allowEmptyArchive: true)
|
||||
juxtapose event: 'failure'
|
||||
sh 'figlet "FAILURE"'
|
||||
}
|
||||
|
134
README.md
134
README.md
@ -1,21 +1,16 @@
|
||||
<p align="center">
|
||||
<img src="https://nginxproxymanager.com/github.png">
|
||||
<br><br>
|
||||
<img src="https://img.shields.io/badge/version-2.9.19-green.svg?style=for-the-badge">
|
||||
<a href="https://hub.docker.com/repository/docker/jc21/nginx-proxy-manager">
|
||||
<img src="https://img.shields.io/docker/stars/jc21/nginx-proxy-manager.svg?style=for-the-badge">
|
||||
</a>
|
||||
<a href="https://hub.docker.com/repository/docker/jc21/nginx-proxy-manager">
|
||||
<img src="https://img.shields.io/docker/pulls/jc21/nginx-proxy-manager.svg?style=for-the-badge">
|
||||
</a>
|
||||
</p>
|
||||

|
||||
|
||||
# Nginx Proxy Manager
|
||||
|
||||

|
||||

|
||||

|
||||
|
||||
[](https://ci.nginxproxymanager.jc21.com/job/nginx-proxy-manager/job/master/)
|
||||
|
||||
This project comes as a pre-built docker image that enables you to easily forward to your websites
|
||||
running at home or otherwise, including free SSL, without having to know too much about Nginx or Letsencrypt.
|
||||
|
||||
- [Quick Setup](#quick-setup)
|
||||
- [Full Setup](https://nginxproxymanager.com/setup/)
|
||||
- [Screenshots](https://nginxproxymanager.com/screenshots/)
|
||||
|
||||
## Project Goal
|
||||
|
||||
@ -37,6 +32,54 @@ so that the barrier for entry here is low.
|
||||
- User management, permissions and audit log
|
||||
|
||||
|
||||
## Screenshots
|
||||
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/login.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/dashboard.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/proxy-hosts.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/proxy-hosts-new1.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/proxy-hosts-new2.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/redirection-hosts.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/redirection-hosts-new1.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/streams.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/streams-new1.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/dead-hosts.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/dead-hosts-new1.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/certificates.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/certificates-new1.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/certificates-new2.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/access-lists.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/access-lists-new1.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/users.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/users-permissions.jpg)
|
||||
[](https://public.jc21.com/nginx-proxy-manager/v2/large/audit-log.jpg)
|
||||
|
||||
|
||||
## Getting started
|
||||
|
||||
Please consult the [installation instructions](doc/INSTALL.md) for a complete guide or
|
||||
if you just want to get up and running in the quickest time possible, grab all the files in the `doc/example/` folder and run `docker-compose up -d`
|
||||
|
||||
|
||||
## Administration
|
||||
|
||||
When your docker container is running, connect to it on port `81` for the admin interface.
|
||||
|
||||
[http://localhost:81](http://localhost:81)
|
||||
|
||||
Note: Requesting SSL Certificates won't work until this project is accessible from the outside world, as explained below.
|
||||
|
||||
|
||||
### Default Administrator User
|
||||
|
||||
```
|
||||
Email: admin@example.com
|
||||
Password: changeme
|
||||
```
|
||||
|
||||
Immediately after logging in with this default user you will be asked to modify your details and change your password.
|
||||
|
||||
|
||||
## Hosting your home network
|
||||
|
||||
I won't go in to too much detail here but here are the basics for someone new to this self-hosted world.
|
||||
@ -46,64 +89,13 @@ I won't go in to too much detail here but here are the basics for someone new to
|
||||
3. Configure your domain name details to point to your home, either with a static ip or a service like DuckDNS or [Amazon Route53](https://github.com/jc21/route53-ddns)
|
||||
4. Use the Nginx Proxy Manager as your gateway to forward to your other web based services
|
||||
|
||||
## Quick Setup
|
||||
|
||||
1. Install Docker and Docker-Compose
|
||||
## Nginx Proxy Manager in the wild
|
||||
|
||||
- [Docker Install documentation](https://docs.docker.com/install/)
|
||||
- [Docker-Compose Install documentation](https://docs.docker.com/compose/install/)
|
||||
As this software gains popularity it's common to see it integrated with other platforms. Please be aware that unless specifically mentioned in the documenation of those
|
||||
integrations, they are *not supported* by me and any donation links on the pages of those integrations will not come to me even though it looks like it.
|
||||
|
||||
2. Create a docker-compose.yml file similar to this:
|
||||
Known integrations:
|
||||
|
||||
```yml
|
||||
version: '3'
|
||||
services:
|
||||
app:
|
||||
image: 'jc21/nginx-proxy-manager:latest'
|
||||
restart: unless-stopped
|
||||
ports:
|
||||
- '80:80'
|
||||
- '81:81'
|
||||
- '443:443'
|
||||
volumes:
|
||||
- ./data:/data
|
||||
- ./letsencrypt:/etc/letsencrypt
|
||||
```
|
||||
|
||||
3. Bring up your stack by running
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
|
||||
# If using docker-compose-plugin
|
||||
docker compose up -d
|
||||
|
||||
```
|
||||
|
||||
4. Log in to the Admin UI
|
||||
|
||||
When your docker container is running, connect to it on port `81` for the admin interface.
|
||||
Sometimes this can take a little bit because of the entropy of keys.
|
||||
|
||||
[http://127.0.0.1:81](http://127.0.0.1:81)
|
||||
|
||||
Default Admin User:
|
||||
```
|
||||
Email: admin@example.com
|
||||
Password: changeme
|
||||
```
|
||||
|
||||
Immediately after logging in with this default user you will be asked to modify your details and change your password.
|
||||
|
||||
|
||||
## Contributors
|
||||
|
||||
Special thanks to [all of our contributors](https://github.com/NginxProxyManager/nginx-proxy-manager/graphs/contributors).
|
||||
|
||||
|
||||
## Getting Support
|
||||
|
||||
1. [Found a bug?](https://github.com/NginxProxyManager/nginx-proxy-manager/issues)
|
||||
2. [Discussions](https://github.com/NginxProxyManager/nginx-proxy-manager/discussions)
|
||||
3. [Development Gitter](https://gitter.im/nginx-proxy-manager/community)
|
||||
4. [Reddit](https://reddit.com/r/nginxproxymanager)
|
||||
- [HomeAssistant Hass.io plugin](https://github.com/hassio-addons/addon-nginx-proxy-manager)
|
||||
- [UnRaid / Synology](https://github.com/jlesage/docker-nginx-proxy-manager)
|
||||
|
2
backend/.gitignore
vendored
2
backend/.gitignore
vendored
@ -4,5 +4,3 @@ yarn-error.log
|
||||
tmp
|
||||
certbot.log
|
||||
node_modules
|
||||
core.*
|
||||
|
||||
|
8
backend/.vscode/settings.json
vendored
8
backend/.vscode/settings.json
vendored
@ -1,8 +0,0 @@
|
||||
{
|
||||
"editor.insertSpaces": false,
|
||||
"editor.formatOnSave": true,
|
||||
"files.trimTrailingWhitespace": true,
|
||||
"editor.codeActionsOnSave": {
|
||||
"source.fixAll.eslint": true
|
||||
}
|
||||
}
|
@ -40,6 +40,7 @@ app.use(function (req, res, next) {
|
||||
}
|
||||
|
||||
res.set({
|
||||
'Strict-Transport-Security': 'includeSubDomains; max-age=631138519; preload',
|
||||
'X-XSS-Protection': '1; mode=block',
|
||||
'X-Content-Type-Options': 'nosniff',
|
||||
'X-Frame-Options': x_frame_options,
|
||||
@ -65,7 +66,7 @@ app.use(function (err, req, res, next) {
|
||||
}
|
||||
};
|
||||
|
||||
if (process.env.NODE_ENV === 'development' || (req.baseUrl + req.path).includes('nginx/certificates')) {
|
||||
if (process.env.NODE_ENV === 'development') {
|
||||
payload.debug = {
|
||||
stack: typeof err.stack !== 'undefined' && err.stack ? err.stack.split('\n') : null,
|
||||
previous: err.previous
|
||||
@ -74,7 +75,7 @@ app.use(function (err, req, res, next) {
|
||||
|
||||
// Not every error is worth logging - but this is good for now until it gets annoying.
|
||||
if (typeof err.stack !== 'undefined' && err.stack) {
|
||||
if (process.env.NODE_ENV === 'development' || process.env.DEBUG) {
|
||||
if (process.env.NODE_ENV === 'development') {
|
||||
log.debug(err.stack);
|
||||
} else if (typeof err.public == 'undefined' || !err.public) {
|
||||
log.warn(err.message);
|
||||
|
@ -1,26 +0,0 @@
|
||||
{
|
||||
"database": {
|
||||
"engine": "knex-native",
|
||||
"knex": {
|
||||
"client": "sqlite3",
|
||||
"connection": {
|
||||
"filename": "/app/config/mydb.sqlite"
|
||||
},
|
||||
"pool": {
|
||||
"min": 0,
|
||||
"max": 1,
|
||||
"createTimeoutMillis": 3000,
|
||||
"acquireTimeoutMillis": 30000,
|
||||
"idleTimeoutMillis": 30000,
|
||||
"reapIntervalMillis": 1000,
|
||||
"createRetryIntervalMillis": 100,
|
||||
"propagateCreateError": false
|
||||
},
|
||||
"migrations": {
|
||||
"tableName": "migrations",
|
||||
"stub": "src/backend/lib/migrate_template.js",
|
||||
"directory": "src/backend/migrations"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -4,11 +4,7 @@ if (!config.has('database')) {
|
||||
throw new Error('Database config does not exist! Please read the instructions: https://github.com/jc21/nginx-proxy-manager/blob/master/doc/INSTALL.md');
|
||||
}
|
||||
|
||||
function generateDbConfig() {
|
||||
if (config.database.engine === 'knex-native') {
|
||||
return config.database.knex;
|
||||
} else
|
||||
return {
|
||||
let data = {
|
||||
client: config.database.engine,
|
||||
connection: {
|
||||
host: config.database.host,
|
||||
@ -20,11 +16,7 @@ function generateDbConfig() {
|
||||
migrations: {
|
||||
tableName: 'migrations'
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
|
||||
let data = generateDbConfig();
|
||||
};
|
||||
|
||||
if (typeof config.database.version !== 'undefined') {
|
||||
data.version = config.database.version;
|
||||
|
@ -2,10 +2,7 @@
|
||||
|
||||
const logger = require('./logger').global;
|
||||
|
||||
async function appStart () {
|
||||
// Create config file db settings if environment variables have been set
|
||||
await createDbConfigFromEnvironment();
|
||||
|
||||
function appStart () {
|
||||
const migrate = require('./migrate');
|
||||
const setup = require('./setup');
|
||||
const app = require('./app');
|
||||
@ -42,94 +39,9 @@ async function appStart () {
|
||||
});
|
||||
}
|
||||
|
||||
async function createDbConfigFromEnvironment() {
|
||||
return new Promise((resolve, reject) => {
|
||||
const envMysqlHost = process.env.DB_MYSQL_HOST || null;
|
||||
const envMysqlPort = process.env.DB_MYSQL_PORT || null;
|
||||
const envMysqlUser = process.env.DB_MYSQL_USER || null;
|
||||
const envMysqlName = process.env.DB_MYSQL_NAME || null;
|
||||
let envSqliteFile = process.env.DB_SQLITE_FILE || null;
|
||||
|
||||
const fs = require('fs');
|
||||
const filename = (process.env.NODE_CONFIG_DIR || './config') + '/' + (process.env.NODE_ENV || 'default') + '.json';
|
||||
let configData = {};
|
||||
|
||||
try {
|
||||
configData = require(filename);
|
||||
} catch (err) {
|
||||
// do nothing
|
||||
}
|
||||
|
||||
if (configData.database && configData.database.engine && !configData.database.fromEnv) {
|
||||
logger.info('Manual db configuration already exists, skipping config creation from environment variables');
|
||||
resolve();
|
||||
return;
|
||||
}
|
||||
|
||||
if ((!envMysqlHost || !envMysqlPort || !envMysqlUser || !envMysqlName) && !envSqliteFile){
|
||||
envSqliteFile = '/data/database.sqlite';
|
||||
logger.info(`No valid environment variables for database provided, using default SQLite file '${envSqliteFile}'`);
|
||||
}
|
||||
|
||||
if (envMysqlHost && envMysqlPort && envMysqlUser && envMysqlName) {
|
||||
const newConfig = {
|
||||
fromEnv: true,
|
||||
engine: 'mysql',
|
||||
host: envMysqlHost,
|
||||
port: envMysqlPort,
|
||||
user: envMysqlUser,
|
||||
password: process.env.DB_MYSQL_PASSWORD,
|
||||
name: envMysqlName,
|
||||
};
|
||||
|
||||
if (JSON.stringify(configData.database) === JSON.stringify(newConfig)) {
|
||||
// Config is unchanged, skip overwrite
|
||||
resolve();
|
||||
return;
|
||||
}
|
||||
|
||||
logger.info('Generating MySQL knex configuration from environment variables');
|
||||
configData.database = newConfig;
|
||||
|
||||
} else {
|
||||
const newConfig = {
|
||||
fromEnv: true,
|
||||
engine: 'knex-native',
|
||||
knex: {
|
||||
client: 'sqlite3',
|
||||
connection: {
|
||||
filename: envSqliteFile
|
||||
},
|
||||
useNullAsDefault: true
|
||||
}
|
||||
};
|
||||
if (JSON.stringify(configData.database) === JSON.stringify(newConfig)) {
|
||||
// Config is unchanged, skip overwrite
|
||||
resolve();
|
||||
return;
|
||||
}
|
||||
|
||||
logger.info('Generating SQLite knex configuration');
|
||||
configData.database = newConfig;
|
||||
}
|
||||
|
||||
// Write config
|
||||
fs.writeFile(filename, JSON.stringify(configData, null, 2), (err) => {
|
||||
if (err) {
|
||||
logger.error('Could not write db config to config file: ' + filename);
|
||||
reject(err);
|
||||
} else {
|
||||
logger.debug('Wrote db configuration to config file: ' + filename);
|
||||
resolve();
|
||||
}
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
try {
|
||||
appStart();
|
||||
} catch (err) {
|
||||
logger.error(err.message, err);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
|
@ -5,7 +5,6 @@ const logger = require('../logger').access;
|
||||
const error = require('../lib/error');
|
||||
const accessListModel = require('../models/access_list');
|
||||
const accessListAuthModel = require('../models/access_list_auth');
|
||||
const accessListClientModel = require('../models/access_list_client');
|
||||
const proxyHostModel = require('../models/proxy_host');
|
||||
const internalAuditLog = require('./audit-log');
|
||||
const internalNginx = require('./nginx');
|
||||
@ -30,17 +29,14 @@ const internalAccessList = {
|
||||
.omit(omissions())
|
||||
.insertAndFetch({
|
||||
name: data.name,
|
||||
satisfy_any: data.satisfy_any,
|
||||
pass_auth: data.pass_auth,
|
||||
owner_user_id: access.token.getUserId(1)
|
||||
});
|
||||
})
|
||||
.then((row) => {
|
||||
data.id = row.id;
|
||||
|
||||
let promises = [];
|
||||
|
||||
// Now add the items
|
||||
let promises = [];
|
||||
data.items.map((item) => {
|
||||
promises.push(accessListAuthModel
|
||||
.query()
|
||||
@ -52,27 +48,13 @@ const internalAccessList = {
|
||||
);
|
||||
});
|
||||
|
||||
// Now add the clients
|
||||
if (typeof data.clients !== 'undefined' && data.clients) {
|
||||
data.clients.map((client) => {
|
||||
promises.push(accessListClientModel
|
||||
.query()
|
||||
.insert({
|
||||
access_list_id: row.id,
|
||||
address: client.address,
|
||||
directive: client.directive
|
||||
})
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
return Promise.all(promises);
|
||||
})
|
||||
.then(() => {
|
||||
// re-fetch with expansions
|
||||
return internalAccessList.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner', 'items', 'clients', 'proxy_hosts.access_list.[clients,items]']
|
||||
expand: ['owner', 'items']
|
||||
}, true /* <- skip masking */);
|
||||
})
|
||||
.then((row) => {
|
||||
@ -82,7 +64,7 @@ const internalAccessList = {
|
||||
return internalAccessList.build(row)
|
||||
.then(() => {
|
||||
if (row.proxy_host_count) {
|
||||
return internalNginx.bulkGenerateConfigs('proxy_host', row.proxy_hosts);
|
||||
return internalNginx.reload();
|
||||
}
|
||||
})
|
||||
.then(() => {
|
||||
@ -118,6 +100,7 @@ const internalAccessList = {
|
||||
// Sanity check that something crazy hasn't happened
|
||||
throw new error.InternalValidationError('Access List could not be updated, IDs do not match: ' + row.id + ' !== ' + data.id);
|
||||
}
|
||||
|
||||
})
|
||||
.then(() => {
|
||||
// patch name if specified
|
||||
@ -126,9 +109,7 @@ const internalAccessList = {
|
||||
.query()
|
||||
.where({id: data.id})
|
||||
.patch({
|
||||
name: data.name,
|
||||
satisfy_any: data.satisfy_any,
|
||||
pass_auth: data.pass_auth,
|
||||
name: data.name
|
||||
});
|
||||
}
|
||||
})
|
||||
@ -172,39 +153,6 @@ const internalAccessList = {
|
||||
});
|
||||
}
|
||||
})
|
||||
.then(() => {
|
||||
// Check for clients and add/update/remove them
|
||||
if (typeof data.clients !== 'undefined' && data.clients) {
|
||||
let promises = [];
|
||||
|
||||
data.clients.map(function (client) {
|
||||
if (client.address) {
|
||||
promises.push(accessListClientModel
|
||||
.query()
|
||||
.insert({
|
||||
access_list_id: data.id,
|
||||
address: client.address,
|
||||
directive: client.directive
|
||||
})
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
let query = accessListClientModel
|
||||
.query()
|
||||
.delete()
|
||||
.where('access_list_id', data.id);
|
||||
|
||||
return query
|
||||
.then(() => {
|
||||
// Add new items
|
||||
if (promises.length) {
|
||||
return Promise.all(promises);
|
||||
}
|
||||
});
|
||||
}
|
||||
})
|
||||
.then(internalNginx.reload)
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
@ -218,14 +166,14 @@ const internalAccessList = {
|
||||
// re-fetch with expansions
|
||||
return internalAccessList.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner', 'items', 'clients', 'proxy_hosts.access_list.[clients,items]']
|
||||
expand: ['owner', 'items']
|
||||
}, true /* <- skip masking */);
|
||||
})
|
||||
.then((row) => {
|
||||
return internalAccessList.build(row)
|
||||
.then(() => {
|
||||
if (row.proxy_host_count) {
|
||||
return internalNginx.bulkGenerateConfigs('proxy_host', row.proxy_hosts);
|
||||
return internalNginx.reload();
|
||||
}
|
||||
})
|
||||
.then(() => {
|
||||
@ -256,7 +204,7 @@ const internalAccessList = {
|
||||
.joinRaw('LEFT JOIN `proxy_host` ON `proxy_host`.`access_list_id` = `access_list`.`id` AND `proxy_host`.`is_deleted` = 0')
|
||||
.where('access_list.is_deleted', 0)
|
||||
.andWhere('access_list.id', data.id)
|
||||
.allowEager('[owner,items,clients,proxy_hosts.[*, access_list.[clients,items]]]')
|
||||
.allowEager('[owner,items,proxy_hosts]')
|
||||
.omit(['access_list.is_deleted'])
|
||||
.first();
|
||||
|
||||
@ -298,7 +246,7 @@ const internalAccessList = {
|
||||
delete: (access, data) => {
|
||||
return access.can('access_lists:delete', data.id)
|
||||
.then(() => {
|
||||
return internalAccessList.get(access, {id: data.id, expand: ['proxy_hosts', 'items', 'clients']});
|
||||
return internalAccessList.get(access, {id: data.id, expand: ['proxy_hosts', 'items']});
|
||||
})
|
||||
.then((row) => {
|
||||
if (!row) {
|
||||
@ -382,11 +330,11 @@ const internalAccessList = {
|
||||
.where('access_list.is_deleted', 0)
|
||||
.groupBy('access_list.id')
|
||||
.omit(['access_list.is_deleted'])
|
||||
.allowEager('[owner,items,clients]')
|
||||
.allowEager('[owner,items]')
|
||||
.orderBy('access_list.name', 'ASC');
|
||||
|
||||
if (access_data.permission_visibility !== 'all') {
|
||||
query.andWhere('access_list.owner_user_id', access.token.getUserId(1));
|
||||
query.andWhere('owner_user_id', access.token.getUserId(1));
|
||||
}
|
||||
|
||||
// Query is used for searching
|
||||
|
@ -1,22 +1,18 @@
|
||||
const _ = require('lodash');
|
||||
const fs = require('fs');
|
||||
const https = require('https');
|
||||
const tempWrite = require('temp-write');
|
||||
const moment = require('moment');
|
||||
const _ = require('lodash');
|
||||
const logger = require('../logger').ssl;
|
||||
const error = require('../lib/error');
|
||||
const utils = require('../lib/utils');
|
||||
const certificateModel = require('../models/certificate');
|
||||
const dnsPlugins = require('../global/certbot-dns-plugins');
|
||||
const internalAuditLog = require('./audit-log');
|
||||
const tempWrite = require('temp-write');
|
||||
const utils = require('../lib/utils');
|
||||
const moment = require('moment');
|
||||
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
|
||||
const le_staging = process.env.NODE_ENV !== 'production';
|
||||
const internalNginx = require('./nginx');
|
||||
const internalHost = require('./host');
|
||||
const letsencryptStaging = process.env.NODE_ENV !== 'production';
|
||||
const letsencryptConfig = '/etc/letsencrypt.ini';
|
||||
const certbotCommand = 'certbot';
|
||||
const archiver = require('archiver');
|
||||
const path = require('path');
|
||||
const { isArray } = require('lodash');
|
||||
const certbot_command = '/usr/bin/certbot';
|
||||
const le_config = '/etc/letsencrypt.ini';
|
||||
|
||||
function omissions() {
|
||||
return ['is_deleted'];
|
||||
@ -24,14 +20,14 @@ function omissions() {
|
||||
|
||||
const internalCertificate = {
|
||||
|
||||
allowedSslFiles: ['certificate', 'certificate_key', 'intermediate_certificate'],
|
||||
intervalTimeout: 1000 * 60 * 60, // 1 hour
|
||||
allowed_ssl_files: ['certificate', 'certificate_key', 'intermediate_certificate'],
|
||||
interval_timeout: 1000 * 60 * 60, // 1 hour
|
||||
interval: null,
|
||||
intervalProcessing: false,
|
||||
interval_processing: false,
|
||||
|
||||
initTimer: () => {
|
||||
logger.info('Let\'s Encrypt Renewal Timer initialized');
|
||||
internalCertificate.interval = setInterval(internalCertificate.processExpiringHosts, internalCertificate.intervalTimeout);
|
||||
internalCertificate.interval = setInterval(internalCertificate.processExpiringHosts, internalCertificate.interval_timeout);
|
||||
// And do this now as well
|
||||
internalCertificate.processExpiringHosts();
|
||||
},
|
||||
@ -40,15 +36,15 @@ const internalCertificate = {
|
||||
* Triggered by a timer, this will check for expiring hosts and renew their ssl certs if required
|
||||
*/
|
||||
processExpiringHosts: () => {
|
||||
if (!internalCertificate.intervalProcessing) {
|
||||
internalCertificate.intervalProcessing = true;
|
||||
if (!internalCertificate.interval_processing) {
|
||||
internalCertificate.interval_processing = true;
|
||||
logger.info('Renewing SSL certs close to expiry...');
|
||||
|
||||
const cmd = certbotCommand + ' renew --non-interactive --quiet ' +
|
||||
'--config "' + letsencryptConfig + '" ' +
|
||||
let cmd = certbot_command + ' renew --non-interactive --quiet ' +
|
||||
'--config "' + le_config + '" ' +
|
||||
'--preferred-challenges "dns,http" ' +
|
||||
'--disable-hook-validation ' +
|
||||
(letsencryptStaging ? '--staging' : '');
|
||||
(le_staging ? '--staging' : '');
|
||||
|
||||
return utils.exec(cmd)
|
||||
.then((result) => {
|
||||
@ -81,7 +77,7 @@ const internalCertificate = {
|
||||
.where('id', certificate.id)
|
||||
.andWhere('provider', 'letsencrypt')
|
||||
.patch({
|
||||
expires_on: moment(cert_info.dates.to, 'X').format('YYYY-MM-DD HH:mm:ss')
|
||||
expires_on: certificateModel.raw('FROM_UNIXTIME(' + cert_info.dates.to + ')')
|
||||
});
|
||||
})
|
||||
.catch((err) => {
|
||||
@ -96,11 +92,11 @@ const internalCertificate = {
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
internalCertificate.intervalProcessing = false;
|
||||
internalCertificate.interval_processing = false;
|
||||
})
|
||||
.catch((err) => {
|
||||
logger.error(err);
|
||||
internalCertificate.intervalProcessing = false;
|
||||
internalCertificate.interval_processing = false;
|
||||
});
|
||||
}
|
||||
},
|
||||
@ -116,7 +112,7 @@ const internalCertificate = {
|
||||
data.owner_user_id = access.token.getUserId(1);
|
||||
|
||||
if (data.provider === 'letsencrypt') {
|
||||
data.nice_name = data.domain_names.join(', ');
|
||||
data.nice_name = data.domain_names.sort().join(', ');
|
||||
}
|
||||
|
||||
return certificateModel
|
||||
@ -145,33 +141,9 @@ const internalCertificate = {
|
||||
});
|
||||
})
|
||||
.then((in_use_result) => {
|
||||
// With DNS challenge no config is needed, so skip 3 and 5.
|
||||
if (certificate.meta.dns_challenge) {
|
||||
return internalNginx.reload().then(() => {
|
||||
// 4. Request cert
|
||||
return internalCertificate.requestLetsEncryptSslWithDnsChallenge(certificate);
|
||||
})
|
||||
.then(internalNginx.reload)
|
||||
.then(() => {
|
||||
// 6. Re-instate previously disabled hosts
|
||||
return internalCertificate.enableInUseHosts(in_use_result);
|
||||
})
|
||||
.then(() => {
|
||||
return certificate;
|
||||
})
|
||||
.catch((err) => {
|
||||
// In the event of failure, revert things and throw err back
|
||||
return internalCertificate.enableInUseHosts(in_use_result)
|
||||
.then(internalNginx.reload)
|
||||
.then(() => {
|
||||
throw err;
|
||||
});
|
||||
});
|
||||
} else {
|
||||
// 3. Generate the LE config
|
||||
return internalNginx.generateLetsEncryptRequestConfig(certificate)
|
||||
.then(internalNginx.reload)
|
||||
.then(async() => await new Promise((r) => setTimeout(r, 5000)))
|
||||
.then(() => {
|
||||
// 4. Request cert
|
||||
return internalCertificate.requestLetsEncryptSsl(certificate);
|
||||
@ -199,7 +171,6 @@ const internalCertificate = {
|
||||
throw err;
|
||||
});
|
||||
});
|
||||
}
|
||||
})
|
||||
.then(() => {
|
||||
// At this point, the letsencrypt cert should exist on disk.
|
||||
@ -209,7 +180,7 @@ const internalCertificate = {
|
||||
return certificateModel
|
||||
.query()
|
||||
.patchAndFetchById(certificate.id, {
|
||||
expires_on: moment(cert_info.dates.to, 'X').format('YYYY-MM-DD HH:mm:ss')
|
||||
expires_on: certificateModel.raw('FROM_UNIXTIME(' + cert_info.dates.to + ')')
|
||||
})
|
||||
.then((saved_row) => {
|
||||
// Add cert data for audit log
|
||||
@ -220,13 +191,6 @@ const internalCertificate = {
|
||||
return saved_row;
|
||||
});
|
||||
});
|
||||
}).catch(async (error) => {
|
||||
// Delete the certificate from the database if it was not created successfully
|
||||
await certificateModel
|
||||
.query()
|
||||
.deleteById(certificate.id);
|
||||
|
||||
throw error;
|
||||
});
|
||||
} else {
|
||||
return certificate;
|
||||
@ -340,71 +304,6 @@ const internalCertificate = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @returns {Promise}
|
||||
*/
|
||||
download: (access, data) => {
|
||||
return new Promise((resolve, reject) => {
|
||||
access.can('certificates:get', data)
|
||||
.then(() => {
|
||||
return internalCertificate.get(access, data);
|
||||
})
|
||||
.then((certificate) => {
|
||||
if (certificate.provider === 'letsencrypt') {
|
||||
const zipDirectory = '/etc/letsencrypt/live/npm-' + data.id;
|
||||
|
||||
if (!fs.existsSync(zipDirectory)) {
|
||||
throw new error.ItemNotFoundError('Certificate ' + certificate.nice_name + ' does not exists');
|
||||
}
|
||||
|
||||
let certFiles = fs.readdirSync(zipDirectory)
|
||||
.filter((fn) => fn.endsWith('.pem'))
|
||||
.map((fn) => fs.realpathSync(path.join(zipDirectory, fn)));
|
||||
const downloadName = 'npm-' + data.id + '-' + `${Date.now()}.zip`;
|
||||
const opName = '/tmp/' + downloadName;
|
||||
internalCertificate.zipFiles(certFiles, opName)
|
||||
.then(() => {
|
||||
logger.debug('zip completed : ', opName);
|
||||
const resp = {
|
||||
fileName: opName
|
||||
};
|
||||
resolve(resp);
|
||||
}).catch((err) => reject(err));
|
||||
} else {
|
||||
throw new error.ValidationError('Only Let\'sEncrypt certificates can be downloaded');
|
||||
}
|
||||
}).catch((err) => reject(err));
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {String} source
|
||||
* @param {String} out
|
||||
* @returns {Promise}
|
||||
*/
|
||||
zipFiles(source, out) {
|
||||
const archive = archiver('zip', { zlib: { level: 9 } });
|
||||
const stream = fs.createWriteStream(out);
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
source
|
||||
.map((fl) => {
|
||||
let fileName = path.basename(fl);
|
||||
logger.debug(fl, 'added to certificate zip');
|
||||
archive.file(fl, { name: fileName });
|
||||
});
|
||||
archive
|
||||
.on('error', (err) => reject(err))
|
||||
.pipe(stream);
|
||||
|
||||
stream.on('close', () => resolve());
|
||||
archive.finalize();
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
@ -477,7 +376,7 @@ const internalCertificate = {
|
||||
// Query is used for searching
|
||||
if (typeof search_query === 'string') {
|
||||
query.where(function () {
|
||||
this.where('nice_name', 'like', '%' + search_query + '%');
|
||||
this.where('name', 'like', '%' + search_query + '%');
|
||||
});
|
||||
}
|
||||
|
||||
@ -517,9 +416,11 @@ const internalCertificate = {
|
||||
* @returns {Promise}
|
||||
*/
|
||||
writeCustomCert: (certificate) => {
|
||||
if (debug_mode) {
|
||||
logger.info('Writing Custom Certificate:', certificate);
|
||||
}
|
||||
|
||||
const dir = '/data/custom_ssl/npm-' + certificate.id;
|
||||
let dir = '/data/custom_ssl/npm-' + certificate.id;
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
if (certificate.provider === 'letsencrypt') {
|
||||
@ -527,9 +428,9 @@ const internalCertificate = {
|
||||
return;
|
||||
}
|
||||
|
||||
let certData = certificate.meta.certificate;
|
||||
let cert_data = certificate.meta.certificate;
|
||||
if (typeof certificate.meta.intermediate_certificate !== 'undefined') {
|
||||
certData = certData + '\n' + certificate.meta.intermediate_certificate;
|
||||
cert_data = cert_data + '\n' + certificate.meta.intermediate_certificate;
|
||||
}
|
||||
|
||||
try {
|
||||
@ -541,7 +442,7 @@ const internalCertificate = {
|
||||
return;
|
||||
}
|
||||
|
||||
fs.writeFile(dir + '/fullchain.pem', certData, function (err) {
|
||||
fs.writeFile(dir + '/fullchain.pem', cert_data, function (err) {
|
||||
if (err) {
|
||||
reject(err);
|
||||
} else {
|
||||
@ -591,7 +492,7 @@ const internalCertificate = {
|
||||
// Put file contents into an object
|
||||
let files = {};
|
||||
_.map(data.files, (file, name) => {
|
||||
if (internalCertificate.allowedSslFiles.indexOf(name) !== -1) {
|
||||
if (internalCertificate.allowed_ssl_files.indexOf(name) !== -1) {
|
||||
files[name] = file.data.toString();
|
||||
}
|
||||
});
|
||||
@ -649,7 +550,7 @@ const internalCertificate = {
|
||||
}
|
||||
|
||||
_.map(data.files, (file, name) => {
|
||||
if (internalCertificate.allowedSslFiles.indexOf(name) !== -1) {
|
||||
if (internalCertificate.allowed_ssl_files.indexOf(name) !== -1) {
|
||||
row.meta[name] = file.data.toString();
|
||||
}
|
||||
});
|
||||
@ -657,7 +558,7 @@ const internalCertificate = {
|
||||
// TODO: This uses a mysql only raw function that won't translate to postgres
|
||||
return internalCertificate.update(access, {
|
||||
id: data.id,
|
||||
expires_on: moment(validations.certificate.dates.to, 'X').format('YYYY-MM-DD HH:mm:ss'),
|
||||
expires_on: certificateModel.raw('FROM_UNIXTIME(' + validations.certificate.dates.to + ')'),
|
||||
domain_names: [validations.certificate.cn],
|
||||
meta: _.clone(row.meta) // Prevent the update method from changing this value that we'll use later
|
||||
})
|
||||
@ -668,7 +569,7 @@ const internalCertificate = {
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return _.pick(row.meta, internalCertificate.allowedSslFiles);
|
||||
return _.pick(row.meta, internalCertificate.allowed_ssl_files);
|
||||
});
|
||||
});
|
||||
},
|
||||
@ -682,25 +583,17 @@ const internalCertificate = {
|
||||
checkPrivateKey: (private_key) => {
|
||||
return tempWrite(private_key, '/tmp')
|
||||
.then((filepath) => {
|
||||
return new Promise((resolve, reject) => {
|
||||
const failTimeout = setTimeout(() => {
|
||||
reject(new error.ValidationError('Result Validation Error: Validation timed out. This could be due to the key being passphrase-protected.'));
|
||||
}, 10000);
|
||||
utils
|
||||
.exec('openssl pkey -in ' + filepath + ' -check -noout 2>&1 ')
|
||||
return utils.exec('openssl rsa -in ' + filepath + ' -check -noout')
|
||||
.then((result) => {
|
||||
clearTimeout(failTimeout);
|
||||
if (!result.toLowerCase().includes('key is valid')) {
|
||||
reject(new error.ValidationError('Result Validation Error: ' + result));
|
||||
if (!result.toLowerCase().includes('key ok')) {
|
||||
throw new error.ValidationError(result);
|
||||
}
|
||||
|
||||
fs.unlinkSync(filepath);
|
||||
resolve(true);
|
||||
})
|
||||
.catch((err) => {
|
||||
clearTimeout(failTimeout);
|
||||
return true;
|
||||
}).catch((err) => {
|
||||
fs.unlinkSync(filepath);
|
||||
reject(new error.ValidationError('Certificate Key is not valid (' + err.message + ')', err));
|
||||
});
|
||||
throw new error.ValidationError('Certificate Key is not valid (' + err.message + ')', err);
|
||||
});
|
||||
});
|
||||
},
|
||||
@ -716,9 +609,9 @@ const internalCertificate = {
|
||||
return tempWrite(certificate, '/tmp')
|
||||
.then((filepath) => {
|
||||
return internalCertificate.getCertificateInfoFromFile(filepath, throw_expired)
|
||||
.then((certData) => {
|
||||
.then((cert_data) => {
|
||||
fs.unlinkSync(filepath);
|
||||
return certData;
|
||||
return cert_data;
|
||||
}).catch((err) => {
|
||||
fs.unlinkSync(filepath);
|
||||
throw err;
|
||||
@ -734,33 +627,33 @@ const internalCertificate = {
|
||||
* @param {Boolean} [throw_expired] Throw when the certificate is out of date
|
||||
*/
|
||||
getCertificateInfoFromFile: (certificate_file, throw_expired) => {
|
||||
let certData = {};
|
||||
let cert_data = {};
|
||||
|
||||
return utils.exec('openssl x509 -in ' + certificate_file + ' -subject -noout')
|
||||
.then((result) => {
|
||||
// subject=CN = something.example.com
|
||||
const regex = /(?:subject=)?[^=]+=\s+(\S+)/gim;
|
||||
const match = regex.exec(result);
|
||||
let regex = /(?:subject=)?[^=]+=\s+(\S+)/gim;
|
||||
let match = regex.exec(result);
|
||||
|
||||
if (typeof match[1] === 'undefined') {
|
||||
throw new error.ValidationError('Could not determine subject from certificate: ' + result);
|
||||
}
|
||||
|
||||
certData['cn'] = match[1];
|
||||
cert_data['cn'] = match[1];
|
||||
})
|
||||
.then(() => {
|
||||
return utils.exec('openssl x509 -in ' + certificate_file + ' -issuer -noout');
|
||||
})
|
||||
.then((result) => {
|
||||
// issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
|
||||
const regex = /^(?:issuer=)?(.*)$/gim;
|
||||
const match = regex.exec(result);
|
||||
let regex = /^(?:issuer=)?(.*)$/gim;
|
||||
let match = regex.exec(result);
|
||||
|
||||
if (typeof match[1] === 'undefined') {
|
||||
throw new error.ValidationError('Could not determine issuer from certificate: ' + result);
|
||||
}
|
||||
|
||||
certData['issuer'] = match[1];
|
||||
cert_data['issuer'] = match[1];
|
||||
})
|
||||
.then(() => {
|
||||
return utils.exec('openssl x509 -in ' + certificate_file + ' -dates -noout');
|
||||
@ -768,39 +661,39 @@ const internalCertificate = {
|
||||
.then((result) => {
|
||||
// notBefore=Jul 14 04:04:29 2018 GMT
|
||||
// notAfter=Oct 12 04:04:29 2018 GMT
|
||||
let validFrom = null;
|
||||
let validTo = null;
|
||||
let valid_from = null;
|
||||
let valid_to = null;
|
||||
|
||||
const lines = result.split('\n');
|
||||
let lines = result.split('\n');
|
||||
lines.map(function (str) {
|
||||
const regex = /^(\S+)=(.*)$/gim;
|
||||
const match = regex.exec(str.trim());
|
||||
let regex = /^(\S+)=(.*)$/gim;
|
||||
let match = regex.exec(str.trim());
|
||||
|
||||
if (match && typeof match[2] !== 'undefined') {
|
||||
const date = parseInt(moment(match[2], 'MMM DD HH:mm:ss YYYY z').format('X'), 10);
|
||||
let date = parseInt(moment(match[2], 'MMM DD HH:mm:ss YYYY z').format('X'), 10);
|
||||
|
||||
if (match[1].toLowerCase() === 'notbefore') {
|
||||
validFrom = date;
|
||||
valid_from = date;
|
||||
} else if (match[1].toLowerCase() === 'notafter') {
|
||||
validTo = date;
|
||||
valid_to = date;
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
if (!validFrom || !validTo) {
|
||||
if (!valid_from || !valid_to) {
|
||||
throw new error.ValidationError('Could not determine dates from certificate: ' + result);
|
||||
}
|
||||
|
||||
if (throw_expired && validTo < parseInt(moment().format('X'), 10)) {
|
||||
if (throw_expired && valid_to < parseInt(moment().format('X'), 10)) {
|
||||
throw new error.ValidationError('Certificate has expired');
|
||||
}
|
||||
|
||||
certData['dates'] = {
|
||||
from: validFrom,
|
||||
to: validTo
|
||||
cert_data['dates'] = {
|
||||
from: valid_from,
|
||||
to: valid_to
|
||||
};
|
||||
|
||||
return certData;
|
||||
return cert_data;
|
||||
}).catch((err) => {
|
||||
throw new error.ValidationError('Certificate is not valid (' + err.message + ')', err);
|
||||
});
|
||||
@ -814,7 +707,7 @@ const internalCertificate = {
|
||||
* @returns {Object}
|
||||
*/
|
||||
cleanMeta: function (meta, remove) {
|
||||
internalCertificate.allowedSslFiles.map((key) => {
|
||||
internalCertificate.allowed_ssl_files.map((key) => {
|
||||
if (typeof meta[key] !== 'undefined' && meta[key]) {
|
||||
if (remove) {
|
||||
delete meta[key];
|
||||
@ -828,24 +721,25 @@ const internalCertificate = {
|
||||
},
|
||||
|
||||
/**
|
||||
* Request a certificate using the http challenge
|
||||
* @param {Object} certificate the certificate row
|
||||
* @returns {Promise}
|
||||
*/
|
||||
requestLetsEncryptSsl: (certificate) => {
|
||||
logger.info('Requesting Let\'sEncrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
|
||||
|
||||
const cmd = certbotCommand + ' certonly ' +
|
||||
'--config "' + letsencryptConfig + '" ' +
|
||||
let cmd = certbot_command + ' certonly --non-interactive ' +
|
||||
'--config "' + le_config + '" ' +
|
||||
'--cert-name "npm-' + certificate.id + '" ' +
|
||||
'--agree-tos ' +
|
||||
'--authenticator webroot ' +
|
||||
'--email "' + certificate.meta.letsencrypt_email + '" ' +
|
||||
'--preferred-challenges "dns,http" ' +
|
||||
'--webroot ' +
|
||||
'--domains "' + certificate.domain_names.join(',') + '" ' +
|
||||
(letsencryptStaging ? '--staging' : '');
|
||||
(le_staging ? '--staging' : '');
|
||||
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
}
|
||||
|
||||
return utils.exec(cmd)
|
||||
.then((result) => {
|
||||
@ -854,81 +748,6 @@ const internalCertificate = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Object} certificate the certificate row
|
||||
* @param {String} dns_provider the dns provider name (key used in `certbot-dns-plugins.js`)
|
||||
* @param {String | null} credentials the content of this providers credentials file
|
||||
* @param {String} propagation_seconds the cloudflare api token
|
||||
* @returns {Promise}
|
||||
*/
|
||||
requestLetsEncryptSslWithDnsChallenge: (certificate) => {
|
||||
const dns_plugin = dnsPlugins[certificate.meta.dns_provider];
|
||||
|
||||
if (!dns_plugin) {
|
||||
throw Error(`Unknown DNS provider '${certificate.meta.dns_provider}'`);
|
||||
}
|
||||
|
||||
logger.info(`Requesting Let'sEncrypt certificates via ${dns_plugin.display_name} for Cert #${certificate.id}: ${certificate.domain_names.join(', ')}`);
|
||||
|
||||
const credentialsLocation = '/etc/letsencrypt/credentials/credentials-' + certificate.id;
|
||||
// Escape single quotes and backslashes
|
||||
const escapedCredentials = certificate.meta.dns_provider_credentials.replaceAll('\'', '\\\'').replaceAll('\\', '\\\\');
|
||||
const credentialsCmd = 'mkdir -p /etc/letsencrypt/credentials 2> /dev/null; echo \'' + escapedCredentials + '\' > \'' + credentialsLocation + '\' && chmod 600 \'' + credentialsLocation + '\'';
|
||||
let prepareCmd = 'pip install ' + dns_plugin.package_name + (dns_plugin.version_requirement || '') + ' ' + dns_plugin.dependencies;
|
||||
|
||||
// Special case for cloudflare
|
||||
if (dns_plugin.package_name === 'certbot-dns-cloudflare') {
|
||||
prepareCmd = 'pip install certbot-dns-cloudflare --index-url https://www.piwheels.org/simple --prefer-binary';
|
||||
}
|
||||
|
||||
// Whether the plugin has a --<name>-credentials argument
|
||||
const hasConfigArg = certificate.meta.dns_provider !== 'route53';
|
||||
|
||||
let mainCmd = certbotCommand + ' certonly ' +
|
||||
'--config "' + letsencryptConfig + '" ' +
|
||||
'--cert-name "npm-' + certificate.id + '" ' +
|
||||
'--agree-tos ' +
|
||||
'--email "' + certificate.meta.letsencrypt_email + '" ' +
|
||||
'--domains "' + certificate.domain_names.join(',') + '" ' +
|
||||
'--authenticator ' + dns_plugin.full_plugin_name + ' ' +
|
||||
(
|
||||
hasConfigArg
|
||||
? '--' + dns_plugin.full_plugin_name + '-credentials "' + credentialsLocation + '"'
|
||||
: ''
|
||||
) +
|
||||
(
|
||||
certificate.meta.propagation_seconds !== undefined
|
||||
? ' --' + dns_plugin.full_plugin_name + '-propagation-seconds ' + certificate.meta.propagation_seconds
|
||||
: ''
|
||||
) +
|
||||
(letsencryptStaging ? ' --staging' : '');
|
||||
|
||||
// Prepend the path to the credentials file as an environment variable
|
||||
if (certificate.meta.dns_provider === 'route53') {
|
||||
mainCmd = 'AWS_CONFIG_FILE=\'' + credentialsLocation + '\' ' + mainCmd;
|
||||
}
|
||||
|
||||
logger.info('Command:', `${credentialsCmd} && ${prepareCmd} && ${mainCmd}`);
|
||||
|
||||
return utils.exec(credentialsCmd)
|
||||
.then(() => {
|
||||
return utils.exec(prepareCmd)
|
||||
.then(() => {
|
||||
return utils.exec(mainCmd)
|
||||
.then(async (result) => {
|
||||
logger.info(result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
}).catch(async (err) => {
|
||||
// Don't fail if file does not exist
|
||||
const delete_credentialsCmd = `rm -f '${credentialsLocation}' || true`;
|
||||
await utils.exec(delete_credentialsCmd);
|
||||
throw err;
|
||||
});
|
||||
},
|
||||
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
@ -942,9 +761,7 @@ const internalCertificate = {
|
||||
})
|
||||
.then((certificate) => {
|
||||
if (certificate.provider === 'letsencrypt') {
|
||||
const renewMethod = certificate.meta.dns_challenge ? internalCertificate.renewLetsEncryptSslWithDnsChallenge : internalCertificate.renewLetsEncryptSsl;
|
||||
|
||||
return renewMethod(certificate)
|
||||
return internalCertificate.renewLetsEncryptSsl(certificate)
|
||||
.then(() => {
|
||||
return internalCertificate.getCertificateInfoFromFile('/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem');
|
||||
})
|
||||
@ -952,7 +769,7 @@ const internalCertificate = {
|
||||
return certificateModel
|
||||
.query()
|
||||
.patchAndFetchById(certificate.id, {
|
||||
expires_on: moment(cert_info.dates.to, 'X').format('YYYY-MM-DD HH:mm:ss')
|
||||
expires_on: certificateModel.raw('FROM_UNIXTIME(' + cert_info.dates.to + ')')
|
||||
});
|
||||
})
|
||||
.then((updated_certificate) => {
|
||||
@ -980,15 +797,16 @@ const internalCertificate = {
|
||||
renewLetsEncryptSsl: (certificate) => {
|
||||
logger.info('Renewing Let\'sEncrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
|
||||
|
||||
const cmd = certbotCommand + ' renew --force-renewal ' +
|
||||
'--config "' + letsencryptConfig + '" ' +
|
||||
let cmd = certbot_command + ' renew --non-interactive ' +
|
||||
'--config "' + le_config + '" ' +
|
||||
'--cert-name "npm-' + certificate.id + '" ' +
|
||||
'--preferred-challenges "dns,http" ' +
|
||||
'--no-random-sleep-on-renew ' +
|
||||
'--disable-hook-validation ' +
|
||||
(letsencryptStaging ? '--staging' : '');
|
||||
(le_staging ? '--staging' : '');
|
||||
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
}
|
||||
|
||||
return utils.exec(cmd)
|
||||
.then((result) => {
|
||||
@ -997,41 +815,6 @@ const internalCertificate = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Object} certificate the certificate row
|
||||
* @returns {Promise}
|
||||
*/
|
||||
renewLetsEncryptSslWithDnsChallenge: (certificate) => {
|
||||
const dns_plugin = dnsPlugins[certificate.meta.dns_provider];
|
||||
|
||||
if (!dns_plugin) {
|
||||
throw Error(`Unknown DNS provider '${certificate.meta.dns_provider}'`);
|
||||
}
|
||||
|
||||
logger.info(`Renewing Let's Encrypt certificates via ${dns_plugin.display_name} for Cert #${certificate.id}: ${certificate.domain_names.join(', ')}`);
|
||||
|
||||
let mainCmd = certbotCommand + ' renew ' +
|
||||
'--config "' + letsencryptConfig + '" ' +
|
||||
'--cert-name "npm-' + certificate.id + '" ' +
|
||||
'--disable-hook-validation ' +
|
||||
'--no-random-sleep-on-renew ' +
|
||||
(letsencryptStaging ? ' --staging' : '');
|
||||
|
||||
// Prepend the path to the credentials file as an environment variable
|
||||
if (certificate.meta.dns_provider === 'route53') {
|
||||
const credentialsLocation = '/etc/letsencrypt/credentials/credentials-' + certificate.id;
|
||||
mainCmd = 'AWS_CONFIG_FILE=\'' + credentialsLocation + '\' ' + mainCmd;
|
||||
}
|
||||
|
||||
logger.info('Command:', mainCmd);
|
||||
|
||||
return utils.exec(mainCmd)
|
||||
.then(async (result) => {
|
||||
logger.info(result);
|
||||
return result;
|
||||
});
|
||||
},
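The dnsPlugins lookup used above (and by setupCertbotPlugins later in this diff) expects each provider entry to carry a display name, a pip package name and the certbot plugin name. A hypothetical entry, shown only to make the fields that this file reads concrete; the values are invented:

```js
// Hypothetical shape, inferred from the properties read in this file and in setup.js:
// display_name, package_name, version_requirement, dependencies, full_plugin_name.
const dnsPlugins = {
	cloudflare: {
		display_name:        'Cloudflare',
		package_name:        'certbot-dns-cloudflare',
		version_requirement: '~=1.8.0',        // made-up version pin
		dependencies:        '',
		full_plugin_name:    'dns-cloudflare', // passed to certbot via --authenticator
	},
};

const certificate = { id: 7, meta: { dns_provider: 'cloudflare' }, domain_names: ['example.com'] };
const dns_plugin  = dnsPlugins[certificate.meta.dns_provider];
if (!dns_plugin) {
	throw Error(`Unknown DNS provider '${certificate.meta.dns_provider}'`);
}
```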
|
||||
|
||||
/**
|
||||
* @param {Object} certificate the certificate row
|
||||
* @param {Boolean} [throw_errors]
|
||||
@ -1040,25 +823,28 @@ const internalCertificate = {
|
||||
revokeLetsEncryptSsl: (certificate, throw_errors) => {
|
||||
logger.info('Revoking Let\'s Encrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
|
||||
|
||||
const mainCmd = certbotCommand + ' revoke ' +
|
||||
'--config "' + letsencryptConfig + '" ' +
|
||||
let cmd = certbot_command + ' revoke --non-interactive ' +
|
||||
'--config "' + le_config + '" ' +
|
||||
'--cert-path "/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem" ' +
|
||||
'--delete-after-revoke ' +
|
||||
(letsencryptStaging ? '--staging' : '');
|
||||
(le_staging ? '--staging' : '');
|
||||
|
||||
// Don't fail command if file does not exist
|
||||
const delete_credentialsCmd = `rm -f '/etc/letsencrypt/credentials/credentials-${certificate.id}' || true`;
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
}
|
||||
|
||||
logger.info('Command:', mainCmd + '; ' + delete_credentialsCmd);
|
||||
|
||||
return utils.exec(mainCmd)
|
||||
.then(async (result) => {
|
||||
await utils.exec(delete_credentialsCmd);
|
||||
return utils.exec(cmd)
|
||||
.then((result) => {
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
}
|
||||
logger.info(result);
|
||||
return result;
|
||||
})
|
||||
.catch((err) => {
|
||||
if (debug_mode) {
|
||||
logger.error(err.message);
|
||||
}
|
||||
|
||||
if (throw_errors) {
|
||||
throw err;
|
||||
@ -1071,9 +857,9 @@ const internalCertificate = {
|
||||
* @returns {Boolean}
|
||||
*/
|
||||
hasLetsEncryptSslCerts: (certificate) => {
|
||||
const letsencryptPath = '/etc/letsencrypt/live/npm-' + certificate.id;
|
||||
let le_path = '/etc/letsencrypt/live/npm-' + certificate.id;
|
||||
|
||||
return fs.existsSync(letsencryptPath + '/fullchain.pem') && fs.existsSync(letsencryptPath + '/privkey.pem');
|
||||
return fs.existsSync(le_path + '/fullchain.pem') && fs.existsSync(le_path + '/privkey.pem');
|
||||
},
|
||||
|
||||
/**
|
||||
@ -1134,94 +920,6 @@ const internalCertificate = {
|
||||
} else {
|
||||
return Promise.resolve();
|
||||
}
|
||||
},
|
||||
|
||||
testHttpsChallenge: async (access, domains) => {
|
||||
await access.can('certificates:list');
|
||||
|
||||
if (!isArray(domains)) {
|
||||
throw new error.InternalValidationError('Domains must be an array of strings');
|
||||
}
|
||||
if (domains.length === 0) {
|
||||
throw new error.InternalValidationError('No domains provided');
|
||||
}
|
||||
|
||||
// Create a test challenge file
|
||||
const testChallengeDir = '/data/letsencrypt-acme-challenge/.well-known/acme-challenge';
|
||||
const testChallengeFile = testChallengeDir + '/test-challenge';
|
||||
fs.mkdirSync(testChallengeDir, {recursive: true});
|
||||
fs.writeFileSync(testChallengeFile, 'Success', {encoding: 'utf8'});
|
||||
|
||||
async function performTestForDomain (domain) {
|
||||
logger.info('Testing http challenge for ' + domain);
|
||||
const url = `http://${domain}/.well-known/acme-challenge/test-challenge`;
|
||||
const formBody = `method=G&url=${encodeURI(url)}&bodytype=T&requestbody=&headername=User-Agent&headervalue=None&locationid=1&ch=false&cc=false`;
|
||||
const options = {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/x-www-form-urlencoded',
|
||||
'Content-Length': Buffer.byteLength(formBody)
|
||||
}
|
||||
};
|
||||
|
||||
const result = await new Promise((resolve) => {
|
||||
|
||||
const req = https.request('https://www.site24x7.com/tools/restapi-tester', options, function (res) {
|
||||
let responseBody = '';
|
||||
|
||||
res.on('data', (chunk) => responseBody = responseBody + chunk);
|
||||
res.on('end', function () {
|
||||
const parsedBody = JSON.parse(responseBody + '');
|
||||
if (res.statusCode !== 200) {
|
||||
logger.warn(`Failed to test HTTP challenge for domain ${domain}`, res);
|
||||
resolve(undefined);
|
||||
}
|
||||
resolve(parsedBody);
|
||||
});
|
||||
});
|
||||
|
||||
// Make sure to write the request body.
|
||||
                req.write(formBody);
                req.end();
                req.on('error', function (e) {
                    logger.warn(`Failed to test HTTP challenge for domain ${domain}`, e);
                    resolve(undefined);
                });
            });
|
||||
|
||||
if (!result) {
|
||||
// Some error occurred while trying to get the data
|
||||
return 'failed';
|
||||
} else if (`${result.responsecode}` === '200' && result.htmlresponse === 'Success') {
|
||||
// Server exists and has responded with the correct data
|
||||
return 'ok';
|
||||
} else if (`${result.responsecode}` === '200') {
|
||||
// Server exists but has responded with wrong data
|
||||
logger.info(`HTTP challenge test failed for domain ${domain} because of invalid returned data:`, result.htmlresponse);
|
||||
return 'wrong-data';
|
||||
} else if (`${result.responsecode}` === '404') {
|
||||
// Server exists but responded with a 404
|
||||
logger.info(`HTTP challenge test failed for domain ${domain} because code 404 was returned`);
|
||||
return '404';
|
||||
} else if (`${result.responsecode}` === '0' || (typeof result.reason === 'string' && result.reason.toLowerCase() === 'host unavailable')) {
|
||||
// Server does not exist at domain
|
||||
logger.info(`HTTP challenge test failed for domain ${domain} because the host was not found`);
|
||||
return 'no-host';
|
||||
} else {
|
||||
// Other errors
|
||||
logger.info(`HTTP challenge test failed for domain ${domain} because code ${result.responsecode} was returned`);
|
||||
return `other:${result.responsecode}`;
|
||||
}
|
||||
}
|
||||
|
||||
const results = {};
|
||||
|
||||
for (const domain of domains){
|
||||
results[domain] = await performTestForDomain(domain);
|
||||
}
|
||||
|
||||
// Remove the test challenge file
|
||||
fs.unlinkSync(testChallengeFile);
|
||||
|
||||
return results;
|
||||
}
|
||||
};
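Taken together, testHttpsChallenge writes a test file under the ACME challenge directory, asks the site24x7 REST tester to fetch it over plain HTTP for each domain, and maps every domain to a status string. A hedged usage sketch; the internalCertificate and access objects come from the surrounding backend code and are passed in here only for illustration:

```js
// Sketch only. Each domain maps to one of the strings returned above:
// 'ok', 'wrong-data', '404', 'no-host', 'failed' or 'other:<code>'.
async function checkDomainsBeforeIssuing(internalCertificate, access, domains) {
	const results = await internalCertificate.testHttpsChallenge(access, domains);
	const unreachable = Object.keys(results).filter((d) => results[d] !== 'ok');
	if (unreachable.length) {
		console.warn('HTTP challenge is unlikely to succeed for:', unreachable.join(', '));
	}
	return results; // e.g. { 'example.com': 'ok', 'www.example.com': '404' }
}
```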
@ -106,7 +106,7 @@ const internalHost = {
|
||||
response_object.total_count += response_object.redirection_hosts.length;
|
||||
}
|
||||
|
||||
if (promises_results[2]) {
|
||||
if (promises_results[1]) {
|
||||
// Dead Hosts
|
||||
response_object.dead_hosts = internalHost._getHostsWithDomains(promises_results[2], domain_names);
|
||||
response_object.total_count += response_object.dead_hosts.length;
|
||||
@ -158,7 +158,7 @@ const internalHost = {
|
||||
}
|
||||
}
|
||||
|
||||
if (promises_results[2]) {
|
||||
if (promises_results[1]) {
|
||||
// Dead Hosts
|
||||
if (internalHost._checkHostnameRecordsTaken(hostname, promises_results[2], ignore_type === 'dead' && ignore_id ? ignore_id : 0)) {
|
||||
is_taken = true;
|
||||
|
@ -3,15 +3,12 @@ const fs = require('fs');
|
||||
const logger = require('../logger').ip_ranges;
|
||||
const error = require('../lib/error');
|
||||
const internalNginx = require('./nginx');
|
||||
const { Liquid } = require('liquidjs');
|
||||
const Liquid = require('liquidjs');
|
||||
|
||||
const CLOUDFRONT_URL = 'https://ip-ranges.amazonaws.com/ip-ranges.json';
|
||||
const CLOUDFARE_V4_URL = 'https://www.cloudflare.com/ips-v4';
|
||||
const CLOUDFARE_V6_URL = 'https://www.cloudflare.com/ips-v6';
|
||||
|
||||
const regIpV4 = /^(\d+\.?){4}\/\d+/;
|
||||
const regIpV6 = /^(([\da-fA-F]+)?:)+\/\d+/;
|
||||
|
||||
const internalIpRanges = {
|
||||
|
||||
interval_timeout: 1000 * 60 * 60 * 6, // 6 hours
|
||||
@ -77,14 +74,14 @@ const internalIpRanges = {
|
||||
return internalIpRanges.fetchUrl(CLOUDFARE_V4_URL);
|
||||
})
|
||||
.then((cloudfare_data) => {
|
||||
let items = cloudfare_data.split('\n').filter((line) => regIpV4.test(line));
|
||||
let items = cloudfare_data.split('\n');
|
||||
ip_ranges = [... ip_ranges, ... items];
|
||||
})
|
||||
.then(() => {
|
||||
return internalIpRanges.fetchUrl(CLOUDFARE_V6_URL);
|
||||
})
|
||||
.then((cloudfare_data) => {
|
||||
let items = cloudfare_data.split('\n').filter((line) => regIpV6.test(line));
|
||||
let items = cloudfare_data.split('\n');
|
||||
ip_ranges = [... ip_ranges, ... items];
|
||||
})
|
||||
.then(() => {
|
||||
@ -119,7 +116,7 @@ const internalIpRanges = {
|
||||
* @returns {Promise}
|
||||
*/
|
||||
generateConfig: (ip_ranges) => {
|
||||
let renderEngine = new Liquid({
|
||||
let renderEngine = Liquid({
|
||||
root: __dirname + '/../templates/'
|
||||
});
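Earlier in this file, the develop side filters the fetched Cloudflare lists with the regIpV4/regIpV6 patterns so that blank lines or HTML error pages never reach the template that generateConfig renders. A small self-contained illustration of those filters, using example CIDR lines:

```js
// The two patterns as defined at the top of this file.
const regIpV4 = /^(\d+\.?){4}\/\d+/;
const regIpV6 = /^(([\da-fA-F]+)?:)+\/\d+/;

const v4body = '173.245.48.0/20\n103.21.244.0/22\n\n<!-- not an ip range -->\n';
console.log(v4body.split('\n').filter((line) => regIpV4.test(line)));
// [ '173.245.48.0/20', '103.21.244.0/22' ]

const v6body = '2400:cb00::/32\n2606:4700::/32\n\n';
console.log(v6body.split('\n').filter((line) => regIpV6.test(line)));
// [ '2400:cb00::/32', '2606:4700::/32' ]
```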
|
||||
|
||||
|
@ -1,9 +1,9 @@
|
||||
const _ = require('lodash');
|
||||
const fs = require('fs');
|
||||
const Liquid = require('liquidjs');
|
||||
const logger = require('../logger').nginx;
|
||||
const utils = require('../lib/utils');
|
||||
const error = require('../lib/error');
|
||||
const { Liquid } = require('liquidjs');
|
||||
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
|
||||
|
||||
const internalNginx = {
|
||||
@ -136,8 +136,6 @@ const internalNginx = {
|
||||
* @returns {Promise}
|
||||
*/
|
||||
renderLocations: (host) => {
|
||||
|
||||
//logger.info('host = ' + JSON.stringify(host, null, 2));
|
||||
return new Promise((resolve, reject) => {
|
||||
let template;
|
||||
|
||||
@ -148,18 +146,12 @@ const internalNginx = {
|
||||
return;
|
||||
}
|
||||
|
||||
let renderer = new Liquid({
|
||||
root: __dirname + '/../templates/'
|
||||
});
|
||||
let renderer = new Liquid();
|
||||
let renderedLocations = '';
|
||||
|
||||
const locationRendering = async () => {
|
||||
for (let i = 0; i < host.locations.length; i++) {
|
||||
let locationCopy = Object.assign({}, {access_list_id: host.access_list_id}, {certificate_id: host.certificate_id},
|
||||
{ssl_forced: host.ssl_forced}, {caching_enabled: host.caching_enabled}, {block_exploits: host.block_exploits},
|
||||
{allow_websocket_upgrade: host.allow_websocket_upgrade}, {http2_support: host.http2_support},
|
||||
{hsts_enabled: host.hsts_enabled}, {hsts_subdomains: host.hsts_subdomains}, {access_list: host.access_list},
|
||||
{certificate: host.certificate}, host.locations[i]);
|
||||
let locationCopy = Object.assign({}, host.locations[i]);
|
||||
|
||||
if (locationCopy.forward_host.indexOf('/') > -1) {
|
||||
const splitted = locationCopy.forward_host.split('/');
|
||||
@ -168,16 +160,12 @@ const internalNginx = {
|
||||
locationCopy.forward_path = `/${splitted.join('/')}`;
|
||||
}
|
||||
|
||||
//logger.info('locationCopy = ' + JSON.stringify(locationCopy, null, 2));
|
||||
|
||||
// eslint-disable-next-line
|
||||
renderedLocations += await renderer.parseAndRender(template, locationCopy);
|
||||
}
|
||||
|
||||
};
|
||||
|
||||
locationRendering().then(() => resolve(renderedLocations));
|
||||
|
||||
});
|
||||
},
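Both sides of this hunk do the same thing, building a Liquid engine and rendering a template against a data object, but the develop branch uses the liquidjs v9 API: a constructed `new Liquid({...})` instance and a promise-based parseAndRender. A minimal standalone sketch of that API with a made-up inline template:

```js
const { Liquid } = require('liquidjs');

const engine = new Liquid({
	root: __dirname + '/../templates/', // directory used when rendering template files
});

// parseAndRender returns a Promise<string>.
engine
	.parseAndRender('location {{ path }} { proxy_pass http://{{ forward_host }}:{{ forward_port }}; }', {
		path:         '/api',
		forward_host: '127.0.0.1',
		forward_port: 3000,
	})
	.then((conf) => console.log(conf));
// location /api { proxy_pass http://127.0.0.1:3000; }
```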
|
||||
|
||||
@ -193,9 +181,7 @@ const internalNginx = {
|
||||
logger.info('Generating ' + host_type + ' Config:', host);
|
||||
}
|
||||
|
||||
// logger.info('host = ' + JSON.stringify(host, null, 2));
|
||||
|
||||
let renderEngine = new Liquid({
|
||||
let renderEngine = Liquid({
|
||||
root: __dirname + '/../templates/'
|
||||
});
|
||||
|
||||
@ -222,7 +208,6 @@ const internalNginx = {
|
||||
}
|
||||
|
||||
if (host.locations) {
|
||||
//logger.info ('host.locations = ' + JSON.stringify(host.locations, null, 2));
|
||||
origLocations = [].concat(host.locations);
|
||||
locationsPromise = internalNginx.renderLocations(host).then((renderedLocations) => {
|
||||
host.locations = renderedLocations;
|
||||
@ -239,9 +224,6 @@ const internalNginx = {
|
||||
locationsPromise = Promise.resolve();
|
||||
}
|
||||
|
||||
// Set the IPv6 setting for the host
|
||||
host.ipv6 = internalNginx.ipv6Enabled();
|
||||
|
||||
locationsPromise.then(() => {
|
||||
renderEngine
|
||||
.parseAndRender(template, host)
|
||||
@ -281,14 +263,13 @@ const internalNginx = {
|
||||
logger.info('Generating LetsEncrypt Request Config:', certificate);
|
||||
}
|
||||
|
||||
let renderEngine = new Liquid({
|
||||
let renderEngine = Liquid({
|
||||
root: __dirname + '/../templates/'
|
||||
});
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
let template = null;
|
||||
let filename = '/data/nginx/temp/letsencrypt_' + certificate.id + '.conf';
|
||||
|
||||
try {
|
||||
template = fs.readFileSync(__dirname + '/../templates/letsencrypt-request.conf', {encoding: 'utf8'});
|
||||
} catch (err) {
|
||||
@ -296,8 +277,6 @@ const internalNginx = {
|
||||
return;
|
||||
}
|
||||
|
||||
certificate.ipv6 = internalNginx.ipv6Enabled();
|
||||
|
||||
renderEngine
|
||||
.parseAndRender(template, certificate)
|
||||
.then((config_text) => {
|
||||
@ -417,18 +396,6 @@ const internalNginx = {
|
||||
*/
|
||||
advancedConfigHasDefaultLocation: function (config) {
|
||||
return !!config.match(/^(?:.*;)?\s*?location\s*?\/\s*?{/im);
|
||||
},
|
||||
|
||||
/**
|
||||
* @returns {boolean}
|
||||
*/
|
||||
ipv6Enabled: function () {
|
||||
if (typeof process.env.DISABLE_IPV6 !== 'undefined') {
|
||||
const disabled = process.env.DISABLE_IPV6.toLowerCase();
|
||||
return !(disabled === 'on' || disabled === 'true' || disabled === '1' || disabled === 'yes');
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
};
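ipv6Enabled() treats DISABLE_IPV6 as an opt-out flag: IPv6 stays on unless the variable is set to one of the truthy spellings checked above. A quick illustration of the same logic, pulled out so it can be run on its own:

```js
// Same checks as ipv6Enabled() above, with the environment passed in for testing.
function ipv6Enabled(env) {
	if (typeof env.DISABLE_IPV6 !== 'undefined') {
		const disabled = env.DISABLE_IPV6.toLowerCase();
		return !(disabled === 'on' || disabled === 'true' || disabled === '1' || disabled === 'yes');
	}
	return true;
}

console.log(ipv6Enabled({}));                       // true  - unset, IPv6 listen lines are rendered
console.log(ipv6Enabled({ DISABLE_IPV6: 'true' })); // false - the [::] listeners are commented out
console.log(ipv6Enabled({ DISABLE_IPV6: 'off' }));  // true  - any other value keeps IPv6 on
```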
@ -73,7 +73,7 @@ const internalProxyHost = {
|
||||
// re-fetch with cert
|
||||
return internalProxyHost.get(access, {
|
||||
id: row.id,
|
||||
expand: ['certificate', 'owner', 'access_list.[clients,items]']
|
||||
expand: ['certificate', 'owner', 'access_list']
|
||||
});
|
||||
})
|
||||
.then((row) => {
|
||||
@ -186,13 +186,9 @@ const internalProxyHost = {
|
||||
.then(() => {
|
||||
return internalProxyHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate', 'access_list.[clients,items]']
|
||||
expand: ['owner', 'certificate', 'access_list']
|
||||
})
|
||||
.then((row) => {
|
||||
if (!row.enabled) {
|
||||
// No need to add nginx config if host is disabled
|
||||
return row;
|
||||
}
|
||||
// Configure nginx
|
||||
return internalNginx.configure(proxyHostModel, 'proxy_host', row)
|
||||
.then((new_meta) => {
|
||||
@ -223,7 +219,7 @@ const internalProxyHost = {
|
||||
.query()
|
||||
.where('is_deleted', 0)
|
||||
.andWhere('id', data.id)
|
||||
.allowEager('[owner,access_list,access_list.[clients,items],certificate]')
|
||||
.allowEager('[owner,access_list,certificate]')
|
||||
.first();
|
||||
|
||||
if (access_data.permission_visibility !== 'all') {
|
||||
|
@ -4,21 +4,11 @@ module.exports = function (req, res, next) {
|
||||
|
||||
if (req.headers.origin) {
|
||||
|
||||
const originSchema = {
|
||||
oneOf: [
|
||||
{
|
||||
// very relaxed validation....
|
||||
validator({
|
||||
type: 'string',
|
||||
pattern: '^[a-z\\-]+:\\/\\/(?:[\\w\\-\\.]+(:[0-9]+)?/?)?$'
|
||||
},
|
||||
{
|
||||
type: 'string',
|
||||
pattern: '^[a-z\\-]+:\\/\\/(?:\\[([a-z0-9]{0,4}\\:?)+\\])?/?(:[0-9]+)?$'
|
||||
}
|
||||
]
|
||||
};
|
||||
|
||||
// very relaxed validation....
|
||||
validator(originSchema, req.headers.origin)
|
||||
}, req.headers.origin)
|
||||
.then(function () {
|
||||
res.set({
|
||||
'Access-Control-Allow-Origin': req.headers.origin,
|
||||
|
@ -22,6 +22,22 @@ exports.up = function (knex/*, Promise*/) {
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] setting Table created');
|
||||
|
||||
// TODO: add settings
|
||||
let settingModel = require('../models/setting');
|
||||
|
||||
return settingModel
|
||||
.query()
|
||||
.insert({
|
||||
id: 'default-site',
|
||||
name: 'Default Site',
|
||||
description: 'What to show when Nginx is hit with an unknown Host',
|
||||
value: 'congratulations',
|
||||
meta: {}
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] Default settings added');
|
||||
});
|
||||
};
|
||||
|
||||
|
@ -1,53 +0,0 @@
|
||||
const migrate_name = 'access_list_client';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.createTable('access_list_client', (table) => {
|
||||
table.increments().primary();
|
||||
table.dateTime('created_on').notNull();
|
||||
table.dateTime('modified_on').notNull();
|
||||
table.integer('access_list_id').notNull().unsigned();
|
||||
table.string('address').notNull();
|
||||
table.string('directive').notNull();
|
||||
table.json('meta').notNull();
|
||||
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] access_list_client Table created');
|
||||
|
||||
return knex.schema.table('access_list', function (access_list) {
|
||||
access_list.integer('satify_any').notNull().defaultTo(0);
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] access_list Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Down...');
|
||||
|
||||
return knex.schema.dropTable('access_list_client')
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] access_list_client Table dropped');
|
||||
});
|
||||
};
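Once this migration has run, each access list can carry per-client allow/deny rules in the new table, plus a satisfy flag (initially misspelled satify_any and renamed by the follow-up migration below). A hedged sketch of inserting one rule with knex; the column names come from the createTable() call above, the values are examples:

```js
// Sketch only: assumes a configured knex instance is passed in.
async function addClientRule(knex, accessListId) {
	await knex('access_list_client').insert({
		created_on:     knex.fn.now(),
		modified_on:    knex.fn.now(),
		access_list_id: accessListId,
		address:        '192.168.0.0/24',      // example CIDR
		directive:      'allow',               // 'allow' or 'deny'
		meta:           JSON.stringify({}),
	});
}
```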
|
@ -1,34 +0,0 @@
|
||||
const migrate_name = 'access_list_client_fix';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('access_list', function (access_list) {
|
||||
access_list.renameColumn('satify_any', 'satisfy_any');
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] access_list Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex, Promise) {
|
||||
logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
|
||||
return Promise.resolve(true);
|
||||
};
|
@ -1,41 +0,0 @@
|
||||
const migrate_name = 'pass_auth';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('access_list', function (access_list) {
|
||||
access_list.integer('pass_auth').notNull().defaultTo(1);
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] access_list Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Down...');
|
||||
|
||||
return knex.schema.table('access_list', function (access_list) {
|
||||
access_list.dropColumn('pass_auth');
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] access_list pass_auth Column dropped');
|
||||
});
|
||||
};
|
@ -1,41 +0,0 @@
|
||||
const migrate_name = 'redirection_scheme';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('redirection_host', (table) => {
|
||||
table.string('forward_scheme').notNull().defaultTo('$scheme');
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] redirection_host Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Down...');
|
||||
|
||||
return knex.schema.table('redirection_host', (table) => {
|
||||
table.dropColumn('forward_scheme');
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] redirection_host Table altered');
|
||||
});
|
||||
};
|
@ -1,41 +0,0 @@
|
||||
const migrate_name = 'redirection_status_code';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('redirection_host', (table) => {
|
||||
table.integer('forward_http_code').notNull().unsigned().defaultTo(302);
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] redirection_host Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Down...');
|
||||
|
||||
return knex.schema.table('redirection_host', (table) => {
|
||||
table.dropColumn('forward_http_code');
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] redirection_host Table altered');
|
||||
});
|
||||
};
|
@ -1,40 +0,0 @@
|
||||
const migrate_name = 'stream_domain';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('stream', (table) => {
|
||||
table.renameColumn('forward_ip', 'forwarding_host');
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] stream Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Down...');
|
||||
|
||||
return knex.schema.table('stream', (table) => {
|
||||
table.renameColumn('forwarding_host', 'forward_ip');
|
||||
})
|
||||
.then(function () {
|
||||
logger.info('[' + migrate_name + '] stream Table altered');
|
||||
});
|
||||
};
|
@ -1,50 +0,0 @@
|
||||
const migrate_name = 'stream_domain';
|
||||
const logger = require('../logger').migrate;
|
||||
const internalNginx = require('../internal/nginx');
|
||||
|
||||
async function regenerateDefaultHost(knex) {
|
||||
const row = await knex('setting').select('*').where('id', 'default-site').first();
|
||||
|
||||
if (!row) {
|
||||
return Promise.resolve();
|
||||
}
|
||||
|
||||
return internalNginx.deleteConfig('default')
|
||||
.then(() => {
|
||||
return internalNginx.generateConfig('default', row);
|
||||
})
|
||||
.then(() => {
|
||||
return internalNginx.test();
|
||||
})
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return regenerateDefaultHost(knex);
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex) {
|
||||
logger.info('[' + migrate_name + '] Migrating Down...');
|
||||
|
||||
return regenerateDefaultHost(knex);
|
||||
};
|
@ -5,15 +5,13 @@ const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const AccessListAuth = require('./access_list_auth');
|
||||
const AccessListClient = require('./access_list_client');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class AccessList extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for meta
|
||||
if (typeof this.meta === 'undefined') {
|
||||
@ -22,7 +20,7 @@ class AccessList extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
static get name () {
|
||||
@ -64,17 +62,6 @@ class AccessList extends Model {
|
||||
qb.omit(['id', 'created_on', 'modified_on', 'access_list_id', 'meta']);
|
||||
}
|
||||
},
|
||||
clients: {
|
||||
relation: Model.HasManyRelation,
|
||||
modelClass: AccessListClient,
|
||||
join: {
|
||||
from: 'access_list.id',
|
||||
to: 'access_list_client.access_list_id'
|
||||
},
|
||||
modify: function (qb) {
|
||||
qb.omit(['id', 'created_on', 'modified_on', 'access_list_id', 'meta']);
|
||||
}
|
||||
},
|
||||
proxy_hosts: {
|
||||
relation: Model.HasManyRelation,
|
||||
modelClass: ProxyHost,
|
||||
@ -89,14 +76,6 @@ class AccessList extends Model {
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
get satisfy() {
|
||||
return this.satisfy_any ? 'satisfy any' : 'satisfy all';
|
||||
}
|
||||
|
||||
get passauth() {
|
||||
return this.pass_auth ? '' : 'proxy_set_header Authorization "";';
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = AccessList;
|
||||
|
@ -3,14 +3,13 @@
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class AccessListAuth extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for meta
|
||||
if (typeof this.meta === 'undefined') {
|
||||
@ -19,7 +18,7 @@ class AccessListAuth extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
static get name () {
|
||||
|
@ -1,59 +0,0 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class AccessListClient extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
|
||||
// Default for meta
|
||||
if (typeof this.meta === 'undefined') {
|
||||
this.meta = {};
|
||||
}
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
}
|
||||
|
||||
static get name () {
|
||||
return 'AccessListClient';
|
||||
}
|
||||
|
||||
static get tableName () {
|
||||
return 'access_list_client';
|
||||
}
|
||||
|
||||
static get jsonAttributes () {
|
||||
return ['meta'];
|
||||
}
|
||||
|
||||
static get relationMappings () {
|
||||
return {
|
||||
access_list: {
|
||||
relation: Model.HasOneRelation,
|
||||
modelClass: require('./access_list'),
|
||||
join: {
|
||||
from: 'access_list_client.access_list_id',
|
||||
to: 'access_list.id'
|
||||
},
|
||||
modify: function (qb) {
|
||||
qb.where('access_list.is_deleted', 0);
|
||||
qb.omit(['created_on', 'modified_on', 'is_deleted', 'access_list_id']);
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
get rule() {
|
||||
return `${this.directive} ${this.address}`;
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = AccessListClient;
|
@ -4,14 +4,13 @@
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class AuditLog extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for meta
|
||||
if (typeof this.meta === 'undefined') {
|
||||
@ -20,7 +19,7 @@ class AuditLog extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
static get name () {
|
||||
|
@ -5,7 +5,6 @@ const bcrypt = require('bcrypt');
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
@ -25,8 +24,8 @@ function encryptPassword () {
|
||||
|
||||
class Auth extends Model {
|
||||
$beforeInsert (queryContext) {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for meta
|
||||
if (typeof this.meta === 'undefined') {
|
||||
@ -37,7 +36,7 @@ class Auth extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate (queryContext) {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
return encryptPassword.apply(this, queryContext);
|
||||
}
|
||||
|
||||
|
@ -4,18 +4,17 @@
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class Certificate extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for expires_on
|
||||
if (typeof this.expires_on === 'undefined') {
|
||||
this.expires_on = now();
|
||||
this.expires_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
// Default for domain_names
|
||||
@ -32,7 +31,7 @@ class Certificate extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Sort domain_names
|
||||
if (typeof this.domain_names !== 'undefined') {
|
||||
|
@ -5,14 +5,13 @@ const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const Certificate = require('./certificate');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class DeadHost extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for domain_names
|
||||
if (typeof this.domain_names === 'undefined') {
|
||||
@ -28,7 +27,7 @@ class DeadHost extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Sort domain_names
|
||||
if (typeof this.domain_names !== 'undefined') {
|
||||
|
@ -1,13 +0,0 @@
|
||||
const db     = require('../db');
const config = require('config');
const Model  = require('objection').Model;

Model.knex(db);

module.exports = function () {
    if (config.database.knex && config.database.knex.client === 'sqlite3') {
        return Model.raw('datetime(\'now\',\'localtime\')');
    } else {
        return Model.raw('NOW()');
    }
};
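The new now_helper keeps the models portable between MySQL and SQLite: it returns a raw datetime('now','localtime') expression for sqlite3 and NOW() otherwise, and the model hooks below plug it straight into created_on/modified_on. A short sketch of the call site, mirroring the models in this diff:

```js
// Sketch of how the models below consume the helper.
const Model = require('objection').Model;
const now   = require('./now_helper');

class ExampleModel extends Model {
	static get tableName() { return 'example'; }

	$beforeInsert() {
		this.created_on  = now(); // NOW() on MySQL, datetime('now','localtime') on SQLite
		this.modified_on = now();
	}

	$beforeUpdate() {
		this.modified_on = now();
	}
}

module.exports = ExampleModel;
```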
|
@ -6,14 +6,13 @@ const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const AccessList = require('./access_list');
|
||||
const Certificate = require('./certificate');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class ProxyHost extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for domain_names
|
||||
if (typeof this.domain_names === 'undefined') {
|
||||
@ -29,7 +28,7 @@ class ProxyHost extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Sort domain_names
|
||||
if (typeof this.domain_names !== 'undefined') {
|
||||
|
@ -5,14 +5,13 @@ const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const Certificate = require('./certificate');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class RedirectionHost extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for domain_names
|
||||
if (typeof this.domain_names === 'undefined') {
|
||||
@ -28,7 +27,7 @@ class RedirectionHost extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Sort domain_names
|
||||
if (typeof this.domain_names !== 'undefined') {
|
||||
|
@ -4,14 +4,13 @@
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class Stream extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for meta
|
||||
if (typeof this.meta === 'undefined') {
|
||||
@ -20,7 +19,7 @@ class Stream extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
static get name () {
|
||||
|
@ -4,23 +4,15 @@
|
||||
*/
|
||||
|
||||
const _      = require('lodash');
const config = require('config');
const jwt    = require('jsonwebtoken');
const crypto = require('crypto');
const error  = require('../lib/error');
const ALGO   = 'RS256';

let public_key  = null;
let private_key = null;

function checkJWTKeyPair() {
    if (!public_key || !private_key) {
        let config = require('config');
        public_key = config.get('jwt.pub');
        private_key = config.get('jwt.key');
    }
}
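checkJWTKeyPair() lazily re-reads the RSA key pair from config, which matters because setup may only just have written new keys to the config file. For context, this is roughly how jsonwebtoken handles an RS256 key pair; a minimal runnable sketch, not this module's actual create/load implementation, with the key pair generated on the fly instead of loaded from config:

```js
const jwt = require('jsonwebtoken');
const { generateKeyPairSync } = require('crypto');

// Stand-in for the keys that setup writes to config ('jwt.key' / 'jwt.pub').
const { privateKey, publicKey } = generateKeyPairSync('rsa', {
	modulusLength:      2048,
	privateKeyEncoding: { type: 'pkcs1', format: 'pem' },
	publicKeyEncoding:  { type: 'spki',  format: 'pem' },
});

// Sign: roughly what create() does with the token_data it builds.
jwt.sign({ scope: ['user'] }, privateKey, { algorithm: 'RS256', expiresIn: '1d' }, (err, token) => {
	if (err) throw err;

	// Verify: roughly what load() does before trusting a request.
	const decoded = jwt.verify(token, publicKey, { algorithms: ['RS256'] });
	console.log(decoded.scope); // [ 'user' ]
});
```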
|
||||
|
||||
module.exports = function () {
|
||||
const public_key = config.get('jwt.pub');
|
||||
const private_key = config.get('jwt.key');
|
||||
|
||||
let token_data = {};
|
||||
|
||||
@ -40,8 +32,6 @@ module.exports = function () {
|
||||
.toString('base64')
|
||||
.substr(-8);
|
||||
|
||||
checkJWTKeyPair();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
jwt.sign(payload, private_key, options, (err, token) => {
|
||||
if (err) {
|
||||
@ -63,7 +53,6 @@ module.exports = function () {
|
||||
*/
|
||||
load: function (token) {
|
||||
return new Promise((resolve, reject) => {
|
||||
checkJWTKeyPair();
|
||||
try {
|
||||
if (!token || token === null || token === 'null') {
|
||||
reject(new error.AuthError('Empty token'));
|
||||
|
@ -4,14 +4,13 @@
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const UserPermission = require('./user_permission');
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class User extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
|
||||
// Default for roles
|
||||
if (typeof this.roles === 'undefined') {
|
||||
@ -20,7 +19,7 @@ class User extends Model {
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
static get name () {
|
||||
|
@ -3,18 +3,17 @@
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const now = require('./now_helper');
|
||||
|
||||
Model.knex(db);
|
||||
|
||||
class UserPermission extends Model {
|
||||
$beforeInsert () {
|
||||
this.created_on = now();
|
||||
this.modified_on = now();
|
||||
this.created_on = Model.raw('NOW()');
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
$beforeUpdate () {
|
||||
this.modified_on = now();
|
||||
this.modified_on = Model.raw('NOW()');
|
||||
}
|
||||
|
||||
static get name () {
|
||||
|
@ -1,33 +1,38 @@
|
||||
{
|
||||
"name": "nginx-proxy-manager",
|
||||
"version": "0.0.0",
|
||||
"version": "2.1.0",
|
||||
"description": "A beautiful interface for creating Nginx endpoints",
|
||||
"main": "js/index.js",
|
||||
"dependencies": {
|
||||
"ajv": "^6.12.0",
|
||||
"archiver": "^5.3.0",
|
||||
"ajv": "^6.5.1",
|
||||
"batchflow": "^0.4.0",
|
||||
"bcrypt": "^5.0.0",
|
||||
"body-parser": "^1.19.0",
|
||||
"compression": "^1.7.4",
|
||||
"config": "^3.3.1",
|
||||
"express": "^4.17.1",
|
||||
"express-fileupload": "^1.1.9",
|
||||
"gravatar": "^1.8.0",
|
||||
"json-schema-ref-parser": "^8.0.0",
|
||||
"jsonwebtoken": "^8.5.1",
|
||||
"knex": "^0.20.13",
|
||||
"liquidjs": "^9.11.10",
|
||||
"lodash": "^4.17.21",
|
||||
"moment": "^2.29.4",
|
||||
"mysql": "^2.18.1",
|
||||
"node-rsa": "^1.0.8",
|
||||
"bcrypt": "^3.0.0",
|
||||
"body-parser": "^1.18.3",
|
||||
"compression": "^1.7.2",
|
||||
"config": "^2.0.1",
|
||||
"diskdb": "^0.1.17",
|
||||
"express": "^4.16.3",
|
||||
"express-fileupload": "^0.4.0",
|
||||
"gravatar": "^1.6.0",
|
||||
"html-entities": "^1.2.1",
|
||||
"json-schema-ref-parser": "^5.0.3",
|
||||
"jsonwebtoken": "^8.3.0",
|
||||
"knex": "^0.19.5",
|
||||
"liquidjs": "^5.1.1",
|
||||
"lodash": "^4.17.10",
|
||||
"moment": "^2.22.2",
|
||||
"mysql": "^2.15.0",
|
||||
"node-rsa": "^1.0.0",
|
||||
"nodemon": "^2.0.2",
|
||||
"objection": "^2.2.16",
|
||||
"objection": "^1.1.10",
|
||||
"path": "^0.12.7",
|
||||
"signale": "^1.4.0",
|
||||
"sqlite3": "^4.1.1",
|
||||
"temp-write": "^4.0.0"
|
||||
"restler": "^3.4.0",
|
||||
"signale": "^1.2.1",
|
||||
"temp-write": "^3.4.0",
|
||||
"unix-timestamp": "^0.2.0"
|
||||
},
|
||||
"scripts": {
|
||||
"build": "webpack --mode production"
|
||||
},
|
||||
"signale": {
|
||||
"displayDate": true,
|
||||
@ -38,6 +43,6 @@
|
||||
"devDependencies": {
|
||||
"eslint": "^6.8.0",
|
||||
"eslint-plugin-align-assignments": "^1.1.2",
|
||||
"prettier": "^2.0.4"
|
||||
"prettier": "^1.19.1"
|
||||
}
|
||||
}
|
||||
|
@ -58,7 +58,6 @@ router
|
||||
.post((req, res, next) => {
|
||||
apiValidator({$ref: 'endpoints/certificates#/links/1/schema'}, req.body)
|
||||
.then((payload) => {
|
||||
req.setTimeout(900000); // 15 minutes timeout
|
||||
return internalCertificate.create(res.locals.access, payload);
|
||||
})
|
||||
.then((result) => {
|
||||
@ -68,32 +67,6 @@ router
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Test HTTP challenge for domains
|
||||
*
|
||||
* /api/nginx/certificates/test-http
|
||||
*/
|
||||
router
|
||||
.route('/test-http')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/certificates/test-http
|
||||
*
|
||||
* Test HTTP challenge for domains
|
||||
*/
|
||||
.get((req, res, next) => {
|
||||
internalCertificate.testHttpsChallenge(res.locals.access, JSON.parse(req.query.domains))
|
||||
.then((result) => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
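Note that the handler JSON.parses req.query.domains, so a client has to send the domain list as a URL-encoded JSON array. A hedged sketch of calling the endpoint from Node 18+ or any fetch-capable client; the host, port and token are placeholders, not values from this diff:

```js
// Sketch only: assumes the API is reachable on localhost:81 and a JWT is already available.
const domains = JSON.stringify(['example.com', 'www.example.com']);
const url     = 'http://localhost:81/api/nginx/certificates/test-http?domains=' + encodeURIComponent(domains);

fetch(url, { headers: { Authorization: 'Bearer <token>' } })
	.then((res) => res.json())
	.then((result) => console.log(result)); // { 'example.com': 'ok', ... }
```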
|
||||
|
||||
/**
|
||||
* Specific certificate
|
||||
*
|
||||
@ -224,7 +197,6 @@ router
|
||||
* Renew certificate
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
req.setTimeout(900000); // 15 minutes timeout
|
||||
internalCertificate.renew(res.locals.access, {
|
||||
id: parseInt(req.params.certificate_id, 10)
|
||||
})
|
||||
@ -235,34 +207,6 @@ router
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Download LE Certs
|
||||
*
|
||||
* /api/nginx/certificates/123/download
|
||||
*/
|
||||
router
|
||||
.route('/:certificate_id/download')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/certificates/123/download
|
||||
*
|
||||
* Renew certificate
|
||||
*/
|
||||
.get((req, res, next) => {
|
||||
internalCertificate.download(res.locals.access, {
|
||||
id: parseInt(req.params.certificate_id, 10)
|
||||
})
|
||||
.then((result) => {
|
||||
res.status(200)
|
||||
.download(result.fileName);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Validate Certs before saving
|
||||
*
|
||||
|
@ -153,7 +153,7 @@
|
||||
"example": "john@example.com",
|
||||
"format": "email",
|
||||
"type": "string",
|
||||
"minLength": 6,
|
||||
"minLength": 8,
|
||||
"maxLength": 100
|
||||
},
|
||||
"password": {
|
||||
@ -179,19 +179,6 @@
|
||||
"pattern": "^(?:\\*\\.)?(?:[^.*]+\\.?)+[^.]$"
|
||||
}
|
||||
},
|
||||
"http_code": {
|
||||
"description": "Redirect HTTP Status Code",
|
||||
"example": 302,
|
||||
"type": "integer",
|
||||
"minimum": 300,
|
||||
"maximum": 308
|
||||
},
|
||||
"scheme": {
|
||||
"description": "RFC Protocol",
|
||||
"example": "HTTPS or $scheme",
|
||||
"type": "string",
|
||||
"minLength": 4
|
||||
},
|
||||
"enabled": {
|
||||
"description": "Is Enabled",
|
||||
"example": true,
|
||||
|
@ -19,32 +19,6 @@
|
||||
"type": "string",
|
||||
"description": "Name of the Access List"
|
||||
},
|
||||
"directive": {
|
||||
"type": "string",
|
||||
"enum": ["allow", "deny"]
|
||||
},
|
||||
"address": {
|
||||
"oneOf": [
|
||||
{
|
||||
"type": "string",
|
||||
"pattern": "^([0-9]{1,3}\\.){3}[0-9]{1,3}(/([0-9]|[1-2][0-9]|3[0-2]))?$"
|
||||
},
|
||||
{
|
||||
"type": "string",
|
||||
"pattern": "^s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:)))(%.+)?s*(/([0-9]|[1-9][0-9]|1[0-1][0-9]|12[0-8]))?$"
|
||||
},
|
||||
{
|
||||
"type": "string",
|
||||
"pattern": "^all$"
|
||||
}
|
||||
]
|
||||
},
|
||||
"satisfy_any": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"pass_auth": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"meta": {
|
||||
"type": "object"
|
||||
}
|
||||
@ -97,20 +71,16 @@
|
||||
"schema": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["name"],
|
||||
"required": [
|
||||
"name"
|
||||
],
|
||||
"properties": {
|
||||
"name": {
|
||||
"$ref": "#/definitions/name"
|
||||
},
|
||||
"satisfy_any": {
|
||||
"$ref": "#/definitions/satisfy_any"
|
||||
},
|
||||
"pass_auth": {
|
||||
"$ref": "#/definitions/pass_auth"
|
||||
},
|
||||
"items": {
|
||||
"type": "array",
|
||||
"minItems": 0,
|
||||
"minItems": 1,
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
@ -126,22 +96,6 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"clients": {
|
||||
"type": "array",
|
||||
"minItems": 0,
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"address": {
|
||||
"$ref": "#/definitions/address"
|
||||
},
|
||||
"directive": {
|
||||
"$ref": "#/definitions/directive"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
}
|
||||
@ -170,15 +124,9 @@
|
||||
"name": {
|
||||
"$ref": "#/definitions/name"
|
||||
},
|
||||
"satisfy_any": {
|
||||
"$ref": "#/definitions/satisfy_any"
|
||||
},
|
||||
"pass_auth": {
|
||||
"$ref": "#/definitions/pass_auth"
|
||||
},
|
||||
"items": {
|
||||
"type": "array",
|
||||
"minItems": 0,
|
||||
"minItems": 1,
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
@ -193,22 +141,6 @@
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"clients": {
|
||||
"type": "array",
|
||||
"minItems": 0,
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"address": {
|
||||
"$ref": "#/definitions/address"
|
||||
},
|
||||
"directive": {
|
||||
"$ref": "#/definitions/directive"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
|
@ -41,24 +41,6 @@
|
||||
},
|
||||
"letsencrypt_agree": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"dns_challenge": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"dns_provider": {
|
||||
"type": "string"
|
||||
},
|
||||
"dns_provider_credentials": {
|
||||
"type": "string"
|
||||
},
|
||||
"propagation_seconds": {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "integer",
|
||||
"minimum": 0
|
||||
}
|
||||
]
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -157,17 +139,6 @@
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Test HTTP Challenge",
|
||||
"description": "Tests whether the HTTP challenge should work",
|
||||
"href": "/nginx/certificates/{definitions.identity.example}/test-http",
|
||||
"access": "private",
|
||||
"method": "GET",
|
||||
"rel": "info",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
@ -25,7 +25,7 @@
|
||||
"forward_host": {
|
||||
"type": "string",
|
||||
"minLength": 1,
|
||||
"maxLength": 255
|
||||
"maxLength": 50
|
||||
},
|
||||
"forward_port": {
|
||||
"type": "integer",
|
||||
|
@ -18,12 +18,6 @@
|
||||
"domain_names": {
|
||||
"$ref": "../definitions.json#/definitions/domain_names"
|
||||
},
|
||||
"forward_http_code": {
|
||||
"$ref": "../definitions.json#/definitions/http_code"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "../definitions.json#/definitions/scheme"
|
||||
},
|
||||
"forward_domain_name": {
|
||||
"$ref": "../definitions.json#/definitions/domain_name"
|
||||
},
|
||||
@ -73,12 +67,6 @@
|
||||
"domain_names": {
|
||||
"$ref": "#/definitions/domain_names"
|
||||
},
|
||||
"forward_http_code": {
|
||||
"$ref": "#/definitions/forward_http_code"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_domain_name": {
|
||||
"$ref": "#/definitions/forward_domain_name"
|
||||
},
|
||||
@ -146,20 +134,12 @@
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"domain_names",
|
||||
"forward_scheme",
|
||||
"forward_http_code",
|
||||
"forward_domain_name"
|
||||
],
|
||||
"properties": {
|
||||
"domain_names": {
|
||||
"$ref": "#/definitions/domain_names"
|
||||
},
|
||||
"forward_http_code": {
|
||||
"$ref": "#/definitions/forward_http_code"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_domain_name": {
|
||||
"$ref": "#/definitions/forward_domain_name"
|
||||
},
|
||||
@ -215,12 +195,6 @@
|
||||
"domain_names": {
|
||||
"$ref": "#/definitions/domain_names"
|
||||
},
|
||||
"forward_http_code": {
|
||||
"$ref": "#/definitions/forward_http_code"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_domain_name": {
|
||||
"$ref": "#/definitions/forward_domain_name"
|
||||
},
|
||||
|
@ -20,21 +20,10 @@
|
||||
"minimum": 1,
|
||||
"maximum": 65535
|
||||
},
|
||||
"forwarding_host": {
|
||||
"anyOf": [
|
||||
{
|
||||
"$ref": "../definitions.json#/definitions/domain_name"
|
||||
},
|
||||
{
|
||||
"forward_ip": {
|
||||
"type": "string",
|
||||
"format": "ipv4"
|
||||
},
|
||||
{
|
||||
"type": "string",
|
||||
"format": "ipv6"
|
||||
}
|
||||
]
|
||||
},
|
||||
"forwarding_port": {
|
||||
"type": "integer",
|
||||
"minimum": 1,
|
||||
@ -66,8 +55,8 @@
|
||||
"incoming_port": {
|
||||
"$ref": "#/definitions/incoming_port"
|
||||
},
|
||||
"forwarding_host": {
|
||||
"$ref": "#/definitions/forwarding_host"
|
||||
"forward_ip": {
|
||||
"$ref": "#/definitions/forward_ip"
|
||||
},
|
||||
"forwarding_port": {
|
||||
"$ref": "#/definitions/forwarding_port"
|
||||
@ -118,15 +107,15 @@
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"incoming_port",
|
||||
"forwarding_host",
|
||||
"forward_ip",
|
||||
"forwarding_port"
|
||||
],
|
||||
"properties": {
|
||||
"incoming_port": {
|
||||
"$ref": "#/definitions/incoming_port"
|
||||
},
|
||||
"forwarding_host": {
|
||||
"$ref": "#/definitions/forwarding_host"
|
||||
"forward_ip": {
|
||||
"$ref": "#/definitions/forward_ip"
|
||||
},
|
||||
"forwarding_port": {
|
||||
"$ref": "#/definitions/forwarding_port"
|
||||
@ -165,8 +154,8 @@
|
||||
"incoming_port": {
|
||||
"$ref": "#/definitions/incoming_port"
|
||||
},
|
||||
"forwarding_host": {
|
||||
"$ref": "#/definitions/forwarding_host"
|
||||
"forward_ip": {
|
||||
"$ref": "#/definitions/forward_ip"
|
||||
},
|
||||
"forwarding_port": {
|
||||
"$ref": "#/definitions/forwarding_port"
|
||||
|
163
backend/setup.js
163
backend/setup.js
@ -2,21 +2,12 @@ const fs = require('fs');
|
||||
const NodeRSA = require('node-rsa');
|
||||
const config = require('config');
|
||||
const logger = require('./logger').setup;
|
||||
const certificateModel = require('./models/certificate');
|
||||
const userModel = require('./models/user');
|
||||
const userPermissionModel = require('./models/user_permission');
|
||||
const utils = require('./lib/utils');
|
||||
const authModel = require('./models/auth');
|
||||
const settingModel = require('./models/setting');
|
||||
const dns_plugins = require('./global/certbot-dns-plugins');
|
||||
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
|
||||
|
||||
/**
|
||||
* Creates a new JWT RSA Keypair if not already set on the config
|
||||
*
|
||||
* @returns {Promise}
|
||||
*/
|
||||
const setupJwt = () => {
|
||||
module.exports = function () {
|
||||
return new Promise((resolve, reject) => {
|
||||
// Now go and check if the jwt gpg keys have been created and if not, create them
|
||||
if (!config.has('jwt') || !config.has('jwt.key') || !config.has('jwt.pub')) {
|
||||
@ -36,12 +27,12 @@ const setupJwt = () => {
|
||||
}
|
||||
|
||||
// Now create the keys and save them in the config.
|
||||
let key = new NodeRSA({ b: 2048 });
|
||||
let key = new NodeRSA({b: 2048});
|
||||
key.generateKeyPair();
|
||||
|
||||
config_data.jwt = {
|
||||
key: key.exportKey('private').toString(),
|
||||
pub: key.exportKey('public').toString(),
|
||||
pub: key.exportKey('public').toString()
|
||||
};
|
||||
|
||||
// Write config
|
||||
@ -51,10 +42,12 @@ const setupJwt = () => {
|
||||
reject(err);
|
||||
} else {
|
||||
logger.info('Wrote JWT key pair to config file: ' + filename);
|
||||
delete require.cache[require.resolve('config')];
|
||||
resolve();
|
||||
|
||||
logger.warn('Restarting interface to apply new configuration');
|
||||
process.exit(0);
|
||||
}
|
||||
});
|
||||
|
||||
} else {
|
||||
// JWT key pair exists
|
||||
if (debug_mode) {
|
||||
@ -63,20 +56,14 @@ const setupJwt = () => {
|
||||
|
||||
resolve();
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Creates a default admin users if one doesn't already exist in the database
|
||||
*
|
||||
* @returns {Promise}
|
||||
*/
|
||||
const setupDefaultUser = () => {
|
||||
})
|
||||
.then(() => {
|
||||
return userModel
|
||||
.query()
|
||||
.select(userModel.raw('COUNT(`id`) as `count`'))
|
||||
.where('is_deleted', 0)
|
||||
.first()
|
||||
.first();
|
||||
})
|
||||
.then((row) => {
|
||||
if (!row.count) {
|
||||
// Create a new user and set password
|
||||
@ -88,7 +75,7 @@ const setupDefaultUser = () => {
|
||||
name: 'Administrator',
|
||||
nickname: 'Admin',
|
||||
avatar: '',
|
||||
roles: ['admin'],
|
||||
roles: ['admin']
|
||||
};
|
||||
|
||||
return userModel
|
||||
@ -101,10 +88,12 @@ const setupDefaultUser = () => {
|
||||
user_id: user.id,
|
||||
type: 'password',
|
||||
secret: 'changeme',
|
||||
meta: {},
|
||||
meta: {}
|
||||
})
|
||||
.then(() => {
|
||||
return userPermissionModel.query().insert({
|
||||
return userPermissionModel
|
||||
.query()
|
||||
.insert({
|
||||
user_id: user.id,
|
||||
visibility: 'all',
|
||||
proxy_hosts: 'manage',
|
||||
@ -112,131 +101,15 @@ const setupDefaultUser = () => {
|
||||
dead_hosts: 'manage',
|
||||
streams: 'manage',
|
||||
access_lists: 'manage',
|
||||
certificates: 'manage',
|
||||
certificates: 'manage'
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('Initial admin setup completed');
|
||||
logger.info('Initial setup completed');
|
||||
});
|
||||
} else if (debug_mode) {
|
||||
logger.debug('Admin user setup not required');
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Creates default settings if they don't already exist in the database
|
||||
*
|
||||
* @returns {Promise}
|
||||
*/
|
||||
const setupDefaultSettings = () => {
|
||||
return settingModel
|
||||
.query()
|
||||
.select(settingModel.raw('COUNT(`id`) as `count`'))
|
||||
.where({id: 'default-site'})
|
||||
.first()
|
||||
.then((row) => {
|
||||
if (!row.count) {
|
||||
settingModel
|
||||
.query()
|
||||
.insert({
|
||||
id: 'default-site',
|
||||
name: 'Default Site',
|
||||
description: 'What to show when Nginx is hit with an unknown Host',
|
||||
value: 'congratulations',
|
||||
meta: {},
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('Default settings added');
|
||||
});
|
||||
}
|
||||
if (debug_mode) {
|
||||
logger.debug('Default setting setup not required');
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Installs all Certbot plugins which are required for an installed certificate
|
||||
*
|
||||
* @returns {Promise}
|
||||
*/
|
||||
const setupCertbotPlugins = () => {
|
||||
return certificateModel
|
||||
.query()
|
||||
.where('is_deleted', 0)
|
||||
.andWhere('provider', 'letsencrypt')
|
||||
.then((certificates) => {
|
||||
if (certificates && certificates.length) {
|
||||
let plugins = [];
|
||||
let promises = [];
|
||||
let install_cloudflare_plugin = false;
|
||||
|
||||
certificates.map(function (certificate) {
|
||||
if (certificate.meta && certificate.meta.dns_challenge === true) {
|
||||
const dns_plugin = dns_plugins[certificate.meta.dns_provider];
|
||||
|
||||
if (dns_plugin.package_name === 'certbot-dns-cloudflare') {
|
||||
install_cloudflare_plugin = true;
|
||||
} else {
|
||||
const packages_to_install = `${dns_plugin.package_name}${dns_plugin.version_requirement || ''} ${dns_plugin.dependencies}`;
|
||||
if (plugins.indexOf(packages_to_install) === -1) plugins.push(packages_to_install);
|
||||
}
|
||||
|
||||
// Make sure credentials file exists
|
||||
const credentials_loc = '/etc/letsencrypt/credentials/credentials-' + certificate.id;
|
||||
// Escape single quotes and backslashes
|
||||
const escapedCredentials = certificate.meta.dns_provider_credentials.replaceAll('\'', '\\\'').replaceAll('\\', '\\\\');
|
||||
const credentials_cmd = '[ -f \'' + credentials_loc + '\' ] || { mkdir -p /etc/letsencrypt/credentials 2> /dev/null; echo \'' + escapedCredentials + '\' > \'' + credentials_loc + '\' && chmod 600 \'' + credentials_loc + '\'; }';
|
||||
promises.push(utils.exec(credentials_cmd));
|
||||
}
|
||||
});
|
||||
|
||||
if (plugins.length) {
|
||||
const install_cmd = 'pip install ' + plugins.join(' ');
|
||||
promises.push(utils.exec(install_cmd));
|
||||
}
|
||||
|
||||
if (install_cloudflare_plugin) {
|
||||
promises.push(utils.exec('pip install certbot-dns-cloudflare --index-url https://www.piwheels.org/simple --prefer-binary'));
|
||||
}
|
||||
|
||||
if (promises.length) {
|
||||
return Promise.all(promises)
|
||||
.then(() => {
|
||||
logger.info('Added Certbot plugins ' + plugins.join(', '));
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
|
||||
/**
|
||||
* Starts a timer to run the logrotation binary every two days
|
||||
* @returns {Promise}
|
||||
*/
|
||||
const setupLogrotation = () => {
|
||||
const intervalTimeout = 1000 * 60 * 60 * 24 * 2; // 2 days
|
||||
|
||||
const runLogrotate = async () => {
|
||||
try {
|
||||
await utils.exec('logrotate /etc/logrotate.d/nginx-proxy-manager');
|
||||
logger.info('Logrotate completed.');
|
||||
} catch (e) { logger.warn(e); }
|
||||
};
|
||||
|
||||
logger.info('Logrotate Timer initialized');
|
||||
setInterval(runLogrotate, intervalTimeout);
|
||||
// And do this now as well
|
||||
return runLogrotate();
|
||||
};
|
||||
|
||||
module.exports = function () {
|
||||
return setupJwt()
|
||||
.then(setupDefaultUser)
|
||||
.then(setupDefaultSettings)
|
||||
.then(setupCertbotPlugins)
|
||||
.then(setupLogrotation);
|
||||
};
|
||||
|
@ -1,8 +1,8 @@
|
||||
{% if certificate and certificate_id > 0 -%}
|
||||
{% if ssl_forced == 1 or ssl_forced == true %}
|
||||
{% if hsts_enabled == 1 or hsts_enabled == true %}
|
||||
# HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
|
||||
add_header Strict-Transport-Security "max-age=63072000;{% if hsts_subdomains == 1 or hsts_subdomains == true -%} includeSubDomains;{% endif %} preload" always;
|
||||
# HSTS (ngx_http_headers_module is required) (31536000 seconds = 1 year)
|
||||
add_header Strict-Transport-Security "max-age=31536000;{% if hsts_subdomains == 1 or hsts_subdomains == true -%} includeSubDomains;{% endif %} preload" always;
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
{% endif %}
|
@ -1,15 +1,5 @@
|
||||
listen 80;
|
||||
{% if ipv6 -%}
|
||||
listen [::]:80;
|
||||
{% else -%}
|
||||
#listen [::]:80;
|
||||
{% endif %}
|
||||
{% if certificate -%}
|
||||
listen 443 ssl{% if http2_support %} http2{% endif %};
|
||||
{% if ipv6 -%}
|
||||
listen [::]:443 ssl{% if http2_support %} http2{% endif %};
|
||||
{% else -%}
|
||||
#listen [::]:443;
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
server_name {{ domain_names | join: " " }};
|
||||
|
@ -3,43 +3,7 @@
|
||||
proxy_set_header X-Forwarded-Scheme $scheme;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_set_header X-Forwarded-For $remote_addr;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_pass {{ forward_scheme }}://{{ forward_host }}:{{ forward_port }}{{ forward_path }};
|
||||
|
||||
{% if access_list_id > 0 %}
|
||||
{% if access_list.items.length > 0 %}
|
||||
# Authorization
|
||||
auth_basic "Authorization required";
|
||||
auth_basic_user_file /data/access/{{ access_list_id }};
|
||||
|
||||
{{ access_list.passauth }}
|
||||
{% endif %}
|
||||
|
||||
# Access Rules
|
||||
{% for client in access_list.clients %}
|
||||
{{- client.rule -}};
|
||||
{% endfor %}deny all;
|
||||
|
||||
# Access checks must...
|
||||
{% if access_list.satisfy %}
|
||||
{{ access_list.satisfy }};
|
||||
{% endif %}
|
||||
|
||||
{% endif %}
|
||||
|
||||
{% include "_assets.conf" %}
|
||||
{% include "_exploits.conf" %}
|
||||
|
||||
{% include "_forced_ssl.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
|
||||
{% if allow_websocket_upgrade == 1 or allow_websocket_upgrade == true %}
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $http_connection;
|
||||
proxy_http_version 1.1;
|
||||
{% endif %}
|
||||
|
||||
|
||||
{{ advanced_config }}
|
||||
}
|
||||
|
||||
|
@ -5,15 +5,14 @@ server {
|
||||
{% include "_listen.conf" %}
|
||||
{% include "_certificates.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
{% include "_forced_ssl.conf" %}
|
||||
|
||||
access_log /data/logs/dead-host-{{ id }}_access.log standard;
|
||||
error_log /data/logs/dead-host-{{ id }}_error.log warn;
|
||||
access_log /data/logs/dead_host-{{ id }}.log standard;
|
||||
|
||||
{{ advanced_config }}
|
||||
|
||||
{% if use_default_location %}
|
||||
location / {
|
||||
{% include "_forced_ssl.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
return 404;
|
||||
}
|
||||
|
@ -6,18 +6,10 @@
|
||||
{%- else %}
|
||||
server {
|
||||
listen 80 default;
|
||||
{% if ipv6 -%}
|
||||
listen [::]:80 default;
|
||||
{% else -%}
|
||||
#listen [::]:80 default;
|
||||
{% endif %}
|
||||
server_name default-host.localhost;
|
||||
access_log /data/logs/default-host_access.log combined;
|
||||
error_log /data/logs/default-host_error.log warn;
|
||||
access_log /data/logs/default_host.log combined;
|
||||
{% include "_exploits.conf" %}
|
||||
|
||||
include conf.d/include/letsencrypt-acme-challenge.conf;
|
||||
|
||||
{%- if value == "404" %}
|
||||
location / {
|
||||
return 404;
|
||||
|
@ -2,14 +2,9 @@
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
{% if ipv6 -%}
|
||||
listen [::]:80;
|
||||
{% endif %}
|
||||
|
||||
server_name {{ domain_names | join: " " }};
|
||||
|
||||
access_log /data/logs/letsencrypt-requests_access.log standard;
|
||||
error_log /data/logs/letsencrypt-requests_error.log warn;
|
||||
access_log /data/logs/letsencrypt-requests.log standard;
|
||||
|
||||
include conf.d/include/letsencrypt-acme-challenge.conf;
|
||||
|
||||
|
@ -11,16 +11,8 @@ server {
|
||||
{% include "_assets.conf" %}
|
||||
{% include "_exploits.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
{% include "_forced_ssl.conf" %}
|
||||
|
||||
{% if allow_websocket_upgrade == 1 or allow_websocket_upgrade == true %}
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $http_connection;
|
||||
proxy_http_version 1.1;
|
||||
{% endif %}
|
||||
|
||||
access_log /data/logs/proxy-host-{{ id }}_access.log proxy;
|
||||
error_log /data/logs/proxy-host-{{ id }}_error.log warn;
|
||||
access_log /data/logs/proxy_host-{{ id }}.log proxy;
|
||||
|
||||
{{ advanced_config }}
|
||||
|
||||
@ -29,33 +21,18 @@ proxy_http_version 1.1;
|
||||
{% if use_default_location %}
|
||||
|
||||
location / {
|
||||
|
||||
{% if access_list_id > 0 %}
|
||||
{% if access_list.items.length > 0 %}
|
||||
# Authorization
|
||||
{%- if access_list_id > 0 -%}
|
||||
# Access List
|
||||
auth_basic "Authorization required";
|
||||
auth_basic_user_file /data/access/{{ access_list_id }};
|
||||
{%- endif %}
|
||||
|
||||
{{ access_list.passauth }}
|
||||
{% endif %}
|
||||
|
||||
# Access Rules
|
||||
{% for client in access_list.clients %}
|
||||
{{- client.rule -}};
|
||||
{% endfor %}deny all;
|
||||
|
||||
# Access checks must...
|
||||
{% if access_list.satisfy %}
|
||||
{{ access_list.satisfy }};
|
||||
{% endif %}
|
||||
|
||||
{% endif %}
|
||||
|
||||
{% include "_forced_ssl.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
|
||||
{% if allow_websocket_upgrade == 1 or allow_websocket_upgrade == true %}
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $http_connection;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_http_version 1.1;
|
||||
{% endif %}
|
||||
|
||||
|
@ -7,21 +7,20 @@ server {
|
||||
{% include "_assets.conf" %}
|
||||
{% include "_exploits.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
{% include "_forced_ssl.conf" %}
|
||||
|
||||
access_log /data/logs/redirection-host-{{ id }}_access.log standard;
|
||||
error_log /data/logs/redirection-host-{{ id }}_error.log warn;
|
||||
access_log /data/logs/redirection_host-{{ id }}.log standard;
|
||||
|
||||
{{ advanced_config }}
|
||||
|
||||
{% if use_default_location %}
|
||||
location / {
|
||||
{% include "_forced_ssl.conf" %}
|
||||
{% include "_hsts.conf" %}
|
||||
|
||||
{% if preserve_path == 1 or preserve_path == true %}
|
||||
return {{ forward_http_code }} {{ forward_scheme }}://{{ forward_domain_name }}$request_uri;
|
||||
return 301 $scheme://{{ forward_domain_name }}$request_uri;
|
||||
{% else %}
|
||||
return {{ forward_http_code }} {{ forward_scheme }}://{{ forward_domain_name }};
|
||||
return 301 $scheme://{{ forward_domain_name }};
|
||||
{% endif %}
|
||||
}
|
||||
{% endif %}
|
||||
|
@ -6,13 +6,7 @@
|
||||
{% if tcp_forwarding == 1 or tcp_forwarding == true -%}
|
||||
server {
|
||||
listen {{ incoming_port }};
|
||||
{% if ipv6 -%}
|
||||
listen [::]:{{ incoming_port }};
|
||||
{% else -%}
|
||||
#listen [::]:{{ incoming_port }};
|
||||
{% endif %}
|
||||
|
||||
proxy_pass {{ forwarding_host }}:{{ forwarding_port }};
|
||||
proxy_pass {{ forward_ip }}:{{ forwarding_port }};
|
||||
|
||||
# Custom
|
||||
include /data/nginx/custom/server_stream[.]conf;
|
||||
@ -22,12 +16,7 @@ server {
|
||||
{% if udp_forwarding == 1 or udp_forwarding == true %}
|
||||
server {
|
||||
listen {{ incoming_port }} udp;
|
||||
{% if ipv6 -%}
|
||||
listen [::]:{{ incoming_port }} udp;
|
||||
{% else -%}
|
||||
#listen [::]:{{ incoming_port }} udp;
|
||||
{% endif %}
|
||||
proxy_pass {{ forwarding_host }}:{{ forwarding_port }};
|
||||
proxy_pass {{ forward_ip }}:{{ forwarding_port }};
|
||||
|
||||
# Custom
|
||||
include /data/nginx/custom/server_stream[.]conf;
|
||||
|
1905
backend/yarn.lock
File diff suppressed because it is too large
17
doc/ADVANCED_NGINX.md
Normal file
@ -0,0 +1,17 @@
## Advanced Nginx Configuration

If you are a more advanced user, you might be itching for extra Nginx customizability.

NPM has the ability to include different custom configuration snippets in different places.

You can add your custom configuration snippet files at `/data/nginx/custom` as follows:

`/data/nginx/custom/root.conf`: Included at the very end of nginx.conf
`/data/nginx/custom/http.conf`: Included at the end of the main http block
`/data/nginx/custom/server_proxy.conf`: Included at the end of every proxy server block
`/data/nginx/custom/server_redirect.conf`: Included at the end of every redirection server block
`/data/nginx/custom/server_stream.conf`: Included at the end of every stream server block
`/data/nginx/custom/server_stream_tcp.conf`: Included at the end of every TCP stream server block
`/data/nginx/custom/server_stream_udp.conf`: Included at the end of every UDP stream server block

Every file is optional.
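As a quick illustration (not part of the original file), the sketch below assumes the `./data:/data` bind mount used in the example `docker-compose.yml` under `doc/example/`, and creates a hypothetical `http.conf` snippet; the directive shown is an ordinary nginx setting chosen for the example, not an NPM default:

```bash
# Minimal sketch: add a custom http-block snippet, assuming ./data is bind-mounted
# to /data as in the example docker-compose.yml. The directive is illustrative only.
mkdir -p ./data/nginx/custom

cat > ./data/nginx/custom/http.conf <<'EOF'
# Included at the end of the main http block
client_max_body_size 512m;
EOF

# Restart the app service so nginx reloads with the new include
docker-compose restart app
```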
150
doc/INSTALL.md
Normal file
@ -0,0 +1,150 @@
## Installation and Configuration

If you just want to get up and running in the quickest time possible, grab all the files in
the [doc/example/](https://github.com/jc21/nginx-proxy-manager/tree/master/doc/example)
folder and run:

```bash
docker-compose up -d
```


### Configuration File

**The configuration file needs to be provided by you!**

Don't worry, this is easy to do.

The app requires a configuration file to let it know what database you're using.

Here's an example configuration for `mysql` (or mariadb) that is compatible with the docker-compose example below:

```json
{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}
```

Once you've created your configuration file, it's easy to mount it in the docker container.

**Note:** After the first run of the application, the config file will be altered to include generated encryption keys unique to your installation. These keys
affect the login and session management of the application. If these keys change for any reason, all users will be logged out.


### Database

This app doesn't come with a database; you have to provide one yourself. Currently only `mysql`/`mariadb` is supported, at these minimum versions:

- MySQL v5.7.8+
- MariaDB v10.2.7+

It's easy to use another docker container for your database and link it as part of the docker stack, so that's what the following examples do.


### Running the App

Via `docker-compose`:

```yml
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:2
    restart: always
    ports:
      # Public HTTP Port:
      - 80:80
      # Public HTTPS Port:
      - 443:443
      # Admin Web Port:
      - 81:81
    volumes:
      # Make sure this config.json file exists as per instructions above:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "npm"
      MYSQL_DATABASE: "npm"
      MYSQL_USER: "npm"
      MYSQL_PASSWORD: "npm"
    volumes:
      - ./data/mysql:/var/lib/mysql
```

Then:

```bash
docker-compose up -d
```


### Running on Raspberry PI / ARM devices

The docker images support the following architectures:
- amd64
- arm64
- armv7

The docker images are published as a multi-architecture manifest, so you don't need to do anything special; just follow the common instructions above.

Check out the [dockerhub tags](https://cloud.docker.com/repository/registry-1.docker.io/jc21/nginx-proxy-manager/tags)
for a list of supported architectures, and if you want one that doesn't exist,
[create a feature request](https://github.com/jc21/nginx-proxy-manager/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=).

Also, if you haven't already, follow [this guide to install docker and docker-compose](https://manre-universe.net/how-to-run-docker-and-docker-compose-on-raspbian/)
on Raspbian.


### Initial Run

After the app is running for the first time, the following will happen:

- The database will initialize with table structures
- GPG keys will be generated and saved in the configuration file
- A default admin user will be created

This process can take a couple of minutes depending on your machine.


### Default Administrator User

```
Email: admin@example.com
Password: changeme
```

Immediately after logging in with this default user you will be asked to modify your details and change your password.


### Advanced Options

#### X-FRAME-OPTIONS Header

You can configure the [`X-FRAME-OPTIONS`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options) header
value by specifying it as a Docker environment variable. The default if not specified is `deny`.

```yml
...
environment:
  X_FRAME_OPTIONS: "sameorigin"
...
```

```
... -e "X_FRAME_OPTIONS=sameorigin" ...
```
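As a supplementary sketch (not from the original document), here is a full `docker run` command in the spirit of the elided example above; the ports, volumes and image tag simply mirror the docker-compose example and are assumptions for illustration:

```bash
# Sketch: docker run equivalent of the compose example, with X_FRAME_OPTIONS overridden.
# Assumes config.json, ./data and ./letsencrypt exist locally, and that a database
# reachable under the hostname named in config.json is available (e.g. via a shared
# docker network); none of this is defined by the original document.
docker run -d \
  --name nginx-proxy-manager \
  -e "X_FRAME_OPTIONS=sameorigin" \
  -p 80:80 -p 81:81 -p 443:443 \
  -v "$(pwd)/config.json:/app/config/production.json" \
  -v "$(pwd)/data:/data" \
  -v "$(pwd)/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:2
```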
10
doc/example/config.json
Normal file
@ -0,0 +1,10 @@
{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}
28
doc/example/docker-compose.yml
Normal file
@ -0,0 +1,28 @@
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - 80:80
      - 81:81
      - 443:443
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
    environment:
      # if you want pretty colors in your docker logs:
      - FORCE_COLOR=1
  db:
    image: mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "npm"
      MYSQL_DATABASE: "npm"
      MYSQL_USER: "npm"
      MYSQL_PASSWORD: "npm"
    volumes:
      - ./data/mysql:/var/lib/mysql
@ -1,14 +0,0 @@
|
||||
rules:
|
||||
# If the efficiency is measured below X%, mark as failed.
|
||||
# Expressed as a ratio between 0-1.
|
||||
lowestEfficiency: 0.99
|
||||
|
||||
# If the amount of wasted space is at least X or larger than X, mark as failed.
|
||||
# Expressed in B, KB, MB, and GB.
|
||||
highestWastedBytes: 15MB
|
||||
|
||||
# If the amount of wasted space makes up for X% or more of the image, mark as failed.
|
||||
# Note: the base image layer is NOT included in the total image size.
|
||||
# Expressed as a ratio between 0-1; fails if the threshold is met or crossed.
|
||||
highestUserWastedPercent: 0.02
|
||||
|
@ -3,61 +3,44 @@
|
||||
|
||||
# This file assumes that the frontend has been built using ./scripts/frontend-build
|
||||
|
||||
FROM nginxproxymanager/nginx-full:certbot-node
|
||||
FROM --platform=${TARGETPLATFORM:-linux/amd64} jc21/alpine-nginx-full:node
|
||||
|
||||
ARG TARGETPLATFORM
|
||||
ARG BUILD_VERSION
|
||||
ARG BUILD_COMMIT
|
||||
ARG BUILD_DATE
|
||||
|
||||
ENV SUPPRESS_NO_CONFIG_WARNING=1 \
|
||||
S6_FIX_ATTRS_HIDDEN=1 \
|
||||
S6_BEHAVIOUR_IF_STAGE2_FAILS=1 \
|
||||
NODE_ENV=production \
|
||||
NPM_BUILD_VERSION="${BUILD_VERSION}" \
|
||||
NPM_BUILD_COMMIT="${BUILD_COMMIT}" \
|
||||
NPM_BUILD_DATE="${BUILD_DATE}"
|
||||
ENV SUPPRESS_NO_CONFIG_WARNING=1
|
||||
ENV S6_FIX_ATTRS_HIDDEN=1
|
||||
ENV NODE_ENV=production
|
||||
|
||||
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf \
|
||||
&& apt-get update \
|
||||
&& apt-get install -y --no-install-recommends jq logrotate \
|
||||
&& apt-get clean \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
&& rm -rf /etc/nginx \
|
||||
&& apk update \
|
||||
&& apk add python2 certbot jq \
|
||||
&& rm -rf /var/cache/apk/*
|
||||
|
||||
ENV NPM_BUILD_VERSION="${BUILD_VERSION}" NPM_BUILD_COMMIT="${BUILD_COMMIT}" NPM_BUILD_DATE="${BUILD_DATE}"
|
||||
|
||||
# s6 overlay
|
||||
COPY scripts/install-s6 /tmp/install-s6
|
||||
RUN /tmp/install-s6 "${TARGETPLATFORM}" && rm -f /tmp/install-s6
|
||||
RUN curl -L -o /tmp/s6-overlay-amd64.tar.gz "https://github.com/just-containers/s6-overlay/releases/download/v1.22.1.0/s6-overlay-amd64.tar.gz" \
|
||||
&& tar -xzf /tmp/s6-overlay-amd64.tar.gz -C /
|
||||
|
||||
EXPOSE 80 81 443
|
||||
EXPOSE 80
|
||||
EXPOSE 81
|
||||
EXPOSE 443
|
||||
EXPOSE 9876
|
||||
|
||||
COPY backend /app
|
||||
COPY frontend/dist /app/frontend
|
||||
COPY global /app/global
|
||||
COPY docker/rootfs /
|
||||
ADD backend /app
|
||||
ADD frontend/dist /app/frontend
|
||||
|
||||
WORKDIR /app
|
||||
RUN yarn install
|
||||
|
||||
# add late to limit cache-busting by modifications
|
||||
COPY docker/rootfs /
|
||||
|
||||
# Remove frontend service not required for prod, dev nginx config as well
|
||||
RUN rm -rf /etc/services.d/frontend /etc/nginx/conf.d/dev.conf
|
||||
|
||||
# Change permission of logrotate config file
|
||||
RUN chmod 644 /etc/logrotate.d/nginx-proxy-manager
|
||||
|
||||
# fix for pip installs
|
||||
# https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1769
|
||||
RUN pip uninstall --yes setuptools \
|
||||
&& pip install "setuptools==58.0.0"
|
||||
RUN rm -rf /etc/services.d/frontend RUN rm -f /etc/nginx/conf.d/dev.conf
|
||||
|
||||
VOLUME [ "/data", "/etc/letsencrypt" ]
|
||||
ENTRYPOINT [ "/init" ]
|
||||
CMD [ "/init" ]
|
||||
|
||||
LABEL org.label-schema.schema-version="1.0" \
|
||||
org.label-schema.license="MIT" \
|
||||
org.label-schema.name="nginx-proxy-manager" \
|
||||
org.label-schema.description="Docker container for managing Nginx proxy hosts with a simple, powerful interface " \
|
||||
org.label-schema.url="https://github.com/jc21/nginx-proxy-manager" \
|
||||
org.label-schema.vcs-url="https://github.com/jc21/nginx-proxy-manager.git" \
|
||||
org.label-schema.cmd="docker run --rm -ti jc21/nginx-proxy-manager:latest"
|
||||
HEALTHCHECK --interval=5s --timeout=3s CMD /bin/check-health
|
@ -1,15 +1,15 @@
|
||||
FROM nginxproxymanager/nginx-full:certbot-node
|
||||
FROM jc21/alpine-nginx-full:node
|
||||
LABEL maintainer="Jamie Curnow <jc@jc21.com>"
|
||||
|
||||
ENV S6_LOGGING=0 \
|
||||
SUPPRESS_NO_CONFIG_WARNING=1 \
|
||||
S6_FIX_ATTRS_HIDDEN=1
|
||||
ENV S6_LOGGING=0
|
||||
ENV SUPPRESS_NO_CONFIG_WARNING=1
|
||||
ENV S6_FIX_ATTRS_HIDDEN=1
|
||||
|
||||
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf \
|
||||
&& apt-get update \
|
||||
&& apt-get install -y certbot jq python3-pip logrotate \
|
||||
&& apt-get clean \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
&& rm -rf /etc/nginx \
|
||||
&& apk update \
|
||||
&& apk add python2 certbot jq \
|
||||
&& rm -rf /var/cache/apk/*
|
||||
|
||||
# Task
|
||||
RUN cd /usr \
|
||||
@ -18,12 +18,15 @@ RUN cd /usr \
|
||||
|
||||
COPY rootfs /
|
||||
RUN rm -f /etc/nginx/conf.d/production.conf
|
||||
RUN chmod 644 /etc/logrotate.d/nginx-proxy-manager
|
||||
|
||||
# s6 overlay
|
||||
RUN curl -L -o /tmp/s6-overlay-amd64.tar.gz "https://github.com/just-containers/s6-overlay/releases/download/v1.22.1.0/s6-overlay-amd64.tar.gz" \
|
||||
&& tar -xzf /tmp/s6-overlay-amd64.tar.gz -C /
|
||||
|
||||
EXPOSE 80 81 443
|
||||
ENTRYPOINT [ "/init" ]
|
||||
EXPOSE 80
|
||||
EXPOSE 81
|
||||
EXPOSE 443
|
||||
|
||||
CMD [ "/init" ]
|
||||
|
||||
HEALTHCHECK --interval=5s --timeout=3s CMD /bin/check-health
|
||||
|
@ -2,45 +2,20 @@
|
||||
version: "3"
|
||||
services:
|
||||
|
||||
fullstack-mysql:
|
||||
fullstack:
|
||||
image: ${IMAGE}:ci-${BUILD_NUMBER}
|
||||
environment:
|
||||
NODE_ENV: "development"
|
||||
FORCE_COLOR: 1
|
||||
DB_MYSQL_HOST: "db"
|
||||
DB_MYSQL_PORT: 3306
|
||||
DB_MYSQL_USER: "npm"
|
||||
DB_MYSQL_PASSWORD: "npm"
|
||||
DB_MYSQL_NAME: "npm"
|
||||
- NODE_ENV=development
|
||||
- FORCE_COLOR=1
|
||||
volumes:
|
||||
- npm_data:/data
|
||||
- ../.jenkins/config.json:/app/config/production.json
|
||||
expose:
|
||||
- 81
|
||||
- 80
|
||||
- 443
|
||||
depends_on:
|
||||
- db
|
||||
healthcheck:
|
||||
test: ["CMD", "/bin/check-health"]
|
||||
interval: 10s
|
||||
timeout: 3s
|
||||
|
||||
fullstack-sqlite:
|
||||
image: ${IMAGE}:ci-${BUILD_NUMBER}
|
||||
environment:
|
||||
NODE_ENV: "development"
|
||||
FORCE_COLOR: 1
|
||||
DB_SQLITE_FILE: "/data/database.sqlite"
|
||||
volumes:
|
||||
- npm_data:/data
|
||||
expose:
|
||||
- 81
|
||||
- 80
|
||||
- 443
|
||||
healthcheck:
|
||||
test: ["CMD", "/bin/check-health"]
|
||||
interval: 10s
|
||||
timeout: 3s
|
||||
|
||||
db:
|
||||
image: jc21/mariadb-aria
|
||||
@ -52,24 +27,13 @@ services:
|
||||
volumes:
|
||||
- db_data:/var/lib/mysql
|
||||
|
||||
cypress-mysql:
|
||||
cypress:
|
||||
image: ${IMAGE}-cypress:ci-${BUILD_NUMBER}
|
||||
build:
|
||||
context: ../test/
|
||||
dockerfile: cypress/Dockerfile
|
||||
context: ../
|
||||
dockerfile: test/cypress/Dockerfile
|
||||
environment:
|
||||
CYPRESS_baseUrl: "http://fullstack-mysql:81"
|
||||
volumes:
|
||||
- cypress-logs:/results
|
||||
command: cypress run --browser chrome --config-file=${CYPRESS_CONFIG:-cypress/config/ci.json}
|
||||
|
||||
cypress-sqlite:
|
||||
image: ${IMAGE}-cypress:ci-${BUILD_NUMBER}
|
||||
build:
|
||||
context: ../test/
|
||||
dockerfile: cypress/Dockerfile
|
||||
environment:
|
||||
CYPRESS_baseUrl: "http://fullstack-sqlite:81"
|
||||
CYPRESS_baseUrl: "http://fullstack:81"
|
||||
volumes:
|
||||
- cypress-logs:/results
|
||||
command: cypress run --browser chrome --config-file=${CYPRESS_CONFIG:-cypress/config/ci.json}
|
||||
|
@ -1,9 +1,9 @@
|
||||
# WARNING: This is a DEVELOPMENT docker-compose file, it should not be used for production.
|
||||
version: "3.5"
|
||||
version: "3"
|
||||
services:
|
||||
|
||||
npm:
|
||||
image: nginxproxymanager:dev
|
||||
container_name: npm_core
|
||||
build:
|
||||
context: ./
|
||||
dockerfile: ./dev/Dockerfile
|
||||
@ -11,36 +11,20 @@ services:
|
||||
- 3080:80
|
||||
- 3081:81
|
||||
- 3443:443
|
||||
networks:
|
||||
- nginx_proxy_manager
|
||||
environment:
|
||||
NODE_ENV: "development"
|
||||
FORCE_COLOR: 1
|
||||
DEVELOPMENT: "true"
|
||||
DB_MYSQL_HOST: "db"
|
||||
DB_MYSQL_PORT: 3306
|
||||
DB_MYSQL_USER: "npm"
|
||||
DB_MYSQL_PASSWORD: "npm"
|
||||
DB_MYSQL_NAME: "npm"
|
||||
# DB_SQLITE_FILE: "/data/database.sqlite"
|
||||
# DISABLE_IPV6: "true"
|
||||
- NODE_ENV=development
|
||||
- FORCE_COLOR=1
|
||||
- DEVELOPMENT=true
|
||||
volumes:
|
||||
- npm_data:/data
|
||||
- le_data:/etc/letsencrypt
|
||||
- ../backend:/app
|
||||
- ../frontend:/app/frontend
|
||||
- ../global:/app/global
|
||||
- ..:/app
|
||||
depends_on:
|
||||
- db
|
||||
working_dir: /app
|
||||
|
||||
db:
|
||||
image: jc21/mariadb-aria
|
||||
container_name: npm_db
|
||||
ports:
|
||||
- 33306:3306
|
||||
networks:
|
||||
- nginx_proxy_manager
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: "npm"
|
||||
MYSQL_DATABASE: "npm"
|
||||
@ -49,14 +33,17 @@ services:
|
||||
volumes:
|
||||
- db_data:/var/lib/mysql
|
||||
|
||||
swagger:
|
||||
image: 'swaggerapi/swagger-ui:latest'
|
||||
ports:
|
||||
- 3001:80
|
||||
environment:
|
||||
URL: "http://127.0.0.1:3081/api/schema"
|
||||
PORT: '80'
|
||||
depends_on:
|
||||
- npm
|
||||
|
||||
volumes:
|
||||
npm_data:
|
||||
name: npm_core_data
|
||||
le_data:
|
||||
name: npm_le_data
|
||||
db_data:
|
||||
name: npm_db_data
|
||||
|
||||
networks:
|
||||
nginx_proxy_manager:
|
||||
name: npm_network
|
||||
|
@ -1,46 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# This command reads the `DISABLE_IPV6` env var and will either enable
|
||||
# or disable ipv6 in all nginx configs based on this setting.
|
||||
|
||||
# Lowercase
|
||||
DISABLE_IPV6=$(echo "${DISABLE_IPV6:-}" | tr '[:upper:]' '[:lower:]')
|
||||
|
||||
CYAN='\E[1;36m'
|
||||
BLUE='\E[1;34m'
|
||||
YELLOW='\E[1;33m'
|
||||
RED='\E[1;31m'
|
||||
RESET='\E[0m'
|
||||
|
||||
FOLDER=$1
|
||||
if [ "$FOLDER" == "" ]; then
|
||||
echo -e "${RED}❯ $0 requires a absolute folder path as the first argument!${RESET}"
|
||||
echo -e "${YELLOW} ie: $0 /data/nginx${RESET}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
FILES=$(find "$FOLDER" -type f -name "*.conf")
|
||||
if [ "$DISABLE_IPV6" == "true" ] || [ "$DISABLE_IPV6" == "on" ] || [ "$DISABLE_IPV6" == "1" ] || [ "$DISABLE_IPV6" == "yes" ]; then
|
||||
# IPV6 is disabled
|
||||
echo "Disabling IPV6 in hosts"
|
||||
echo -e "${BLUE}❯ ${CYAN}Disabling IPV6 in hosts: ${YELLOW}${FOLDER}${RESET}"
|
||||
|
||||
# Iterate over configs and run the regex
|
||||
for FILE in $FILES
|
||||
do
|
||||
echo -e " ${BLUE}❯ ${YELLOW}${FILE}${RESET}"
|
||||
sed -E -i 's/^([^#]*)listen \[::\]/\1#listen [::]/g' "$FILE"
|
||||
done
|
||||
|
||||
else
|
||||
# IPV6 is enabled
|
||||
echo -e "${BLUE}❯ ${CYAN}Enabling IPV6 in hosts: ${YELLOW}${FOLDER}${RESET}"
|
||||
|
||||
# Iterate over configs and run the regex
|
||||
for FILE in $FILES
|
||||
do
|
||||
echo -e " ${BLUE}❯ ${YELLOW}${FILE}${RESET}"
|
||||
sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' "$FILE"
|
||||
done
|
||||
|
||||
fi
|
1
docker/rootfs/etc/cont-init.d/.gitignore
vendored
@ -1,3 +1,2 @@
|
||||
*
|
||||
!.gitignore
|
||||
!*.sh
|
||||
|
@ -1,7 +0,0 @@
|
||||
#!/usr/bin/with-contenv bash
|
||||
set -e
|
||||
|
||||
mkdir -p /data/logs
|
||||
echo "Changing ownership of /data/logs to $(id -u):$(id -g)"
|
||||
chown -R "$(id -u):$(id -g)" /data/logs
|
||||
|
@ -1,29 +0,0 @@
#!/usr/bin/with-contenv bash
# ref: https://github.com/linuxserver/docker-baseimage-alpine/blob/master/root/etc/cont-init.d/01-envfile

# in s6, environmental variables are written as text files for s6 to monitor
# search through full-path filenames for files ending in "__FILE"
for FILENAME in $(find /var/run/s6/container_environment/ | grep "__FILE$"); do
    echo "[secret-init] Evaluating ${FILENAME##*/} ..."

    # set SECRETFILE to the contents of the full-path textfile
    SECRETFILE=$(cat ${FILENAME})
    # SECRETFILE=${FILENAME}
    # echo "[secret-init] Set SECRETFILE to ${SECRETFILE}"  # DEBUG - rm for prod!

    # if SECRETFILE exists / is not null
    if [[ -f ${SECRETFILE} ]]; then
        # strip the appended "__FILE" from environmental variable name ...
        STRIPFILE=$(echo ${FILENAME} | sed "s/__FILE//g")
        # echo "[secret-init] Set STRIPFILE to ${STRIPFILE}"  # DEBUG - rm for prod!

        # ... and set value to contents of secretfile
        # since s6 uses text files, this is effectively "export ..."
        printf $(cat ${SECRETFILE}) > ${STRIPFILE}
        # echo "[secret-init] Set ${STRIPFILE##*/} to $(cat ${STRIPFILE})"  # DEBUG - rm for prod!"
        echo "[secret-init] Success! ${STRIPFILE##*/} set from ${FILENAME##*/}"

    else
        echo "[secret-init] cannot find secret in ${FILENAME}"
    fi
done
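For illustration only (nothing in this diff defines it), here is a minimal sketch of how the `VAR__FILE` convention handled by the script above could be used; the `DB_MYSQL_PASSWORD__FILE` variable name and the image tag are assumptions for the example:

```bash
# Sketch: pass a secret to the container via the VAR__FILE convention handled above.
# DB_MYSQL_PASSWORD__FILE is an assumed example name; any VAR__FILE pointing at a
# readable file inside the container is treated the same way by the script.
echo 'supersecretpassword' > ./db_password.txt

docker run -d \
  -e DB_MYSQL_PASSWORD__FILE=/run/secrets/db_password \
  -v "$(pwd)/db_password.txt:/run/secrets/db_password:ro" \
  jc21/nginx-proxy-manager:latest
# On startup, the init script sets DB_MYSQL_PASSWORD to the contents of that file.
```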
@ -1,6 +1,4 @@
|
||||
text = True
|
||||
non-interactive = True
|
||||
authenticator = webroot
|
||||
webroot-path = /data/letsencrypt-acme-challenge
|
||||
key-type = ecdsa
|
||||
elliptic-curve = secp384r1
|
||||
preferred-chain = ISRG Root X1
|
||||
|
@ -1,25 +0,0 @@
|
||||
/data/logs/*_access.log /data/logs/*/access.log {
|
||||
create 0644 root root
|
||||
weekly
|
||||
rotate 4
|
||||
missingok
|
||||
notifempty
|
||||
compress
|
||||
sharedscripts
|
||||
postrotate
|
||||
/bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
|
||||
endscript
|
||||
}
|
||||
|
||||
/data/logs/*_error.log /data/logs/*/error.log {
|
||||
create 0644 root root
|
||||
weekly
|
||||
rotate 10
|
||||
missingok
|
||||
notifempty
|
||||
compress
|
||||
sharedscripts
|
||||
postrotate
|
||||
/bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
|
||||
endscript
|
||||
}
|
@ -8,11 +8,10 @@ server {
|
||||
set $port "80";
|
||||
|
||||
server_name localhost-nginx-proxy-manager;
|
||||
access_log /data/logs/fallback_access.log standard;
|
||||
error_log /data/logs/fallback_error.log warn;
|
||||
access_log /data/logs/default.log standard;
|
||||
error_log /dev/null crit;
|
||||
include conf.d/include/assets.conf;
|
||||
include conf.d/include/block-exploits.conf;
|
||||
include conf.d/include/letsencrypt-acme-challenge.conf;
|
||||
|
||||
location / {
|
||||
index index.html;
|
||||
@ -30,7 +29,7 @@ server {
|
||||
set $port "443";
|
||||
|
||||
server_name localhost;
|
||||
access_log /data/logs/fallback_access.log standard;
|
||||
access_log /data/logs/default.log standard;
|
||||
error_log /dev/null crit;
|
||||
ssl_certificate /data/nginx/dummycert.pem;
|
||||
ssl_certificate_key /data/nginx/dummykey.pem;
|
||||
|
@ -17,9 +17,6 @@ server {
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_set_header X-Forwarded-For $remote_addr;
|
||||
proxy_pass http://127.0.0.1:3000/;
|
||||
|
||||
proxy_read_timeout 15m;
|
||||
proxy_send_timeout 15m;
|
||||
}
|
||||
|
||||
location / {
|
||||
|
@ -1,4 +1,4 @@
|
||||
location ~* ^.*\.(css|js|jpe?g|gif|png|webp|woff|eot|ttf|svg|ico|css\.map|js\.map)$ {
|
||||
location ~* ^.*\.(css|js|jpe?g|gif|png|woff|eot|ttf|svg|ico|css\.map|js\.map)$ {
|
||||
if_modified_since off;
|
||||
|
||||
# use the public cache
|
||||
|
@ -1,2 +1,196 @@
|
||||
# This should be left blank as it is populated programmatically
|
||||
# by the application backend.
|
||||
|
||||
set_real_ip_from 144.220.0.0/16;
|
||||
|
||||
set_real_ip_from 52.124.128.0/17;
|
||||
|
||||
set_real_ip_from 54.230.0.0/16;
|
||||
|
||||
set_real_ip_from 54.239.128.0/18;
|
||||
|
||||
set_real_ip_from 52.82.128.0/19;
|
||||
|
||||
set_real_ip_from 99.84.0.0/16;
|
||||
|
||||
set_real_ip_from 204.246.172.0/24;
|
||||
|
||||
set_real_ip_from 205.251.192.0/19;
|
||||
|
||||
set_real_ip_from 54.239.192.0/19;
|
||||
|
||||
set_real_ip_from 70.132.0.0/18;
|
||||
|
||||
set_real_ip_from 13.32.0.0/15;
|
||||
|
||||
set_real_ip_from 13.224.0.0/14;
|
||||
|
||||
set_real_ip_from 13.35.0.0/16;
|
||||
|
||||
set_real_ip_from 204.246.164.0/22;
|
||||
|
||||
set_real_ip_from 204.246.168.0/22;
|
||||
|
||||
set_real_ip_from 71.152.0.0/17;
|
||||
|
||||
set_real_ip_from 216.137.32.0/19;
|
||||
|
||||
set_real_ip_from 205.251.249.0/24;
|
||||
|
||||
set_real_ip_from 99.86.0.0/16;
|
||||
|
||||
set_real_ip_from 52.46.0.0/18;
|
||||
|
||||
set_real_ip_from 52.84.0.0/15;
|
||||
|
||||
set_real_ip_from 204.246.173.0/24;
|
||||
|
||||
set_real_ip_from 130.176.0.0/16;
|
||||
|
||||
set_real_ip_from 64.252.64.0/18;
|
||||
|
||||
set_real_ip_from 204.246.174.0/23;
|
||||
|
||||
set_real_ip_from 64.252.128.0/18;
|
||||
|
||||
set_real_ip_from 205.251.254.0/24;
|
||||
|
||||
set_real_ip_from 143.204.0.0/16;
|
||||
|
||||
set_real_ip_from 205.251.252.0/23;
|
||||
|
||||
set_real_ip_from 204.246.176.0/20;
|
||||
|
||||
set_real_ip_from 13.249.0.0/16;
|
||||
|
||||
set_real_ip_from 54.240.128.0/18;
|
||||
|
||||
set_real_ip_from 205.251.250.0/23;
|
||||
|
||||
set_real_ip_from 52.222.128.0/17;
|
||||
|
||||
set_real_ip_from 54.182.0.0/16;
|
||||
|
||||
set_real_ip_from 54.192.0.0/16;
|
||||
|
||||
set_real_ip_from 13.124.199.0/24;
|
||||
|
||||
set_real_ip_from 34.226.14.0/24;
|
||||
|
||||
set_real_ip_from 52.15.127.128/26;
|
||||
|
||||
set_real_ip_from 35.158.136.0/24;
|
||||
|
||||
set_real_ip_from 52.57.254.0/24;
|
||||
|
||||
set_real_ip_from 18.216.170.128/25;
|
||||
|
||||
set_real_ip_from 13.52.204.0/23;
|
||||
|
||||
set_real_ip_from 13.54.63.128/26;
|
||||
|
||||
set_real_ip_from 13.59.250.0/26;
|
||||
|
||||
set_real_ip_from 13.210.67.128/26;
|
||||
|
||||
set_real_ip_from 35.167.191.128/26;
|
||||
|
||||
set_real_ip_from 52.47.139.0/24;
|
||||
|
||||
set_real_ip_from 52.199.127.192/26;
|
||||
|
||||
set_real_ip_from 52.212.248.0/26;
|
||||
|
||||
set_real_ip_from 52.66.194.128/26;
|
||||
|
||||
set_real_ip_from 13.113.203.0/24;
|
||||
|
||||
set_real_ip_from 99.79.168.0/23;
|
||||
|
||||
set_real_ip_from 34.195.252.0/24;
|
||||
|
||||
set_real_ip_from 35.162.63.192/26;
|
||||
|
||||
set_real_ip_from 34.223.12.224/27;
|
||||
|
||||
set_real_ip_from 52.56.127.0/25;
|
||||
|
||||
set_real_ip_from 34.223.80.192/26;
|
||||
|
||||
set_real_ip_from 13.228.69.0/24;
|
||||
|
||||
set_real_ip_from 34.216.51.0/25;
|
||||
|
||||
set_real_ip_from 3.231.2.0/25;
|
||||
|
||||
set_real_ip_from 54.233.255.128/26;
|
||||
|
||||
set_real_ip_from 18.200.212.0/23;
|
||||
|
||||
set_real_ip_from 52.52.191.128/26;
|
||||
|
||||
set_real_ip_from 3.234.232.224/27;
|
||||
|
||||
set_real_ip_from 52.78.247.128/26;
|
||||
|
||||
set_real_ip_from 52.220.191.0/26;
|
||||
|
||||
set_real_ip_from 34.232.163.208/29;
|
||||
|
||||
set_real_ip_from 2600:9000:eee::/48;
|
||||
|
||||
set_real_ip_from 2600:9000:4000::/36;
|
||||
|
||||
set_real_ip_from 2600:9000:3000::/36;
|
||||
|
||||
set_real_ip_from 2600:9000:f000::/36;
|
||||
|
||||
set_real_ip_from 2600:9000:fff::/48;
|
||||
|
||||
set_real_ip_from 2600:9000:2000::/36;
|
||||
|
||||
set_real_ip_from 2600:9000:1000::/36;
|
||||
|
||||
set_real_ip_from 2600:9000:ddd::/48;
|
||||
|
||||
set_real_ip_from 2600:9000:5300::/40;
|
||||
|
||||
set_real_ip_from 173.245.48.0/20;
|
||||
|
||||
set_real_ip_from 103.21.244.0/22;
|
||||
|
||||
set_real_ip_from 103.22.200.0/22;
|
||||
|
||||
set_real_ip_from 103.31.4.0/22;
|
||||
|
||||
set_real_ip_from 141.101.64.0/18;
|
||||
|
||||
set_real_ip_from 108.162.192.0/18;
|
||||
|
||||
set_real_ip_from 190.93.240.0/20;
|
||||
|
||||
set_real_ip_from 188.114.96.0/20;
|
||||
|
||||
set_real_ip_from 197.234.240.0/22;
|
||||
|
||||
set_real_ip_from 198.41.128.0/17;
|
||||
|
||||
set_real_ip_from 162.158.0.0/15;
|
||||
|
||||
set_real_ip_from 104.16.0.0/12;
|
||||
|
||||
set_real_ip_from 172.64.0.0/13;
|
||||
|
||||
set_real_ip_from 131.0.72.0/22;
|
||||
|
||||
set_real_ip_from 2400:cb00::/32;
|
||||
|
||||
set_real_ip_from 2606:4700::/32;
|
||||
|
||||
set_real_ip_from 2803:f800::/32;
|
||||
|
||||
set_real_ip_from 2405:b500::/32;
|
||||
|
||||
set_real_ip_from 2405:8100::/32;
|
||||
|
||||
set_real_ip_from 2a06:98c0::/29;
|
||||
|
||||
set_real_ip_from 2c0f:f248::/32;
|
||||
|
@ -5,7 +5,6 @@ location ^~ /.well-known/acme-challenge/ {
|
||||
# Since this is for letsencrypt authentication of a domain and they do not give IP ranges of their infrastructure
|
||||
# we need to open up access by turning off auth and IP ACL for this location.
|
||||
auth_basic off;
|
||||
auth_request off;
|
||||
allow all;
|
||||
|
||||
# Set correct content type. According to this:
|
||||
|
@ -2,7 +2,5 @@ add_header X-Served-By $host;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Forwarded-Scheme $scheme;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_pass $forward_scheme://$server:$port$request_uri;
|
||||
|
||||
proxy_set_header X-Forwarded-For $remote_addr;
|
||||
proxy_pass $forward_scheme://$server:$port;
|
||||
|
@ -3,5 +3,7 @@ ssl_session_cache shared:SSL:50m;
|
||||
|
||||
# intermediate configuration. tweak to your needs.
|
||||
ssl_protocols TLSv1.2 TLSv1.3;
|
||||
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
|
||||
ssl_prefer_server_ciphers off;
|
||||
ssl_ciphers 'EECDH+AESGCM:AES256+EECDH:AES256+EDH:EDH+AESGCM:ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-
|
||||
ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AE
|
||||
S128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES';
|
||||
ssl_prefer_server_ciphers on;
|
||||
|
@ -18,9 +18,6 @@ server {
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_set_header X-Forwarded-For $remote_addr;
|
||||
proxy_pass http://127.0.0.1:3000/;
|
||||
|
||||
proxy_read_timeout 15m;
|
||||
proxy_send_timeout 15m;
|
||||
}
|
||||
|
||||
location / {
|
||||
|
@ -9,7 +9,7 @@ worker_processes auto;
|
||||
# Enables the use of JIT for regular expressions to speed-up their processing.
|
||||
pcre_jit on;
|
||||
|
||||
error_log /data/logs/fallback_error.log warn;
|
||||
error_log /data/logs/error.log warn;
|
||||
|
||||
# Includes files with directives to load dynamic modules.
|
||||
include /etc/nginx/modules/*.conf;
|
||||
@ -26,15 +26,12 @@ http {
|
||||
tcp_nopush on;
|
||||
tcp_nodelay on;
|
||||
client_body_temp_path /tmp/nginx/body 1 2;
|
||||
keepalive_timeout 90s;
|
||||
proxy_connect_timeout 90s;
|
||||
proxy_send_timeout 90s;
|
||||
proxy_read_timeout 90s;
|
||||
keepalive_timeout 65;
|
||||
ssl_prefer_server_ciphers on;
|
||||
gzip on;
|
||||
proxy_ignore_client_abort off;
|
||||
client_max_body_size 2000m;
|
||||
server_names_hash_bucket_size 1024;
|
||||
server_names_hash_bucket_size 64;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header X-Forwarded-Scheme $scheme;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
@ -46,7 +43,8 @@ http {
|
||||
log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
|
||||
log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';
|
||||
|
||||
access_log /data/logs/fallback_access.log proxy;
|
||||
|
||||
access_log /data/logs/default.log proxy;
|
||||
|
||||
# Dynamically generated resolvers file
|
||||
include /etc/nginx/conf.d/include/resolvers.conf;
|
||||
@ -57,20 +55,14 @@ http {
|
||||
}
|
||||
|
||||
# Real IP Determination
|
||||
|
||||
# Local subnets:
|
||||
set_real_ip_from 10.0.0.0/8;
|
||||
set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
|
||||
set_real_ip_from 192.168.0.0/16;
|
||||
# Docker subnet:
|
||||
set_real_ip_from 172.0.0.0/8;
|
||||
# NPM generated CDN ip ranges:
|
||||
include conf.d/include/ip_ranges.conf;
|
||||
# always put the following 2 lines after ip subnets:
|
||||
real_ip_header X-Real-IP;
|
||||
real_ip_header X-Forwarded-For;
|
||||
real_ip_recursive on;
|
||||
|
||||
# Custom
|
||||
include /data/nginx/custom/http_top[.]conf;
|
||||
|
||||
# Files generated by NPM
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
include /data/nginx/default_host/*.conf;
|
||||
@ -86,9 +78,6 @@ http {
|
||||
stream {
|
||||
# Files generated by NPM
|
||||
include /data/nginx/stream/*.conf;
|
||||
|
||||
# Custom
|
||||
include /data/nginx/custom/stream[.]conf;
|
||||
}
|
||||
|
||||
# Custom
|
||||
|
@ -4,7 +4,6 @@
|
||||
|
||||
if [ "$DEVELOPMENT" == "true" ]; then
|
||||
cd /app/frontend || exit 1
|
||||
# If yarn install fails: add --verbose --network-concurrency 1
|
||||
yarn install
|
||||
yarn watch
|
||||
else
|
||||
|
@ -5,8 +5,7 @@ mkdir -p /data/letsencrypt-acme-challenge
|
||||
cd /app || echo
|
||||
|
||||
if [ "$DEVELOPMENT" == "true" ]; then
|
||||
cd /app || exit 1
|
||||
# If yarn install fails: add --verbose --network-concurrency 1
|
||||
cd /app/backend || exit 1
|
||||
yarn install
|
||||
node --max_old_space_size=250 --abort_on_uncaught_exception node_modules/nodemon/bin/nodemon.js
|
||||
else
|
||||
|
@ -24,12 +24,8 @@ chown root /tmp/nginx
|
||||
|
||||
# Dynamically generate resolvers file, if resolver is IPv6, enclose in `[]`
|
||||
# thanks @tfmm
|
||||
if [ "$DISABLE_IPV6" == "true" ] || [ "$DISABLE_IPV6" == "on" ] || [ "$DISABLE_IPV6" == "1" ] || [ "$DISABLE_IPV6" == "yes" ];
|
||||
then
|
||||
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf) ipv6=off valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf
|
||||
else
|
||||
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf) valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf
|
||||
fi
|
||||
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf);" > /etc/nginx/conf.d/include/resolvers.conf
|
||||
|
||||
# Generate dummy self-signed certificate.
|
||||
if [ ! -f /data/nginx/dummycert.pem ] || [ ! -f /data/nginx/dummykey.pem ]
|
||||
then
|
||||
@ -40,14 +36,10 @@ then
|
||||
-days 3650 \
|
||||
-nodes \
|
||||
-x509 \
|
||||
-subj '/O=localhost/OU=localhost/CN=localhost' \
|
||||
-subj '/O=Nginx Proxy Manager/OU=Dummy Certificate/CN=localhost' \
|
||||
-keyout /data/nginx/dummykey.pem \
|
||||
-out /data/nginx/dummycert.pem
|
||||
echo "Complete"
|
||||
fi
|
||||
|
||||
# Handle IPV6 settings
|
||||
/bin/handle-ipv6-setting /etc/nginx/conf.d
|
||||
/bin/handle-ipv6-setting /data/nginx
|
||||
|
||||
exec nginx
|
||||
|
@ -16,7 +16,5 @@ alias h='cd ~;clear;'
|
||||
|
||||
echo -e -n '\E[1;34m'
|
||||
figlet -w 120 "NginxProxyManager"
|
||||
echo -e "\E[1;36mVersion \E[1;32m${NPM_BUILD_VERSION:-2.0.0-dev} (${NPM_BUILD_COMMIT:-dev}) ${NPM_BUILD_DATE:-0000-00-00}\E[1;36m, OpenResty \E[1;32m${OPENRESTY_VERSION:-unknown}\E[1;36m, ${ID:-debian} \E[1;32m${VERSION:-unknown}\E[1;36m, Certbot \E[1;32m$(certbot --version)\E[0m"
|
||||
echo -e -n '\E[1;34m'
|
||||
cat /built-for-arch
|
||||
echo -e '\E[0m'
|
||||
echo -e "\E[1;36mVersion \E[1;32m${NPM_BUILD_VERSION:-2.0.0-dev}\E[1;36m (${NPM_BUILD_COMMIT:-dev}) ${NPM_BUILD_DATE:-0000-00-00}, Nginx \E[1;32m${NGINX_VERSION:-unknown}\E[1;36m, Alpine \E[1;32m${VERSION_ID:-unknown}\E[1;36m, Kernel \E[1;32m$(uname -r)\E[0m"
|
||||
echo
|
||||
|
3
docs/.gitignore
vendored
@ -1,3 +0,0 @@
|
||||
.vuepress/dist
|
||||
node_modules
|
||||
ts
|
@ -1,82 +0,0 @@
|
||||
module.exports = {
|
||||
locales: {
|
||||
"/": {
|
||||
lang: "en-US",
|
||||
title: "Nginx Proxy Manager",
|
||||
description: "Expose your services easily and securely"
|
||||
}
|
||||
},
|
||||
head: [
|
||||
["link", { rel: "icon", href: "/icon.png" }],
|
||||
["meta", { name: "description", content: "Docker container and built in Web Application for managing Nginx proxy hosts with a simple, powerful interface, providing free SSL support via Let's Encrypt" }],
|
||||
["meta", { property: "og:title", content: "Nginx Proxy Manager" }],
|
||||
["meta", { property: "og:description", content: "Docker container and built in Web Application for managing Nginx proxy hosts with a simple, powerful interface, providing free SSL support via Let's Encrypt"}],
|
||||
["meta", { property: "og:type", content: "website" }],
|
||||
["meta", { property: "og:url", content: "https://nginxproxymanager.com/" }],
|
||||
["meta", { property: "og:image", content: "https://nginxproxymanager.com/icon.png" }],
|
||||
["meta", { name: "twitter:card", content: "summary"}],
|
||||
["meta", { name: "twitter:title", content: "Nginx Proxy Manager"}],
|
||||
["meta", { name: "twitter:description", content: "Docker container and built in Web Application for managing Nginx proxy hosts with a simple, powerful interface, providing free SSL support via Let's Encrypt"}],
|
||||
["meta", { name: "twitter:image", content: "https://nginxproxymanager.com/icon.png"}],
|
||||
["meta", { name: "twitter:alt", content: "Nginx Proxy Manager"}],
|
||||
],
|
||||
themeConfig: {
|
||||
logo: "/icon.png",
|
||||
// the GitHub repo path
|
||||
repo: "jc21/nginx-proxy-manager",
|
||||
// the label linking to the repo
|
||||
repoLabel: "GitHub",
|
||||
// if your docs are not at the root of the repo:
|
||||
docsDir: "docs",
|
||||
// defaults to false, set to true to enable
|
||||
editLinks: true,
|
||||
locales: {
|
||||
"/": {
|
||||
// text for the language dropdown
|
||||
selectText: "Languages",
|
||||
// label for this locale in the language dropdown
|
||||
label: "English",
|
||||
// Custom text for edit link. Defaults to "Edit this page"
|
||||
editLinkText: "Edit this page on GitHub",
|
||||
// Custom navbar values
|
||||
nav: [{ text: "Setup", link: "/setup/" }],
|
||||
// Custom sidebar values
|
||||
sidebar: [
|
||||
"/",
|
||||
["/guide/", "Guide"],
|
||||
["/screenshots/", "Screenshots"],
|
||||
["/setup/", "Setup Instructions"],
|
||||
["/advanced-config/", "Advanced Configuration"],
|
||||
["/upgrading/", "Upgrading"],
|
||||
["/faq/", "Frequently Asked Questions"],
|
||||
["/third-party/", "Third Party"]
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
plugins: [
|
||||
[
|
||||
"@vuepress/google-analytics",
|
||||
{
|
||||
ga: "UA-99675467-4"
|
||||
}
|
||||
],
|
||||
[
|
||||
"sitemap",
|
||||
{
|
||||
hostname: "https://nginxproxymanager.com"
|
||||
}
|
||||
],
|
||||
[
|
||||
'vuepress-plugin-zooming',
|
||||
{
|
||||
selector: '.zooming',
|
||||
delay: 1000,
|
||||
options: {
|
||||
bgColor: 'black',
|
||||
zIndex: 10000,
|
||||
},
|
||||
},
|
||||
],
|
||||
]
|
||||
};
|
@ -1,2 +0,0 @@
User-agent: *
Disallow:
Some files were not shown because too many files have changed in this diff.