Compare commits
78 Commits
Commits in this comparison (by SHA1):

0bc12f3bdf, 31aa9c9644, ddbfdf6f6e, 43c7063538, 3f089fb239, 2d0f7d5126, 06272d3d2c, 3885c0ad6d,
099ec00155, 92fcae9c54, 22e8961c80, 4d5adefa41, feaa0e51bd, af83cb57d0, 8b4f3507c3, bda3dba369,
beb313af40, 4fad9d672f, 0fca64929e, 9e476e5b24, 0819a265f5, ad8eac4f07, b49de0e23e, efbd024da9,
e7ddcb91fc, 3095cff7d9, 6d8f5aa3a7, 27a06850ff, dce6423c85, d79fcbf447, 631d9ae4eb, 0ac349ba67,
1b0563a4a6, 1db2a29d49, 14e62a0830, 2280a61c2b, f3e6f64c0c, d04b7a0d88, 71dfd5d8f8, 133d66c2fe,
6f1d38a0e2, aad9ecde6b, ae9324295c, 0acec1105b, 5a9a716ca6, 418899d425, e7379e3683, 29bebcc73e,
26064b20b8, 3dc9b20543, 444dbd5160, c2f99e253c, 5c7fb7b698, 733d7d9583, 6d2f532806, f76c9226c8,
ecbc41b622, 4f60d3e7df, 7d86fd223e, e3ed216a70, 2a3d792591, 4d754275ab, 44e5f0957c, 83ef426b93,
8b8f5fac69, 424ccce43c, ad41cc985d, 981d5a199f, 48f2bb4cd8, aa270925e9, 3836f7c40a, 9fcd32c2ca,
2657bcf30c, 86ad7d6238, c97e6ada5b, cd40ca7f0a, e2ac3b4880, 7f8b185e48
.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 36 lines)

@@ -0,0 +1,36 @@

---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

**Checklist**
- Have you pulled and found the error with `jc21/nginx-proxy-manager:latest` docker image?
- Are you sure you're not using someone else's docker image?
- If having problems with Lets Encrypt, have you made absolutely sure your site is accessible from outside of your network?

**Describe the bug**
- A clear and concise description of what the bug is.
- What version of Nginx Proxy Manager is reported on the login page?

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Operating System**
- Please specify if using a Rpi, Mac, orchestration tool or any other setups that might affect the reproduction of this error.

**Additional context**
Add any other context about the problem here, docker version, browser version if applicable to the problem. Too much info is better than too little.
.github/ISSUE_TEMPLATE/feature_request.md (vendored, new file, 20 lines)

@@ -0,0 +1,20 @@

---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
Dockerfile.arm64 (new file, 38 lines)

@@ -0,0 +1,38 @@

FROM jc21/nginx-proxy-manager-base:arm64

MAINTAINER Jamie Curnow <jc@jc21.com>
LABEL maintainer="Jamie Curnow <jc@jc21.com>"

ENV SUPPRESS_NO_CONFIG_WARNING=1
ENV S6_FIX_ATTRS_HIDDEN=1
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf

# Nginx, Node and required packages should already be installed from the base image

# root filesystem
COPY rootfs /

# s6 overlay
RUN curl -L -o /tmp/s6-overlay-aarch64.tar.gz "https://github.com/just-containers/s6-overlay/releases/download/v1.21.8.0/s6-overlay-aarch64.tar.gz" \
    && tar xzf /tmp/s6-overlay-aarch64.tar.gz -C /

# App
ENV NODE_ENV=production

ADD dist /app/dist
ADD node_modules /app/node_modules
ADD src/backend /app/src/backend
ADD package.json /app/package.json
ADD knexfile.js /app/knexfile.js

# Volumes
VOLUME [ "/data", "/etc/letsencrypt" ]
CMD [ "/init" ]

# Ports
EXPOSE 80
EXPOSE 81
EXPOSE 443
EXPOSE 9876

HEALTHCHECK --interval=15s --timeout=3s CMD curl -f http://localhost:9876/health || exit 1
Dockerfile.armv6l (new file, 38 lines)

@@ -0,0 +1,38 @@

FROM jc21/nginx-proxy-manager-base:armv6

MAINTAINER Jamie Curnow <jc@jc21.com>
LABEL maintainer="Jamie Curnow <jc@jc21.com>"

ENV SUPPRESS_NO_CONFIG_WARNING=1
ENV S6_FIX_ATTRS_HIDDEN=1
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf

# Nginx, Node and required packages should already be installed from the base image

# root filesystem
COPY rootfs /

# s6 overlay
RUN curl -L -o /tmp/s6-overlay-arm.tar.gz "https://github.com/just-containers/s6-overlay/releases/download/v1.21.8.0/s6-overlay-arm.tar.gz" \
    && tar xzf /tmp/s6-overlay-arm.tar.gz -C /

# App
ENV NODE_ENV=production

ADD dist /app/dist
ADD node_modules /app/node_modules
ADD src/backend /app/src/backend
ADD package.json /app/package.json
ADD knexfile.js /app/knexfile.js

# Volumes
VOLUME [ "/data", "/etc/letsencrypt" ]
CMD [ "/init" ]

# Ports
EXPOSE 80
EXPOSE 81
EXPOSE 443
EXPOSE 9876

HEALTHCHECK --interval=15s --timeout=3s CMD curl -f http://localhost:9876/health || exit 1
Jenkinsfile (vendored, 351 lines changed)

@@ -5,17 +5,48 @@ pipeline {
|
||||
}
|
||||
agent any
|
||||
environment {
|
||||
IMAGE_NAME = "nginx-proxy-manager"
|
||||
BASE_IMAGE_NAME = "jc21/nginx-proxy-manager-base:v2"
|
||||
TEMP_IMAGE_NAME = "nginx-proxy-manager-build_${BUILD_NUMBER}"
|
||||
TEMP_IMAGE_NAME_ARM = "nginx-proxy-manager-arm-build_${BUILD_NUMBER}"
|
||||
TAG_VERSION = getPackageVersion()
|
||||
MAJOR_VERSION = "2"
|
||||
IMAGE = "nginx-proxy-manager"
|
||||
BASE_IMAGE = "jc21/${IMAGE}-base"
|
||||
TEMP_IMAGE = "${IMAGE}-build_${BUILD_NUMBER}"
|
||||
TAG_VERSION = getPackageVersion()
|
||||
MAJOR_VERSION = "2"
|
||||
BRANCH_LOWER = "${BRANCH_NAME.toLowerCase()}"
|
||||
// Architectures:
|
||||
AMD64_TAG = "amd64"
|
||||
ARMV6_TAG = "armv6l"
|
||||
ARMV7_TAG = "armv7l"
|
||||
ARM64_TAG = "arm64"
|
||||
}
|
||||
stages {
|
||||
stage('Prepare') {
|
||||
stage('Build PR') {
|
||||
when {
|
||||
changeRequest()
|
||||
}
|
||||
steps {
|
||||
sh 'docker pull $DOCKER_CI_TOOLS'
|
||||
ansiColor('xterm') {
|
||||
// Codebase
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
|
||||
sh 'rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
|
||||
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} node-prune'
|
||||
|
||||
// Docker Build
|
||||
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${AMD64_TAG} .'
|
||||
|
||||
// Dockerhub
|
||||
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG}'
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG}'
|
||||
}
|
||||
|
||||
sh 'docker rmi ${TEMP_IMAGE}-${AMD64_TAG}'
|
||||
|
||||
script {
|
||||
def comment = pullRequest.comment("Docker Image for build ${BUILD_NUMBER} is available on [DockerHub](https://cloud.docker.com/repository/docker/jc21/${IMAGE}) as `jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG}`")
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Build Develop') {
|
||||
@ -25,114 +56,289 @@ pipeline {
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// Codebase
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME npm run-script build'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
|
||||
sh 'rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install --prod'
|
||||
sh 'docker run --rm -v $(pwd):/data $DOCKER_CI_TOOLS node-prune'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
|
||||
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} node-prune'
|
||||
|
||||
// Docker Build
|
||||
sh 'docker build --pull --no-cache --squash --compress -t $TEMP_IMAGE_NAME .'
|
||||
|
||||
// Private Registry
|
||||
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:develop'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:develop'
|
||||
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${AMD64_TAG} .'
|
||||
|
||||
// Dockerhub
|
||||
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:develop'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:develop-${AMD64_TAG}'
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
sh "docker login -u '${duser}' -p '$dpass'"
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:develop'
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:develop-${AMD64_TAG}'
|
||||
}
|
||||
|
||||
sh 'docker rmi $TEMP_IMAGE_NAME'
|
||||
sh 'docker rmi ${TEMP_IMAGE}-${AMD64_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Build Master') {
|
||||
when {
|
||||
branch 'master'
|
||||
}
|
||||
parallel {
|
||||
stage('x86_64') {
|
||||
when {
|
||||
branch 'master'
|
||||
// ========================
|
||||
// amd64
|
||||
// ========================
|
||||
stage('amd64') {
|
||||
agent {
|
||||
label 'amd64'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// Codebase
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME npm run-script build'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
|
||||
sh 'rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install --prod'
|
||||
sh 'docker run --rm -v $(pwd):/data $DOCKER_CI_TOOLS node-prune'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
|
||||
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} node-prune'
|
||||
|
||||
// Docker Build
|
||||
sh 'docker build --pull --no-cache --squash --compress -t $TEMP_IMAGE_NAME .'
|
||||
|
||||
// Private Registry
|
||||
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest'
|
||||
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${AMD64_TAG} .'
|
||||
|
||||
// Dockerhub
|
||||
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:$TAG_VERSION'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:latest'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:latest-${AMD64_TAG}'
|
||||
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
sh "docker login -u '${duser}' -p '$dpass'"
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:$TAG_VERSION'
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION'
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:latest'
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:latest-${AMD64_TAG}'
|
||||
}
|
||||
|
||||
sh 'docker rmi $TEMP_IMAGE_NAME'
|
||||
sh 'docker rmi ${TEMP_IMAGE}-${AMD64_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('armhf') {
|
||||
when {
|
||||
branch 'master'
|
||||
}
|
||||
// ========================
|
||||
// arm64
|
||||
// ========================
|
||||
stage('arm64') {
|
||||
agent {
|
||||
label 'armhf'
|
||||
label 'arm64'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// Codebase
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME-armhf yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME-armhf npm run-script build'
|
||||
sh 'rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME-armhf yarn install --prod'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
|
||||
sh 'sudo rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
|
||||
|
||||
// Docker Build
|
||||
sh 'docker build --pull --no-cache --squash --compress -t $TEMP_IMAGE_NAME_ARM -f Dockerfile.armhf .'
|
||||
|
||||
// Private Registry
|
||||
sh 'docker tag $TEMP_IMAGE_NAME_ARM $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION-armhf'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION-armhf'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME_ARM $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION-armhf'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION-armhf'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME_ARM $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest-armhf'
|
||||
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest-armhf'
|
||||
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${ARM64_TAG} -f Dockerfile.${ARM64_TAG} .'
|
||||
|
||||
// Dockerhub
|
||||
sh 'docker tag $TEMP_IMAGE_NAME_ARM docker.io/jc21/$IMAGE_NAME:$TAG_VERSION-armhf'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME_ARM docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION-armhf'
|
||||
sh 'docker tag $TEMP_IMAGE_NAME_ARM docker.io/jc21/$IMAGE_NAME:latest-armhf'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARM64_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARM64_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARM64_TAG} docker.io/jc21/${IMAGE}:latest-${ARM64_TAG}'
|
||||
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
sh "docker login -u '${duser}' -p '$dpass'"
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:$TAG_VERSION-armhf'
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION-armhf'
|
||||
sh 'docker push docker.io/jc21/$IMAGE_NAME:latest-armhf'
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:latest-${ARM64_TAG}'
|
||||
}
|
||||
|
||||
sh 'docker rmi $TEMP_IMAGE_NAME_ARM'
|
||||
sh 'docker rmi ${TEMP_IMAGE}-${ARM64_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
// ========================
|
||||
// armv7l
|
||||
// ========================
|
||||
stage('armv7l') {
|
||||
agent {
|
||||
label 'armv7l'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// Codebase
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
|
||||
sh 'rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
|
||||
|
||||
// Docker Build
|
||||
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${ARMV7_TAG} -f Dockerfile.${ARMV7_TAG} .'
|
||||
|
||||
// Dockerhub
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARMV7_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARMV7_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARMV7_TAG} docker.io/jc21/${IMAGE}:latest-${ARMV7_TAG}'
|
||||
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:latest-${ARMV7_TAG}'
|
||||
}
|
||||
|
||||
sh 'docker rmi ${TEMP_IMAGE}-${ARMV7_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
// ========================
|
||||
// armv6l - Disabled for the time being
|
||||
// ========================
|
||||
/*
|
||||
stage('armv6l') {
|
||||
agent {
|
||||
label 'armv6l'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// Codebase
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
|
||||
sh 'rm -rf node_modules'
|
||||
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
|
||||
|
||||
// Docker Build
|
||||
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${ARMV6_TAG} -f Dockerfile.${ARMV6_TAG} .'
|
||||
|
||||
// Dockerhub
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARMV6_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARMV6_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG}'
|
||||
sh 'docker tag ${TEMP_IMAGE}-${ARMV6_TAG} docker.io/jc21/${IMAGE}:latest-${ARMV6_TAG}'
|
||||
|
||||
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
|
||||
sh "docker login -u '${duser}' -p '${dpass}'"
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG}'
|
||||
sh 'docker push docker.io/jc21/${IMAGE}:latest-${ARMV6_TAG}'
|
||||
}
|
||||
|
||||
sh 'docker rmi ${TEMP_IMAGE}-${ARMV6_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
*/
|
||||
}
|
||||
}
|
||||
// ========================
|
||||
// latest manifest
|
||||
// ========================
|
||||
stage('Latest Manifest') {
|
||||
when {
|
||||
branch 'master'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
// =======================
|
||||
// latest
|
||||
// =======================
|
||||
sh 'docker pull jc21/${IMAGE}:latest-${AMD64_TAG}'
|
||||
sh 'docker pull jc21/${IMAGE}:latest-${ARM64_TAG}'
|
||||
sh 'docker pull jc21/${IMAGE}:latest-${ARMV7_TAG}'
|
||||
//sh 'docker pull jc21/${IMAGE}:latest-${ARMV6_TAG}'
|
||||
|
||||
sh 'docker manifest push --purge jc21/${IMAGE}:latest || echo ""'
|
||||
sh 'docker manifest create jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${AMD64_TAG} jc21/${IMAGE}:latest-${ARM64_TAG} jc21/${IMAGE}:latest-${ARMV7_TAG}'
|
||||
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${AMD64_TAG} --arch ${AMD64_TAG}'
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
|
||||
//sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
|
||||
sh 'docker manifest push --purge jc21/${IMAGE}:latest'
|
||||
|
||||
// =======================
|
||||
// major version
|
||||
// =======================
|
||||
sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG}'
|
||||
sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG}'
|
||||
sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
|
||||
//sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG}'
|
||||
|
||||
sh 'docker manifest push --purge jc21/${IMAGE}:${MAJOR_VERSION} || echo ""'
|
||||
sh 'docker manifest create jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
|
||||
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG} --arch ${AMD64_TAG}'
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
|
||||
//sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
|
||||
|
||||
// =======================
|
||||
// version
|
||||
// =======================
|
||||
sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG}'
|
||||
sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG}'
|
||||
sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
|
||||
//sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
|
||||
|
||||
sh 'docker manifest push --purge jc21/${IMAGE}:${TAG_VERSION} || echo ""'
|
||||
sh 'docker manifest create jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
|
||||
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG} --arch ${AMD64_TAG}'
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
|
||||
//sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
// ========================
|
||||
// develop
|
||||
// ========================
|
||||
stage('Develop Manifest') {
|
||||
when {
|
||||
branch 'develop'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
sh 'docker pull jc21/${IMAGE}:develop-${AMD64_TAG}'
|
||||
//sh 'docker pull jc21/${IMAGE}:develop-${ARM64_TAG}'
|
||||
//sh 'docker pull jc21/${IMAGE}:develop-${ARMV7_TAG}'
|
||||
//sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
|
||||
|
||||
sh 'docker manifest push --purge jc21/${IMAGE}:develop || :'
|
||||
sh 'docker manifest create jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${AMD64_TAG}'
|
||||
|
||||
sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${AMD64_TAG} --arch ${AMD64_TAG}'
|
||||
//sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
|
||||
//sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
|
||||
//sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
|
||||
}
|
||||
}
|
||||
}
|
||||
// ========================
|
||||
// cleanup
|
||||
// ========================
|
||||
stage('Latest Cleanup') {
|
||||
when {
|
||||
branch 'master'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
sh 'docker rmi jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${AMD64_TAG} jc21/${IMAGE}:latest-${ARM64_TAG} jc21/${IMAGE}:latest-${ARMV7_TAG} || echo ""'
|
||||
sh 'docker rmi jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG} || echo ""'
|
||||
sh 'docker rmi jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG} || echo ""'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Develop Cleanup') {
|
||||
when {
|
||||
branch 'develop'
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
sh 'docker rmi jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${AMD64_TAG} || echo ""'
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('PR Cleanup') {
|
||||
when {
|
||||
changeRequest()
|
||||
}
|
||||
steps {
|
||||
ansiColor('xterm') {
|
||||
sh 'docker rmi jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG} || echo ""'
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -149,7 +355,6 @@ pipeline {
|
||||
}
|
||||
|
||||
def getPackageVersion() {
|
||||
ver = sh(script: 'docker run --rm -v $(pwd):/data $DOCKER_CI_TOOLS bash -c "cat /data/package.json|jq -r \'.version\'"', returnStdout: true)
|
||||
ver = sh(script: 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} bash -c "cat /data/package.json|jq -r \'.version\'"', returnStdout: true)
|
||||
return ver.trim()
|
||||
}
|
||||
|
||||
|
README.md (34 lines changed)

@@ -2,20 +2,22 @@

# Nginx Proxy Manager




This project comes as a pre-built docker image that enables you to easily forward to your websites
running at home or otherwise, including free SSL, without having to know too much about Nginx or Letsencrypt.

----------

**WARNING: Version 2 is a complete rewrite!** If you are using the `latest` docker tag and update to version 2
without preparation, horrible things might happen. Refer to the [Importing Documentation](doc/IMPORTING.md).

----------

## Project Goal

I created this project to fill a personal need to provide users with an easy way to accomplish reverse
proxying hosts with SSL termination, and it had to be so easy that a monkey could do it. This goal hasn't changed.
While there might be advanced options, they are optional and the project should be as simple as possible
so that the barrier for entry here is low.

## Features

- Beautiful and Secure Admin Interface based on [Tabler](https://tabler.github.io/)

@@ -55,24 +57,6 @@ Please consult the [installation instructions](doc/INSTALL.md) for a complete gu
if you just want to get up and running in the quickest time possible, grab all the files in the `doc/example/` folder and run `docker-compose up -d`

## Importing from Version 1?

Here's a [guide for you to migrate your configuration](doc/IMPORTING.md). You should definitely read the [installation instructions](doc/INSTALL.md) first though.

**Why should I?**

Version 2 has the following improvements:

- Management security and multiple user access
- User permissions and visibility
- Custom SSL certificate support
- Audit log of changes
- Broken nginx config detection
- Multiple domains in Let's Encrypt certificates
- Wildcard domain name support (not available with a Let's Encrypt certificate though)
- It's super sexy

## Administration

When your docker container is running, connect to it on port `81` for the admin interface.
doc/ADVANCED_NGINX.md (new file, 17 lines)

@@ -0,0 +1,17 @@

## Advanced Nginx Configuration

If you are a more advanced user, you might be itching for extra Nginx customizability.

NPM has the ability to include different custom configuration snippets in different places.

You can add your custom configuration snippet files at `/data/nginx/custom` as follows:

`/data/nginx/custom/root.conf`: Included at the very end of nginx.conf
`/data/nginx/custom/http.conf`: Included at the end of the main http block
`/data/nginx/custom/server_proxy.conf`: Included at the end of every proxy server block
`/data/nginx/custom/server_redirect.conf`: Included at the end of every redirection server block
`/data/nginx/custom/server_stream.conf`: Included at the end of every stream server block
`/data/nginx/custom/server_stream_tcp.conf`: Included at the end of every TCP stream server block
`/data/nginx/custom/server_stream_udp.conf`: Included at the end of every UDP stream server block

Every file is optional.
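As an illustration only (the directive values below are placeholders, not something shipped with NPM), a custom `http.conf` snippet dropped into that folder could tune proxy buffering for every host:

```nginx
# /data/nginx/custom/http.conf - hypothetical example; any valid http-level directives can go here
proxy_buffer_size 16k;
proxy_buffers     8 16k;
```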
@@ -1,8 +1,8 @@



# Nginx Proxy Manager
# [Nginx Proxy Manager](https://nginxproxymanager.jc21.com)





@@ -14,16 +14,19 @@ running at home or otherwise, including free SSL, without having to know too muc

## Tags

* latest 2, 2.x.x ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile))
* latest-armhf, 2-armhf, 2.x.x-armhf ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile.armhf))
* 1, 1.x.x ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/1.1.2/Dockerfile))
* 1-armhf, 1.x.x-armhf ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/1.1.2/Dockerfile.armhf))
* latest 2, 2.x.x ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile))
* latest-arm64, 2-arm64, 2.x.x-arm64 ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile.arm64))
* latest-arm7l, 2-arm7l, 2.x.x-arm7l ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile.arm7l))

## Getting started

Please consult the [installation instructions](https://github.com/jc21/nginx-proxy-manager/blob/master/doc/INSTALL.md) for a complete guide or
if you just want to get up and running in the quickest time possible, grab all the files in the [doc/example/](https://github.com/jc21/nginx-proxy-manager/tree/master/doc/example) folder and run `docker-compose up -d`
if you just want to get up and running in the quickest time possible, grab all the files in the [doc/example/](https://github.com/jc21/nginx-proxy-manager/tree/master/doc/example) folder and run:

```bash
docker-compose up -d
```

## Screenshots
doc/INSTALL.md (132 lines changed)

@@ -1,9 +1,13 @@

## Installation and Configuration

There's a few ways to configure this app depending on:
If you just want to get up and running in the quickest time possible, grab all the files in
the [doc/example/](https://github.com/jc21/nginx-proxy-manager/tree/master/doc/example)
folder and run:

```bash
docker-compose up -d
```

- Whether you use `docker-compose` or vanilla docker
- Which architecture you're running it on (raspberry pi also supported - Testers wanted!)

### Configuration File

@@ -13,22 +17,22 @@ Don't worry, this is easy to do.

The app requires a configuration file to let it know what database you're using.

Here's an example configuration for `mysql` (or mariadb):
Here's an example configuration for `mysql` (or mariadb) that is compatible with the docker-compose example below:

```json
{
  "database": {
    "engine": "mysql",
    "host": "127.0.0.1",
    "name": "nginxproxymanager",
    "user": "nginxproxymanager",
    "password": "password123",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}
```

Once you've created your configuration file it's easy to mount it in the docker container, examples below.
Once you've created your configuration file it's easy to mount it in the docker container.

**Note:** After the first run of the application, the config file will be altered to include generated encryption keys unique to your installation. These keys
affect the login and session management of the application. If these keys change for any reason, all users will be logged out.

@@ -36,37 +40,13 @@ affect the login and session management of the application. If these keys change

### Database

This app doesn't come with a database, you have to provide one yourself. Currently only `mysql/mariadb` is supported.
This app doesn't come with a database, you have to provide one yourself. Currently only `mysql/mariadb` is supported for the minimum versions:

It's easy to use another docker container for your database also and link it as part of the docker stack. Here's an example:
- MySQL v5.7.8+
- MariaDB v10.2.7+

```yml
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:2
    restart: always
    ports:
      - 80:80
      - 81:81
      - 443:443
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "password123"
      MYSQL_DATABASE: "nginxproxymanager"
      MYSQL_USER: "nginxproxymanager"
      MYSQL_PASSWORD: "password123"
    volumes:
      - ./data/mysql:/var/lib/mysql
```
It's easy to use another docker container for your database also and link it as part of the docker stack, so that's what the following examples
are going to use.

### Running the App

@@ -77,49 +57,54 @@ Via `docker-compose`:

version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:2
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      # Public HTTP Port:
      - 80:80
      - 81:81
      # Public HTTPS Port:
      - 443:443
      # Admin Web Port:
      - 81:81
    volumes:
      # Make sure this config.json file exists as per instructions above:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "npm"
      MYSQL_DATABASE: "npm"
      MYSQL_USER: "npm"
      MYSQL_PASSWORD: "npm"
    volumes:
      - ./data/mysql:/var/lib/mysql
```

Vanilla Docker:
Then:

```bash
docker run -d \
  --name nginx-proxy-manager \
  -p 80:80 \
  -p 81:81 \
  -p 443:443 \
  -v /path/to/config.json:/app/config/production.json \
  -v /path/to/data:/data \
  -v /path/to/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:2
docker-compose up -d
```

### Running on Raspberry PI / `armhf`
### Running on Raspberry PI / ARM devices

I have created a `armhf` docker container just for you. There may be issues with it,
if you have issues please report them here.
There are docker images for all versions of the Raspberry Pi with the exception of the really old `armv6l` versions.

```bash
docker run -d \
  --name nginx-proxy-manager-app \
  -p 80:80 \
  -p 81:81 \
  -p 443:443 \
  -v /path/to/config.json:/app/config/production.json \
  -v /path/to/data:/data \
  -v /path/to/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:2-armhf
```
The `latest` docker image is a manifest of all the different architecture docker builds supported, so this means
you don't have to worry about doing anything special and you can follow the common instructions above.

Check out the [dockerhub tags](https://cloud.docker.com/repository/registry-1.docker.io/jc21/nginx-proxy-manager/tags)
for a list of supported architectures and if you want one that doesn't exist,
[create a feature request](https://github.com/jc21/nginx-proxy-manager/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=).

Also, if you don't know how to already, follow [this guide to install docker and docker-compose](https://manre-universe.net/how-to-run-docker-and-docker-compose-on-raspbian/)
on Raspbian.
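If you want to double-check which architectures a multi-arch tag actually carries, the Docker CLI's `manifest` subcommand can list them (shown here against the image name used above; availability and output format depend on your Docker version):

```bash
docker manifest inspect jc21/nginx-proxy-manager:latest
```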


### Initial Run

@@ -141,3 +126,22 @@ Password: changeme

```

Immediately after logging in with this default user you will be asked to modify your details and change your password.

### Advanced Options

#### X-FRAME-OPTIONS Header

You can configure the [`X-FRAME-OPTIONS`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options) header
value by specifying it as a Docker environment variable. The default if not specified is `deny`.

```yml
...
environment:
  X_FRAME_OPTIONS: "sameorigin"
...
```

```
... -e "X_FRAME_OPTIONS=sameorigin" ...
```
@@ -2,9 +2,9 @@
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "nginxproxymanager",
    "user": "nginxproxymanager",
    "password": "password123",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}
@@ -1,7 +1,7 @@
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:2
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - 80:80
@@ -17,12 +17,12 @@ services:
      # if you want pretty colors in your docker logs:
      - FORCE_COLOR=1
  db:
    image: mariadb
    image: mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "password123"
      MYSQL_DATABASE: "nginxproxymanager"
      MYSQL_USER: "nginxproxymanager"
      MYSQL_PASSWORD: "password123"
      MYSQL_ROOT_PASSWORD: "npm"
      MYSQL_DATABASE: "npm"
      MYSQL_USER: "npm"
      MYSQL_PASSWORD: "npm"
    volumes:
      - ./data/mysql:/var/lib/mysql
@@ -4,13 +4,14 @@ services:
  app:
    image: jc21/nginx-proxy-manager-base:latest
    ports:
      - 8080:80
      - 8081:81
      - 8443:443
      - 80:80
      - 81:81
      - 43:443
    environment:
      - NODE_ENV=development
      - FORCE_COLOR=1
    volumes:
      - ./data:/data
      - ./data/letsencrypt:/etc/letsencrypt
      - .:/app
      - ./rootfs/etc/nginx:/etc/nginx
@@ -21,7 +22,7 @@ services:
      - db
    command: node --max_old_space_size=250 --abort_on_uncaught_exception node_modules/nodemon/bin/nodemon.js
  db:
    image: mariadb:10.3.7
    image: jc21/mariadb-aria
    environment:
      MYSQL_ROOT_PASSWORD: "npm"
      MYSQL_DATABASE: "npm"
@@ -1,6 +1,6 @@
{
  "name": "nginx-proxy-manager",
  "version": "2.0.6",
  "version": "2.0.14",
  "description": "A beautiful interface for creating Nginx endpoints",
  "main": "src/backend/index.js",
  "devDependencies": {
@@ -28,7 +28,7 @@
    "numeral": "^2.0.6",
    "sass-loader": "^7.0.3",
    "style-loader": "^0.22.1",
    "tabler-ui": "git+https://github.com/tabler/tabler.git",
    "tabler-ui": "git+https://github.com/tabler/tabler.git#00f78ad823311bc3ad974ac3e5b0126198f0a813",
    "underscore": "^1.8.3",
    "webpack": "^4.25.1",
    "webpack-cli": "^3.1.2",
@@ -8,8 +8,9 @@ server {

  include conf.d/include/block-exploits.conf;

  set $server 127.0.0.1;
  set $port 81;
  set $forward_scheme http;
  set $server 127.0.0.1;
  set $port 81;

  location /health {
    access_log off;
@@ -21,10 +22,10 @@ server {
  }
}

# Default 80 Host, which shows a "You are not configured" page
# "You are not configured" page, which is the default if another default doesn't exist
server {
  listen 80 default;
  server_name localhost;
  listen 80;
  server_name localhost-nginx-proxy-manager;

  access_log /data/logs/default.log proxy;

@@ -37,9 +38,9 @@ server {
  }
}

# Default 443 Host
# First 443 Host, which is the default if another default doesn't exist
server {
  listen 443 ssl default;
  listen 443 ssl;
  server_name localhost;

  access_log /data/logs/default.log proxy;
rootfs/etc/nginx/conf.d/include/ip_ranges.conf (new file, 2 lines)

@@ -0,0 +1,2 @@
# Intentionally left blank
@@ -2,7 +2,10 @@
# We use ^~ here, so that we don't check other regexes (for speed-up). We actually MUST cancel
# other regex checks, because in our other config files have regex rule that denies access to files with dotted names.
location ^~ /.well-known/acme-challenge/ {
  # Since this is for letsencrypt authentication of a domain and they do not give IP ranges of their infrastructure
  # we need to open up access by turning off auth and IP ACL for this location.
  auth_basic off;
  allow all;

  # Set correct content type. According to this:
  # https://community.letsencrypt.org/t/using-the-webroot-domain-verification-method/1445/29
@@ -3,4 +3,4 @@ proxy_set_header Host $host;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://$server:$port;
proxy_pass $forward_scheme://$server:$port;
@@ -2,11 +2,8 @@ ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;

# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:AES256+EECDH:AES256+EDH:EDH+AESGCM:ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-
ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AE
S128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
S128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES';
ssl_prefer_server_ciphers on;

# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
@@ -19,25 +19,26 @@ events {
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
server_tokens off;
tcp_nopush on;
tcp_nodelay on;
client_body_temp_path /tmp/nginx/body 1 2;
keepalive_timeout 65;
ssl_prefer_server_ciphers on;
gzip on;
proxy_ignore_client_abort off;
client_max_body_size 2000m;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Accept-Encoding "";
proxy_cache off;
proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
server_tokens off;
tcp_nopush on;
tcp_nodelay on;
client_body_temp_path /tmp/nginx/body 1 2;
keepalive_timeout 65;
ssl_prefer_server_ciphers on;
gzip on;
proxy_ignore_client_abort off;
client_max_body_size 2000m;
server_names_hash_bucket_size 64;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Accept-Encoding "";
proxy_cache off;
proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

# MISS
# BYPASS
@@ -54,12 +55,30 @@ http {
# Dynamically generated resolvers file
include /etc/nginx/conf.d/include/resolvers.conf;

# Default upstream scheme
map $host $forward_scheme {
  default http;
}

# Real IP Determination
# Docker subnet:
set_real_ip_from 172.0.0.0/8;
# NPM generated CDN ip ranges:
include conf.d/include/ip_ranges.conf;
# always put the following 2 lines after ip subnets:
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# Files generated by NPM
include /etc/nginx/conf.d/*.conf;
include /data/nginx/default_host/*.conf;
include /data/nginx/proxy_host/*.conf;
include /data/nginx/redirection_host/*.conf;
include /data/nginx/dead_host/*.conf;
include /data/nginx/temp/*.conf;

# Custom
include /data/nginx/custom/http[.]conf;
}

stream {
@@ -67,3 +86,5 @@ stream {
include /data/nginx/stream/*.conf;
}

# Custom
include /data/nginx/custom/root[.]conf;
@@ -7,6 +7,8 @@ mkdir -p /tmp/nginx/body \
  /data/custom_ssl \
  /data/logs \
  /data/access \
  /data/nginx/default_host \
  /data/nginx/default_www \
  /data/nginx/proxy_host \
  /data/nginx/redirection_host \
  /data/nginx/stream \
@@ -1,5 +1,3 @@
'use strict';

const path = require('path');
const express = require('express');
const bodyParser = require('body-parser');
@@ -40,11 +38,17 @@ app.use(require('./lib/express/cors'));

// General security/cache related headers + server header
app.use(function (req, res, next) {
    let x_frame_options = 'DENY';

    if (typeof process.env.X_FRAME_OPTIONS !== 'undefined' && process.env.X_FRAME_OPTIONS) {
        x_frame_options = process.env.X_FRAME_OPTIONS;
    }

    res.set({
        'Strict-Transport-Security': 'includeSubDomains; max-age=631138519; preload',
        'X-XSS-Protection': '0',
        'X-XSS-Protection': '1; mode=block',
        'X-Content-Type-Options': 'nosniff',
        'X-Frame-Options': 'DENY',
        'X-Frame-Options': x_frame_options,
        'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
        Pragma: 'no-cache',
        Expires: 0
@@ -1,5 +1,3 @@
'use strict';

const config = require('config');

if (!config.has('database')) {
@@ -1,10 +1,8 @@
'use strict';

const fs = require('fs');
const logger = require('./logger').import;
const utils = require('./lib/utils');
const batchflow = require('batchflow');
const debug_mode = process.env.NODE_ENV !== 'production';
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;

const internalProxyHost = require('./internal/proxy-host');
const internalRedirectionHost = require('./internal/redirection-host');
@@ -1,7 +1,5 @@
#!/usr/bin/env node

'use strict';

const logger = require('./logger').global;

function appStart () {
@@ -11,6 +9,7 @@ function appStart () {
    const app = require('./app');
    const apiValidator = require('./lib/validator/api');
    const internalCertificate = require('./internal/certificate');
    const internalIpRanges = require('./internal/ip_ranges');

    return migrate.latest()
        .then(setup)
@@ -18,9 +17,11 @@ function appStart () {
        .then(() => {
            return apiValidator.loadSchemas;
        })
        .then(internalIpRanges.fetch)
        .then(() => {

            internalCertificate.initTimer();
            internalIpRanges.initTimer();

            const server = app.listen(81, () => {
                logger.info('PID ' + process.pid + ' listening on port 81 ...');
@@ -1,5 +1,3 @@
'use strict';

const _ = require('lodash');
const fs = require('fs');
const batchflow = require('batchflow');
@@ -1,5 +1,3 @@
'use strict';

const error = require('../lib/error');
const auditLogModel = require('../models/audit-log');

@@ -46,9 +44,9 @@ const internalAuditLog = {
     * @param {Access}  access
     * @param {Object}  data
     * @param {String}  data.action
     * @param {Integer} [data.user_id]
     * @param {Integer} [data.object_id]
     * @param {Integer} [data.object_type]
     * @param {Number}  [data.user_id]
     * @param {Number}  [data.object_id]
     * @param {Number}  [data.object_type]
     * @param {Object}  [data.meta]
     * @returns {Promise}
     */
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const fs = require('fs');
|
||||
const _ = require('lodash');
|
||||
const logger = require('../logger').ssl;
|
||||
@ -9,19 +7,20 @@ const internalAuditLog = require('./audit-log');
|
||||
const tempWrite = require('temp-write');
|
||||
const utils = require('../lib/utils');
|
||||
const moment = require('moment');
|
||||
const debug_mode = process.env.NODE_ENV !== 'production';
|
||||
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
|
||||
const le_staging = process.env.NODE_ENV !== 'production';
|
||||
const internalNginx = require('./nginx');
|
||||
const internalHost = require('./host');
|
||||
const certbot_command = '/usr/bin/certbot';
|
||||
|
||||
function omissions () {
|
||||
function omissions() {
|
||||
return ['is_deleted'];
|
||||
}
|
||||
|
||||
const internalCertificate = {
|
||||
|
||||
allowed_ssl_files: ['certificate', 'certificate_key', 'intermediate_certificate'],
|
||||
interval_timeout: 1000 * 60 * 60 * 12, // 12 hours
|
||||
interval_timeout: 1000 * 60 * 60, // 1 hour
|
||||
interval: null,
|
||||
interval_processing: false,
|
||||
|
||||
@ -38,7 +37,7 @@ const internalCertificate = {
|
||||
internalCertificate.interval_processing = true;
|
||||
logger.info('Renewing SSL certs close to expiry...');
|
||||
|
||||
return utils.exec(certbot_command + ' renew -q ' + (debug_mode ? '--staging' : ''))
|
||||
return utils.exec(certbot_command + ' renew -q ' + (le_staging ? '--staging' : ''))
|
||||
.then(result => {
|
||||
logger.info(result);
|
||||
|
||||
@ -205,7 +204,7 @@ const internalCertificate = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.email]
|
||||
* @param {String} [data.name]
|
||||
* @return {Promise}
|
||||
@ -251,7 +250,7 @@ const internalCertificate = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {Array} [data.expand]
|
||||
* @param {Array} [data.omit]
|
||||
* @return {Promise}
|
||||
@ -297,7 +296,7 @@ const internalCertificate = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -381,7 +380,7 @@ const internalCertificate = {
|
||||
/**
|
||||
* Report use
|
||||
*
|
||||
* @param {Integer} user_id
|
||||
* @param {Number} user_id
|
||||
* @param {String} visibility
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -522,7 +521,7 @@ const internalCertificate = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {Object} data.files
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -719,9 +718,9 @@ const internalCertificate = {
|
||||
|
||||
let cmd = certbot_command + ' certonly --cert-name "npm-' + certificate.id + '" --agree-tos ' +
|
||||
'--email "' + certificate.meta.letsencrypt_email + '" ' +
|
||||
'--preferred-challenges "http" ' +
|
||||
'--preferred-challenges "dns,http" ' +
|
||||
'-n -a webroot -d "' + certificate.domain_names.join(',') + '" ' +
|
||||
(debug_mode ? '--staging' : '');
|
||||
(le_staging ? '--staging' : '');
|
||||
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
@ -734,6 +733,48 @@ const internalCertificate = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @returns {Promise}
|
||||
*/
|
||||
renew: (access, data) => {
|
||||
return access.can('certificates:update', data)
|
||||
.then(() => {
|
||||
return internalCertificate.get(access, data);
|
||||
})
|
||||
.then((certificate) => {
|
||||
if (certificate.provider === 'letsencrypt') {
|
||||
return internalCertificate.renewLetsEncryptSsl(certificate)
|
||||
.then(() => {
|
||||
return internalCertificate.getCertificateInfoFromFile('/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem')
|
||||
})
|
||||
.then(cert_info => {
|
||||
return certificateModel
|
||||
.query()
|
||||
.patchAndFetchById(certificate.id, {
|
||||
expires_on: certificateModel.raw('FROM_UNIXTIME(' + cert_info.dates.to + ')')
|
||||
});
|
||||
})
|
||||
.then((updated_certificate) => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'renewed',
|
||||
object_type: 'certificate',
|
||||
object_id: updated_certificate.id,
|
||||
meta: updated_certificate
|
||||
})
|
||||
.then(() => {
|
||||
return updated_certificate;
|
||||
});
|
||||
})
|
||||
} else {
|
||||
throw new error.ValidationError('Only Let\'sEncrypt certificates can be renewed');
|
||||
}
|
||||
})
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Object} certificate the certificate row
|
||||
* @returns {Promise}
|
||||
@ -741,7 +782,7 @@ const internalCertificate = {
|
||||
renewLetsEncryptSsl: certificate => {
|
||||
logger.info('Renewing Let\'sEncrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
|
||||
|
||||
let cmd = certbot_command + ' renew -n --force-renewal --disable-hook-validation --cert-name "npm-' + certificate.id + '" ' + (debug_mode ? '--staging' : '');
|
||||
let cmd = certbot_command + ' renew -n --force-renewal --disable-hook-validation --cert-name "npm-' + certificate.id + '" ' + (le_staging ? '--staging' : '');
|
||||
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
@ -762,17 +803,29 @@ const internalCertificate = {
|
||||
revokeLetsEncryptSsl: (certificate, throw_errors) => {
|
||||
logger.info('Revoking Let\'s Encrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
|
||||
|
||||
let cmd = certbot_command + ' revoke --cert-path "/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem" ' + (debug_mode ? '--staging' : '');
|
||||
let revoke_cmd = certbot_command + ' revoke --cert-path "/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem" ' + (le_staging ? '--staging' : '');
|
||||
let delete_cmd = certbot_command + ' delete --cert-name "npm-' + certificate.id + '" ' + (le_staging ? '--staging' : '');
|
||||
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', cmd);
|
||||
logger.info('Command:', revoke_cmd);
|
||||
}
|
||||
|
||||
return utils.exec(cmd)
|
||||
.then(result => {
|
||||
return utils.exec(revoke_cmd)
|
||||
.then((result) => {
|
||||
logger.info(result);
|
||||
return result;
|
||||
})
|
||||
.then(() => {
|
||||
if (debug_mode) {
|
||||
logger.info('Command:', delete_cmd);
|
||||
}
|
||||
|
||||
return utils.exec(delete_cmd)
|
||||
.then((result) => {
|
||||
logger.info(result);
|
||||
return result;
|
||||
})
|
||||
})
|
||||
.catch(err => {
|
||||
if (debug_mode) {
|
||||
logger.error(err.message);
|
||||
@ -796,7 +849,7 @@ const internalCertificate = {
|
||||
|
||||
/**
|
||||
* @param {Object} in_use_result
|
||||
* @param {Integer} in_use_result.total_count
|
||||
* @param {Number} in_use_result.total_count
|
||||
* @param {Array} in_use_result.proxy_hosts
|
||||
* @param {Array} in_use_result.redirection_hosts
|
||||
* @param {Array} in_use_result.dead_hosts
|
||||
@ -826,7 +879,7 @@ const internalCertificate = {
|
||||
|
||||
/**
|
||||
* @param {Object} in_use_result
|
||||
* @param {Integer} in_use_result.total_count
|
||||
* @param {Number} in_use_result.total_count
|
||||
* @param {Array} in_use_result.proxy_hosts
|
||||
* @param {Array} in_use_result.redirection_hosts
|
||||
* @param {Array} in_use_result.dead_hosts
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../lib/error');
|
||||
const deadHostModel = require('../models/dead_host');
|
||||
@ -47,6 +45,7 @@ const internalDeadHost = {
|
||||
.then(() => {
|
||||
// At this point the domains should have been checked
|
||||
data.owner_user_id = access.token.getUserId(1);
|
||||
data = internalHost.cleanSslHstsData(data);
|
||||
|
||||
return deadHostModel
|
||||
.query()
|
||||
@ -89,11 +88,11 @@ const internalDeadHost = {
|
||||
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'created',
|
||||
object_type: 'dead-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
action: 'created',
|
||||
object_type: 'dead-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
.then(() => {
|
||||
return row;
|
||||
});
|
||||
@ -103,7 +102,7 @@ const internalDeadHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @return {Promise}
|
||||
*/
|
||||
update: (access, data) => {
|
||||
@ -144,9 +143,9 @@ const internalDeadHost = {
|
||||
|
||||
if (create_certificate) {
|
||||
return internalCertificate.createQuickCertificate(access, {
|
||||
domain_names: data.domain_names || row.domain_names,
|
||||
meta: _.assign({}, row.meta, data.meta)
|
||||
})
|
||||
domain_names: data.domain_names || row.domain_names,
|
||||
meta: _.assign({}, row.meta, data.meta)
|
||||
})
|
||||
.then(cert => {
|
||||
// update host with cert id
|
||||
data.certificate_id = cert.id;
|
||||
@ -162,7 +161,9 @@ const internalDeadHost = {
|
||||
// Add domain_names to the data in case it isn't there, so that the audit log renders correctly. The order is important here.
|
||||
data = _.assign({}, {
|
||||
domain_names: row.domain_names
|
||||
},data);
|
||||
}, data);
|
||||
|
||||
data = internalHost.cleanSslHstsData(data, row);
|
||||
|
||||
return deadHostModel
|
||||
.query()
|
||||
@ -171,11 +172,11 @@ const internalDeadHost = {
|
||||
.then(saved_row => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'updated',
|
||||
object_type: 'dead-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
action: 'updated',
|
||||
object_type: 'dead-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
.then(() => {
|
||||
return _.omit(saved_row, omissions());
|
||||
});
|
||||
@ -183,15 +184,15 @@ const internalDeadHost = {
|
||||
})
|
||||
.then(() => {
|
||||
return internalDeadHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate']
|
||||
})
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate']
|
||||
})
|
||||
.then(row => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(deadHostModel, 'dead_host', row)
|
||||
.then(new_meta => {
|
||||
row.meta = new_meta;
|
||||
row = internalHost.cleanRowCertificateMeta(row);
|
||||
row = internalHost.cleanRowCertificateMeta(row);
|
||||
return _.omit(row, omissions());
|
||||
});
|
||||
});
|
||||
@ -201,7 +202,7 @@ const internalDeadHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {Array} [data.expand]
|
||||
* @param {Array} [data.omit]
|
||||
* @return {Promise}
|
||||
@ -248,7 +249,7 @@ const internalDeadHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -290,6 +291,104 @@ const internalDeadHost = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
enable: (access, data) => {
|
||||
return access.can('dead_hosts:update', data.id)
|
||||
.then(() => {
|
||||
return internalDeadHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['certificate', 'owner']
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (row.enabled) {
|
||||
throw new error.ValidationError('Host is already enabled');
|
||||
}
|
||||
|
||||
row.enabled = 1;
|
||||
|
||||
return deadHostModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 1
|
||||
})
|
||||
.then(() => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(deadHostModel, 'dead_host', row);
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'enabled',
|
||||
object_type: 'dead-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
disable: (access, data) => {
|
||||
return access.can('dead_hosts:update', data.id)
|
||||
.then(() => {
|
||||
return internalDeadHost.get(access, {id: data.id});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (!row.enabled) {
|
||||
throw new error.ValidationError('Host is already disabled');
|
||||
}
|
||||
|
||||
row.enabled = 0;
|
||||
|
||||
return deadHostModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 0
|
||||
})
|
||||
.then(() => {
|
||||
// Delete Nginx Config
|
||||
return internalNginx.deleteConfig('dead_host', row)
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'disabled',
|
||||
object_type: 'dead-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* All Hosts
|
||||
*
|
||||
@ -338,7 +437,7 @@ const internalDeadHost = {
|
||||
/**
|
||||
* Report use
|
||||
*
|
||||
* @param {Integer} user_id
|
||||
* @param {Number} user_id
|
||||
* @param {String} visibility
|
||||
* @returns {Promise}
|
||||
*/
|
||||
|
@ -1,11 +1,40 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const proxyHostModel = require('../models/proxy_host');
|
||||
const redirectionHostModel = require('../models/redirection_host');
|
||||
const deadHostModel = require('../models/dead_host');
|
||||
|
||||
const internalHost = {
|
||||
|
||||
/**
|
||||
* Makes sure that the ssl_* and hsts_* fields play nicely together.
|
||||
* ie: if there is no cert, then force_ssl is off.
|
||||
* if force_ssl is off, then hsts_enabled is definitely off.
|
||||
*
|
||||
* @param {object} data
|
||||
* @param {object} [existing_data]
|
||||
* @returns {object}
|
||||
*/
|
||||
cleanSslHstsData: function (data, existing_data) {
|
||||
existing_data = existing_data === undefined ? {} : existing_data;
|
||||
|
||||
let combined_data = _.assign({}, existing_data, data);
|
||||
|
||||
if (!combined_data.certificate_id) {
|
||||
combined_data.ssl_forced = false;
|
||||
combined_data.http2_support = false;
|
||||
}
|
||||
|
||||
if (!combined_data.ssl_forced) {
|
||||
combined_data.hsts_enabled = false;
|
||||
}
|
||||
|
||||
if (!combined_data.hsts_enabled) {
|
||||
combined_data.hsts_subdomains = false;
|
||||
}
|
||||
|
||||
return combined_data;
|
||||
},
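A worked example, not part of this diff, of the cascade this function enforces; the host fields below are hypothetical. Dropping the certificate also switches off every flag that depends on it.

    const existing_row = {certificate_id: 3, ssl_forced: true, hsts_enabled: true, hsts_subdomains: true, http2_support: true};
    const update_data  = {certificate_id: 0};
    internalHost.cleanSslHstsData(update_data, existing_row);
    // => {certificate_id: 0, ssl_forced: false, hsts_enabled: false, hsts_subdomains: false, http2_support: false}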
|
||||
|
||||
/**
|
||||
* used by the getAll functions of hosts, this removes the certificate meta if present
|
||||
*
|
||||
|
src/backend/internal/ip_ranges.js (new file, 147 lines)
@ -0,0 +1,147 @@
|
||||
const https = require('https');
|
||||
const fs = require('fs');
|
||||
const logger = require('../logger').ip_ranges;
|
||||
const error = require('../lib/error');
|
||||
const internalNginx = require('./nginx');
|
||||
const Liquid = require('liquidjs');
|
||||
|
||||
const CLOUDFRONT_URL = 'https://ip-ranges.amazonaws.com/ip-ranges.json';
|
||||
const CLOUDFARE_V4_URL = 'https://www.cloudflare.com/ips-v4';
|
||||
const CLOUDFARE_V6_URL = 'https://www.cloudflare.com/ips-v6';
|
||||
|
||||
const internalIpRanges = {
|
||||
|
||||
interval_timeout: 1000 * 60 * 60 * 6, // 6 hours
|
||||
interval: null,
|
||||
interval_processing: false,
|
||||
iteration_count: 0,
|
||||
|
||||
initTimer: () => {
|
||||
logger.info('IP Ranges Renewal Timer initialized');
|
||||
internalIpRanges.interval = setInterval(internalIpRanges.fetch, internalIpRanges.interval_timeout);
|
||||
},
|
||||
|
||||
fetchUrl: url => {
|
||||
return new Promise((resolve, reject) => {
|
||||
logger.info('Fetching ' + url);
|
||||
return https.get(url, res => {
|
||||
res.setEncoding('utf8');
|
||||
let raw_data = '';
|
||||
res.on('data', chunk => {
|
||||
raw_data += chunk;
|
||||
});
|
||||
|
||||
res.on('end', () => {
|
||||
resolve(raw_data);
|
||||
});
|
||||
}).on('error', err => {
|
||||
reject(err);
|
||||
});
|
||||
});
|
||||
},
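A brief usage sketch, not part of this diff: fetchUrl wraps https.get in a promise that resolves with the full response body as a string, so a caller could chain it like this (the URL is one of the constants defined above).

    internalIpRanges.fetchUrl('https://www.cloudflare.com/ips-v4')
        .then(body => {
            console.log('Fetched ' + body.split('\n').length + ' lines');
        })
        .catch(err => {
            console.error('Fetch failed:', err.message);
        });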
|
||||
|
||||
/**
|
||||
* Triggered at startup and then later by a timer, this will fetch the ip ranges from services and apply them to nginx.
|
||||
*/
|
||||
fetch: () => {
|
||||
if (!internalIpRanges.interval_processing) {
|
||||
internalIpRanges.interval_processing = true;
|
||||
logger.info('Fetching IP Ranges from online services...');
|
||||
|
||||
let ip_ranges = [];
|
||||
|
||||
return internalIpRanges.fetchUrl(CLOUDFRONT_URL)
|
||||
.then(cloudfront_data => {
|
||||
let data = JSON.parse(cloudfront_data);
|
||||
|
||||
if (data && typeof data.prefixes !== 'undefined') {
|
||||
data.prefixes.map(item => {
|
||||
if (item.service === 'CLOUDFRONT') {
|
||||
ip_ranges.push(item.ip_prefix);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
if (data && typeof data.ipv6_prefixes !== 'undefined') {
|
||||
data.ipv6_prefixes.map(item => {
|
||||
if (item.service === 'CLOUDFRONT') {
|
||||
ip_ranges.push(item.ipv6_prefix);
|
||||
}
|
||||
});
|
||||
}
|
||||
})
|
||||
.then(() => {
|
||||
return internalIpRanges.fetchUrl(CLOUDFARE_V4_URL);
|
||||
})
|
||||
.then(cloudfare_data => {
|
||||
let items = cloudfare_data.split('\n');
|
||||
ip_ranges = [...ip_ranges, ...items];
|
||||
})
|
||||
.then(() => {
|
||||
return internalIpRanges.fetchUrl(CLOUDFARE_V6_URL);
|
||||
})
|
||||
.then(cloudfare_data => {
|
||||
let items = cloudfare_data.split('\n');
|
||||
ip_ranges = [...ip_ranges, ...items];
|
||||
})
|
||||
.then(() => {
|
||||
let clean_ip_ranges = [];
|
||||
ip_ranges.map(range => {
|
||||
if (range) {
|
||||
clean_ip_ranges.push(range);
|
||||
}
|
||||
});
|
||||
|
||||
return internalIpRanges.generateConfig(clean_ip_ranges)
|
||||
.then(() => {
|
||||
if (internalIpRanges.iteration_count) {
|
||||
// Reload nginx
|
||||
return internalNginx.reload();
|
||||
}
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
internalIpRanges.interval_processing = false;
|
||||
internalIpRanges.iteration_count++;
|
||||
})
|
||||
.catch(err => {
|
||||
logger.error(err.message);
|
||||
internalIpRanges.interval_processing = false;
|
||||
});
|
||||
}
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Array} ip_ranges
|
||||
* @returns {Promise}
|
||||
*/
|
||||
generateConfig: (ip_ranges) => {
|
||||
let renderEngine = Liquid({
|
||||
root: __dirname + '/../templates/'
|
||||
});
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
let template = null;
|
||||
let filename = '/etc/nginx/conf.d/include/ip_ranges.conf';
|
||||
try {
|
||||
template = fs.readFileSync(__dirname + '/../templates/ip_ranges.conf', {encoding: 'utf8'});
|
||||
} catch (err) {
|
||||
reject(new error.ConfigurationError(err.message));
|
||||
return;
|
||||
}
|
||||
|
||||
renderEngine
|
||||
.parseAndRender(template, {ip_ranges: ip_ranges})
|
||||
.then(config_text => {
|
||||
fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
|
||||
resolve(true);
|
||||
})
|
||||
.catch(err => {
|
||||
logger.warn('Could not write ' + filename + ':', err.message);
|
||||
reject(new error.ConfigurationError(err.message));
|
||||
});
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
module.exports = internalIpRanges;
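A wiring sketch, not taken from this changeset: at startup the application presumably runs one fetch immediately so /etc/nginx/conf.d/include/ip_ranges.conf exists before traffic is served, then starts the six-hourly refresh timer. The require path is illustrative.

    const internalIpRanges = require('./internal/ip_ranges');

    internalIpRanges.fetch()
        .then(() => {
            // Keep the CloudFront/Cloudflare ranges fresh on the interval configured above
            internalIpRanges.initTimer();
        });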
|
@ -1,12 +1,10 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const fs = require('fs');
|
||||
const Liquid = require('liquidjs');
|
||||
const logger = require('../logger').nginx;
|
||||
const utils = require('../lib/utils');
|
||||
const error = require('../lib/error');
|
||||
const debug_mode = process.env.NODE_ENV !== 'production';
|
||||
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
|
||||
|
||||
const internalNginx = {
|
||||
|
||||
@ -19,9 +17,9 @@ const internalNginx = {
|
||||
* - IF BAD: update the meta with offline status and remove the config entirely
|
||||
* - then reload nginx
|
||||
*
|
||||
* @param {Object} model
|
||||
* @param {String} host_type
|
||||
* @param {Object} host
|
||||
* @param {Object|String} model
|
||||
* @param {String} host_type
|
||||
* @param {Object} host
|
||||
* @returns {Promise}
|
||||
*/
|
||||
configure: (model, host_type, host) => {
|
||||
@ -92,7 +90,7 @@ const internalNginx = {
|
||||
})
|
||||
.then(() => {
|
||||
return combined_meta;
|
||||
})
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
@ -124,9 +122,52 @@ const internalNginx = {
|
||||
*/
|
||||
getConfigName: (host_type, host_id) => {
|
||||
host_type = host_type.replace(new RegExp('-', 'g'), '_');
|
||||
|
||||
if (host_type === 'default') {
|
||||
return '/data/nginx/default_host/site.conf';
|
||||
}
|
||||
|
||||
return '/data/nginx/' + host_type + '/' + host_id + '.conf';
|
||||
},
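For reference, the mapping above resolves to paths like these (the ids are illustrative):

    internalNginx.getConfigName('proxy_host', 12); // '/data/nginx/proxy_host/12.conf'
    internalNginx.getConfigName('dead-host', 7);   // '/data/nginx/dead_host/7.conf' (dashes become underscores)
    internalNginx.getConfigName('default', 0);     // '/data/nginx/default_host/site.conf'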
|
||||
|
||||
/**
|
||||
* Generates custom locations
|
||||
* @param {Object} host
|
||||
* @returns {Promise}
|
||||
*/
|
||||
renderLocations: (host) => {
|
||||
return new Promise((resolve, reject) => {
|
||||
let template;
|
||||
|
||||
try {
|
||||
template = fs.readFileSync(__dirname + '/../templates/_location.conf', {encoding: 'utf8'});
|
||||
} catch (err) {
|
||||
reject(new error.ConfigurationError(err.message));
|
||||
return;
|
||||
}
|
||||
|
||||
let renderer = new Liquid();
|
||||
let renderedLocations = '';
|
||||
|
||||
const locationRendering = async () => {
|
||||
for (let i = 0; i < host.locations.length; i++) {
|
||||
let locationCopy = Object.assign({}, host.locations[i]);
|
||||
|
||||
if (locationCopy.forward_host.indexOf('/') > -1) {
|
||||
const splitted = locationCopy.forward_host.split('/');
|
||||
|
||||
locationCopy.forward_host = splitted.shift();
|
||||
locationCopy.forward_path = `/${splitted.join('/')}`;
|
||||
}
|
||||
|
||||
renderedLocations += await renderer.parseAndRender(template, locationCopy);
|
||||
}
|
||||
};
|
||||
|
||||
locationRendering().then(() => resolve(renderedLocations));
|
||||
});
|
||||
},
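A worked example with a hypothetical location entry, showing how the split above separates host and path before the copy is handed to the _location.conf template:

    // Location as stored on the proxy host row (hypothetical values)
    const locationCopy = Object.assign({}, {path: '/api', forward_host: 'backend.local/v2', forward_port: 8080});
    const splitted = locationCopy.forward_host.split('/');
    locationCopy.forward_host = splitted.shift();          // 'backend.local'
    locationCopy.forward_path = `/${splitted.join('/')}`;  // '/v2'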
|
||||
|
||||
/**
|
||||
* @param {String} host_type
|
||||
* @param {Object} host
|
||||
@ -146,6 +187,7 @@ const internalNginx = {
|
||||
return new Promise((resolve, reject) => {
|
||||
let template = null;
|
||||
let filename = internalNginx.getConfigName(host_type, host.id);
|
||||
|
||||
try {
|
||||
template = fs.readFileSync(__dirname + '/../templates/' + host_type + '.conf', {encoding: 'utf8'});
|
||||
} catch (err) {
|
||||
@ -153,24 +195,57 @@ const internalNginx = {
|
||||
return;
|
||||
}
|
||||
|
||||
renderEngine
|
||||
.parseAndRender(template, host)
|
||||
.then(config_text => {
|
||||
fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
|
||||
let locationsPromise;
|
||||
let origLocations;
|
||||
|
||||
if (debug_mode) {
|
||||
logger.success('Wrote config:', filename, config_text);
|
||||
}
|
||||
// Manipulate the data a bit before sending it to the template
|
||||
if (host_type !== 'default') {
|
||||
host.use_default_location = true;
|
||||
if (typeof host.advanced_config !== 'undefined' && host.advanced_config) {
|
||||
host.use_default_location = !internalNginx.advancedConfigHasDefaultLocation(host.advanced_config);
|
||||
}
|
||||
}
|
||||
|
||||
resolve(true);
|
||||
})
|
||||
.catch(err => {
|
||||
if (debug_mode) {
|
||||
logger.warn('Could not write ' + filename + ':', err.message);
|
||||
}
|
||||
|
||||
reject(new error.ConfigurationError(err.message));
|
||||
if (host.locations) {
|
||||
origLocations = [].concat(host.locations);
|
||||
locationsPromise = internalNginx.renderLocations(host).then((renderedLocations) => {
|
||||
host.locations = renderedLocations;
|
||||
});
|
||||
|
||||
// Allow someone who is using / custom location path to use it, and skip the default / location
|
||||
_.map(host.locations, (location) => {
|
||||
if (location.path === '/') {
|
||||
host.use_default_location = false;
|
||||
}
|
||||
});
|
||||
|
||||
} else {
|
||||
locationsPromise = Promise.resolve();
|
||||
}
|
||||
|
||||
locationsPromise.then(() => {
|
||||
renderEngine
|
||||
.parseAndRender(template, host)
|
||||
.then(config_text => {
|
||||
fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
|
||||
|
||||
if (debug_mode) {
|
||||
logger.success('Wrote config:', filename, config_text);
|
||||
}
|
||||
|
||||
// Restore locations array
|
||||
host.locations = origLocations;
|
||||
|
||||
resolve(true);
|
||||
})
|
||||
.catch(err => {
|
||||
if (debug_mode) {
|
||||
logger.warn('Could not write ' + filename + ':', err.message);
|
||||
}
|
||||
|
||||
reject(new error.ConfigurationError(err.message));
|
||||
});
|
||||
});
|
||||
});
|
||||
},
|
||||
|
||||
@ -255,7 +330,7 @@ const internalNginx = {
|
||||
|
||||
/**
|
||||
* @param {String} host_type
|
||||
* @param {Object} host
|
||||
* @param {Object} [host]
|
||||
* @param {Boolean} [throw_errors]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -264,7 +339,7 @@ const internalNginx = {
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
try {
|
||||
let config_file = internalNginx.getConfigName(host_type, host.id);
|
||||
let config_file = internalNginx.getConfigName(host_type, typeof host === 'undefined' ? 0 : host.id);
|
||||
|
||||
if (debug_mode) {
|
||||
logger.warn('Deleting nginx config: ' + config_file);
|
||||
@ -312,6 +387,14 @@ const internalNginx = {
|
||||
});
|
||||
|
||||
return Promise.all(promises);
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {string} config
|
||||
* @returns {boolean}
|
||||
*/
|
||||
advancedConfigHasDefaultLocation: function (config) {
|
||||
return !!config.match(/^(?:.*;)?\s*?location\s*?\/\s*?{/im);
|
||||
}
|
||||
};
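A quick illustration, with invented config snippets, of what the new regex counts as an existing default location, which is what makes generateConfig skip its own 'location /' block:

    internalNginx.advancedConfigHasDefaultLocation('location / {\n    return 404;\n}');     // true
    internalNginx.advancedConfigHasDefaultLocation('gzip on; location / { deny all; }');    // true  (matches after a preceding directive)
    internalNginx.advancedConfigHasDefaultLocation('location /api/ {\n    deny all;\n}');   // false (only a bare / location counts)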
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../lib/error');
|
||||
const proxyHostModel = require('../models/proxy_host');
|
||||
@ -27,7 +25,7 @@ const internalProxyHost = {
|
||||
}
|
||||
|
||||
return access.can('proxy_hosts:create', data)
|
||||
.then(access_data => {
|
||||
.then(() => {
|
||||
// Get a list of the domain names and check each of them against existing records
|
||||
let domain_name_check_promises = [];
|
||||
|
||||
@ -47,13 +45,14 @@ const internalProxyHost = {
|
||||
.then(() => {
|
||||
// At this point the domains should have been checked
|
||||
data.owner_user_id = access.token.getUserId(1);
|
||||
data = internalHost.cleanSslHstsData(data);
|
||||
|
||||
return proxyHostModel
|
||||
.query()
|
||||
.omit(omissions())
|
||||
.insertAndFetch(data);
|
||||
})
|
||||
.then(row => {
|
||||
.then((row) => {
|
||||
if (create_certificate) {
|
||||
return internalCertificate.createQuickCertificate(access, data)
|
||||
.then(cert => {
|
||||
@ -70,31 +69,31 @@ const internalProxyHost = {
|
||||
return row;
|
||||
}
|
||||
})
|
||||
.then(row => {
|
||||
.then((row) => {
|
||||
// re-fetch with cert
|
||||
return internalProxyHost.get(access, {
|
||||
id: row.id,
|
||||
expand: ['certificate', 'owner', 'access_list']
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
.then((row) => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(proxyHostModel, 'proxy_host', row)
|
||||
.then(() => {
|
||||
return row;
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
.then((row) => {
|
||||
// Audit log
|
||||
data.meta = _.assign({}, data.meta || {}, row.meta);
|
||||
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'created',
|
||||
object_type: 'proxy-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
action: 'created',
|
||||
object_type: 'proxy-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
.then(() => {
|
||||
return row;
|
||||
});
|
||||
@ -104,7 +103,7 @@ const internalProxyHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @return {Promise}
|
||||
*/
|
||||
update: (access, data) => {
|
||||
@ -145,9 +144,9 @@ const internalProxyHost = {
|
||||
|
||||
if (create_certificate) {
|
||||
return internalCertificate.createQuickCertificate(access, {
|
||||
domain_names: data.domain_names || row.domain_names,
|
||||
meta: _.assign({}, row.meta, data.meta)
|
||||
})
|
||||
domain_names: data.domain_names || row.domain_names,
|
||||
meta: _.assign({}, row.meta, data.meta)
|
||||
})
|
||||
.then(cert => {
|
||||
// update host with cert id
|
||||
data.certificate_id = cert.id;
|
||||
@ -163,7 +162,9 @@ const internalProxyHost = {
|
||||
// Add domain_names to the data in case it isn't there, so that the audit log renders correctly. The order is important here.
|
||||
data = _.assign({}, {
|
||||
domain_names: row.domain_names
|
||||
},data);
|
||||
}, data);
|
||||
|
||||
data = internalHost.cleanSslHstsData(data, row);
|
||||
|
||||
return proxyHostModel
|
||||
.query()
|
||||
@ -172,11 +173,11 @@ const internalProxyHost = {
|
||||
.then(saved_row => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'updated',
|
||||
object_type: 'proxy-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
action: 'updated',
|
||||
object_type: 'proxy-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
.then(() => {
|
||||
return _.omit(saved_row, omissions());
|
||||
});
|
||||
@ -184,15 +185,15 @@ const internalProxyHost = {
|
||||
})
|
||||
.then(() => {
|
||||
return internalProxyHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate', 'access_list']
|
||||
})
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate', 'access_list']
|
||||
})
|
||||
.then(row => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(proxyHostModel, 'proxy_host', row)
|
||||
.then(new_meta => {
|
||||
row.meta = new_meta;
|
||||
row = internalHost.cleanRowCertificateMeta(row);
|
||||
row = internalHost.cleanRowCertificateMeta(row);
|
||||
return _.omit(row, omissions());
|
||||
});
|
||||
});
|
||||
@ -202,7 +203,7 @@ const internalProxyHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {Array} [data.expand]
|
||||
* @param {Array} [data.omit]
|
||||
* @return {Promise}
|
||||
@ -249,7 +250,7 @@ const internalProxyHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -291,6 +292,104 @@ const internalProxyHost = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
enable: (access, data) => {
|
||||
return access.can('proxy_hosts:update', data.id)
|
||||
.then(() => {
|
||||
return internalProxyHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['certificate', 'owner', 'access_list']
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (row.enabled) {
|
||||
throw new error.ValidationError('Host is already enabled');
|
||||
}
|
||||
|
||||
row.enabled = 1;
|
||||
|
||||
return proxyHostModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 1
|
||||
})
|
||||
.then(() => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(proxyHostModel, 'proxy_host', row);
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'enabled',
|
||||
object_type: 'proxy-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
disable: (access, data) => {
|
||||
return access.can('proxy_hosts:update', data.id)
|
||||
.then(() => {
|
||||
return internalProxyHost.get(access, {id: data.id});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (!row.enabled) {
|
||||
throw new error.ValidationError('Host is already disabled');
|
||||
}
|
||||
|
||||
row.enabled = 0;
|
||||
|
||||
return proxyHostModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 0
|
||||
})
|
||||
.then(() => {
|
||||
// Delete Nginx Config
|
||||
return internalNginx.deleteConfig('proxy_host', row)
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'disabled',
|
||||
object_type: 'proxy-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* All Hosts
|
||||
*
|
||||
@ -339,7 +438,7 @@ const internalProxyHost = {
|
||||
/**
|
||||
* Report use
|
||||
*
|
||||
* @param {Integer} user_id
|
||||
* @param {Number} user_id
|
||||
* @param {String} visibility
|
||||
* @returns {Promise}
|
||||
*/
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../lib/error');
|
||||
const redirectionHostModel = require('../models/redirection_host');
|
||||
@ -47,6 +45,7 @@ const internalRedirectionHost = {
|
||||
.then(() => {
|
||||
// At this point the domains should have been checked
|
||||
data.owner_user_id = access.token.getUserId(1);
|
||||
data = internalHost.cleanSslHstsData(data);
|
||||
|
||||
return redirectionHostModel
|
||||
.query()
|
||||
@ -89,11 +88,11 @@ const internalRedirectionHost = {
|
||||
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'created',
|
||||
object_type: 'redirection-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
action: 'created',
|
||||
object_type: 'redirection-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
.then(() => {
|
||||
return row;
|
||||
});
|
||||
@ -103,7 +102,7 @@ const internalRedirectionHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @return {Promise}
|
||||
*/
|
||||
update: (access, data) => {
|
||||
@ -144,9 +143,9 @@ const internalRedirectionHost = {
|
||||
|
||||
if (create_certificate) {
|
||||
return internalCertificate.createQuickCertificate(access, {
|
||||
domain_names: data.domain_names || row.domain_names,
|
||||
meta: _.assign({}, row.meta, data.meta)
|
||||
})
|
||||
domain_names: data.domain_names || row.domain_names,
|
||||
meta: _.assign({}, row.meta, data.meta)
|
||||
})
|
||||
.then(cert => {
|
||||
// update host with cert id
|
||||
data.certificate_id = cert.id;
|
||||
@ -162,7 +161,9 @@ const internalRedirectionHost = {
|
||||
// Add domain_names to the data in case it isn't there, so that the audit log renders correctly. The order is important here.
|
||||
data = _.assign({}, {
|
||||
domain_names: row.domain_names
|
||||
},data);
|
||||
}, data);
|
||||
|
||||
data = internalHost.cleanSslHstsData(data, row);
|
||||
|
||||
return redirectionHostModel
|
||||
.query()
|
||||
@ -171,11 +172,11 @@ const internalRedirectionHost = {
|
||||
.then(saved_row => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'updated',
|
||||
object_type: 'redirection-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
action: 'updated',
|
||||
object_type: 'redirection-host',
|
||||
object_id: row.id,
|
||||
meta: data
|
||||
})
|
||||
.then(() => {
|
||||
return _.omit(saved_row, omissions());
|
||||
});
|
||||
@ -183,15 +184,15 @@ const internalRedirectionHost = {
|
||||
})
|
||||
.then(() => {
|
||||
return internalRedirectionHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate']
|
||||
})
|
||||
id: data.id,
|
||||
expand: ['owner', 'certificate']
|
||||
})
|
||||
.then(row => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(redirectionHostModel, 'redirection_host', row)
|
||||
.then(new_meta => {
|
||||
row.meta = new_meta;
|
||||
row = internalHost.cleanRowCertificateMeta(row);
|
||||
row = internalHost.cleanRowCertificateMeta(row);
|
||||
return _.omit(row, omissions());
|
||||
});
|
||||
});
|
||||
@ -201,7 +202,7 @@ const internalRedirectionHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {Array} [data.expand]
|
||||
* @param {Array} [data.omit]
|
||||
* @return {Promise}
|
||||
@ -248,7 +249,7 @@ const internalRedirectionHost = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -290,6 +291,104 @@ const internalRedirectionHost = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
enable: (access, data) => {
|
||||
return access.can('redirection_hosts:update', data.id)
|
||||
.then(() => {
|
||||
return internalRedirectionHost.get(access, {
|
||||
id: data.id,
|
||||
expand: ['certificate', 'owner']
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (row.enabled) {
|
||||
throw new error.ValidationError('Host is already enabled');
|
||||
}
|
||||
|
||||
row.enabled = 1;
|
||||
|
||||
return redirectionHostModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 1
|
||||
})
|
||||
.then(() => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(redirectionHostModel, 'redirection_host', row);
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'enabled',
|
||||
object_type: 'redirection-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
disable: (access, data) => {
|
||||
return access.can('redirection_hosts:update', data.id)
|
||||
.then(() => {
|
||||
return internalRedirectionHost.get(access, {id: data.id});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (!row.enabled) {
|
||||
throw new error.ValidationError('Host is already disabled');
|
||||
}
|
||||
|
||||
row.enabled = 0;
|
||||
|
||||
return redirectionHostModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 0
|
||||
})
|
||||
.then(() => {
|
||||
// Delete Nginx Config
|
||||
return internalNginx.deleteConfig('redirection_host', row)
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'disabled',
|
||||
object_type: 'redirection-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* All Hosts
|
||||
*
|
||||
@ -338,7 +437,7 @@ const internalRedirectionHost = {
|
||||
/**
|
||||
* Report use
|
||||
*
|
||||
* @param {Integer} user_id
|
||||
* @param {Number} user_id
|
||||
* @param {String} visibility
|
||||
* @returns {Promise}
|
||||
*/
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const internalProxyHost = require('./proxy-host');
|
||||
const internalRedirectionHost = require('./redirection-host');
|
||||
const internalDeadHost = require('./dead-host');
|
||||
|
src/backend/internal/setting.js (new file, 133 lines)
@ -0,0 +1,133 @@
|
||||
const fs = require('fs');
|
||||
const error = require('../lib/error');
|
||||
const settingModel = require('../models/setting');
|
||||
const internalNginx = require('./nginx');
|
||||
|
||||
const internalSetting = {
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {String} data.id
|
||||
* @return {Promise}
|
||||
*/
|
||||
update: (access, data) => {
|
||||
return access.can('settings:update', data.id)
|
||||
.then(access_data => {
|
||||
return internalSetting.get(access, {id: data.id});
|
||||
})
|
||||
.then(row => {
|
||||
if (row.id !== data.id) {
|
||||
// Sanity check that something crazy hasn't happened
|
||||
throw new error.InternalValidationError('Setting could not be updated, IDs do not match: ' + row.id + ' !== ' + data.id);
|
||||
}
|
||||
|
||||
return settingModel
|
||||
.query()
|
||||
.where({id: data.id})
|
||||
.patch(data);
|
||||
})
|
||||
.then(() => {
|
||||
return internalSetting.get(access, {
|
||||
id: data.id
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
if (row.id === 'default-site') {
|
||||
// write the html if we need to
|
||||
if (row.value === 'html') {
|
||||
fs.writeFileSync('/data/nginx/default_www/index.html', row.meta.html, {encoding: 'utf8'});
|
||||
}
|
||||
|
||||
// Configure nginx
|
||||
return internalNginx.deleteConfig('default')
|
||||
.then(() => {
|
||||
return internalNginx.generateConfig('default', row);
|
||||
})
|
||||
.then(() => {
|
||||
return internalNginx.test();
|
||||
})
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
})
|
||||
.then(() => {
|
||||
return row;
|
||||
})
|
||||
.catch((err) => {
|
||||
internalNginx.deleteConfig('default')
|
||||
.then(() => {
|
||||
return internalNginx.test();
|
||||
})
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
})
|
||||
.then(() => {
|
||||
// I'm being slack here I know..
|
||||
throw new error.ValidationError('Could not reconfigure Nginx. Please check logs.');
|
||||
})
|
||||
});
|
||||
} else {
|
||||
return row;
|
||||
}
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {String} data.id
|
||||
* @return {Promise}
|
||||
*/
|
||||
get: (access, data) => {
|
||||
return access.can('settings:get', data.id)
|
||||
.then(() => {
|
||||
return settingModel
|
||||
.query()
|
||||
.where('id', data.id)
|
||||
.first();
|
||||
})
|
||||
.then(row => {
|
||||
if (row) {
|
||||
return row;
|
||||
} else {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
}
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* This will only count the settings
|
||||
*
|
||||
* @param {Access} access
|
||||
* @returns {*}
|
||||
*/
|
||||
getCount: (access) => {
|
||||
return access.can('settings:list')
|
||||
.then(() => {
|
||||
return settingModel
|
||||
.query()
|
||||
.count('id as count')
|
||||
.first();
|
||||
})
|
||||
.then(row => {
|
||||
return parseInt(row.count, 10);
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* All settings
|
||||
*
|
||||
* @param {Access} access
|
||||
* @returns {Promise}
|
||||
*/
|
||||
getAll: (access) => {
|
||||
return access.can('settings:list')
|
||||
.then(() => {
|
||||
return settingModel
|
||||
.query()
|
||||
.orderBy('description', 'ASC');
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
module.exports = internalSetting;
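A usage sketch, not part of this diff, of how the new settings internals might be driven; access is assumed to be an authenticated Access instance as used throughout these modules, and the HTML value is invented.

    const internalSetting = require('./internal/setting');

    // Switch the default site to a custom HTML page. update() rewrites
    // /data/nginx/default_www/index.html, regenerates the default server block,
    // tests the nginx config and reloads it.
    internalSetting.update(access, {
        id:    'default-site',
        value: 'html',
        meta:  {html: '<h1>Nothing to see here</h1>'}
    })
        .then(row => {
            console.log('Default site is now: ' + row.value);
        });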
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../lib/error');
|
||||
const streamModel = require('../models/stream');
|
||||
@ -56,7 +54,7 @@ const internalStream = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @return {Promise}
|
||||
*/
|
||||
update: (access, data) => {
|
||||
@ -75,6 +73,12 @@ const internalStream = {
|
||||
.query()
|
||||
.omit(omissions())
|
||||
.patchAndFetchById(row.id, data)
|
||||
.then(saved_row => {
|
||||
return internalNginx.configure(streamModel, 'stream', saved_row)
|
||||
.then(() => {
|
||||
return internalStream.get(access, {id: row.id, expand: ['owner']});
|
||||
});
|
||||
})
|
||||
.then(saved_row => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
@ -93,7 +97,7 @@ const internalStream = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {Array} [data.expand]
|
||||
* @param {Array} [data.omit]
|
||||
* @return {Promise}
|
||||
@ -139,7 +143,7 @@ const internalStream = {
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Integer} data.id
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
@ -181,6 +185,104 @@ const internalStream = {
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
enable: (access, data) => {
|
||||
return access.can('streams:update', data.id)
|
||||
.then(() => {
|
||||
return internalStream.get(access, {
|
||||
id: data.id,
|
||||
expand: ['owner']
|
||||
});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (row.enabled) {
|
||||
throw new error.ValidationError('Host is already enabled');
|
||||
}
|
||||
|
||||
row.enabled = 1;
|
||||
|
||||
return streamModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 1
|
||||
})
|
||||
.then(() => {
|
||||
// Configure nginx
|
||||
return internalNginx.configure(streamModel, 'stream', row);
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'enabled',
|
||||
object_type: 'stream',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* @param {Access} access
|
||||
* @param {Object} data
|
||||
* @param {Number} data.id
|
||||
* @param {String} [data.reason]
|
||||
* @returns {Promise}
|
||||
*/
|
||||
disable: (access, data) => {
|
||||
return access.can('streams:update', data.id)
|
||||
.then(() => {
|
||||
return internalStream.get(access, {id: data.id});
|
||||
})
|
||||
.then(row => {
|
||||
if (!row) {
|
||||
throw new error.ItemNotFoundError(data.id);
|
||||
} else if (!row.enabled) {
|
||||
throw new error.ValidationError('Host is already disabled');
|
||||
}
|
||||
|
||||
row.enabled = 0;
|
||||
|
||||
return streamModel
|
||||
.query()
|
||||
.where('id', row.id)
|
||||
.patch({
|
||||
enabled: 0
|
||||
})
|
||||
.then(() => {
|
||||
// Delete Nginx Config
|
||||
return internalNginx.deleteConfig('stream', row)
|
||||
.then(() => {
|
||||
return internalNginx.reload();
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
// Add to audit log
|
||||
return internalAuditLog.add(access, {
|
||||
action: 'disabled',
|
||||
object_type: 'stream-host',
|
||||
object_id: row.id,
|
||||
meta: _.omit(row, omissions())
|
||||
});
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
return true;
|
||||
});
|
||||
},
|
||||
|
||||
/**
|
||||
* All Streams
|
||||
*
|
||||
@ -222,7 +324,7 @@ const internalStream = {
|
||||
/**
|
||||
* Report use
|
||||
*
|
||||
* @param {Integer} user_id
|
||||
* @param {Number} user_id
|
||||
* @param {String} visibility
|
||||
* @returns {Promise}
|
||||
*/
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../lib/error');
|
||||
const userModel = require('../models/user');
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../lib/error');
|
||||
const userModel = require('../models/user');
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
/**
|
||||
* Some Notes: This is a friggin complicated piece of code.
|
||||
*
|
||||
|
src/backend/lib/access/settings-get.json (new file, 7 lines)
@ -0,0 +1,7 @@
{
    "anyOf": [
        {
            "$ref": "roles#/definitions/admin"
        }
    ]
}
src/backend/lib/access/settings-list.json (new file, 7 lines)
@ -0,0 +1,7 @@
{
    "anyOf": [
        {
            "$ref": "roles#/definitions/admin"
        }
    ]
}
src/backend/lib/access/settings-update.json (new file, 7 lines)
@ -0,0 +1,7 @@
{
    "anyOf": [
        {
            "$ref": "roles#/definitions/admin"
        }
    ]
}
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const util = require('util');
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const validator = require('../validator');
|
||||
|
||||
module.exports = function (req, res, next) {
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const Access = require('../access');
|
||||
|
||||
module.exports = () => {
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
module.exports = function () {
|
||||
return function (req, res, next) {
|
||||
if (req.headers.authorization) {
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
let _ = require('lodash');
|
||||
|
||||
module.exports = function (default_sort, default_offset, default_limit, max_limit) {
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
module.exports = (req, res, next) => {
|
||||
if (req.params.user_id === 'me' && res.locals.access) {
|
||||
req.params.user_id = res.locals.access.token.get('attrs').id;
|
||||
|
@ -1,7 +1,4 @@
|
||||
'use strict';
|
||||
|
||||
const moment = require('moment');
|
||||
const _ = require('lodash');
|
||||
|
||||
module.exports = {
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const migrate_name = 'identifier_for_migrate';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const exec = require('child_process').exec;
|
||||
|
||||
module.exports = {
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const error = require('../error');
|
||||
const path = require('path');
|
||||
const parser = require('json-schema-ref-parser');
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const error = require('../error');
|
||||
const definitions = require('../../schema/definitions.json');
|
||||
|
@ -1,12 +1,13 @@
|
||||
const {Signale} = require('signale');
|
||||
|
||||
module.exports = {
|
||||
global: new Signale({scope: 'Global '}),
|
||||
migrate: new Signale({scope: 'Migrate '}),
|
||||
express: new Signale({scope: 'Express '}),
|
||||
access: new Signale({scope: 'Access '}),
|
||||
nginx: new Signale({scope: 'Nginx '}),
|
||||
ssl: new Signale({scope: 'SSL '}),
|
||||
import: new Signale({scope: 'Importer'}),
|
||||
setup: new Signale({scope: 'Setup '})
|
||||
global: new Signale({scope: 'Global '}),
|
||||
migrate: new Signale({scope: 'Migrate '}),
|
||||
express: new Signale({scope: 'Express '}),
|
||||
access: new Signale({scope: 'Access '}),
|
||||
nginx: new Signale({scope: 'Nginx '}),
|
||||
ssl: new Signale({scope: 'SSL '}),
|
||||
import: new Signale({scope: 'Importer '}),
|
||||
setup: new Signale({scope: 'Setup '}),
|
||||
ip_ranges: new Signale({scope: 'IP Ranges'})
|
||||
};
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const db = require('./db');
|
||||
const logger = require('./logger').migrate;
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const migrate_name = 'initial-schema';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const migrate_name = 'websockets';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const migrate_name = 'forward_host';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const migrate_name = 'http2_support';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
|
src/backend/migrations/20181213013211_forward_scheme.js (new file, 34 lines)
@ -0,0 +1,34 @@
const migrate_name = 'forward_scheme';
const logger = require('../logger').migrate;

/**
 * Migrate
 *
 * @see http://knexjs.org/#Schema
 *
 * @param {Object} knex
 * @param {Promise} Promise
 * @returns {Promise}
 */
exports.up = function (knex/*, Promise*/) {
    logger.info('[' + migrate_name + '] Migrating Up...');

    return knex.schema.table('proxy_host', function (proxy_host) {
        proxy_host.string('forward_scheme').notNull().defaultTo('http');
    })
        .then(() => {
            logger.info('[' + migrate_name + '] proxy_host Table altered');
        });
};

/**
 * Undo Migrate
 *
 * @param {Object} knex
 * @param {Promise} Promise
 * @returns {Promise}
 */
exports.down = function (knex, Promise) {
    logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
    return Promise.resolve(true);
};
src/backend/migrations/20190104035154_disabled.js (new file, 55 lines)
@ -0,0 +1,55 @@
|
||||
const migrate_name = 'disabled';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('proxy_host', function (proxy_host) {
|
||||
proxy_host.integer('enabled').notNull().unsigned().defaultTo(1);
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] proxy_host Table altered');
|
||||
|
||||
return knex.schema.table('redirection_host', function (redirection_host) {
|
||||
redirection_host.integer('enabled').notNull().unsigned().defaultTo(1);
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] redirection_host Table altered');
|
||||
|
||||
return knex.schema.table('dead_host', function (dead_host) {
|
||||
dead_host.integer('enabled').notNull().unsigned().defaultTo(1);
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] dead_host Table altered');
|
||||
|
||||
return knex.schema.table('stream', function (stream) {
|
||||
stream.integer('enabled').notNull().unsigned().defaultTo(1);
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] stream Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex, Promise) {
|
||||
logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
|
||||
return Promise.resolve(true);
|
||||
};
|
src/backend/migrations/20190215115310_customlocations.js (new file, 35 lines)
@ -0,0 +1,35 @@
|
||||
const migrate_name = 'custom_locations';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
* Extends proxy_host table with locations field
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('proxy_host', function (proxy_host) {
|
||||
proxy_host.json('locations');
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] proxy_host Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex, Promise) {
|
||||
logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
|
||||
return Promise.resolve(true);
|
||||
};
|
src/backend/migrations/20190218060101_hsts.js (new file, 51 lines)
@ -0,0 +1,51 @@
|
||||
const migrate_name = 'hsts';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.table('proxy_host', function (proxy_host) {
|
||||
proxy_host.integer('hsts_enabled').notNull().unsigned().defaultTo(0);
|
||||
proxy_host.integer('hsts_subdomains').notNull().unsigned().defaultTo(0);
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] proxy_host Table altered');
|
||||
|
||||
return knex.schema.table('redirection_host', function (redirection_host) {
|
||||
redirection_host.integer('hsts_enabled').notNull().unsigned().defaultTo(0);
|
||||
redirection_host.integer('hsts_subdomains').notNull().unsigned().defaultTo(0);
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] redirection_host Table altered');
|
||||
|
||||
return knex.schema.table('dead_host', function (dead_host) {
|
||||
dead_host.integer('hsts_enabled').notNull().unsigned().defaultTo(0);
|
||||
dead_host.integer('hsts_subdomains').notNull().unsigned().defaultTo(0);
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] dead_host Table altered');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex, Promise) {
|
||||
logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
|
||||
return Promise.resolve(true);
|
||||
};
|
src/backend/migrations/20190227065017_settings.js (new file, 54 lines)
@ -0,0 +1,54 @@
|
||||
const migrate_name = 'settings';
|
||||
const logger = require('../logger').migrate;
|
||||
|
||||
/**
|
||||
* Migrate
|
||||
*
|
||||
* @see http://knexjs.org/#Schema
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.up = function (knex/*, Promise*/) {
|
||||
logger.info('[' + migrate_name + '] Migrating Up...');
|
||||
|
||||
return knex.schema.createTable('setting', table => {
|
||||
table.string('id').notNull().primary();
|
||||
table.string('name', 100).notNull();
|
||||
table.string('description', 255).notNull();
|
||||
table.string('value', 255).notNull();
|
||||
table.json('meta').notNull();
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] setting Table created');
|
||||
|
||||
// TODO: add settings
|
||||
let settingModel = require('../models/setting');
|
||||
|
||||
return settingModel
|
||||
.query()
|
||||
.insert({
|
||||
id: 'default-site',
|
||||
name: 'Default Site',
|
||||
description: 'What to show when Nginx is hit with an unknown Host',
|
||||
value: 'congratulations',
|
||||
meta: {}
|
||||
});
|
||||
})
|
||||
.then(() => {
|
||||
logger.info('[' + migrate_name + '] Default settings added');
|
||||
});
|
||||
};
|
||||
|
||||
/**
|
||||
* Undo Migrate
|
||||
*
|
||||
* @param {Object} knex
|
||||
* @param {Promise} Promise
|
||||
* @returns {Promise}
|
||||
*/
|
||||
exports.down = function (knex, Promise) {
|
||||
logger.warn('[' + migrate_name + '] You can\'t migrate down the initial data.');
|
||||
return Promise.resolve(true);
|
||||
};
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const bcrypt = require('bcrypt');
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
@ -47,7 +45,7 @@ class ProxyHost extends Model {
    }

    static get jsonAttributes () {
-       return ['domain_names', 'meta'];
+       return ['domain_names', 'meta', 'locations'];
    }

    static get relationMappings () {
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
|
30
src/backend/models/setting.js
Normal file
@ -0,0 +1,30 @@
// Objection Docs:
// http://vincit.github.io/objection.js/

const db    = require('../db');
const Model = require('objection').Model;

Model.knex(db);

class Setting extends Model {
    $beforeInsert () {
        // Default for meta
        if (typeof this.meta === 'undefined') {
            this.meta = {};
        }
    }

    static get name () {
        return 'Setting';
    }

    static get tableName () {
        return 'setting';
    }

    static get jsonAttributes () {
        return ['meta'];
    }
}

module.exports = Setting;
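As a usage note (not part of the diff): the model above plugs into Objection's query builder, so reading back the row seeded by the settings migration could look roughly like the sketch below; the helper name and fallback behaviour are assumptions for illustration.

// Hypothetical helper, not part of this diff: fetch the 'default-site' setting
// that the 20190227065017_settings migration inserts.
const settingModel = require('../models/setting');

function getDefaultSite () {
    return settingModel
        .query()
        .where('id', 'default-site')
        .first()
        .then(row => {
            // Fall back to the seeded value if the row is somehow missing.
            return row ? row.value : 'congratulations';
        });
}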
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const User = require('./user');
|
||||
|
@ -3,8 +3,6 @@
|
||||
and then has abilities after that.
|
||||
*/
|
||||
|
||||
'use strict';
|
||||
|
||||
const _ = require('lodash');
|
||||
const config = require('config');
|
||||
const jwt = require('jsonwebtoken');
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
const UserPermission = require('./user_permission');
|
||||
|
@ -1,8 +1,6 @@
|
||||
// Objection Docs:
|
||||
// http://vincit.github.io/objection.js/
|
||||
|
||||
'use strict';
|
||||
|
||||
const db = require('../db');
|
||||
const Model = require('objection').Model;
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../lib/validator');
|
||||
const jwtdecode = require('../../lib/express/jwt-decode');
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const pjson = require('../../../../package.json');
|
||||
const error = require('../../lib/error');
|
||||
@ -31,6 +29,7 @@ router.use('/tokens', require('./tokens'));
|
||||
router.use('/users', require('./users'));
|
||||
router.use('/audit-log', require('./audit-log'));
|
||||
router.use('/reports', require('./reports'));
|
||||
router.use('/settings', require('./settings'));
|
||||
router.use('/nginx/proxy-hosts', require('./nginx/proxy_hosts'));
|
||||
router.use('/nginx/redirection-hosts', require('./nginx/redirection_hosts'));
|
||||
router.use('/nginx/dead-hosts', require('./nginx/dead_hosts'));
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../../lib/validator');
|
||||
const jwtdecode = require('../../../lib/express/jwt-decode');
|
||||
@ -20,7 +18,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/access-lists
|
||||
@ -79,7 +77,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/access-lists/123
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../../lib/validator');
|
||||
const jwtdecode = require('../../../lib/express/jwt-decode');
|
||||
@ -20,7 +18,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/certificates
|
||||
@ -79,7 +77,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/certificates/123
|
||||
@ -94,13 +92,13 @@ router
|
||||
certificate_id: {
|
||||
$ref: 'definitions#/definitions/id'
|
||||
},
|
||||
expand: {
|
||||
expand: {
|
||||
$ref: 'definitions#/definitions/expand'
|
||||
}
|
||||
}
|
||||
}, {
|
||||
certificate_id: req.params.certificate_id,
|
||||
expand: (typeof req.query.expand === 'string' ? req.query.expand.split(',') : null)
|
||||
expand: (typeof req.query.expand === 'string' ? req.query.expand.split(',') : null)
|
||||
})
|
||||
.then(data => {
|
||||
return internalCertificate.get(res.locals.access, {
|
||||
@ -157,7 +155,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/certificates/123/upload
|
||||
@ -181,6 +179,34 @@
            }
        });

/**
 * Renew LE Certs
 *
 * /api/nginx/certificates/123/renew
 */
router
    .route('/:certificate_id/renew')
    .options((req, res) => {
        res.sendStatus(204);
    })
    .all(jwtdecode())

    /**
     * POST /api/nginx/certificates/123/renew
     *
     * Renew certificate
     */
    .post((req, res, next) => {
        internalCertificate.renew(res.locals.access, {
            id: parseInt(req.params.certificate_id, 10)
        })
            .then(result => {
                res.status(200)
                    .send(result);
            })
            .catch(next);
    });

/**
 * Validate Certs before saving
 *
@ -191,7 +217,7 @@ router
    .options((req, res) => {
        res.sendStatus(204);
    })
-   .all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
+   .all(jwtdecode())

    /**
     * POST /api/nginx/certificates/validate
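A hedged sketch of exercising the new renew endpoint from a script; only the method and path come from this diff, while the base URL, JWT variable and the node-fetch dependency are assumptions:

// Hypothetical client call, not part of this diff.
const fetch = require('node-fetch');

function renewCertificate (baseUrl, token, certificateId) {
    // POST /api/nginx/certificates/:certificate_id/renew with a bearer token.
    return fetch(baseUrl + '/api/nginx/certificates/' + certificateId + '/renew', {
        method:  'POST',
        headers: {'Authorization': 'Bearer ' + token}
    })
        .then(res => res.json());
}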
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../../lib/validator');
|
||||
const jwtdecode = require('../../../lib/express/jwt-decode');
|
||||
@ -20,7 +18,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/dead-hosts
|
||||
@ -79,7 +77,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/dead-hosts/123
|
||||
@ -147,4 +145,52 @@
            .catch(next);
    });

/**
 * Enable dead-host
 *
 * /api/nginx/dead-hosts/123/enable
 */
router
    .route('/:host_id/enable')
    .options((req, res) => {
        res.sendStatus(204);
    })
    .all(jwtdecode())

    /**
     * POST /api/nginx/dead-hosts/123/enable
     */
    .post((req, res, next) => {
        internalDeadHost.enable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
            .then(result => {
                res.status(200)
                    .send(result);
            })
            .catch(next);
    });

/**
 * Disable dead-host
 *
 * /api/nginx/dead-hosts/123/disable
 */
router
    .route('/:host_id/disable')
    .options((req, res) => {
        res.sendStatus(204);
    })
    .all(jwtdecode())

    /**
     * POST /api/nginx/dead-hosts/123/disable
     */
    .post((req, res, next) => {
        internalDeadHost.disable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
            .then(result => {
                res.status(200)
                    .send(result);
            })
            .catch(next);
    });

module.exports = router;
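The same enable/disable pair is added below for proxy hosts, redirection hosts and streams, so a client-side toggle could be written once against the shared pattern. A hedged sketch (base URL, token and node-fetch are assumptions; the paths and the POST verb are from this diff):

// Hypothetical client helper, not part of this diff.
const fetch = require('node-fetch');

function setHostEnabled (baseUrl, token, hostType, hostId, enabled) {
    // hostType: 'dead-hosts', 'proxy-hosts', 'redirection-hosts' or 'streams'.
    const action = enabled ? 'enable' : 'disable';
    return fetch(baseUrl + '/api/nginx/' + hostType + '/' + hostId + '/' + action, {
        method:  'POST',
        headers: {'Authorization': 'Bearer ' + token}
    })
        .then(res => res.json()); // the routes respond with a boolean result
}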
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../../lib/validator');
|
||||
const jwtdecode = require('../../../lib/express/jwt-decode');
|
||||
@ -20,7 +18,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/proxy-hosts
|
||||
@ -79,7 +77,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/proxy-hosts/123
|
||||
@ -147,4 +145,52 @@ router
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Enable proxy-host
|
||||
*
|
||||
* /api/nginx/proxy-hosts/123/enable
|
||||
*/
|
||||
router
|
||||
.route('/:host_id/enable')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/proxy-hosts/123/enable
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
internalProxyHost.enable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
|
||||
.then(result => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Disable proxy-host
|
||||
*
|
||||
* /api/nginx/proxy-hosts/123/disable
|
||||
*/
|
||||
router
|
||||
.route('/:host_id/disable')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/proxy-hosts/123/disable
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
internalProxyHost.disable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
|
||||
.then(result => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
module.exports = router;
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../../lib/validator');
|
||||
const jwtdecode = require('../../../lib/express/jwt-decode');
|
||||
@ -20,7 +18,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/redirection-hosts
|
||||
@ -79,7 +77,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/nginx/redirection-hosts/123
|
||||
@ -147,4 +145,52 @@ router
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Enable redirection-host
|
||||
*
|
||||
* /api/nginx/redirection-hosts/123/enable
|
||||
*/
|
||||
router
|
||||
.route('/:host_id/enable')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/redirection-hosts/123/enable
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
internalRedirectionHost.enable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
|
||||
.then(result => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Disable redirection-host
|
||||
*
|
||||
* /api/nginx/redirection-hosts/123/disable
|
||||
*/
|
||||
router
|
||||
.route('/:host_id/disable')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/redirection-hosts/123/disable
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
internalRedirectionHost.disable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
|
||||
.then(result => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
module.exports = router;
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../../lib/validator');
|
||||
const jwtdecode = require('../../../lib/express/jwt-decode');
|
||||
@ -147,4 +145,52 @@ router
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Enable stream
|
||||
*
|
||||
* /api/nginx/streams/123/enable
|
||||
*/
|
||||
router
|
||||
.route('/:host_id/enable')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/streams/123/enable
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
internalStream.enable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
|
||||
.then(result => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
/**
|
||||
* Disable stream
|
||||
*
|
||||
* /api/nginx/streams/123/disable
|
||||
*/
|
||||
router
|
||||
.route('/:host_id/disable')
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/nginx/streams/123/disable
|
||||
*/
|
||||
.post((req, res, next) => {
|
||||
internalStream.disable(res.locals.access, {id: parseInt(req.params.host_id, 10)})
|
||||
.then(result => {
|
||||
res.status(200)
|
||||
.send(result);
|
||||
})
|
||||
.catch(next);
|
||||
});
|
||||
|
||||
module.exports = router;
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const jwtdecode = require('../../lib/express/jwt-decode');
|
||||
const internalReport = require('../../internal/report');
|
||||
|
96
src/backend/routes/api/settings.js
Normal file
@ -0,0 +1,96 @@
const express         = require('express');
const validator       = require('../../lib/validator');
const jwtdecode       = require('../../lib/express/jwt-decode');
const internalSetting = require('../../internal/setting');
const apiValidator    = require('../../lib/validator/api');

let router = express.Router({
    caseSensitive: true,
    strict:        true,
    mergeParams:   true
});

/**
 * /api/settings
 */
router
    .route('/')
    .options((req, res) => {
        res.sendStatus(204);
    })
    .all(jwtdecode())

    /**
     * GET /api/settings
     *
     * Retrieve all settings
     */
    .get((req, res, next) => {
        internalSetting.getAll(res.locals.access)
            .then(rows => {
                res.status(200)
                    .send(rows);
            })
            .catch(next);
    });

/**
 * Specific setting
 *
 * /api/settings/something
 */
router
    .route('/:setting_id')
    .options((req, res) => {
        res.sendStatus(204);
    })
    .all(jwtdecode())

    /**
     * GET /settings/something
     *
     * Retrieve a specific setting
     */
    .get((req, res, next) => {
        validator({
            required:             ['setting_id'],
            additionalProperties: false,
            properties:           {
                setting_id: {
                    $ref: 'definitions#/definitions/setting_id'
                }
            }
        }, {
            setting_id: req.params.setting_id
        })
            .then(data => {
                return internalSetting.get(res.locals.access, {
                    id: data.setting_id
                });
            })
            .then(row => {
                res.status(200)
                    .send(row);
            })
            .catch(next);
    })

    /**
     * PUT /api/settings/something
     *
     * Update an existing setting
     */
    .put((req, res, next) => {
        apiValidator({$ref: 'endpoints/settings#/links/1/schema'}, req.body)
            .then(payload => {
                payload.id = req.params.setting_id;
                return internalSetting.update(res.locals.access, payload);
            })
            .then(result => {
                res.status(200)
                    .send(result);
            })
            .catch(next);
    });

module.exports = router;
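A hedged sketch of updating the seeded default-site setting through this endpoint; the path, verb and body shape follow the route and the endpoints/settings schema in this diff, while the base URL, token and node-fetch dependency are assumptions:

// Hypothetical client call, not part of this diff.
const fetch = require('node-fetch');

function setDefaultSite (baseUrl, token, value) {
    // PUT /api/settings/default-site, validated against endpoints/settings#/links/1/schema.
    return fetch(baseUrl + '/api/settings/default-site', {
        method:  'PUT',
        headers: {
            'Authorization': 'Bearer ' + token,
            'Content-Type':  'application/json'
        },
        body: JSON.stringify({value: value, meta: {}})
    })
        .then(res => res.json());
}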
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const jwtdecode = require('../../lib/express/jwt-decode');
|
||||
const internalToken = require('../../internal/token');
|
||||
|
@ -1,5 +1,3 @@
|
||||
'use strict';
|
||||
|
||||
const express = require('express');
|
||||
const validator = require('../../lib/validator');
|
||||
const jwtdecode = require('../../lib/express/jwt-decode');
|
||||
@ -21,7 +19,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* GET /api/users
|
||||
@ -80,7 +78,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
.all(userIdFromMe)
|
||||
|
||||
/**
|
||||
@ -160,7 +158,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
.all(userIdFromMe)
|
||||
|
||||
/**
|
||||
@ -191,7 +189,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
.all(userIdFromMe)
|
||||
|
||||
/**
|
||||
@ -222,7 +220,7 @@ router
|
||||
.options((req, res) => {
|
||||
res.sendStatus(204);
|
||||
})
|
||||
.all(jwtdecode()) // preferred so it doesn't apply to nonexistent routes
|
||||
.all(jwtdecode())
|
||||
|
||||
/**
|
||||
* POST /api/users/123/login
|
||||
|
@ -1,8 +1,7 @@
-'use strict';
-
const express = require('express');
const fs      = require('fs');
const PACKAGE = require('../../../package.json');
+const path    = require('path');

const router = express.Router({
    caseSensitive: true,
@ -29,15 +28,22 @@ router.get(/(.*)/, function (req, res, next) {
            version: PACKAGE.version
        });
    } else {
-       fs.readFile('dist' + req.params.page, 'utf8', function (err, data) {
-           if (err) {
-               res.render('index', {
-                   version: PACKAGE.version
-               });
-           } else {
-               res.contentType('text/html').end(data);
-           }
-       });
+       var p = path.normalize('dist' + req.params.page);
+       if (p.startsWith('dist')) { // Allow access to resources under the 'dist' directory only.
+           fs.readFile(p, 'utf8', function (err, data) {
+               if (err) {
+                   res.render('index', {
+                       version: PACKAGE.version
+                   });
+               } else {
+                   res.contentType('text/html').end(data);
+               }
+           });
+       } else {
+           res.render('index', {
+               version: PACKAGE.version
+           });
+       }
    }
});
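The intent of the change above is to stop req.params.page values containing ../ from reading files outside the dist directory. A minimal standalone sketch of the same guard (the function name is hypothetical; path.normalize and startsWith are exactly what the diff relies on):

// Hypothetical standalone version of the guard added above, not part of this diff.
const path = require('path');

function isAllowedPage (page) {
    // Collapse any ../ segments, then only accept paths that stay inside 'dist'.
    const p = path.normalize('dist' + page);
    return p.startsWith('dist');
}

// isAllowedPage('/index.html')       -> true
// isAllowedPage('/../../etc/passwd') -> false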
@ -9,6 +9,13 @@
            "type": "integer",
            "minimum": 1
        },
        "setting_id": {
            "description": "Unique identifier for a Setting",
            "example": "default-site",
            "readOnly": true,
            "type": "string",
            "minLength": 2
        },
        "token": {
            "type": "string",
            "minLength": 10
@ -172,6 +179,11 @@
                "pattern": "^(?:\\*\\.)?(?:[^.*]+\\.?)+[^.]$"
            }
        },
        "enabled": {
            "description": "Is Enabled",
            "example": true,
            "type": "boolean"
        },
        "ssl_enabled": {
            "description": "Is SSL Enabled",
            "example": true,
@ -182,6 +194,16 @@
            "example": false,
            "type": "boolean"
        },
        "hsts_enabled": {
            "description": "Is HSTS Enabled",
            "example": false,
            "type": "boolean"
        },
        "hsts_subdomains": {
            "description": "Is HSTS applicable to all subdomains",
            "example": false,
            "type": "boolean"
        },
        "ssl_provider": {
            "type": "string",
            "pattern": "^(letsencrypt|other)$"
@ -24,12 +24,21 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "../definitions.json#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "../definitions.json#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "../definitions.json#/definitions/hsts_subdomains"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "../definitions.json#/definitions/http2_support"
|
||||
},
|
||||
"advanced_config": {
|
||||
"type": "string"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "../definitions.json#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"type": "object"
|
||||
}
|
||||
@ -53,12 +62,21 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_subdomains"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
"advanced_config": {
|
||||
"$ref": "#/definitions/advanced_config"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
}
|
||||
@ -107,6 +125,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -147,6 +171,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -177,6 +207,34 @@
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Enable",
|
||||
"description": "Enables a existing 404 Host",
|
||||
"href": "/nginx/dead-hosts/{definitions.identity.example}/enable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Disable",
|
||||
"description": "Disables a existing 404 Host",
|
||||
"href": "/nginx/dead-hosts/{definitions.identity.example}/disable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
@ -18,6 +18,10 @@
|
||||
"domain_names": {
|
||||
"$ref": "../definitions.json#/definitions/domain_names"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"type": "string",
|
||||
"enum": ["http", "https"]
|
||||
},
|
||||
"forward_host": {
|
||||
"type": "string",
|
||||
"minLength": 1,
|
||||
@ -34,6 +38,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "../definitions.json#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "../definitions.json#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "../definitions.json#/definitions/hsts_subdomains"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "../definitions.json#/definitions/http2_support"
|
||||
},
|
||||
@ -54,8 +64,49 @@
|
||||
"advanced_config": {
|
||||
"type": "string"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "../definitions.json#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"type": "object"
|
||||
},
|
||||
"locations": {
|
||||
"type": "array",
|
||||
"minItems": 0,
|
||||
"items": {
|
||||
"type": "object",
|
||||
"required": [
|
||||
"forward_scheme",
|
||||
"forward_host",
|
||||
"forward_port",
|
||||
"path"
|
||||
],
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"id": {
|
||||
"type": ["integer", "null"]
|
||||
},
|
||||
"path": {
|
||||
"type": "string",
|
||||
"minLength": 1
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_host": {
|
||||
"$ref": "#/definitions/forward_host"
|
||||
},
|
||||
"forward_port": {
|
||||
"$ref": "#/definitions/forward_port"
|
||||
},
|
||||
"forward_path": {
|
||||
"type": "string"
|
||||
},
|
||||
"advanced_config": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"properties": {
|
||||
@ -71,6 +122,9 @@
|
||||
"domain_names": {
|
||||
"$ref": "#/definitions/domain_names"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_host": {
|
||||
"$ref": "#/definitions/forward_host"
|
||||
},
|
||||
@ -83,6 +137,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_subdomains"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -101,8 +161,14 @@
|
||||
"advanced_config": {
|
||||
"$ref": "#/definitions/advanced_config"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
},
|
||||
"locations": {
|
||||
"$ref": "#/definitions/locations"
|
||||
}
|
||||
},
|
||||
"links": [
|
||||
@ -138,6 +204,7 @@
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"domain_names",
|
||||
"forward_scheme",
|
||||
"forward_host",
|
||||
"forward_port"
|
||||
],
|
||||
@ -145,6 +212,9 @@
|
||||
"domain_names": {
|
||||
"$ref": "#/definitions/domain_names"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_host": {
|
||||
"$ref": "#/definitions/forward_host"
|
||||
},
|
||||
@ -157,6 +227,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -175,8 +251,14 @@
|
||||
"advanced_config": {
|
||||
"$ref": "#/definitions/advanced_config"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
},
|
||||
"locations": {
|
||||
"$ref": "#/definitions/locations"
|
||||
}
|
||||
}
|
||||
},
|
||||
@ -203,6 +285,9 @@
|
||||
"domain_names": {
|
||||
"$ref": "#/definitions/domain_names"
|
||||
},
|
||||
"forward_scheme": {
|
||||
"$ref": "#/definitions/forward_scheme"
|
||||
},
|
||||
"forward_host": {
|
||||
"$ref": "#/definitions/forward_host"
|
||||
},
|
||||
@ -215,6 +300,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -233,8 +324,14 @@
|
||||
"advanced_config": {
|
||||
"$ref": "#/definitions/advanced_config"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
},
|
||||
"locations": {
|
||||
"$ref": "#/definitions/locations"
|
||||
}
|
||||
}
|
||||
},
|
||||
@ -257,6 +354,34 @@
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Enable",
|
||||
"description": "Enables a existing Proxy Host",
|
||||
"href": "/nginx/proxy-hosts/{definitions.identity.example}/enable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Disable",
|
||||
"description": "Disables a existing Proxy Host",
|
||||
"href": "/nginx/proxy-hosts/{definitions.identity.example}/disable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
@ -32,6 +32,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "../definitions.json#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "../definitions.json#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "../definitions.json#/definitions/hsts_subdomains"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "../definitions.json#/definitions/http2_support"
|
||||
},
|
||||
@ -41,6 +47,9 @@
|
||||
"advanced_config": {
|
||||
"type": "string"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "../definitions.json#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"type": "object"
|
||||
}
|
||||
@ -70,6 +79,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_subdomains"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -79,6 +94,9 @@
|
||||
"advanced_config": {
|
||||
"$ref": "#/definitions/advanced_config"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
}
|
||||
@ -134,6 +152,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -183,6 +207,12 @@
|
||||
"ssl_forced": {
|
||||
"$ref": "#/definitions/ssl_forced"
|
||||
},
|
||||
"hsts_enabled": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"hsts_subdomains": {
|
||||
"$ref": "#/definitions/hsts_enabled"
|
||||
},
|
||||
"http2_support": {
|
||||
"$ref": "#/definitions/http2_support"
|
||||
},
|
||||
@ -216,6 +246,34 @@
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Enable",
|
||||
"description": "Enables a existing Redirection Host",
|
||||
"href": "/nginx/redirection-hosts/{definitions.identity.example}/enable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Disable",
|
||||
"description": "Disables a existing Redirection Host",
|
||||
"href": "/nginx/redirection-hosts/{definitions.identity.example}/disable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
99
src/backend/schema/endpoints/settings.json
Normal file
@ -0,0 +1,99 @@
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "endpoints/settings",
    "title": "Settings",
    "description": "Endpoints relating to Settings",
    "stability": "stable",
    "type": "object",
    "definitions": {
        "id": {
            "$ref": "../definitions.json#/definitions/setting_id"
        },
        "name": {
            "description": "Name",
            "example": "Default Site",
            "type": "string",
            "minLength": 2,
            "maxLength": 100
        },
        "description": {
            "description": "Description",
            "example": "Default Site",
            "type": "string",
            "minLength": 2,
            "maxLength": 255
        },
        "value": {
            "description": "Value",
            "example": "404",
            "type": "string",
            "maxLength": 255
        },
        "meta": {
            "type": "object"
        }
    },
    "links": [
        {
            "title": "List",
            "description": "Returns a list of Settings",
            "href": "/settings",
            "access": "private",
            "method": "GET",
            "rel": "self",
            "http_header": {
                "$ref": "../examples.json#/definitions/auth_header"
            },
            "targetSchema": {
                "type": "array",
                "items": {
                    "$ref": "#/properties"
                }
            }
        },
        {
            "title": "Update",
            "description": "Updates a existing Setting",
            "href": "/settings/{definitions.identity.example}",
            "access": "private",
            "method": "PUT",
            "rel": "update",
            "http_header": {
                "$ref": "../examples.json#/definitions/auth_header"
            },
            "schema": {
                "type": "object",
                "properties": {
                    "value": {
                        "$ref": "#/definitions/value"
                    },
                    "meta": {
                        "$ref": "#/definitions/meta"
                    }
                }
            },
            "targetSchema": {
                "properties": {
                    "$ref": "#/properties"
                }
            }
        }
    ],
    "properties": {
        "id": {
            "$ref": "#/definitions/id"
        },
        "name": {
            "$ref": "#/definitions/name"
        },
        "description": {
            "$ref": "#/definitions/description"
        },
        "value": {
            "$ref": "#/definitions/value"
        },
        "meta": {
            "$ref": "#/definitions/meta"
        }
    }
}
@ -35,6 +35,9 @@
|
||||
"udp_forwarding": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "../definitions.json#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"type": "object"
|
||||
}
|
||||
@ -64,6 +67,9 @@
|
||||
"udp_forwarding": {
|
||||
"$ref": "#/definitions/udp_forwarding"
|
||||
},
|
||||
"enabled": {
|
||||
"$ref": "#/definitions/enabled"
|
||||
},
|
||||
"meta": {
|
||||
"$ref": "#/definitions/meta"
|
||||
}
|
||||
@ -184,6 +190,34 @@
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Enable",
|
||||
"description": "Enables a existing Stream",
|
||||
"href": "/nginx/streams/{definitions.identity.example}/enable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
},
|
||||
{
|
||||
"title": "Disable",
|
||||
"description": "Disables a existing Stream",
|
||||
"href": "/nginx/streams/{definitions.identity.example}/disable",
|
||||
"access": "private",
|
||||
"method": "POST",
|
||||
"rel": "update",
|
||||
"http_header": {
|
||||
"$ref": "../examples.json#/definitions/auth_header"
|
||||
},
|
||||
"targetSchema": {
|
||||
"type": "boolean"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
@ -34,6 +34,9 @@
        },
        "access-lists": {
            "$ref": "endpoints/access-lists.json"
        },
        "settings": {
            "$ref": "endpoints/settings.json"
        }
    }
}
@ -1,5 +1,3 @@
-'use strict';
-
const fs      = require('fs');
const NodeRSA = require('node-rsa');
const config  = require('config');
@ -7,7 +5,7 @@ const logger = require('./logger').setup;
const userModel           = require('./models/user');
const userPermissionModel = require('./models/user_permission');
const authModel           = require('./models/auth');
-const debug_mode          = process.env.NODE_ENV !== 'production';
+const debug_mode          = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;

module.exports = function () {
    return new Promise((resolve, reject) => {
8
src/backend/templates/_hsts.conf
Normal file
@ -0,0 +1,8 @@
{% if certificate and certificate_id > 0 -%}
{% if ssl_forced == 1 or ssl_forced == true %}
{% if hsts_enabled == 1 or hsts_enabled == true %}
# HSTS (ngx_http_headers_module is required) (31536000 seconds = 1 year)
add_header Strict-Transport-Security "max-age=31536000;{% if hsts_subdomains == 1 or hsts_subdomains == true -%} includeSubDomains;{% endif %} preload" always;
{% endif %}
{% endif %}
{% endif %}
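For reference: when a host has a certificate attached and both ssl_forced and hsts_enabled are set, the template above should render a single directive along the lines of add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always; with the includeSubDomains token present only when hsts_subdomains is also set. This follows directly from the template text, not from any other file in the diff.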
Some files were not shown because too many files have changed in this diff.