Compare commits

...

58 Commits

SHA1 Message Date
06272d3d2c Use correct var when returning updated certificate 2019-05-09 10:03:41 +10:00
3885c0ad6d Add cert renewals to audit log 2019-05-09 09:20:49 +10:00
099ec00155 Don't use LE staging when debug mode is on in production 2019-05-09 08:58:10 +10:00
92fcae9c54 Added missing dialog for renewing certs 2019-05-08 15:34:14 +10:00
22e8961c80 Fixes #104 - allow using / location in custom location 2019-05-08 15:33:54 +10:00
4d5adefa41 Added ability to force renew a LE cert, and also fix revoking certs 2019-05-08 15:25:48 +10:00
feaa0e51bd Removed use strict 2019-05-08 15:24:57 +10:00
af83cb57d0 Merge branch 'develop' of github.com:jc21/nginx-proxy-manager into develop 2019-05-08 11:23:50 +10:00
8b4f3507c3 Revert to previous tabler version to hopefully fix ui issues 2019-05-08 11:21:59 +10:00
bda3dba369 Revert to previous tabler version to hopefully fix ui issues 2019-05-08 10:53:44 +10:00
beb313af40 Merge branch 'master' of github.com:jc21/nginx-proxy-manager into develop 2019-05-08 10:11:32 +10:00
4fad9d672f Correcting X-XSS-Protection Header (#136)
* Correcting X-XSS-Protection Header

X-XSS-Protection sets the configuration for the cross-site scripting filters built into most browsers. The best configuration is "X-XSS-Protection: 1; mode=block".

Was "0"
Now "1; mode=block"

* Update issue templates
2019-05-08 10:11:05 +10:00
0fca64929e Try DNS challenge in addition to http (#85) 2019-05-08 10:07:43 +10:00
9e476e5b24 Only Secure TLS Ciphers & Protocols (#134)
Disable insecure SSL/TLS ciphers & protocols. Only TLS_1.2 and TLS_1.3 should be enabled.
2019-05-08 10:01:08 +10:00
0819a265f5 Bumped version 2019-05-08 09:50:20 +10:00
ad8eac4f07 Update issue templates 2019-05-08 09:36:44 +10:00
b49de0e23e Enable TLS 1.3 by default 2019-05-02 13:03:16 +10:00
efbd024da9 Update copyright year (#121)
Updated the year in the copyright statement in the footer
2019-04-20 21:28:06 +10:00
e7ddcb91fc Fixed directory traversal vulnerability. (#114)
Awesome find!
2019-04-03 08:37:40 +10:00
3095cff7d9 Forgot to change dockerfiles to match CI names 2019-03-19 07:01:12 +10:00
6d8f5aa3a7 Version bump 2019-03-15 07:49:08 +10:00
27a06850ff CI Docker manifest improvements 2019-03-15 07:49:08 +10:00
dce6423c85 Fixes #103 - Allow for longer domain names 2019-03-15 07:49:08 +10:00
d79fcbf447 This commit resolves #98 so custom location can forward to custom path. (#99)
Awesome work!
2019-03-11 13:52:09 +10:00
631d9ae4eb CI Changes, docker image tag changes and manifests 2019-03-07 09:45:01 +10:00
0ac349ba67 CI Changes, docker image tag changes and manifests 2019-03-07 09:45:01 +10:00
1b0563a4a6 CI Changes, docker image tag changes and manifests 2019-03-07 09:45:01 +10:00
1db2a29d49 CI Changes, docker image tag changes and manifests 2019-03-07 09:45:01 +10:00
14e62a0830 CI Changes, docker image tag changes and manifests 2019-03-07 09:45:01 +10:00
2280a61c2b CI Fix for arm64 2019-03-05 08:25:12 +10:00
f3e6f64c0c Version Bump 2019-03-05 08:25:12 +10:00
d04b7a0d88 Bug fixes 2019-03-05 08:25:12 +10:00
71dfd5d8f8 Feature/custom locations (#74)
* New feature: custom locations

* Custom locations: exteding config generator

* Custom locations: refactoring

* Fixing proxy_host table on small screens

* Custom locations: translations

* Custom locations bugfix

* Custom locations bugfix

* PR #74 fixes
2019-03-05 08:21:02 +10:00
133d66c2fe Default Site customisation and new Settings space (#91) 2019-03-04 21:19:36 +10:00
6f1d38a0e2 Fixes #88 - Allow specifying X-FRAME-OPTIONS with an environment variable (#89) 2019-03-04 10:16:46 +10:00
aad9ecde6b CI: Prevent having to spin up resources when not Master branch 2019-03-01 20:12:49 +10:00
ae9324295c Merge branch 'develop' of github.com:jc21/nginx-proxy-manager into develop 2019-03-01 13:48:30 +10:00
0acec1105b CI: Prevent having to spin up resources when not Master branch 2019-03-01 20:12:49 +10:00
5a9a716ca6 CI: Prevent having to spin up resources when not Master branch 2019-03-01 13:47:49 +10:00
418899d425 Version bump 2019-02-27 17:52:30 +10:00
e7379e3683 Ignore default location when defined in advanced config (#79) 2019-02-25 10:42:16 +10:00
29bebcc73e Ignore default location when defined in advanced config (#79) 2019-02-25 10:34:55 +10:00
26064b20b8 Fix PR docker image pushing to wrong repo 2019-02-20 08:25:12 +10:00
3dc9b20543 CI Improvements (#77) 2019-02-20 14:35:10 +10:00
444dbd5160 Added PR build steps to CI 2019-02-20 08:25:12 +10:00
c2f99e253c Merge branch 'master' of github.com:jc21/nginx-proxy-manager 2019-02-20 10:04:16 +10:00
5c7fb7b698 Added armv6 Dockerfile 2019-02-20 08:25:12 +10:00
733d7d9583 Update DOCKERHUB.md 2019-02-19 17:05:26 +10:00
6d2f532806 Updated arm instructions 2019-02-18 21:14:26 +10:00
f76c9226c8 Fix workdir perms for subsequent builds 2019-02-18 21:14:26 +10:00
ecbc41b622 Arm64 build process doesn't run as root 2019-02-18 21:14:26 +10:00
4f60d3e7df Fix CI now that tags are changes 2019-02-18 21:14:26 +10:00
7d86fd223e Fix base docker images for arm packages 2019-02-18 21:12:46 +10:00
e3ed216a70 Added arm64 build 2019-02-18 21:12:41 +10:00
2a3d792591 Fixes #68 - HSTS is now part of the UI 2019-02-18 18:21:45 +10:00
4d754275ab Fixes #61 - Http/2 support can now be disabled 2019-02-18 15:33:32 +10:00
44e5f0957c Whoops, missing comma 2019-01-16 10:12:10 +10:00
83ef426b93 Increased custom ssl file size limits 2019-01-16 10:11:51 +10:00
185 changed files with 2439 additions and 602 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file, 36 lines)

@ -0,0 +1,36 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Checklist**
- Have you pulled and found the error with `jc21/nginx-proxy-manager:latest` docker image?
- Are you sure you're not using someone else's docker image?
- If having problems with Let's Encrypt, have you made absolutely sure your site is accessible from outside of your network?
**Describe the bug**
- A clear and concise description of what the bug is.
- What version of Nginx Proxy Manager is reported on the login page?
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Operating System**
- Please specify if using an RPi, Mac, orchestration tool or any other setup that might affect the reproduction of this error.
**Additional context**
Add any other context about the problem here, such as the Docker version or browser version if applicable. Too much info is better than too little.

View File

@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

Dockerfile.arm64 (new file, 38 lines)

@ -0,0 +1,38 @@
FROM jc21/nginx-proxy-manager-base:arm64
MAINTAINER Jamie Curnow <jc@jc21.com>
LABEL maintainer="Jamie Curnow <jc@jc21.com>"
ENV SUPPRESS_NO_CONFIG_WARNING=1
ENV S6_FIX_ATTRS_HIDDEN=1
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf
# Nginx, Node and required packages should already be installed from the base image
# root filesystem
COPY rootfs /
# s6 overlay
RUN curl -L -o /tmp/s6-overlay-aarch64.tar.gz "https://github.com/just-containers/s6-overlay/releases/download/v1.21.8.0/s6-overlay-aarch64.tar.gz" \
&& tar xzf /tmp/s6-overlay-aarch64.tar.gz -C /
# App
ENV NODE_ENV=production
ADD dist /app/dist
ADD node_modules /app/node_modules
ADD src/backend /app/src/backend
ADD package.json /app/package.json
ADD knexfile.js /app/knexfile.js
# Volumes
VOLUME [ "/data", "/etc/letsencrypt" ]
CMD [ "/init" ]
# Ports
EXPOSE 80
EXPOSE 81
EXPOSE 443
EXPOSE 9876
HEALTHCHECK --interval=15s --timeout=3s CMD curl -f http://localhost:9876/health || exit 1

Dockerfile.armv6l (new file, 38 lines)

@ -0,0 +1,38 @@
FROM jc21/nginx-proxy-manager-base:armv6
MAINTAINER Jamie Curnow <jc@jc21.com>
LABEL maintainer="Jamie Curnow <jc@jc21.com>"
ENV SUPPRESS_NO_CONFIG_WARNING=1
ENV S6_FIX_ATTRS_HIDDEN=1
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf
# Nginx, Node and required packages should already be installed from the base image
# root filesystem
COPY rootfs /
# s6 overlay
RUN curl -L -o /tmp/s6-overlay-arm.tar.gz "https://github.com/just-containers/s6-overlay/releases/download/v1.21.8.0/s6-overlay-arm.tar.gz" \
&& tar xzf /tmp/s6-overlay-arm.tar.gz -C /
# App
ENV NODE_ENV=production
ADD dist /app/dist
ADD node_modules /app/node_modules
ADD src/backend /app/src/backend
ADD package.json /app/package.json
ADD knexfile.js /app/knexfile.js
# Volumes
VOLUME [ "/data", "/etc/letsencrypt" ]
CMD [ "/init" ]
# Ports
EXPOSE 80
EXPOSE 81
EXPOSE 443
EXPOSE 9876
HEALTHCHECK --interval=15s --timeout=3s CMD curl -f http://localhost:9876/health || exit 1

View File

@ -1,4 +1,4 @@
FROM jc21/nginx-proxy-manager-base:latest-armhf
FROM jc21/nginx-proxy-manager-base:armhf
MAINTAINER Jamie Curnow <jc@jc21.com>
LABEL maintainer="Jamie Curnow <jc@jc21.com>"

Jenkinsfile (362 lines changed)

@ -5,17 +5,48 @@ pipeline {
}
agent any
environment {
IMAGE_NAME = "nginx-proxy-manager"
BASE_IMAGE_NAME = "jc21/nginx-proxy-manager-base:latest"
TEMP_IMAGE_NAME = "nginx-proxy-manager-build_${BUILD_NUMBER}"
TEMP_IMAGE_NAME_ARM = "nginx-proxy-manager-arm-build_${BUILD_NUMBER}"
TAG_VERSION = getPackageVersion()
MAJOR_VERSION = "2"
IMAGE = "nginx-proxy-manager"
BASE_IMAGE = "jc21/${IMAGE}-base"
TEMP_IMAGE = "${IMAGE}-build_${BUILD_NUMBER}"
TAG_VERSION = getPackageVersion()
MAJOR_VERSION = "2"
BRANCH_LOWER = "${BRANCH_NAME.toLowerCase()}"
// Architectures:
AMD64_TAG = "amd64"
ARMV6_TAG = "armv6l"
ARMV7_TAG = "armv7l"
ARM64_TAG = "arm64"
}
stages {
stage('Prepare') {
stage('Build PR') {
when {
changeRequest()
}
steps {
sh 'docker pull $DOCKER_CI_TOOLS'
ansiColor('xterm') {
// Codebase
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
sh 'rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} node-prune'
// Docker Build
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${AMD64_TAG} .'
// Dockerhub
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG}'
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '${dpass}'"
sh 'docker push docker.io/jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG}'
}
sh 'docker rmi ${TEMP_IMAGE}-${AMD64_TAG}'
script {
def comment = pullRequest.comment("Docker Image for build ${BUILD_NUMBER} is available on [DockerHub](https://cloud.docker.com/repository/docker/jc21/${IMAGE}) as `jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG}`")
}
}
}
}
stage('Build Develop') {
@ -25,125 +56,289 @@ pipeline {
steps {
ansiColor('xterm') {
// Codebase
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME npm run-script build'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
sh 'rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install --prod'
sh 'docker run --rm -v $(pwd):/data $DOCKER_CI_TOOLS node-prune'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} node-prune'
// Docker Build
sh 'docker build --pull --no-cache --squash --compress -t $TEMP_IMAGE_NAME .'
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${AMD64_TAG} .'
// Dockerhub
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:develop'
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:develop-${AMD64_TAG}'
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '$dpass'"
sh 'docker push docker.io/jc21/$IMAGE_NAME:develop'
sh "docker login -u '${duser}' -p '${dpass}'"
sh 'docker push docker.io/jc21/${IMAGE}:develop-${AMD64_TAG}'
}
// Private Registry
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:develop'
withCredentials([usernamePassword(credentialsId: 'jc21-private-registry', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '$dpass' $DOCKER_PRIVATE_REGISTRY"
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:develop'
}
sh 'docker rmi $TEMP_IMAGE_NAME'
sh 'docker rmi ${TEMP_IMAGE}-${AMD64_TAG}'
}
}
}
stage('Build Master') {
when {
branch 'master'
}
parallel {
stage('x86_64') {
when {
branch 'master'
// ========================
// amd64
// ========================
stage('amd64') {
agent {
label 'amd64'
}
steps {
ansiColor('xterm') {
// Codebase
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME npm run-script build'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
sh 'rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME yarn install --prod'
sh 'docker run --rm -v $(pwd):/data $DOCKER_CI_TOOLS node-prune'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
sh 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} node-prune'
// Docker Build
sh 'docker build --pull --no-cache --squash --compress -t $TEMP_IMAGE_NAME .'
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${AMD64_TAG} .'
// Dockerhub
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:$TAG_VERSION'
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION'
sh 'docker tag $TEMP_IMAGE_NAME docker.io/jc21/$IMAGE_NAME:latest'
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${AMD64_TAG} docker.io/jc21/${IMAGE}:latest-${AMD64_TAG}'
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '$dpass'"
sh 'docker push docker.io/jc21/$IMAGE_NAME:$TAG_VERSION'
sh 'docker push docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION'
sh 'docker push docker.io/jc21/$IMAGE_NAME:latest'
sh "docker login -u '${duser}' -p '${dpass}'"
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:latest-${AMD64_TAG}'
}
// Private Registry
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION'
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION'
sh 'docker tag $TEMP_IMAGE_NAME $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest'
withCredentials([usernamePassword(credentialsId: 'jc21-private-registry', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '$dpass' $DOCKER_PRIVATE_REGISTRY"
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION'
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION'
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest'
}
sh 'docker rmi $TEMP_IMAGE_NAME'
sh 'docker rmi ${TEMP_IMAGE}-${AMD64_TAG}'
}
}
}
stage('armhf') {
when {
branch 'master'
}
// ========================
// arm64
// ========================
stage('arm64') {
agent {
label 'armhf'
label 'arm64'
}
steps {
ansiColor('xterm') {
// Codebase
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME-armhf yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME-armhf npm run-script build'
sh 'rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app $BASE_IMAGE_NAME-armhf yarn install --prod'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
sh 'sudo rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
// Docker Build
sh 'docker build --pull --no-cache --squash --compress -t $TEMP_IMAGE_NAME_ARM -f Dockerfile.armhf .'
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${ARM64_TAG} -f Dockerfile.${ARM64_TAG} .'
// Dockerhub
sh 'docker tag $TEMP_IMAGE_NAME_ARM docker.io/jc21/$IMAGE_NAME:$TAG_VERSION-armhf'
sh 'docker tag $TEMP_IMAGE_NAME_ARM docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION-armhf'
sh 'docker tag $TEMP_IMAGE_NAME_ARM docker.io/jc21/$IMAGE_NAME:latest-armhf'
sh 'docker tag ${TEMP_IMAGE}-${ARM64_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${ARM64_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${ARM64_TAG} docker.io/jc21/${IMAGE}:latest-${ARM64_TAG}'
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '$dpass'"
sh 'docker push docker.io/jc21/$IMAGE_NAME:$TAG_VERSION-armhf'
sh 'docker push docker.io/jc21/$IMAGE_NAME:$MAJOR_VERSION-armhf'
sh 'docker push docker.io/jc21/$IMAGE_NAME:latest-armhf'
sh "docker login -u '${duser}' -p '${dpass}'"
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:latest-${ARM64_TAG}'
}
// Private Registry
sh 'docker tag $TEMP_IMAGE_NAME_ARM $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION-armhf'
sh 'docker tag $TEMP_IMAGE_NAME_ARM $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION-armhf'
sh 'docker tag $TEMP_IMAGE_NAME_ARM $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest-armhf'
withCredentials([usernamePassword(credentialsId: 'jc21-private-registry', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '$dpass' $DOCKER_PRIVATE_REGISTRY"
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$TAG_VERSION-armhf'
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:$MAJOR_VERSION-armhf'
sh 'docker push $DOCKER_PRIVATE_REGISTRY/$IMAGE_NAME:latest-armhf'
}
sh 'docker rmi $TEMP_IMAGE_NAME_ARM'
sh 'docker rmi ${TEMP_IMAGE}-${ARM64_TAG}'
}
}
}
// ========================
// armv7l
// ========================
stage('armv7l') {
agent {
label 'armv7l'
}
steps {
ansiColor('xterm') {
// Codebase
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
sh 'rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
// Docker Build
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${ARMV7_TAG} -f Dockerfile.${ARMV7_TAG} .'
// Dockerhub
sh 'docker tag ${TEMP_IMAGE}-${ARMV7_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${ARMV7_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${ARMV7_TAG} docker.io/jc21/${IMAGE}:latest-${ARMV7_TAG}'
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '${dpass}'"
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:latest-${ARMV7_TAG}'
}
sh 'docker rmi ${TEMP_IMAGE}-${ARMV7_TAG}'
}
}
}
// ========================
// armv6l - Disabled for the time being
// ========================
/*
stage('armv6l') {
agent {
label 'armv6l'
}
steps {
ansiColor('xterm') {
// Codebase
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} npm run-script build'
sh 'rm -rf node_modules'
sh 'docker run --rm -v $(pwd):/app -w /app ${BASE_IMAGE} yarn install --prod'
// Docker Build
sh 'docker build --pull --no-cache --squash --compress -t ${TEMP_IMAGE}-${ARMV6_TAG} -f Dockerfile.${ARMV6_TAG} .'
// Dockerhub
sh 'docker tag ${TEMP_IMAGE}-${ARMV6_TAG} docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${ARMV6_TAG} docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG}'
sh 'docker tag ${TEMP_IMAGE}-${ARMV6_TAG} docker.io/jc21/${IMAGE}:latest-${ARMV6_TAG}'
withCredentials([usernamePassword(credentialsId: 'jc21-dockerhub', passwordVariable: 'dpass', usernameVariable: 'duser')]) {
sh "docker login -u '${duser}' -p '${dpass}'"
sh 'docker push docker.io/jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG}'
sh 'docker push docker.io/jc21/${IMAGE}:latest-${ARMV6_TAG}'
}
sh 'docker rmi ${TEMP_IMAGE}-${ARMV6_TAG}'
}
}
}
*/
}
}
// ========================
// latest manifest
// ========================
stage('Latest Manifest') {
when {
branch 'master'
}
steps {
ansiColor('xterm') {
// =======================
// latest
// =======================
sh 'docker pull jc21/${IMAGE}:latest-${AMD64_TAG}'
sh 'docker pull jc21/${IMAGE}:latest-${ARM64_TAG}'
sh 'docker pull jc21/${IMAGE}:latest-${ARMV7_TAG}'
//sh 'docker pull jc21/${IMAGE}:latest-${ARMV6_TAG}'
sh 'docker manifest push --purge jc21/${IMAGE}:latest || echo ""'
sh 'docker manifest create jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${AMD64_TAG} jc21/${IMAGE}:latest-${ARM64_TAG} jc21/${IMAGE}:latest-${ARMV7_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${AMD64_TAG} --arch ${AMD64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
//sh 'docker manifest annotate jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
sh 'docker manifest push --purge jc21/${IMAGE}:latest'
// =======================
// major version
// =======================
sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG}'
sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG}'
sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
//sh 'docker pull jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG}'
sh 'docker manifest push --purge jc21/${IMAGE}:${MAJOR_VERSION} || echo ""'
sh 'docker manifest create jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG} --arch ${AMD64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
//sh 'docker manifest annotate jc21/${IMAGE}:${MAJOR_VERSION} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
// =======================
// version
// =======================
sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG}'
sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG}'
sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
//sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
sh 'docker manifest push --purge jc21/${IMAGE}:${TAG_VERSION} || echo ""'
sh 'docker manifest create jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG} --arch ${AMD64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
//sh 'docker manifest annotate jc21/${IMAGE}:${TAG_VERSION} jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
}
}
}
// ========================
// develop
// ========================
stage('Develop Manifest') {
when {
branch 'develop'
}
steps {
ansiColor('xterm') {
sh 'docker pull jc21/${IMAGE}:develop-${AMD64_TAG}'
//sh 'docker pull jc21/${IMAGE}:develop-${ARM64_TAG}'
//sh 'docker pull jc21/${IMAGE}:develop-${ARMV7_TAG}'
//sh 'docker pull jc21/${IMAGE}:${TAG_VERSION}-${ARMV6_TAG}'
sh 'docker manifest push --purge jc21/${IMAGE}:develop || :'
sh 'docker manifest create jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${AMD64_TAG}'
sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${AMD64_TAG} --arch ${AMD64_TAG}'
//sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${ARM64_TAG} --os linux --arch ${ARM64_TAG}'
//sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${ARMV7_TAG} --os linux --arch arm --variant ${ARMV7_TAG}'
//sh 'docker manifest annotate jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${ARMV6_TAG} --os linux --arch arm --variant ${ARMV6_TAG}'
}
}
}
// ========================
// cleanup
// ========================
stage('Latest Cleanup') {
when {
branch 'master'
}
steps {
ansiColor('xterm') {
sh 'docker rmi jc21/${IMAGE}:latest jc21/${IMAGE}:latest-${AMD64_TAG} jc21/${IMAGE}:latest-${ARM64_TAG} jc21/${IMAGE}:latest-${ARMV7_TAG} || echo ""'
sh 'docker rmi jc21/${IMAGE}:${MAJOR_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${MAJOR_VERSION}-${ARMV7_TAG} || echo ""'
sh 'docker rmi jc21/${IMAGE}:${TAG_VERSION}-${AMD64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARM64_TAG} jc21/${IMAGE}:${TAG_VERSION}-${ARMV7_TAG} || echo ""'
}
}
}
stage('Develop Cleanup') {
when {
branch 'develop'
}
steps {
ansiColor('xterm') {
sh 'docker rmi jc21/${IMAGE}:develop jc21/${IMAGE}:develop-${AMD64_TAG} || echo ""'
}
}
}
stage('PR Cleanup') {
when {
changeRequest()
}
steps {
ansiColor('xterm') {
sh 'docker rmi jc21/${IMAGE}:github-${BRANCH_LOWER}-${AMD64_TAG} || echo ""'
}
}
}
}
@ -160,7 +355,6 @@ pipeline {
}
def getPackageVersion() {
ver = sh(script: 'docker run --rm -v $(pwd):/data $DOCKER_CI_TOOLS bash -c "cat /data/package.json|jq -r \'.version\'"', returnStdout: true)
ver = sh(script: 'docker run --rm -v $(pwd):/data ${DOCKER_CI_TOOLS} bash -c "cat /data/package.json|jq -r \'.version\'"', returnStdout: true)
return ver.trim()
}

View File

@ -2,7 +2,7 @@
# Nginx Proxy Manager
![Version](https://img.shields.io/badge/version-2.0.8-green.svg?style=for-the-badge)
![Version](https://img.shields.io/badge/version-2.0.13-green.svg?style=for-the-badge)
![Stars](https://img.shields.io/docker/stars/jc21/nginx-proxy-manager.svg?style=for-the-badge)
![Pulls](https://img.shields.io/docker/pulls/jc21/nginx-proxy-manager.svg?style=for-the-badge)
@ -57,24 +57,6 @@ Please consult the [installation instructions](doc/INSTALL.md) for a complete guide
if you just want to get up and running in the quickest time possible, grab all the files in the `doc/example/` folder and run `docker-compose up -d`
## Importing from Version 1?
Here's a [guide for you to migrate your configuration](doc/IMPORTING.md). You should definitely read the [installation instructions](doc/INSTALL.md) first though.
**Why should I?**
Version 2 has the following improvements:
- Management security and multiple user access
- User permissions and visibility
- Custom SSL certificate support
- Audit log of changes
- Broken nginx config detection
- Multiple domains in Let's Encrypt certificates
- Wildcard domain name support (not available with a Let's Encrypt certificate though)
- It's super sexy
## Administration
When your docker container is running, connect to it on port `81` for the admin interface.

View File

@ -2,7 +2,7 @@
# Nginx Proxy Manager
![Version](https://img.shields.io/badge/version-2.0.8-green.svg?style=for-the-badge)
![Version](https://img.shields.io/badge/version-2.0.13-green.svg?style=for-the-badge)
![Stars](https://img.shields.io/docker/stars/jc21/nginx-proxy-manager.svg?style=for-the-badge)
![Pulls](https://img.shields.io/docker/pulls/jc21/nginx-proxy-manager.svg?style=for-the-badge)
@ -16,6 +16,7 @@ running at home or otherwise, including free SSL, without having to know too much
* latest 2, 2.x.x ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile))
* latest-armhf, 2-armhf, 2.x.x-armhf ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile.armhf))
* latest-arm64, 2-arm64, 2.x.x-arm64 ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/master/Dockerfile.arm64))
* 1, 1.x.x ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/1.1.2/Dockerfile))
* 1-armhf, 1.x.x-armhf ([Dockerfile](https://github.com/jc21/nginx-proxy-manager/blob/1.1.2/Dockerfile.armhf))

View File

@ -44,7 +44,7 @@ It's easy to use another docker container for your database also and link it as
version: "3"
services:
app:
image: jc21/nginx-proxy-manager:2
image: jc21/nginx-proxy-manager:latest
restart: always
ports:
- 80:80
@ -77,7 +77,7 @@ Via `docker-compose`:
version: "3"
services:
app:
image: jc21/nginx-proxy-manager:2
image: jc21/nginx-proxy-manager:latest
restart: always
ports:
- 80:80
@ -100,15 +100,17 @@ docker run -d \
-v /path/to/config.json:/app/config/production.json \
-v /path/to/data:/data \
-v /path/to/letsencrypt:/etc/letsencrypt \
jc21/nginx-proxy-manager:2
jc21/nginx-proxy-manager:latest
```
### Running on Raspberry PI / `armhf`
I have created a `armhf` docker container just for you. There may be issues with it,
I have created `armhf` and `arm64` docker containers just for you. There may be issues with them,
if you have issues please report them here.
Note: Rpi v2 and below won't work with these images.
```bash
docker run -d \
--name nginx-proxy-manager-app \
@ -118,7 +120,7 @@ docker run -d \
-v /path/to/config.json:/app/config/production.json \
-v /path/to/data:/data \
-v /path/to/letsencrypt:/etc/letsencrypt \
jc21/nginx-proxy-manager:2-armhf
jc21/nginx-proxy-manager:latest-armhf
```
@ -141,3 +143,23 @@ Password: changeme
```
Immediately after logging in with this default user you will be asked to modify your details and change your password.
### Advanced Options
#### X-FRAME-OPTIONS Header
You can configure the [`X-FRAME-OPTIONS`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options) header
value by specifying it as a Docker environment variable. The default if not specified is `deny`.
```yml
...
environment:
X_FRAME_OPTIONS: "sameorigin"
...
```
```
... -e "X_FRAME_OPTIONS=sameorigin" ...
```
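For context, the `src/backend/app.js` hunk further down in this compare view is what consumes this variable. A minimal sketch of that middleware, using an illustrative Express `app` instance and the header values introduced in this changeset:

```js
const express = require('express');
const app = express();

// Security headers middleware: X-Frame-Options can now be overridden via the
// X_FRAME_OPTIONS environment variable, falling back to the previous hard-coded DENY.
app.use((req, res, next) => {
    let x_frame_options = 'DENY';

    if (typeof process.env.X_FRAME_OPTIONS !== 'undefined' && process.env.X_FRAME_OPTIONS) {
        x_frame_options = process.env.X_FRAME_OPTIONS;
    }

    res.set({
        'X-XSS-Protection':       '1; mode=block', // was '0' before PR #136
        'X-Content-Type-Options': 'nosniff',
        'X-Frame-Options':        x_frame_options
    });

    next();
});
```

Running the container with `-e "X_FRAME_OPTIONS=sameorigin"` therefore results in `X-Frame-Options: sameorigin` on the responses served by the admin backend.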

View File

@ -1,6 +1,6 @@
{
"name": "nginx-proxy-manager",
"version": "2.0.8",
"version": "2.0.13",
"description": "A beautiful interface for creating Nginx endpoints",
"main": "src/backend/index.js",
"devDependencies": {
@ -28,7 +28,7 @@
"numeral": "^2.0.6",
"sass-loader": "^7.0.3",
"style-loader": "^0.22.1",
"tabler-ui": "git+https://github.com/tabler/tabler.git",
"tabler-ui": "git+https://github.com/tabler/tabler.git#00f78ad823311bc3ad974ac3e5b0126198f0a813",
"underscore": "^1.8.3",
"webpack": "^4.25.1",
"webpack-cli": "^3.1.2",

View File

@ -22,10 +22,10 @@ server {
}
}
# Default 80 Host, which shows a "You are not configured" page
# "You are not configured" page, which is the default if another default doesn't exist
server {
listen 80 default;
server_name localhost;
listen 80;
server_name localhost-nginx-proxy-manager;
access_log /data/logs/default.log proxy;
@ -38,9 +38,9 @@ server {
}
}
# Default 443 Host
# First 443 Host, which is the default if another default doesn't exist
server {
listen 443 ssl default;
listen 443 ssl;
server_name localhost;
access_log /data/logs/default.log proxy;

View File

@ -2,11 +2,8 @@ ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:AES256+EECDH:AES256+EDH:EDH+AESGCM:ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES';
ssl_prefer_server_ciphers on;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;

View File

@ -19,25 +19,26 @@ events {
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
server_tokens off;
tcp_nopush on;
tcp_nodelay on;
client_body_temp_path /tmp/nginx/body 1 2;
keepalive_timeout 65;
ssl_prefer_server_ciphers on;
gzip on;
proxy_ignore_client_abort off;
client_max_body_size 2000m;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Accept-Encoding "";
proxy_cache off;
proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
server_tokens off;
tcp_nopush on;
tcp_nodelay on;
client_body_temp_path /tmp/nginx/body 1 2;
keepalive_timeout 65;
ssl_prefer_server_ciphers on;
gzip on;
proxy_ignore_client_abort off;
client_max_body_size 2000m;
server_names_hash_bucket_size 64;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Accept-Encoding "";
proxy_cache off;
proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;
# MISS
# BYPASS
@ -70,6 +71,7 @@ http {
# Files generated by NPM
include /etc/nginx/conf.d/*.conf;
include /data/nginx/default_host/*.conf;
include /data/nginx/proxy_host/*.conf;
include /data/nginx/redirection_host/*.conf;
include /data/nginx/dead_host/*.conf;

View File

@ -7,6 +7,8 @@ mkdir -p /tmp/nginx/body \
/data/custom_ssl \
/data/logs \
/data/access \
/data/nginx/default_host \
/data/nginx/default_www \
/data/nginx/proxy_host \
/data/nginx/redirection_host \
/data/nginx/stream \

View File

@ -1,5 +1,3 @@
'use strict';
const path = require('path');
const express = require('express');
const bodyParser = require('body-parser');
@ -40,11 +38,17 @@ app.use(require('./lib/express/cors'));
// General security/cache related headers + server header
app.use(function (req, res, next) {
let x_frame_options = 'DENY';
if (typeof process.env.X_FRAME_OPTIONS !== 'undefined' && process.env.X_FRAME_OPTIONS) {
x_frame_options = process.env.X_FRAME_OPTIONS;
}
res.set({
'Strict-Transport-Security': 'includeSubDomains; max-age=631138519; preload',
'X-XSS-Protection': '0',
'X-XSS-Protection': '1; mode=block',
'X-Content-Type-Options': 'nosniff',
'X-Frame-Options': 'DENY',
'X-Frame-Options': x_frame_options,
'Cache-Control': 'no-cache, no-store, max-age=0, must-revalidate',
Pragma: 'no-cache',
Expires: 0

View File

@ -1,5 +1,3 @@
'use strict';
const config = require('config');
if (!config.has('database')) {

View File

@ -1,10 +1,8 @@
'use strict';
const fs = require('fs');
const logger = require('./logger').import;
const utils = require('./lib/utils');
const batchflow = require('batchflow');
const debug_mode = process.env.NODE_ENV !== 'production';
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
const internalProxyHost = require('./internal/proxy-host');
const internalRedirectionHost = require('./internal/redirection-host');

View File

@ -1,7 +1,5 @@
#!/usr/bin/env node
'use strict';
const logger = require('./logger').global;
function appStart () {

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const fs = require('fs');
const batchflow = require('batchflow');

View File

@ -1,5 +1,3 @@
'use strict';
const error = require('../lib/error');
const auditLogModel = require('../models/audit-log');
@ -46,9 +44,9 @@ const internalAuditLog = {
* @param {Access} access
* @param {Object} data
* @param {String} data.action
* @param {Integer} [data.user_id]
* @param {Integer} [data.object_id]
* @param {Integer} [data.object_type]
* @param {Number} [data.user_id]
* @param {Number} [data.object_id]
* @param {Number} [data.object_type]
* @param {Object} [data.meta]
* @returns {Promise}
*/

View File

@ -1,5 +1,3 @@
'use strict';
const fs = require('fs');
const _ = require('lodash');
const logger = require('../logger').ssl;
@ -9,19 +7,20 @@ const internalAuditLog = require('./audit-log');
const tempWrite = require('temp-write');
const utils = require('../lib/utils');
const moment = require('moment');
const debug_mode = process.env.NODE_ENV !== 'production';
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
const le_staging = process.env.NODE_ENV !== 'production';
const internalNginx = require('./nginx');
const internalHost = require('./host');
const certbot_command = '/usr/bin/certbot';
function omissions () {
function omissions() {
return ['is_deleted'];
}
const internalCertificate = {
allowed_ssl_files: ['certificate', 'certificate_key', 'intermediate_certificate'],
interval_timeout: 1000 * 60 * 60 * 12, // 12 hours
interval_timeout: 1000 * 60 * 60, // 1 hour
interval: null,
interval_processing: false,
@ -38,7 +37,7 @@ const internalCertificate = {
internalCertificate.interval_processing = true;
logger.info('Renewing SSL certs close to expiry...');
return utils.exec(certbot_command + ' renew -q ' + (debug_mode ? '--staging' : ''))
return utils.exec(certbot_command + ' renew -q ' + (le_staging ? '--staging' : ''))
.then(result => {
logger.info(result);
@ -205,7 +204,7 @@ const internalCertificate = {
/**
* @param {Access} access
* @param {Object} data
* @param {Integer} data.id
* @param {Number} data.id
* @param {String} [data.email]
* @param {String} [data.name]
* @return {Promise}
@ -251,7 +250,7 @@ const internalCertificate = {
/**
* @param {Access} access
* @param {Object} data
* @param {Integer} data.id
* @param {Number} data.id
* @param {Array} [data.expand]
* @param {Array} [data.omit]
* @return {Promise}
@ -297,7 +296,7 @@ const internalCertificate = {
/**
* @param {Access} access
* @param {Object} data
* @param {Integer} data.id
* @param {Number} data.id
* @param {String} [data.reason]
* @returns {Promise}
*/
@ -381,7 +380,7 @@ const internalCertificate = {
/**
* Report use
*
* @param {Integer} user_id
* @param {Number} user_id
* @param {String} visibility
* @returns {Promise}
*/
@ -522,7 +521,7 @@ const internalCertificate = {
/**
* @param {Access} access
* @param {Object} data
* @param {Integer} data.id
* @param {Number} data.id
* @param {Object} data.files
* @returns {Promise}
*/
@ -719,9 +718,9 @@ const internalCertificate = {
let cmd = certbot_command + ' certonly --cert-name "npm-' + certificate.id + '" --agree-tos ' +
'--email "' + certificate.meta.letsencrypt_email + '" ' +
'--preferred-challenges "http" ' +
'--preferred-challenges "dns,http" ' +
'-n -a webroot -d "' + certificate.domain_names.join(',') + '" ' +
(debug_mode ? '--staging' : '');
(le_staging ? '--staging' : '');
if (debug_mode) {
logger.info('Command:', cmd);
@ -734,6 +733,48 @@ const internalCertificate = {
});
},
/**
* @param {Access} access
* @param {Object} data
* @param {Number} data.id
* @returns {Promise}
*/
renew: (access, data) => {
return access.can('certificates:update', data)
.then(() => {
return internalCertificate.get(access, data);
})
.then((certificate) => {
if (certificate.provider === 'letsencrypt') {
return internalCertificate.renewLetsEncryptSsl(certificate)
.then(() => {
return internalCertificate.getCertificateInfoFromFile('/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem')
})
.then(cert_info => {
return certificateModel
.query()
.patchAndFetchById(certificate.id, {
expires_on: certificateModel.raw('FROM_UNIXTIME(' + cert_info.dates.to + ')')
});
})
.then((updated_certificate) => {
// Add to audit log
return internalAuditLog.add(access, {
action: 'renewed',
object_type: 'certificate',
object_id: updated_certificate.id,
meta: updated_certificate
})
.then(() => {
return updated_certificate;
});
})
} else {
throw new error.ValidationError('Only Let\'sEncrypt certificates can be renewed');
}
})
},
/**
* @param {Object} certificate the certificate row
* @returns {Promise}
@ -741,7 +782,7 @@ const internalCertificate = {
renewLetsEncryptSsl: certificate => {
logger.info('Renewing Let\'sEncrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
let cmd = certbot_command + ' renew -n --force-renewal --disable-hook-validation --cert-name "npm-' + certificate.id + '" ' + (debug_mode ? '--staging' : '');
let cmd = certbot_command + ' renew -n --force-renewal --disable-hook-validation --cert-name "npm-' + certificate.id + '" ' + (le_staging ? '--staging' : '');
if (debug_mode) {
logger.info('Command:', cmd);
@ -762,17 +803,29 @@ const internalCertificate = {
revokeLetsEncryptSsl: (certificate, throw_errors) => {
logger.info('Revoking Let\'sEncrypt certificates for Cert #' + certificate.id + ': ' + certificate.domain_names.join(', '));
let cmd = certbot_command + ' revoke --cert-path "/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem" ' + (debug_mode ? '--staging' : '');
let revoke_cmd = certbot_command + ' revoke --cert-path "/etc/letsencrypt/live/npm-' + certificate.id + '/fullchain.pem" ' + (le_staging ? '--staging' : '');
let delete_cmd = certbot_command + ' delete --cert-name "npm-' + certificate.id + '" ' + (le_staging ? '--staging' : '');
if (debug_mode) {
logger.info('Command:', cmd);
logger.info('Command:', revoke_cmd);
}
return utils.exec(cmd)
.then(result => {
return utils.exec(revoke_cmd)
.then((result) => {
logger.info(result);
return result;
})
.then(() => {
if (debug_mode) {
logger.info('Command:', delete_cmd);
}
return utils.exec(delete_cmd)
.then((result) => {
logger.info(result);
return result;
})
})
.catch(err => {
if (debug_mode) {
logger.error(err.message);
@ -796,7 +849,7 @@ const internalCertificate = {
/**
* @param {Object} in_use_result
* @param {Integer} in_use_result.total_count
* @param {Number} in_use_result.total_count
* @param {Array} in_use_result.proxy_hosts
* @param {Array} in_use_result.redirection_hosts
* @param {Array} in_use_result.dead_hosts
@ -826,7 +879,7 @@ const internalCertificate = {
/**
* @param {Object} in_use_result
* @param {Integer} in_use_result.total_count
* @param {Number} in_use_result.total_count
* @param {Array} in_use_result.proxy_hosts
* @param {Array} in_use_result.redirection_hosts
* @param {Array} in_use_result.dead_hosts
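The key behavioural change in this file is that the Let's Encrypt staging switch is no longer tied to debug mode: `le_staging` depends only on `NODE_ENV`, while `debug_mode` can additionally be enabled with a `DEBUG` environment variable and now only controls the extra command logging. A minimal sketch of how the renew command ends up being assembled under these flags (the certificate id `3` is purely illustrative):

```js
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
const le_staging = process.env.NODE_ENV !== 'production';
const certbot_command = '/usr/bin/certbot';

// Illustrative certificate row; only the id matters for the command string.
const certificate = { id: 3 };

const cmd = certbot_command + ' renew -n --force-renewal --disable-hook-validation --cert-name "npm-' + certificate.id + '" ' +
    (le_staging ? '--staging' : '');

// With NODE_ENV=production this yields:
//   /usr/bin/certbot renew -n --force-renewal --disable-hook-validation --cert-name "npm-3"
// even when DEBUG is set, which previously would also have forced --staging.
if (debug_mode) {
    console.log('Command:', cmd);
}
```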

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../lib/error');
const deadHostModel = require('../models/dead_host');
@ -47,6 +45,7 @@ const internalDeadHost = {
.then(() => {
// At this point the domains should have been checked
data.owner_user_id = access.token.getUserId(1);
data = internalHost.cleanSslHstsData(data);
return deadHostModel
.query()
@ -89,11 +88,11 @@ const internalDeadHost = {
// Add to audit log
return internalAuditLog.add(access, {
action: 'created',
object_type: 'dead-host',
object_id: row.id,
meta: data
})
action: 'created',
object_type: 'dead-host',
object_id: row.id,
meta: data
})
.then(() => {
return row;
});
@ -144,9 +143,9 @@ const internalDeadHost = {
if (create_certificate) {
return internalCertificate.createQuickCertificate(access, {
domain_names: data.domain_names || row.domain_names,
meta: _.assign({}, row.meta, data.meta)
})
domain_names: data.domain_names || row.domain_names,
meta: _.assign({}, row.meta, data.meta)
})
.then(cert => {
// update host with cert id
data.certificate_id = cert.id;
@ -162,7 +161,9 @@ const internalDeadHost = {
// Add domain_names to the data in case it isn't there, so that the audit log renders correctly. The order is important here.
data = _.assign({}, {
domain_names: row.domain_names
},data);
}, data);
data = internalHost.cleanSslHstsData(data, row);
return deadHostModel
.query()
@ -171,11 +172,11 @@ const internalDeadHost = {
.then(saved_row => {
// Add to audit log
return internalAuditLog.add(access, {
action: 'updated',
object_type: 'dead-host',
object_id: row.id,
meta: data
})
action: 'updated',
object_type: 'dead-host',
object_id: row.id,
meta: data
})
.then(() => {
return _.omit(saved_row, omissions());
});
@ -183,15 +184,15 @@ const internalDeadHost = {
})
.then(() => {
return internalDeadHost.get(access, {
id: data.id,
expand: ['owner', 'certificate']
})
id: data.id,
expand: ['owner', 'certificate']
})
.then(row => {
// Configure nginx
return internalNginx.configure(deadHostModel, 'dead_host', row)
.then(new_meta => {
row.meta = new_meta;
row = internalHost.cleanRowCertificateMeta(row);
row = internalHost.cleanRowCertificateMeta(row);
return _.omit(row, omissions());
});
});
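The `// The order is important here` comment in this hunk is about `_.assign` precedence: later sources win, so seeding the merged object with `row.domain_names` first makes it a fallback that an explicit `data.domain_names` in the update payload will override. A small illustration with hypothetical values:

```js
const _ = require('lodash');

const row  = { domain_names: ['old.example.com'] };
const data = { certificate_id: 7 };

// domain_names comes from the existing row only because `data` doesn't supply it.
const merged = _.assign({}, { domain_names: row.domain_names }, data);
// => { domain_names: ['old.example.com'], certificate_id: 7 }

// When the update payload includes domain_names, it takes precedence.
const merged2 = _.assign({}, { domain_names: row.domain_names }, { domain_names: ['new.example.com'] });
// => { domain_names: ['new.example.com'] }
```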

View File

@ -1,11 +1,40 @@
'use strict';
const _ = require('lodash');
const proxyHostModel = require('../models/proxy_host');
const redirectionHostModel = require('../models/redirection_host');
const deadHostModel = require('../models/dead_host');
const internalHost = {
/**
* Makes sure that the ssl_* and hsts_* fields play nicely together.
* ie: if there is no cert, then force_ssl is off.
* if force_ssl is off, then hsts_enabled is definitely off.
*
* @param {object} data
* @param {object} [existing_data]
* @returns {object}
*/
cleanSslHstsData: function (data, existing_data) {
existing_data = existing_data === undefined ? {} : existing_data;
let combined_data = _.assign({}, existing_data, data);
if (!combined_data.certificate_id) {
combined_data.ssl_forced = false;
combined_data.http2_support = false;
}
if (!combined_data.ssl_forced) {
combined_data.hsts_enabled = false;
}
if (!combined_data.hsts_enabled) {
combined_data.hsts_subdomains = false;
}
return combined_data;
},
/**
* used by the getAll functions of hosts, this removes the certificate meta if present
*
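As a quick illustration of the dependency chain that `cleanSslHstsData` enforces (certificate → forced SSL / HTTP2 → HSTS → HSTS subdomains), here is a sketch with hypothetical host data; the require path is an assumption based on the other backend requires in this diff:

```js
const internalHost = require('./internal/host'); // assumed path: src/backend/internal/host.js

// Existing row has forced SSL with HSTS enabled; the update removes the certificate.
const existing_data = {
    certificate_id:  5,
    ssl_forced:      true,
    http2_support:   true,
    hsts_enabled:    true,
    hsts_subdomains: true
};
const data = { certificate_id: 0 };

const cleaned = internalHost.cleanSslHstsData(data, existing_data);
// => { certificate_id: 0, ssl_forced: false, http2_support: false,
//      hsts_enabled: false, hsts_subdomains: false }
```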

View File

@ -1,8 +1,5 @@
'use strict';
const https = require('https');
const fs = require('fs');
const _ = require('lodash');
const logger = require('../logger').ip_ranges;
const error = require('../lib/error');
const internalNginx = require('./nginx');

View File

@ -1,12 +1,10 @@
'use strict';
const _ = require('lodash');
const fs = require('fs');
const Liquid = require('liquidjs');
const logger = require('../logger').nginx;
const utils = require('../lib/utils');
const error = require('../lib/error');
const debug_mode = process.env.NODE_ENV !== 'production';
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
const internalNginx = {
@ -19,9 +17,9 @@ const internalNginx = {
* - IF BAD: update the meta with offline status and remove the config entirely
* - then reload nginx
*
* @param {Object} model
* @param {String} host_type
* @param {Object} host
* @param {Object|String} model
* @param {String} host_type
* @param {Object} host
* @returns {Promise}
*/
configure: (model, host_type, host) => {
@ -92,7 +90,7 @@ const internalNginx = {
})
.then(() => {
return combined_meta;
})
});
},
/**
@ -124,9 +122,52 @@ const internalNginx = {
*/
getConfigName: (host_type, host_id) => {
host_type = host_type.replace(new RegExp('-', 'g'), '_');
if (host_type === 'default') {
return '/data/nginx/default_host/site.conf';
}
return '/data/nginx/' + host_type + '/' + host_id + '.conf';
},
/**
* Generates custom locations
* @param {Object} host
* @returns {Promise}
*/
renderLocations: (host) => {
return new Promise((resolve, reject) => {
let template;
try {
template = fs.readFileSync(__dirname + '/../templates/_location.conf', {encoding: 'utf8'});
} catch (err) {
reject(new error.ConfigurationError(err.message));
return;
}
let renderer = new Liquid();
let renderedLocations = '';
const locationRendering = async () => {
for (let i = 0; i < host.locations.length; i++) {
let locationCopy = Object.assign({}, host.locations[i]);
if (locationCopy.forward_host.indexOf('/') > -1) {
const splitted = locationCopy.forward_host.split('/');
locationCopy.forward_host = splitted.shift();
locationCopy.forward_path = `/${splitted.join('/')}`;
}
renderedLocations += await renderer.parseAndRender(template, locationCopy);
}
};
locationRendering().then(() => resolve(renderedLocations));
});
},
/**
* @param {String} host_type
* @param {Object} host
@ -146,6 +187,7 @@ const internalNginx = {
return new Promise((resolve, reject) => {
let template = null;
let filename = internalNginx.getConfigName(host_type, host.id);
try {
template = fs.readFileSync(__dirname + '/../templates/' + host_type + '.conf', {encoding: 'utf8'});
} catch (err) {
@ -153,24 +195,57 @@ const internalNginx = {
return;
}
renderEngine
.parseAndRender(template, host)
.then(config_text => {
fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
let locationsPromise;
let origLocations;
if (debug_mode) {
logger.success('Wrote config:', filename, config_text);
}
// Manipulate the data a bit before sending it to the template
if (host_type !== 'default') {
host.use_default_location = true;
if (typeof host.advanced_config !== 'undefined' && host.advanced_config) {
host.use_default_location = !internalNginx.advancedConfigHasDefaultLocation(host.advanced_config);
}
}
resolve(true);
})
.catch(err => {
if (debug_mode) {
logger.warn('Could not write ' + filename + ':', err.message);
}
reject(new error.ConfigurationError(err.message));
if (host.locations) {
origLocations = [].concat(host.locations);
locationsPromise = internalNginx.renderLocations(host).then((renderedLocations) => {
host.locations = renderedLocations;
});
// Allow someone who is using / custom location path to use it, and skip the default / location
_.map(host.locations, (location) => {
if (location.path === '/') {
host.use_default_location = false;
}
});
} else {
locationsPromise = Promise.resolve();
}
locationsPromise.then(() => {
renderEngine
.parseAndRender(template, host)
.then(config_text => {
fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
if (debug_mode) {
logger.success('Wrote config:', filename, config_text);
}
// Restore locations array
host.locations = origLocations;
resolve(true);
})
.catch(err => {
if (debug_mode) {
logger.warn('Could not write ' + filename + ':', err.message);
}
reject(new error.ConfigurationError(err.message));
});
});
});
},
@ -255,7 +330,7 @@ const internalNginx = {
/**
* @param {String} host_type
* @param {Object} host
* @param {Object} [host]
* @param {Boolean} [throw_errors]
* @returns {Promise}
*/
@ -264,7 +339,7 @@ const internalNginx = {
return new Promise((resolve, reject) => {
try {
let config_file = internalNginx.getConfigName(host_type, host.id);
let config_file = internalNginx.getConfigName(host_type, typeof host === 'undefined' ? 0 : host.id);
if (debug_mode) {
logger.warn('Deleting nginx config: ' + config_file);
@ -312,6 +387,14 @@ const internalNginx = {
});
return Promise.all(promises);
},
/**
* @param {string} config
* @returns {boolean}
*/
advancedConfigHasDefaultLocation: function (config) {
return !!config.match(/^(?:.*;)?\s*?location\s*?\/\s*?{/im);
}
};
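The new `advancedConfigHasDefaultLocation` check drives the `use_default_location` flag above: if a host's advanced config already declares a root `location /` block, the generated config skips emitting its own. A small sketch of what the regex does and does not match, using hypothetical config snippets and the same pattern as the helper:

```js
// Same pattern as advancedConfigHasDefaultLocation above.
const hasDefaultLocation = (config) => !!config.match(/^(?:.*;)?\s*?location\s*?\/\s*?{/im);

hasDefaultLocation('location / {\n  return 404;\n}');                      // true: root location defined
hasDefaultLocation('gzip on;\nlocation / { try_files $uri =404; }');       // true: matches on the second line
hasDefaultLocation('location /api/ {\n  proxy_pass http://127.0.0.1;\n}'); // false: only a sub-path location
```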

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../lib/error');
const proxyHostModel = require('../models/proxy_host');
@ -27,7 +25,7 @@ const internalProxyHost = {
}
return access.can('proxy_hosts:create', data)
.then(access_data => {
.then(() => {
// Get a list of the domain names and check each of them against existing records
let domain_name_check_promises = [];
@ -47,13 +45,14 @@ const internalProxyHost = {
.then(() => {
// At this point the domains should have been checked
data.owner_user_id = access.token.getUserId(1);
data = internalHost.cleanSslHstsData(data);
return proxyHostModel
.query()
.omit(omissions())
.insertAndFetch(data);
})
.then(row => {
.then((row) => {
if (create_certificate) {
return internalCertificate.createQuickCertificate(access, data)
.then(cert => {
@ -70,31 +69,31 @@ const internalProxyHost = {
return row;
}
})
.then(row => {
.then((row) => {
// re-fetch with cert
return internalProxyHost.get(access, {
id: row.id,
expand: ['certificate', 'owner', 'access_list']
});
})
.then(row => {
.then((row) => {
// Configure nginx
return internalNginx.configure(proxyHostModel, 'proxy_host', row)
.then(() => {
return row;
});
})
.then(row => {
.then((row) => {
// Audit log
data.meta = _.assign({}, data.meta || {}, row.meta);
// Add to audit log
return internalAuditLog.add(access, {
action: 'created',
object_type: 'proxy-host',
object_id: row.id,
meta: data
})
action: 'created',
object_type: 'proxy-host',
object_id: row.id,
meta: data
})
.then(() => {
return row;
});
@ -145,9 +144,9 @@ const internalProxyHost = {
if (create_certificate) {
return internalCertificate.createQuickCertificate(access, {
domain_names: data.domain_names || row.domain_names,
meta: _.assign({}, row.meta, data.meta)
})
domain_names: data.domain_names || row.domain_names,
meta: _.assign({}, row.meta, data.meta)
})
.then(cert => {
// update host with cert id
data.certificate_id = cert.id;
@ -165,6 +164,8 @@ const internalProxyHost = {
domain_names: row.domain_names
}, data);
data = internalHost.cleanSslHstsData(data, row);
return proxyHostModel
.query()
.where({id: data.id})
@ -172,11 +173,11 @@ const internalProxyHost = {
.then(saved_row => {
// Add to audit log
return internalAuditLog.add(access, {
action: 'updated',
object_type: 'proxy-host',
object_id: row.id,
meta: data
})
action: 'updated',
object_type: 'proxy-host',
object_id: row.id,
meta: data
})
.then(() => {
return _.omit(saved_row, omissions());
});
@ -184,9 +185,9 @@ const internalProxyHost = {
})
.then(() => {
return internalProxyHost.get(access, {
id: data.id,
expand: ['owner', 'certificate', 'access_list']
})
id: data.id,
expand: ['owner', 'certificate', 'access_list']
})
.then(row => {
// Configure nginx
return internalNginx.configure(proxyHostModel, 'proxy_host', row)

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../lib/error');
const redirectionHostModel = require('../models/redirection_host');
@ -47,6 +45,7 @@ const internalRedirectionHost = {
.then(() => {
// At this point the domains should have been checked
data.owner_user_id = access.token.getUserId(1);
data = internalHost.cleanSslHstsData(data);
return redirectionHostModel
.query()
@ -89,11 +88,11 @@ const internalRedirectionHost = {
// Add to audit log
return internalAuditLog.add(access, {
action: 'created',
object_type: 'redirection-host',
object_id: row.id,
meta: data
})
action: 'created',
object_type: 'redirection-host',
object_id: row.id,
meta: data
})
.then(() => {
return row;
});
@ -144,9 +143,9 @@ const internalRedirectionHost = {
if (create_certificate) {
return internalCertificate.createQuickCertificate(access, {
domain_names: data.domain_names || row.domain_names,
meta: _.assign({}, row.meta, data.meta)
})
.then(cert => {
// update host with cert id
data.certificate_id = cert.id;
@ -162,7 +161,9 @@ const internalRedirectionHost = {
// Add domain_names to the data in case it isn't there, so that the audit log renders correctly. The order is important here.
data = _.assign({}, {
domain_names: row.domain_names
}, data);
data = internalHost.cleanSslHstsData(data, row);
return redirectionHostModel
.query()
@ -171,11 +172,11 @@ const internalRedirectionHost = {
.then(saved_row => {
// Add to audit log
return internalAuditLog.add(access, {
action: 'updated',
object_type: 'redirection-host',
object_id: row.id,
meta: data
})
.then(() => {
return _.omit(saved_row, omissions());
});
@ -183,15 +184,15 @@ const internalRedirectionHost = {
})
.then(() => {
return internalRedirectionHost.get(access, {
id: data.id,
expand: ['owner', 'certificate']
})
.then(row => {
// Configure nginx
return internalNginx.configure(redirectionHostModel, 'redirection_host', row)
.then(new_meta => {
row.meta = new_meta;
row = internalHost.cleanRowCertificateMeta(row);
return _.omit(row, omissions());
});
});

View File

@ -1,5 +1,3 @@
'use strict';
const internalProxyHost = require('./proxy-host');
const internalRedirectionHost = require('./redirection-host');
const internalDeadHost = require('./dead-host');

View File

@ -0,0 +1,133 @@
const fs = require('fs');
const error = require('../lib/error');
const settingModel = require('../models/setting');
const internalNginx = require('./nginx');
const internalSetting = {
/**
* @param {Access} access
* @param {Object} data
* @param {String} data.id
* @return {Promise}
*/
update: (access, data) => {
return access.can('settings:update', data.id)
.then(access_data => {
return internalSetting.get(access, {id: data.id});
})
.then(row => {
if (row.id !== data.id) {
// Sanity check that something crazy hasn't happened
throw new error.InternalValidationError('Setting could not be updated, IDs do not match: ' + row.id + ' !== ' + data.id);
}
return settingModel
.query()
.where({id: data.id})
.patch(data);
})
.then(() => {
return internalSetting.get(access, {
id: data.id
});
})
.then(row => {
if (row.id === 'default-site') {
// write the html if we need to
if (row.value === 'html') {
fs.writeFileSync('/data/nginx/default_www/index.html', row.meta.html, {encoding: 'utf8'});
}
// Configure nginx
return internalNginx.deleteConfig('default')
.then(() => {
return internalNginx.generateConfig('default', row);
})
.then(() => {
return internalNginx.test();
})
.then(() => {
return internalNginx.reload();
})
.then(() => {
return row;
})
.catch((err) => {
internalNginx.deleteConfig('default')
.then(() => {
return internalNginx.test();
})
.then(() => {
return internalNginx.reload();
})
.then(() => {
// I'm being slack here I know..
throw new error.ValidationError('Could not reconfigure Nginx. Please check logs.');
})
});
} else {
return row;
}
});
},
/**
* @param {Access} access
* @param {Object} data
* @param {String} data.id
* @return {Promise}
*/
get: (access, data) => {
return access.can('settings:get', data.id)
.then(() => {
return settingModel
.query()
.where('id', data.id)
.first();
})
.then(row => {
if (row) {
return row;
} else {
throw new error.ItemNotFoundError(data.id);
}
});
},
/**
* This will only count the settings
*
* @param {Access} access
* @returns {*}
*/
getCount: (access) => {
return access.can('settings:list')
.then(() => {
return settingModel
.query()
.count('id as count')
.first();
})
.then(row => {
return parseInt(row.count, 10);
});
},
/**
* All settings
*
* @param {Access} access
* @returns {Promise}
*/
getAll: (access) => {
return access.can('settings:list')
.then(() => {
return settingModel
.query()
.orderBy('description', 'ASC');
});
}
};
module.exports = internalSetting;
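A rough usage sketch for the module above, assuming `access` is the Access object produced by the jwt-decode middleware and the require path matches how the routes below load it; the redirect target is only an example:

const internalSetting = require('./internal/setting');
// Switch the default site to a redirect; update() regenerates, tests and reloads the 'default' nginx config
internalSetting.update(access, {
    id:    'default-site',
    value: 'redirect',
    meta:  {redirect: 'https://example.com'}
})
    .then((row) => {
        console.log('Default site now redirects to ' + row.meta.redirect);
    });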

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../lib/error');
const streamModel = require('../models/stream');

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../lib/error');
const userModel = require('../models/user');

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../lib/error');
const userModel = require('../models/user');

View File

@ -1,5 +1,3 @@
'use strict';
/**
* Some Notes: This is a friggin complicated piece of code.
*

View File

@ -0,0 +1,7 @@
{
"anyOf": [
{
"$ref": "roles#/definitions/admin"
}
]
}

View File

@ -0,0 +1,7 @@
{
"anyOf": [
{
"$ref": "roles#/definitions/admin"
}
]
}

View File

@ -0,0 +1,7 @@
{
"anyOf": [
{
"$ref": "roles#/definitions/admin"
}
]
}

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const util = require('util');

View File

@ -1,5 +1,3 @@
'use strict';
const validator = require('../validator');
module.exports = function (req, res, next) {

View File

@ -1,5 +1,3 @@
'use strict';
const Access = require('../access');
module.exports = () => {

View File

@ -1,5 +1,3 @@
'use strict';
module.exports = function () {
return function (req, res, next) {
if (req.headers.authorization) {

View File

@ -1,5 +1,3 @@
'use strict';
let _ = require('lodash');
module.exports = function (default_sort, default_offset, default_limit, max_limit) {

View File

@ -1,5 +1,3 @@
'use strict';
module.exports = (req, res, next) => {
if (req.params.user_id === 'me' && res.locals.access) {
req.params.user_id = res.locals.access.token.get('attrs').id;

View File

@ -1,7 +1,4 @@
'use strict';
const moment = require('moment');
const _ = require('lodash');
module.exports = {

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'identifier_for_migrate';
const logger = require('../logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const exec = require('child_process').exec;
module.exports = {

View File

@ -1,5 +1,3 @@
'use strict';
const error = require('../error');
const path = require('path');
const parser = require('json-schema-ref-parser');

View File

@ -1,5 +1,3 @@
'use strict';
const _ = require('lodash');
const error = require('../error');
const definitions = require('../../schema/definitions.json');

View File

@ -1,5 +1,3 @@
'use strict';
const db = require('./db');
const logger = require('./logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'initial-schema';
const logger = require('../logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'websockets';
const logger = require('../logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'forward_host';
const logger = require('../logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'http2_support';
const logger = require('../logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'forward_scheme';
const logger = require('../logger').migrate;

View File

@ -1,5 +1,3 @@
'use strict';
const migrate_name = 'disabled';
const logger = require('../logger').migrate;

View File

@ -0,0 +1,35 @@
const migrate_name = 'custom_locations';
const logger = require('../logger').migrate;
/**
* Migrate
* Extends proxy_host table with locations field
*
* @see http://knexjs.org/#Schema
*
* @param {Object} knex
* @param {Promise} Promise
* @returns {Promise}
*/
exports.up = function (knex/*, Promise*/) {
logger.info('[' + migrate_name + '] Migrating Up...');
return knex.schema.table('proxy_host', function (proxy_host) {
proxy_host.json('locations');
})
.then(() => {
logger.info('[' + migrate_name + '] proxy_host Table altered');
})
};
/**
* Undo Migrate
*
* @param {Object} knex
* @param {Promise} Promise
* @returns {Promise}
*/
exports.down = function (knex, Promise) {
logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
return Promise.resolve(true);
};
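For reference, the new locations column holds a JSON array per proxy host; going by the proxy-hosts schema further down in this diff, a stored value has roughly this shape (the host, port and paths are made-up examples):

// Example shape only; field names come from the proxy-hosts endpoint schema
const locations = [
    {
        id:              null,
        path:            '/api',
        forward_scheme:  'http',
        forward_host:    '192.168.0.10',
        forward_port:    8080,
        forward_path:    '/v1',
        advanced_config: ''
    }
];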

View File

@ -0,0 +1,51 @@
const migrate_name = 'hsts';
const logger = require('../logger').migrate;
/**
* Migrate
*
* @see http://knexjs.org/#Schema
*
* @param {Object} knex
* @param {Promise} Promise
* @returns {Promise}
*/
exports.up = function (knex/*, Promise*/) {
logger.info('[' + migrate_name + '] Migrating Up...');
return knex.schema.table('proxy_host', function (proxy_host) {
proxy_host.integer('hsts_enabled').notNull().unsigned().defaultTo(0);
proxy_host.integer('hsts_subdomains').notNull().unsigned().defaultTo(0);
})
.then(() => {
logger.info('[' + migrate_name + '] proxy_host Table altered');
return knex.schema.table('redirection_host', function (redirection_host) {
redirection_host.integer('hsts_enabled').notNull().unsigned().defaultTo(0);
redirection_host.integer('hsts_subdomains').notNull().unsigned().defaultTo(0);
});
})
.then(() => {
logger.info('[' + migrate_name + '] redirection_host Table altered');
return knex.schema.table('dead_host', function (dead_host) {
dead_host.integer('hsts_enabled').notNull().unsigned().defaultTo(0);
dead_host.integer('hsts_subdomains').notNull().unsigned().defaultTo(0);
});
})
.then(() => {
logger.info('[' + migrate_name + '] dead_host Table altered');
});
};
/**
* Undo Migrate
*
* @param {Object} knex
* @param {Promise} Promise
* @returns {Promise}
*/
exports.down = function (knex, Promise) {
logger.warn('[' + migrate_name + '] You can\'t migrate down this one.');
return Promise.resolve(true);
};

View File

@ -0,0 +1,54 @@
const migrate_name = 'settings';
const logger = require('../logger').migrate;
/**
* Migrate
*
* @see http://knexjs.org/#Schema
*
* @param {Object} knex
* @param {Promise} Promise
* @returns {Promise}
*/
exports.up = function (knex/*, Promise*/) {
logger.info('[' + migrate_name + '] Migrating Up...');
return knex.schema.createTable('setting', table => {
table.string('id').notNull().primary();
table.string('name', 100).notNull();
table.string('description', 255).notNull();
table.string('value', 255).notNull();
table.json('meta').notNull();
})
.then(() => {
logger.info('[' + migrate_name + '] setting Table created');
// TODO: add settings
let settingModel = require('../models/setting');
return settingModel
.query()
.insert({
id: 'default-site',
name: 'Default Site',
description: 'What to show when Nginx is hit with an unknown Host',
value: 'congratulations',
meta: {}
});
})
.then(() => {
logger.info('[' + migrate_name + '] Default settings added');
});
};
/**
* Undo Migrate
*
* @param {Object} knex
* @param {Promise} Promise
* @returns {Promise}
*/
exports.down = function (knex, Promise) {
logger.warn('[' + migrate_name + '] You can\'t migrate down the initial data.');
return Promise.resolve(true);
};

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const bcrypt = require('bcrypt');
const db = require('../db');
const Model = require('objection').Model;

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');
@ -47,7 +45,7 @@ class ProxyHost extends Model {
}
static get jsonAttributes () {
return ['domain_names', 'meta', 'locations'];
}
static get relationMappings () {

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');

View File

@ -0,0 +1,30 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
const db = require('../db');
const Model = require('objection').Model;
Model.knex(db);
class Setting extends Model {
$beforeInsert () {
// Default for meta
if (typeof this.meta === 'undefined') {
this.meta = {};
}
}
static get name () {
return 'Setting';
}
static get tableName () {
return 'setting';
}
static get jsonAttributes () {
return ['meta'];
}
}
module.exports = Setting;

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const User = require('./user');

View File

@ -3,8 +3,6 @@
and then has abilities after that.
*/
'use strict';
const _ = require('lodash');
const config = require('config');
const jwt = require('jsonwebtoken');

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;
const UserPermission = require('./user_permission');

View File

@ -1,8 +1,6 @@
// Objection Docs:
// http://vincit.github.io/objection.js/
'use strict';
const db = require('../db');
const Model = require('objection').Model;

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../lib/validator');
const jwtdecode = require('../../lib/express/jwt-decode');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const pjson = require('../../../../package.json');
const error = require('../../lib/error');
@ -31,6 +29,7 @@ router.use('/tokens', require('./tokens'));
router.use('/users', require('./users'));
router.use('/audit-log', require('./audit-log'));
router.use('/reports', require('./reports'));
router.use('/settings', require('./settings'));
router.use('/nginx/proxy-hosts', require('./nginx/proxy_hosts'));
router.use('/nginx/redirection-hosts', require('./nginx/redirection_hosts'));
router.use('/nginx/dead-hosts', require('./nginx/dead_hosts'));

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../../lib/validator');
const jwtdecode = require('../../../lib/express/jwt-decode');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../../lib/validator');
const jwtdecode = require('../../../lib/express/jwt-decode');
@ -94,13 +92,13 @@ router
certificate_id: {
$ref: 'definitions#/definitions/id'
},
expand: {
$ref: 'definitions#/definitions/expand'
}
}
}, {
certificate_id: req.params.certificate_id,
expand: (typeof req.query.expand === 'string' ? req.query.expand.split(',') : null)
})
.then(data => {
return internalCertificate.get(res.locals.access, {
@ -181,6 +179,34 @@ router
}
});
/**
* Renew LE Certs
*
* /api/nginx/certificates/123/renew
*/
router
.route('/:certificate_id/renew')
.options((req, res) => {
res.sendStatus(204);
})
.all(jwtdecode())
/**
* POST /api/nginx/certificates/123/renew
*
* Renew certificate
*/
.post((req, res, next) => {
internalCertificate.renew(res.locals.access, {
id: parseInt(req.params.certificate_id, 10)
})
.then(result => {
res.status(200)
.send(result);
})
.catch(next);
});
/**
* Validate Certs before saving
*

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../../lib/validator');
const jwtdecode = require('../../../lib/express/jwt-decode');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../../lib/validator');
const jwtdecode = require('../../../lib/express/jwt-decode');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../../lib/validator');
const jwtdecode = require('../../../lib/express/jwt-decode');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../../lib/validator');
const jwtdecode = require('../../../lib/express/jwt-decode');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const jwtdecode = require('../../lib/express/jwt-decode');
const internalReport = require('../../internal/report');

View File

@ -0,0 +1,96 @@
const express = require('express');
const validator = require('../../lib/validator');
const jwtdecode = require('../../lib/express/jwt-decode');
const internalSetting = require('../../internal/setting');
const apiValidator = require('../../lib/validator/api');
let router = express.Router({
caseSensitive: true,
strict: true,
mergeParams: true
});
/**
* /api/settings
*/
router
.route('/')
.options((req, res) => {
res.sendStatus(204);
})
.all(jwtdecode())
/**
* GET /api/settings
*
* Retrieve all settings
*/
.get((req, res, next) => {
internalSetting.getAll(res.locals.access)
.then(rows => {
res.status(200)
.send(rows);
})
.catch(next);
});
/**
* Specific setting
*
* /api/settings/something
*/
router
.route('/:setting_id')
.options((req, res) => {
res.sendStatus(204);
})
.all(jwtdecode())
/**
* GET /settings/something
*
* Retrieve a specific setting
*/
.get((req, res, next) => {
validator({
required: ['setting_id'],
additionalProperties: false,
properties: {
setting_id: {
$ref: 'definitions#/definitions/setting_id'
}
}
}, {
setting_id: req.params.setting_id
})
.then(data => {
return internalSetting.get(res.locals.access, {
id: data.setting_id
});
})
.then(row => {
res.status(200)
.send(row);
})
.catch(next);
})
/**
* PUT /api/settings/something
*
* Update an existing setting
*/
.put((req, res, next) => {
apiValidator({$ref: 'endpoints/settings#/links/1/schema'}, req.body)
.then(payload => {
payload.id = req.params.setting_id;
return internalSetting.update(res.locals.access, payload);
})
.then(result => {
res.status(200)
.send(result);
})
.catch(next);
});
module.exports = router;
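For reference, the PUT handler above validates the body against endpoints/settings#/links/1/schema, which (per the schema added later in this diff) only accepts value and meta. A PUT /api/settings/default-site body that switches the default site to custom HTML would therefore look roughly like this; the HTML itself is only an example:

{
    "value": "html",
    "meta":  {"html": "<h1>Nothing to see here</h1>"}
}

internal/setting.js then writes meta.html out to /data/nginx/default_www/index.html and reloads nginx.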

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const jwtdecode = require('../../lib/express/jwt-decode');
const internalToken = require('../../internal/token');

View File

@ -1,5 +1,3 @@
'use strict';
const express = require('express');
const validator = require('../../lib/validator');
const jwtdecode = require('../../lib/express/jwt-decode');

View File

@ -1,8 +1,7 @@
'use strict';
const express = require('express');
const fs = require('fs');
const PACKAGE = require('../../../package.json');
const path = require('path');
const router = express.Router({
caseSensitive: true,
@ -29,15 +28,22 @@ router.get(/(.*)/, function (req, res, next) {
version: PACKAGE.version
});
} else {
var p = path.normalize('dist' + req.params.page);
if (p.startsWith('dist')) { // Allow access to resources under 'dist' directory only.
fs.readFile(p, 'utf8', function (err, data) {
if (err) {
res.render('index', {
version: PACKAGE.version
});
} else {
res.contentType('text/html').end(data);
}
});
} else {
res.render('index', {
version: PACKAGE.version
});
}
}
});

View File

@ -9,6 +9,13 @@
"type": "integer",
"minimum": 1
},
"setting_id": {
"description": "Unique identifier for a Setting",
"example": "default-site",
"readOnly": true,
"type": "string",
"minLength": 2
},
"token": {
"type": "string",
"minLength": 10
@ -187,6 +194,16 @@
"example": false,
"type": "boolean"
},
"hsts_enabled": {
"description": "Is HSTS Enabled",
"example": false,
"type": "boolean"
},
"hsts_subdomains": {
"description": "Is HSTS applicable to all subdomains",
"example": false,
"type": "boolean"
},
"ssl_provider": {
"type": "string",
"pattern": "^(letsencrypt|other)$"

View File

@ -24,6 +24,12 @@
"ssl_forced": {
"$ref": "../definitions.json#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "../definitions.json#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "../definitions.json#/definitions/hsts_subdomains"
},
"http2_support": {
"$ref": "../definitions.json#/definitions/http2_support"
},
@ -56,6 +62,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_subdomains"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -113,6 +125,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_enabled"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -153,6 +171,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_enabled"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},

View File

@ -38,6 +38,12 @@
"ssl_forced": {
"$ref": "../definitions.json#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "../definitions.json#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "../definitions.json#/definitions/hsts_subdomains"
},
"http2_support": {
"$ref": "../definitions.json#/definitions/http2_support"
},
@ -63,6 +69,44 @@
},
"meta": {
"type": "object"
},
"locations": {
"type": "array",
"minItems": 0,
"items": {
"type": "object",
"required": [
"forward_scheme",
"forward_host",
"forward_port",
"path"
],
"additionalProperties": false,
"properties": {
"id": {
"type": ["integer", "null"]
},
"path": {
"type": "string",
"minLength": 1
},
"forward_scheme": {
"$ref": "#/definitions/forward_scheme"
},
"forward_host": {
"$ref": "#/definitions/forward_host"
},
"forward_port": {
"$ref": "#/definitions/forward_port"
},
"forward_path": {
"type": "string"
},
"advanced_config": {
"type": "string"
}
}
}
}
},
"properties": {
@ -93,6 +137,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_subdomains"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -116,6 +166,9 @@
},
"meta": {
"$ref": "#/definitions/meta"
},
"locations": {
"$ref": "#/definitions/locations"
}
},
"links": [
@ -174,6 +227,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_enabled"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -197,6 +256,9 @@
},
"meta": {
"$ref": "#/definitions/meta"
},
"locations": {
"$ref": "#/definitions/locations"
}
}
},
@ -238,6 +300,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_enabled"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -261,6 +329,9 @@
},
"meta": {
"$ref": "#/definitions/meta"
},
"locations": {
"$ref": "#/definitions/locations"
}
}
},

View File

@ -32,6 +32,12 @@
"ssl_forced": {
"$ref": "../definitions.json#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "../definitions.json#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "../definitions.json#/definitions/hsts_subdomains"
},
"http2_support": {
"$ref": "../definitions.json#/definitions/http2_support"
},
@ -73,6 +79,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_subdomains"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -140,6 +152,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_enabled"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},
@ -189,6 +207,12 @@
"ssl_forced": {
"$ref": "#/definitions/ssl_forced"
},
"hsts_enabled": {
"$ref": "#/definitions/hsts_enabled"
},
"hsts_subdomains": {
"$ref": "#/definitions/hsts_enabled"
},
"http2_support": {
"$ref": "#/definitions/http2_support"
},

View File

@ -0,0 +1,99 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "endpoints/settings",
"title": "Settings",
"description": "Endpoints relating to Settings",
"stability": "stable",
"type": "object",
"definitions": {
"id": {
"$ref": "../definitions.json#/definitions/setting_id"
},
"name": {
"description": "Name",
"example": "Default Site",
"type": "string",
"minLength": 2,
"maxLength": 100
},
"description": {
"description": "Description",
"example": "Default Site",
"type": "string",
"minLength": 2,
"maxLength": 255
},
"value": {
"description": "Value",
"example": "404",
"type": "string",
"maxLength": 255
},
"meta": {
"type": "object"
}
},
"links": [
{
"title": "List",
"description": "Returns a list of Settings",
"href": "/settings",
"access": "private",
"method": "GET",
"rel": "self",
"http_header": {
"$ref": "../examples.json#/definitions/auth_header"
},
"targetSchema": {
"type": "array",
"items": {
"$ref": "#/properties"
}
}
},
{
"title": "Update",
"description": "Updates a existing Setting",
"href": "/settings/{definitions.identity.example}",
"access": "private",
"method": "PUT",
"rel": "update",
"http_header": {
"$ref": "../examples.json#/definitions/auth_header"
},
"schema": {
"type": "object",
"properties": {
"value": {
"$ref": "#/definitions/value"
},
"meta": {
"$ref": "#/definitions/meta"
}
}
},
"targetSchema": {
"properties": {
"$ref": "#/properties"
}
}
}
],
"properties": {
"id": {
"$ref": "#/definitions/id"
},
"name": {
"$ref": "#/definitions/description"
},
"description": {
"$ref": "#/definitions/description"
},
"value": {
"$ref": "#/definitions/value"
},
"meta": {
"$ref": "#/definitions/meta"
}
}
}

View File

@ -34,6 +34,9 @@
},
"access-lists": {
"$ref": "endpoints/access-lists.json"
},
"settings": {
"$ref": "endpoints/settings.json"
}
}
}

View File

@ -1,5 +1,3 @@
'use strict';
const fs = require('fs');
const NodeRSA = require('node-rsa');
const config = require('config');
@ -7,7 +5,7 @@ const logger = require('./logger').setup;
const userModel = require('./models/user');
const userPermissionModel = require('./models/user_permission');
const authModel = require('./models/auth');
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
module.exports = function () {
return new Promise((resolve, reject) => {

View File

@ -0,0 +1,8 @@
{% if certificate and certificate_id > 0 -%}
{% if ssl_forced == 1 or ssl_forced == true %}
{% if hsts_enabled == 1 or hsts_enabled == true %}
# HSTS (ngx_http_headers_module is required) (31536000 seconds = 1 year)
add_header Strict-Transport-Security "max-age=31536000;{% if hsts_subdomains == 1 or hsts_subdomains == true -%} includeSubDomains;{% endif %} preload" always;
{% endif %}
{% endif %}
{% endif %}
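For concreteness: when ssl_forced, hsts_enabled and hsts_subdomains are all set, this include renders to roughly the following single directive (31536000 seconds being one year, as noted above):

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;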

View File

@ -0,0 +1,9 @@
location {{ path }} {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass {{ forward_scheme }}://{{ forward_host }}:{{ forward_port }}{{ forward_path }};
{{ advanced_config }}
}
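To make the substitution concrete, a custom location with path /api forwarding to http://192.168.0.10:8080/v1 and an empty advanced_config (the same example values as the locations sketch earlier) would render roughly as:

location /api {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://192.168.0.10:8080/v1;
}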

View File

@ -4,11 +4,19 @@
server {
{% include "_listen.conf" %}
{% include "_certificates.conf" %}
{% include "_hsts.conf" %}
access_log /data/logs/dead_host-{{ id }}.log standard;
{{ advanced_config }}
return 404;
{% if use_default_location %}
location / {
{% include "_forced_ssl.conf" %}
{% include "_hsts.conf" %}
return 404;
}
{% endif %}
}
{% endif %}

View File

@ -0,0 +1,32 @@
# ------------------------------------------------------------
# Default Site
# ------------------------------------------------------------
{% if value == "congratulations" %}
# Skipping output, congratulations page configuration is baked in.
{%- else %}
server {
listen 80 default;
server_name default-host.localhost;
access_log /data/logs/default_host.log combined;
{% include "_exploits.conf" %}
{%- if value == "404" %}
location / {
return 404;
}
{% endif %}
{%- if value == "redirect" %}
location / {
return 301 {{ meta.redirect }};
}
{%- endif %}
{%- if value == "html" %}
root /data/nginx/default_www;
location / {
try_files $uri /index.html;
}
{%- endif %}
}
{% endif %}

View File

@ -10,11 +10,16 @@ server {
{% include "_certificates.conf" %}
{% include "_assets.conf" %}
{% include "_exploits.conf" %}
{% include "_hsts.conf" %}
access_log /data/logs/proxy_host-{{ id }}.log proxy;
{{ advanced_config }}
{{ locations }}
{% if use_default_location %}
location / {
{%- if access_list_id > 0 -%}
# Access List
@ -23,6 +28,7 @@ server {
{%- endif %}
{% include "_forced_ssl.conf" %}
{% include "_hsts.conf" %}
{% if allow_websocket_upgrade == 1 or allow_websocket_upgrade == true %}
proxy_set_header Upgrade $http_upgrade;
@ -33,5 +39,7 @@ server {
# Proxy!
include conf.d/include/proxy.conf;
}
{% endif %}
}
{% endif %}

View File

@ -6,15 +6,16 @@ server {
{% include "_certificates.conf" %}
{% include "_assets.conf" %}
{% include "_exploits.conf" %}
{% include "_hsts.conf" %}
access_log /data/logs/redirection_host-{{ id }}.log standard;
{{ advanced_config }}
# TODO: Preserve Path Option
{% if use_default_location %}
location / {
{% include "_forced_ssl.conf" %}
{% include "_hsts.conf" %}
{% if preserve_path == 1 or preserve_path == true %}
return 301 $scheme://{{ forward_domain_name }}$request_uri;
@ -22,5 +23,7 @@ server {
return 301 $scheme://{{ forward_domain_name }};
{% endif %}
}
{% endif %}
}
{% endif %}

View File

@ -1,5 +1,3 @@
'use strict';
const $ = require('jquery');
const _ = require('underscore');
const Tokens = require('./tokens');
@ -11,8 +9,8 @@ const Tokens = require('./tokens');
* @constructor
*/
const ApiError = function (message, debug, code) {
let temp = Error.call(this, message);
temp.name = this.name = 'ApiError';
this.stack = temp.stack;
this.message = temp.message;
this.debug = debug;
@ -35,7 +33,7 @@ ApiError.prototype = Object.create(Error.prototype, {
* @param {Object} [options]
* @returns {Promise}
*/
function fetch(verb, path, data, options) {
options = options || {};
return new Promise(function (resolve, reject) {
@ -55,7 +53,7 @@ function fetch (verb, path, data, options) {
contentType: options.contentType || 'application/json; charset=UTF-8',
processData: options.processData || true,
crossDomain: true,
timeout: options.timeout ? options.timeout : 30000,
xhrFields: {
withCredentials: true
},
@ -99,7 +97,7 @@ function fetch (verb, path, data, options) {
* @param {Array} expand
* @returns {String}
*/
function makeExpansionString(expand) {
let items = [];
_.forEach(expand, function (exp) {
items.push(encodeURIComponent(exp));
@ -114,7 +112,7 @@ function makeExpansionString (expand) {
* @param {String} [query]
* @returns {Promise}
*/
function getAllObjects(path, expand, query) {
let params = [];
if (typeof expand === 'object' && expand !== null && expand.length) {
@ -128,20 +126,7 @@ function getAllObjects (path, expand, query) {
return fetch('get', path + (params.length ? '?' + params.join('&') : ''));
}
/**
* @param {String} path
* @param {FormData} form_data
* @returns {Promise}
*/
function upload (path, form_data) {
console.log('UPLOAD:', path, form_data);
return fetch('post', path, form_data, {
contentType: 'multipart/form-data',
processData: false
});
}
function FileUpload(path, fd) {
return new Promise((resolve, reject) => {
let xhr = new XMLHttpRequest();
let token = Tokens.getTopToken();
@ -214,7 +199,7 @@ module.exports = {
Users: {
/**
* @param {Number|String} user_id
* @param {Array} [expand]
* @returns {Promise}
*/
@ -639,6 +624,14 @@ module.exports = {
*/
validate: function (form_data) {
return FileUpload('nginx/certificates/validate', form_data);
},
/**
* @param {Number} id
* @returns {Promise}
*/
renew: function (id) {
return fetch('post', 'nginx/certificates/' + id + '/renew');
}
}
},
@ -662,5 +655,34 @@ module.exports = {
getHostStats: function () {
return fetch('get', 'reports/hosts');
}
},
Settings: {
/**
* @param {String} setting_id
* @returns {Promise}
*/
getById: function (setting_id) {
return fetch('get', 'settings/' + setting_id);
},
/**
* @returns {Promise}
*/
getAll: function () {
return getAllObjects('settings');
},
/**
* @param {Object} data
* @param {Number} data.id
* @returns {Promise}
*/
update: function (data) {
let id = data.id;
delete data.id;
return fetch('put', 'settings/' + id, data);
}
}
};
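A short usage sketch for the client methods added above, assuming this module is imported as Api elsewhere in the frontend and that the certificate helpers sit under a Nginx.Certificates namespace (the 'nginx/certificates/...' request paths suggest so, but that nesting is not shown in this diff); the certificate id is a placeholder:

// Force-renew a certificate, then pull the current settings list
Api.Nginx.Certificates.renew(123)
    .then(() => {
        return Api.Settings.getAll();
    })
    .then((settings) => {
        console.log(settings);
    });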

View File

@ -1,5 +1,3 @@
'use strict';
const Mn = require('backbone.marionette');
const Controller = require('../../controller');
const template = require('./item.ejs');

View File

@ -1,5 +1,3 @@
'use strict';
const Mn = require('backbone.marionette');
const ItemView = require('./item');
const template = require('./main.ejs');

View File

@ -1,5 +1,3 @@
'use strict';
const Mn = require('backbone.marionette');
const App = require('../main');
const AuditLogModel = require('../../models/audit-log');

Some files were not shown because too many files have changed in this diff.