Difference From release To trunk
2023-03-23
18:03  URL and whitespace fixes to previous. ... (Leaf check-in: 9e73519c user: wyoung tags: trunk)
16:40  The /etc/os-release workaround for nspawn's pickiness has caused the feature to go into negative ROI territory. Ripped it out of the mainstream process and made it a manual step for those who need it, in the hopes that this will cause fewer ongoing problems than leaving it as it is. ... (check-in: 4cb5c03e user: wyoung tags: trunk)
2023-02-25
20:44  Documentation for "fossil all remote". ... (check-in: 6ad6c559 user: drh tags: trunk)
19:23  Version 2.21 ... (check-in: f9aa4740 user: drh tags: trunk, release, version-2.21)
2023-02-24
17:14  Fix a harmless compiler warning in gzip.c. ... (check-in: 9b05cad1 user: drh tags: trunk)
Changes to Dockerfile.
# syntax=docker/dockerfile:1.0
# See www/containers.md for documentation on how to use this file.

## ---------------------------------------------------------------------
## STAGE 1: Build static Fossil & BusyBox binaries atop Alpine Linux
## ---------------------------------------------------------------------

FROM alpine:latest AS builder
︙
### Bake the custom BusyBox into another layer. The intent is that this
### changes only when we change BBXVER. That will force an update of
### the layers below, but this is a rare occurrence.

ARG BBXVER="1_35_0"
ENV BBXURL "https://github.com/mirror/busybox/tarball/${BBXVER}"
COPY containers/busybox-config /tmp/bbx/.config
ADD $BBXURL /tmp/bbx/src.tar.gz
RUN set -x \
    && tar --strip-components=1 -C bbx -xzf bbx/src.tar.gz \
    && ( cd bbx && yes "" | make oldconfig && make -j11 )

### The changeable Fossil layer is the only one in the first stage that
### changes often, so add it last, to make it independent of the others.
###
### $FSLSTB can be either a file or a directory due to ADD's bizarre
### behavior: it unpacks tarballs when added from a local file but not
### from a URL! It matters because we default to a URL in case you're
### building outside a Fossil checkout, but when building via the
### container-image target, we can avoid a costly hit on the Fossil
### project's home site by pulling the data from the local repo via the
### "tarball" command. This is a DVCS, after all!

ARG FSLCFG=""
ARG FSLVER="trunk"
ARG FSLURL="https://fossil-scm.org/home/tarball/src?r=${FSLVER}"
ENV FSLSTB=/tmp/fsl/src.tar.gz
ADD $FSLURL $FSLSTB
RUN set -x \
    && if [ -d $FSLSTB ] ; then mv $FSLSTB/src fsl ; \
       else tar -C fsl -xzf fsl/src.tar.gz ; fi \
    && m=fsl/src/src/main.mk \
    && fsl/src/configure --static CFLAGS='-Os -s' $FSLCFG && make -j11

## ---------------------------------------------------------------------
︙
Changes to VERSION.
2.22
Changes to auto.def.
︙
# of Fossil each one contains. This not only allows multiple images
# to coexist and multiple containers to be created unambiguously from
# them, it also changes the URL we fetch the source tarball from, so
# repeated builds of a given version generate and fetch the source
# tarball once only, keeping it in the local Docker/Podman cache.
set ci [readfile "$::autosetup(srcdir)/manifest.uuid"]
define FOSSIL_CI_PFX [string range $ci 0 11]

make-template Makefile.in
make-config-header autoconfig.h -auto {USE_* FOSSIL_*}
Added containers/os-release.
NAME="Fossil BusyBox"
ID="fslbbx"
VERSION="Fossil 2"
HOME_URL="https://fossil-scm.org/home/doc/trunk/www/containers.md"
BUG_REPORT_URL="https://fossil-scm.org/forum"
Deleted containers/os-release.in.
Changes to extsrc/shell.c.
︙
** and return SQLITE_OK. Otherwise, return an SQLite error code.
*/
static int dbdataGetEncoding(DbdataCursor *pCsr){
  int rc = SQLITE_OK;
  int nPg1 = 0;
  u8 *aPg1 = 0;
  rc = dbdataLoadPage(pCsr, 1, &aPg1, &nPg1);
  if( rc==SQLITE_OK && nPg1>=(56+4) ){
    pCsr->enc = get_uint32(&aPg1[56]);
  }
  sqlite3_free(aPg1);
  return rc;
}
︙
Changes to extsrc/sqlite3.c.
/******************************************************************************
** This file is an amalgamation of many separate C source files from SQLite
** version 3.41.2.  By combining all the individual C code files into this
** single large file, the entire code can be compiled as a single translation
** unit.  This allows many compilers to do optimizations that would not be
** possible if the files were compiled separately.  Performance improvements
** of 5% or more are commonly seen when SQLite is compiled as a single
** translation unit.
**
** This file is all you need to compile SQLite.  To use SQLite in other
︙
** been edited in any way since it was last checked in, then the last
** four hexadecimal digits of the hash may be modified.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.41.2"
#define SQLITE_VERSION_NUMBER 3041002
#define SQLITE_SOURCE_ID      "2023-03-17 12:25:10 c5bd0ea3b5b2f3ed8e971c5fd6e85e8f06d8055d74df65612c3794138306e6ba"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
︙
#define NC_AllowAgg  0x000001 /* Aggregate functions are allowed here */
#define NC_PartIdx   0x000002 /* True if resolving a partial index WHERE */
#define NC_IsCheck   0x000004 /* True if resolving a CHECK constraint */
#define NC_GenCol    0x000008 /* True for a GENERATED ALWAYS AS clause */
#define NC_HasAgg    0x000010 /* One or more aggregate functions seen */
#define NC_IdxExpr   0x000020 /* True if resolving columns of CREATE INDEX */
#define NC_SelfRef   0x00002e /* Combo: PartIdx, isCheck, GenCol, and IdxExpr */
#define NC_Subquery  0x000040 /* A subquery has been seen */
#define NC_UEList    0x000080 /* True if uNC.pEList is used */
#define NC_UAggInfo  0x000100 /* True if uNC.pAggInfo is used */
#define NC_UUpsert   0x000200 /* True if uNC.pUpsert is used */
#define NC_UBaseReg  0x000400 /* True if uNC.iBaseReg is used */
#define NC_MinMaxAgg 0x001000 /* min/max aggregates seen.  See note above */
#define NC_Complex   0x002000 /* True if a function or subquery seen */
#define NC_AllowWin  0x004000 /* Window functions are allowed here */
︙
*/
struct IndexedExpr {
  Expr *pExpr;            /* The expression contained in the index */
  int iDataCur;           /* The data cursor associated with the index */
  int iIdxCur;            /* The index cursor */
  int iIdxCol;            /* The index column that contains value of pExpr */
  u8 bMaybeNullRow;       /* True if we need an OP_IfNullRow check */
  u8 aff;                 /* Affinity of the pExpr expression */
  IndexedExpr *pIENext;   /* Next in a list of all indexed expressions */
#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS
  const char *zIdxName;   /* Name of index, used only for bytecode comments */
#endif
};

/*
︙
  u8 okConstFactor;       /* OK to factor out constants */
  u8 disableLookaside;    /* Number of times lookaside has been disabled */
  u8 prepFlags;           /* SQLITE_PREPARE_* flags */
  u8 withinRJSubrtn;      /* Nesting level for RIGHT JOIN body subroutines */
#if defined(SQLITE_DEBUG) || defined(SQLITE_COVERAGE_TEST)
  u8 earlyCleanup;        /* OOM inside sqlite3ParserAddCleanup() */
#endif
#ifdef SQLITE_DEBUG
  u8 ifNotExists;         /* Might be true if IF NOT EXISTS.  Assert()s only */
#endif
  int nRangeReg;          /* Size of the temporary register block */
  int iRangeReg;          /* First register in temporary register block */
  int nErr;               /* Number of errors seen */
  int nTab;               /* Number of previously allocated VDBE cursors */
  int nMem;               /* Number of memory cells used so far */
  int szOpAlloc;          /* Bytes of memory space allocated for Vdbe.aOp[] */
  int iSelfTab;           /* Table associated with an index on expr, or negative
︙
      pCur->eState = CURSOR_VALID;
      if( pCur->skipNext>0 ) return SQLITE_OK;
    }
  }
  pPage = pCur->pPage;
  idx = ++pCur->ix;
  if( !pPage->isInit || sqlite3FaultSim(412) ){
    return SQLITE_CORRUPT_BKPT;
  }
  if( idx>=pPage->nCell ){
    if( !pPage->leaf ){
      rc = moveToChild(pCur, get4byte(&pPage->aData[pPage->hdrOffset+8]));
      if( rc ) return rc;
︙
  for(i=pOp->p3, mx=i+pOp->p4.i; i<mx; i++){
    const Mem *p = &aMem[i];
    if( p->flags & (MEM_Int|MEM_IntReal) ){
      h += p->u.i;
    }else if( p->flags & MEM_Real ){
      h += sqlite3VdbeIntValue(p);
    }else if( p->flags & (MEM_Str|MEM_Blob) ){
      /* no-op */
    }
  }
  return h;
}

/*
** Return the symbolic name for the data type of a pMem
︙
    for(i=0, p=pNC; p && i<ArraySize(anRef); p=p->pNext, i++){
      anRef[i] = p->nRef;
    }
    sqlite3WalkExpr(pWalker, pExpr->pLeft);
    if( 0==sqlite3ExprCanBeNull(pExpr->pLeft) && !IN_RENAME_OBJECT ){
      testcase( ExprHasProperty(pExpr, EP_OuterON) );
      assert( !ExprHasProperty(pExpr, EP_IntValue) );
      pExpr->u.iValue = (pExpr->op==TK_NOTNULL);
      pExpr->flags |= EP_IntValue;
      pExpr->op = TK_INTEGER;
      for(i=0, p=pNC; p && i<ArraySize(anRef); p=p->pNext, i++){
        p->nRef = anRef[i];
      }
      sqlite3ExprDelete(pParse->db, pExpr->pLeft);
      pExpr->pLeft = 0;
    }
    return WRC_Prune;
︙
          notValidImpl(pParse, pNC, "subqueries", pExpr, pExpr);
        }else{
          sqlite3WalkSelect(pWalker, pExpr->x.pSelect);
        }
        assert( pNC->nRef>=nRef );
        if( nRef!=pNC->nRef ){
          ExprSetProperty(pExpr, EP_VarSelect);
        }
        pNC->ncFlags |= NC_Subquery;
      }
      break;
    }
    case TK_VARIABLE: {
      testcase( pNC->ncFlags & NC_IsCheck );
      testcase( pNC->ncFlags & NC_PartIdx );
      testcase( pNC->ncFlags & NC_IdxExpr );
︙
  int iTabCur,      /* The table cursor.  Or the PK cursor for WITHOUT ROWID */
  int iCol,         /* Index of the column to extract */
  int regOut        /* Extract the value into this register */
){
  Column *pCol;
  assert( v!=0 );
  assert( pTab!=0 );
  assert( iCol!=XN_EXPR );
  if( iCol<0 || iCol==pTab->iPKey ){
    sqlite3VdbeAddOp2(v, OP_Rowid, iTabCur, regOut);
    VdbeComment((v, "%s.rowid", pTab->zName));
  }else{
    int op;
    int x;
    if( IsVirtual(pTab) ){
︙
  Parse *pParse,   /* The parsing context */
  Expr *pExpr,     /* The expression to potentially bypass */
  int target       /* Where to store the result of the expression */
){
  IndexedExpr *p;
  Vdbe *v;
  for(p=pParse->pIdxEpr; p; p=p->pIENext){
    u8 exprAff;
    int iDataCur = p->iDataCur;
    if( iDataCur<0 ) continue;
    if( pParse->iSelfTab ){
      if( p->iDataCur!=pParse->iSelfTab-1 ) continue;
      iDataCur = -1;
    }
    if( sqlite3ExprCompare(0, pExpr, p->pExpr, iDataCur)!=0 ) continue;
    assert( p->aff>=SQLITE_AFF_BLOB && p->aff<=SQLITE_AFF_NUMERIC );
    exprAff = sqlite3ExprAffinity(pExpr);
    if( (exprAff<=SQLITE_AFF_BLOB && p->aff!=SQLITE_AFF_BLOB)
     || (exprAff==SQLITE_AFF_TEXT && p->aff!=SQLITE_AFF_TEXT)
     || (exprAff>=SQLITE_AFF_NUMERIC && p->aff!=SQLITE_AFF_NUMERIC)
    ){
      /* Affinity mismatch on a generated column */
      continue;
    }
    v = pParse->pVdbe;
    assert( v!=0 );
    if( p->bMaybeNullRow ){
      /* If the index is on a NULL row due to an outer join, then we
      ** cannot extract the value from the index.  The value must be
      ** computed using the original expression. */
      int addr = sqlite3VdbeCurrentAddr(v);
︙
    ** Z is stored in pExpr->pList->a[1].pExpr.
    */
    case TK_BETWEEN: {
      exprCodeBetween(pParse, pExpr, target, 0, 0);
      return target;
    }
    case TK_COLLATE: {
      if( !ExprHasProperty(pExpr, EP_Collate) ){
        /* A TK_COLLATE Expr node without the EP_Collate tag is a so-called
        ** "SOFT-COLLATE" that is added to constraints that are pushed down
        ** from outer queries into sub-queries by the push-down optimization.
        ** Clear subtypes as subtypes may not cross a subquery boundary.
        */
        assert( pExpr->pLeft );
        inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target);
        if( inReg!=target ){
          sqlite3VdbeAddOp2(v, OP_SCopy, inReg, target);
          inReg = target;
        }
        sqlite3VdbeAddOp1(v, OP_ClrSubtype, inReg);
        return inReg;
︙
          sqlite3VdbeAddOp3(v, OP_Column, pAggInfo->sortingIdxPTab,
                            pAggInfo->aCol[pExpr->iAgg].iSorterColumn, target);
          inReg = target;
          break;
        }
      }
      addrINR = sqlite3VdbeAddOp3(v, OP_IfNullRow, pExpr->iTable, 0, target);
      /* The OP_IfNullRow opcode above can overwrite the result register with
      ** NULL.  So we have to ensure that the result register is not a value
      ** that is supposed to be a constant.  Two defenses are needed:
      **   (1)  Temporarily disable factoring of constant expressions
      **   (2)  Make sure the computed value really is stored in register
      **        "target" and not someplace else.
      */
      pParse->okConstFactor = 0;   /* note (1) above */
      inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target);
      pParse->okConstFactor = okConstFactor;
      if( inReg!=target ){  /* note (2) above */
        sqlite3VdbeAddOp2(v, OP_SCopy, inReg, target);
        inReg = target;
      }
      sqlite3VdbeJumpHere(v, addrINR);
      break;
    }

    /*
    ** Form A:
    **   CASE x WHEN e1 THEN r1 WHEN e2 THEN r2 ... WHEN eN THEN rN ELSE y END
    **
︙
SQLITE_PRIVATE void sqlite3AddReturning(Parse *pParse, ExprList *pList){
  Returning *pRet;
  Hash *pHash;
  sqlite3 *db = pParse->db;
  if( pParse->pNewTrigger ){
    sqlite3ErrorMsg(pParse, "cannot use RETURNING in a trigger");
  }else{
    assert( pParse->bReturning==0 || pParse->ifNotExists );
  }
  pParse->bReturning = 1;
  pRet = sqlite3DbMallocZero(db, sizeof(*pRet));
  if( pRet==0 ){
    sqlite3ExprListDelete(db, pList);
    return;
  }
︙
  pRet->retTrig.pSchema = db->aDb[1].pSchema;
  pRet->retTrig.pTabSchema = db->aDb[1].pSchema;
  pRet->retTrig.step_list = &pRet->retTStep;
  pRet->retTStep.op = TK_RETURNING;
  pRet->retTStep.pTrig = &pRet->retTrig;
  pRet->retTStep.pExprList = pList;
  pHash = &(db->aDb[1].pSchema->trigHash);
  assert( sqlite3HashFind(pHash, RETURNING_TRIGGER_NAME)==0
          || pParse->nErr || pParse->ifNotExists );
  if( sqlite3HashInsert(pHash, RETURNING_TRIGGER_NAME, &pRet->retTrig)
          ==&pRet->retTrig ){
    sqlite3OomFault(db);
  }
}

/*
︙
  if( ALWAYS(pExpr) && pExpr->op==TK_ID ){
    /* The value of a generated column needs to be a real expression, not
    ** just a reference to another column, in order for covering index
    ** optimizations to work correctly.  So if the value is not an expression,
    ** turn it into one by adding a unary "+" operator. */
    pExpr = sqlite3PExpr(pParse, TK_UPLUS, pExpr, 0);
  }
  if( pExpr && pExpr->op!=TK_RAISE ) pExpr->affExpr = pCol->affinity;
  sqlite3ColumnSetExpr(pParse, pTab, pCol, pExpr);
  pExpr = 0;
  goto generated_done;

generated_error:
  sqlite3ErrorMsg(pParse, "error in generated column \"%s\"",
                  pCol->zCnName);
︙
        sqlite3VdbeAddOp2(v, OP_Clear, pIdx->tnum, iDb);
      }
    }
  }else
#endif /* SQLITE_OMIT_TRUNCATE_OPTIMIZATION */
  {
    u16 wcf = WHERE_ONEPASS_DESIRED|WHERE_DUPLICATES_OK;
    if( sNC.ncFlags & NC_Subquery ) bComplex = 1;
    wcf |= (bComplex ? 0 : WHERE_ONEPASS_MULTIROW);
    if( HasRowid(pTab) ){
      /* For a rowid table, initialize the RowSet to an empty set */
      pPk = 0;
      nPk = 1;
      iRowSet = ++pParse->nMem;
      sqlite3VdbeAddOp2(v, OP_Null, 0, iRowSet);
︙
/*
** On some systems, ceil() and floor() are intrinsic functions.  You are
** unable to take a pointer to these functions.  Hence, we here wrap them
** in our own actual functions.
*/
static double xCeil(double x){ return ceil(x); }
static double xFloor(double x){ return floor(x); }

/*
** Some systems do not have log2() and log10() in their standard math
** libraries.
*/
#if defined(HAVE_LOG10) && HAVE_LOG10==0
# define log10(X) (0.4342944819032517867*log(X))
#endif
#if defined(HAVE_LOG2) && HAVE_LOG2==0
# define log2(X) (1.442695040888963456*log(X))
#endif

/*
** Implementation of SQL functions:
**
**   ln(X)       - natural logarithm
**   log(X)      - log X base 10
**   log10(X)    - log X base 10
︙
      sqlite3VdbeAddOp3(v, OP_Concat, 7, 3, 3);
      sqlite3VdbeLoadString(v, 4, " missing from index ");
      sqlite3VdbeAddOp3(v, OP_Concat, 4, 3, 3);
      jmp5 = sqlite3VdbeLoadString(v, 4, pIdx->zName);
      sqlite3VdbeAddOp3(v, OP_Concat, 4, 3, 3);
      jmp4 = integrityCheckResultRow(v);
      sqlite3VdbeJumpHere(v, jmp2);

      /* The OP_IdxRowid opcode is an optimized version of OP_Column
      ** that extracts the rowid off the end of the index record.
      ** But it only works correctly if the index record does not have
      ** any extra bytes at the end.  Verify that this is the case. */
      if( HasRowid(pTab) ){
        int jmp7;
        sqlite3VdbeAddOp2(v, OP_IdxRowid, iIdxCur+j, 3);
        jmp7 = sqlite3VdbeAddOp3(v, OP_Eq, 3, 0, r1+pIdx->nColumn-1);
        VdbeCoverage(v);
        sqlite3VdbeLoadString(v, 3, "rowid not at end-of-record for row ");
        sqlite3VdbeAddOp3(v, OP_Concat, 7, 3, 3);
        sqlite3VdbeLoadString(v, 4, " of index ");
        sqlite3VdbeGoto(v, jmp5-1);
        sqlite3VdbeJumpHere(v, jmp7);
      }

      /* Any indexed columns with non-BINARY collations must still hold
      ** the exact same text value as the table. */
      label6 = 0;
      for(kk=0; kk<pIdx->nKeyCol; kk++){
        if( pIdx->azColl[kk]==sqlite3StrBINARY ) continue;
        if( label6==0 ) label6 = sqlite3VdbeMakeLabel(pParse);
︙
    i64 n;
    pTab->tabFlags |= (pCol->colFlags & COLFLAG_NOINSERT);
    p = a[i].pExpr;
    /* pCol->szEst = ... // Column size est for SELECT tables never used */
    pCol->affinity = sqlite3ExprAffinity(p);
    if( pCol->affinity<=SQLITE_AFF_NONE ){
      pCol->affinity = aff;
    }
    if( pCol->affinity>=SQLITE_AFF_TEXT && pSelect->pNext ){
      int m = 0;
      Select *pS2;
      for(m=0, pS2=pSelect->pNext; pS2; pS2=pS2->pNext){
        m |= sqlite3ExprDataType(pS2->pEList->a[i].pExpr);
      }
      if( pCol->affinity==SQLITE_AFF_TEXT && (m&0x01)!=0 ){
        pCol->affinity = SQLITE_AFF_BLOB;
      }else if( pCol->affinity>=SQLITE_AFF_NUMERIC && (m&0x02)!=0 ){
        pCol->affinity = SQLITE_AFF_BLOB;
      }
      if( pCol->affinity>=SQLITE_AFF_NUMERIC && p->op==TK_CAST ){
        pCol->affinity = SQLITE_AFF_FLEXNUM;
      }
    }
    zType = columnType(&sNC, p, 0, 0, 0);
    if( zType==0 || pCol->affinity!=sqlite3AffinityType(zType, 0) ){
      if( pCol->affinity==SQLITE_AFF_NUMERIC
       || pCol->affinity==SQLITE_AFF_FLEXNUM
︙
        Expr ifNullRow;
        assert( pSubst->pEList!=0 && iColumn<pSubst->pEList->nExpr );
        assert( pExpr->pRight==0 );
        if( sqlite3ExprIsVector(pCopy) ){
          sqlite3VectorErrorMsg(pSubst->pParse, pCopy);
        }else{
          sqlite3 *db = pSubst->pParse->db;
          if( pSubst->isOuterJoin
           && (pCopy->op!=TK_COLUMN || pCopy->iTable!=pSubst->iNewTable)
          ){
            memset(&ifNullRow, 0, sizeof(ifNullRow));
            ifNullRow.op = TK_IF_NULL_ROW;
            ifNullRow.pLeft = pCopy;
            ifNullRow.iTable = pSubst->iNewTable;
            ifNullRow.iColumn = -99;
            ifNullRow.flags = EP_IfNullRow;
            pCopy = &ifNullRow;
︙
static void optimizeAggregateUseOfIndexedExpr(
  Parse *pParse,          /* Parsing context */
  Select *pSelect,        /* The SELECT statement being processed */
  AggInfo *pAggInfo,      /* The aggregate info */
  NameContext *pNC        /* Name context used to resolve agg-func args */
){
  assert( pAggInfo->iFirstReg==0 );
  assert( pSelect!=0 );
  assert( pSelect->pGroupBy!=0 );
  pAggInfo->nColumn = pAggInfo->nAccumulator;
  if( ALWAYS(pAggInfo->nSortingColumn>0) ){
    if( pAggInfo->nColumn==0 ){
      pAggInfo->nSortingColumn = pSelect->pGroupBy->nExpr;
    }else{
      pAggInfo->nSortingColumn =
        pAggInfo->aCol[pAggInfo->nColumn-1].iSorterColumn+1;
    }
  }
  analyzeAggFuncArgs(pAggInfo, pNC);
#if TREETRACE_ENABLED
︙ | ︙ | |||
145024 145025 145026 145027 145028 145029 145030 145031 145032 145033 145034 145035 145036 145037 145038 145039 145040 | Expr *pExpr; Expr *pCount; sqlite3 *db; if( (p->selFlags & SF_Aggregate)==0 ) return 0; /* This is an aggregate */ if( p->pEList->nExpr!=1 ) return 0; /* Single result column */ if( p->pWhere ) return 0; if( p->pGroupBy ) return 0; pExpr = p->pEList->a[0].pExpr; if( pExpr->op!=TK_AGG_FUNCTION ) return 0; /* Result is an aggregate */ assert( ExprUseUToken(pExpr) ); if( sqlite3_stricmp(pExpr->u.zToken,"count") ) return 0; /* Is count() */ assert( ExprUseXList(pExpr) ); if( pExpr->x.pList!=0 ) return 0; /* Must be count(*) */ if( p->pSrc->nSrc!=1 ) return 0; /* One table in FROM */ if( ExprHasProperty(pExpr, EP_WinFunc) ) return 0;/* Not a window function */ pSub = p->pSrc->a[0].pSelect; if( pSub==0 ) return 0; /* The FROM is a subquery */ | > | > | 145080 145081 145082 145083 145084 145085 145086 145087 145088 145089 145090 145091 145092 145093 145094 145095 145096 145097 145098 145099 145100 145101 145102 145103 145104 145105 145106 | Expr *pExpr; Expr *pCount; sqlite3 *db; if( (p->selFlags & SF_Aggregate)==0 ) return 0; /* This is an aggregate */ if( p->pEList->nExpr!=1 ) return 0; /* Single result column */ if( p->pWhere ) return 0; if( p->pGroupBy ) return 0; if( p->pOrderBy ) return 0; pExpr = p->pEList->a[0].pExpr; if( pExpr->op!=TK_AGG_FUNCTION ) return 0; /* Result is an aggregate */ assert( ExprUseUToken(pExpr) ); if( sqlite3_stricmp(pExpr->u.zToken,"count") ) return 0; /* Is count() */ assert( ExprUseXList(pExpr) ); if( pExpr->x.pList!=0 ) return 0; /* Must be count(*) */ if( p->pSrc->nSrc!=1 ) return 0; /* One table in FROM */ if( ExprHasProperty(pExpr, EP_WinFunc) ) return 0;/* Not a window function */ pSub = p->pSrc->a[0].pSelect; if( pSub==0 ) return 0; /* The FROM is a subquery */ if( pSub->pPrior==0 ) return 0; /* Must be a compound */ if( pSub->selFlags & SF_CopyCte ) return 0; /* Not a CTE */ do{ if( pSub->op!=TK_ALL && pSub->pPrior ) return 0; /* Must be UNION ALL */ if( pSub->pWhere ) return 0; /* No WHERE clause */ if( pSub->pLimit ) return 0; /* No LIMIT clause */ if( pSub->selFlags & SF_Aggregate ) return 0; /* Not an aggregate */ pSub = pSub->pPrior; /* Repeat over compound */ }while( pSub );
︙ | ︙ | |||
145477 145478 145479 145480 145481 145482 145483 | } #ifdef SQLITE_COUNTOFVIEW_OPTIMIZATION if( OptimizationEnabled(db, SQLITE_QueryFlattener|SQLITE_CountOfView) && countOfViewOptimization(pParse, p) ){ if( db->mallocFailed ) goto select_end; | < | 145535 145536 145537 145538 145539 145540 145541 145542 145543 145544 145545 145546 145547 145548 | } #ifdef SQLITE_COUNTOFVIEW_OPTIMIZATION if( OptimizationEnabled(db, SQLITE_QueryFlattener|SQLITE_CountOfView) && countOfViewOptimization(pParse, p) ){ if( db->mallocFailed ) goto select_end; pTabList = p->pSrc; } #endif /* For each term in the FROM clause, do two things: ** (1) Authorized unreferenced tables ** (2) Generate code for all sub-queries |
︙ | ︙ | |||
146839 146840 146841 146842 146843 146844 146845 146846 146847 146848 146849 146850 146851 146852 | if( !IN_RENAME_OBJECT ){ if( sqlite3HashFind(&(db->aDb[iDb].pSchema->trigHash),zName) ){ if( !noErr ){ sqlite3ErrorMsg(pParse, "trigger %T already exists", pName); }else{ assert( !db->init.busy ); sqlite3CodeVerifySchema(pParse, iDb); } goto trigger_cleanup; } } /* Do not create a trigger on a system table */ if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 ){ | > | 146896 146897 146898 146899 146900 146901 146902 146903 146904 146905 146906 146907 146908 146909 146910 | if( !IN_RENAME_OBJECT ){ if( sqlite3HashFind(&(db->aDb[iDb].pSchema->trigHash),zName) ){ if( !noErr ){ sqlite3ErrorMsg(pParse, "trigger %T already exists", pName); }else{ assert( !db->init.busy ); sqlite3CodeVerifySchema(pParse, iDb); VVA_ONLY( pParse->ifNotExists = 1; ) } goto trigger_cleanup; } } /* Do not create a trigger on a system table */ if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 ){ |
︙ | ︙ | |||
147620 147621 147622 147623 147624 147625 147626 | sqlite3SelectPrep(pParse, &sSelect, 0); if( pParse->nErr==0 ){ assert( db->mallocFailed==0 ); sqlite3GenerateColumnNames(pParse, &sSelect); } sqlite3ExprListDelete(db, sSelect.pEList); pNew = sqlite3ExpandReturning(pParse, pReturning->pReturnEL, pTab); | | | 147678 147679 147680 147681 147682 147683 147684 147685 147686 147687 147688 147689 147690 147691 147692 | sqlite3SelectPrep(pParse, &sSelect, 0); if( pParse->nErr==0 ){ assert( db->mallocFailed==0 ); sqlite3GenerateColumnNames(pParse, &sSelect); } sqlite3ExprListDelete(db, sSelect.pEList); pNew = sqlite3ExpandReturning(pParse, pReturning->pReturnEL, pTab); if( pParse->nErr==0 ){ NameContext sNC; memset(&sNC, 0, sizeof(sNC)); if( pReturning->nRetCol==0 ){ pReturning->nRetCol = pNew->nExpr; pReturning->iRetCur = pParse->nTab++; } sNC.pParse = pParse; |
︙ | ︙ | |||
148842 148843 148844 148845 148846 148847 148848 | eOnePass = ONEPASS_SINGLE; sqlite3ExprIfFalse(pParse, pWhere, labelBreak, SQLITE_JUMPIFNULL); bFinishSeek = 0; }else{ /* Begin the database scan. ** ** Do not consider a single-pass strategy for a multi-row update if | > > > | > | < < > > | > > > > > > | 148900 148901 148902 148903 148904 148905 148906 148907 148908 148909 148910 148911 148912 148913 148914 148915 148916 148917 148918 148919 148920 148921 148922 148923 148924 148925 148926 148927 148928 148929 | eOnePass = ONEPASS_SINGLE; sqlite3ExprIfFalse(pParse, pWhere, labelBreak, SQLITE_JUMPIFNULL); bFinishSeek = 0; }else{ /* Begin the database scan. ** ** Do not consider a single-pass strategy for a multi-row update if ** there is anything that might disrupt the cursor being used to do ** the UPDATE: ** (1) This is a nested UPDATE ** (2) There are triggers ** (3) There are FOREIGN KEY constraints ** (4) There are REPLACE conflict handlers ** (5) There are subqueries in the WHERE clause */ flags = WHERE_ONEPASS_DESIRED; if( !pParse->nested && !pTrigger && !hasFK && !chngKey && !bReplace && (sNC.ncFlags & NC_Subquery)==0 ){ flags |= WHERE_ONEPASS_MULTIROW; } pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere,0,0,0,flags,iIdxCur); if( pWInfo==0 ) goto update_cleanup; /* A one-pass strategy that might update more than one row may not ** be used if any column of the index used for the scan is being |
︙ | ︙ | |||
150812 150813 150814 150815 150816 150817 150818 150819 150820 150821 150822 150823 150824 150825 150826 | assert( &db->pVtabCtx ); assert( xConstruct ); sCtx.pTab = pTab; sCtx.pVTable = pVTable; sCtx.pPrior = db->pVtabCtx; sCtx.bDeclared = 0; db->pVtabCtx = &sCtx; rc = xConstruct(db, pMod->pAux, nArg, azArg, &pVTable->pVtab, &zErr); db->pVtabCtx = sCtx.pPrior; if( rc==SQLITE_NOMEM ) sqlite3OomFault(db); assert( sCtx.pTab==pTab ); if( SQLITE_OK!=rc ){ if( zErr==0 ){ *pzErr = sqlite3MPrintf(db, "vtable constructor failed: %s", zModuleName); | > > | 150880 150881 150882 150883 150884 150885 150886 150887 150888 150889 150890 150891 150892 150893 150894 150895 150896 | assert( &db->pVtabCtx ); assert( xConstruct ); sCtx.pTab = pTab; sCtx.pVTable = pVTable; sCtx.pPrior = db->pVtabCtx; sCtx.bDeclared = 0; db->pVtabCtx = &sCtx; pTab->nTabRef++; rc = xConstruct(db, pMod->pAux, nArg, azArg, &pVTable->pVtab, &zErr); sqlite3DeleteTable(db, pTab); db->pVtabCtx = sCtx.pPrior; if( rc==SQLITE_NOMEM ) sqlite3OomFault(db); assert( sCtx.pTab==pTab ); if( SQLITE_OK!=rc ){ if( zErr==0 ){ *pzErr = sqlite3MPrintf(db, "vtable constructor failed: %s", zModuleName); |
︙ | ︙ | |||
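The virtual-table chunk above brackets the module's xConstruct() callback with pTab->nTabRef++ before the call and sqlite3DeleteTable(db, pTab) after it: the extra reference keeps the Table object alive even if the constructor indirectly drops what was previously the last reference. A stdlib-only sketch of that guard pattern, with an invented Obj type and helpers (not SQLite's real API):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Obj { int nRef; } Obj;

/* Allocate an object holding one reference. */
Obj *obj_new(void){
  Obj *p = (Obj*)malloc(sizeof(Obj));
  p->nRef = 1;
  return p;
}

/* Drop one reference; free the object when the count reaches zero.
** Returns 1 if the object was freed, 0 otherwise. */
int obj_unref(Obj *p){
  if( --p->nRef==0 ){ free(p); return 1; }
  return 0;
}

/* A callback that, like a misbehaving constructor, releases the
** caller's original reference while the caller is still using p. */
void hostile_callback(Obj *p){ obj_unref(p); }

/* Guard pattern: take an extra reference across the callback, as the
** diff does with pTab->nTabRef++ / sqlite3DeleteTable(). */
int call_with_guard(Obj *p){
  p->nRef++;               /* like pTab->nTabRef++ */
  hostile_callback(p);     /* may drop other references */
  return obj_unref(p);     /* like sqlite3DeleteTable(); 1 if freed here */
}

int guard_demo(void){
  Obj *p = obj_new();
  return call_with_guard(p);  /* freed after the callback, not inside it */
}
```

In the real code sqlite3DeleteTable() likewise only frees the Table when its reference count hits zero, so when the constructor behaves, the ++/delete pair nets out to a no-op.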
156844 156845 156846 156847 156848 156849 156850 | pColRef->iColumn = k++; assert( ExprUseYTab(pColRef) ); pColRef->y.pTab = pTab; pItem->colUsed |= sqlite3ExprColUsed(pColRef); pRhs = sqlite3PExpr(pParse, TK_UPLUS, sqlite3ExprDup(pParse->db, pArgs->a[j].pExpr, 0), 0); pTerm = sqlite3PExpr(pParse, TK_EQ, pColRef, pRhs); | | | 156914 156915 156916 156917 156918 156919 156920 156921 156922 156923 156924 156925 156926 156927 156928 | pColRef->iColumn = k++; assert( ExprUseYTab(pColRef) ); pColRef->y.pTab = pTab; pItem->colUsed |= sqlite3ExprColUsed(pColRef); pRhs = sqlite3PExpr(pParse, TK_UPLUS, sqlite3ExprDup(pParse->db, pArgs->a[j].pExpr, 0), 0); pTerm = sqlite3PExpr(pParse, TK_EQ, pColRef, pRhs); if( pItem->fg.jointype & (JT_LEFT|JT_LTORJ|JT_RIGHT) ){ joinType = EP_OuterON; }else{ joinType = EP_InnerON; } sqlite3SetJoinExpr(pTerm, pItem->iCursor, joinType); whereClauseInsert(pWC, pTerm, TERM_DYNAMIC); } |
︙ | ︙ | |||
157981 157982 157983 157984 157985 157986 157987 157988 157989 157990 157991 157992 157993 157994 | int addrCont; /* Jump here to skip a row */ const WhereTerm *pTerm; /* For looping over WHERE clause terms */ const WhereTerm *pWCEnd; /* Last WHERE clause term */ Parse *pParse = pWInfo->pParse; /* Parsing context */ Vdbe *v = pParse->pVdbe; /* VDBE under construction */ WhereLoop *pLoop = pLevel->pWLoop; /* The loop being coded */ int iCur; /* Cursor for table getting the filter */ assert( pLoop!=0 ); assert( v!=0 ); assert( pLoop->wsFlags & WHERE_BLOOMFILTER ); addrOnce = sqlite3VdbeAddOp0(v, OP_Once); VdbeCoverage(v); do{ | > > > > | 158051 158052 158053 158054 158055 158056 158057 158058 158059 158060 158061 158062 158063 158064 158065 158066 158067 158068 | int addrCont; /* Jump here to skip a row */ const WhereTerm *pTerm; /* For looping over WHERE clause terms */ const WhereTerm *pWCEnd; /* Last WHERE clause term */ Parse *pParse = pWInfo->pParse; /* Parsing context */ Vdbe *v = pParse->pVdbe; /* VDBE under construction */ WhereLoop *pLoop = pLevel->pWLoop; /* The loop being coded */ int iCur; /* Cursor for table getting the filter */ IndexedExpr *saved_pIdxEpr; /* saved copy of Parse.pIdxEpr */ saved_pIdxEpr = pParse->pIdxEpr; pParse->pIdxEpr = 0; assert( pLoop!=0 ); assert( v!=0 ); assert( pLoop->wsFlags & WHERE_BLOOMFILTER ); addrOnce = sqlite3VdbeAddOp0(v, OP_Once); VdbeCoverage(v); do{ |
︙ | ︙ | |||
158037 158038 158039 158040 158041 158042 158043 | sqlite3ReleaseTempReg(pParse, r1); }else{ Index *pIdx = pLoop->u.btree.pIndex; int n = pLoop->u.btree.nEq; int r1 = sqlite3GetTempRange(pParse, n); int jj; for(jj=0; jj<n; jj++){ | < | | 158111 158112 158113 158114 158115 158116 158117 158118 158119 158120 158121 158122 158123 158124 158125 158126 | sqlite3ReleaseTempReg(pParse, r1); }else{ Index *pIdx = pLoop->u.btree.pIndex; int n = pLoop->u.btree.nEq; int r1 = sqlite3GetTempRange(pParse, n); int jj; for(jj=0; jj<n; jj++){ assert( pIdx->pTable==pItem->pTab ); sqlite3ExprCodeLoadIndexColumn(pParse, pIdx, iCur, jj, r1+jj); } sqlite3VdbeAddOp4Int(v, OP_FilterAdd, pLevel->regFilter, 0, r1, n); sqlite3ReleaseTempRange(pParse, r1, n); } sqlite3VdbeResolveLabel(v, addrCont); sqlite3VdbeAddOp2(v, OP_Next, pLevel->iTabCur, addrTop+1); VdbeCoverage(v); |
︙ | ︙ | |||
158070 158071 158072 158073 158074 158075 158076 158077 158078 158079 158080 158081 158082 158083 | ** not able to do early evaluation of bloom filters that make use of ** the IN operator */ break; } } }while( iLevel < pWInfo->nLevel ); sqlite3VdbeJumpHere(v, addrOnce); } #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Allocate and populate an sqlite3_index_info structure. It is the ** responsibility of the caller to eventually release the structure | > | 158143 158144 158145 158146 158147 158148 158149 158150 158151 158152 158153 158154 158155 158156 158157 | ** not able to do early evaluation of bloom filters that make use of ** the IN operator */ break; } } }while( iLevel < pWInfo->nLevel ); sqlite3VdbeJumpHere(v, addrOnce); pParse->pIdxEpr = saved_pIdxEpr; } #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Allocate and populate an sqlite3_index_info structure. It is the ** responsibility of the caller to eventually release the structure |
︙ | ︙ | |||
162141 162142 162143 162144 162145 162146 162147 162148 162149 162150 162151 162152 162153 162154 | pWInfo->bOrderedInnerLoop = 0; if( pWInfo->pOrderBy ){ pWInfo->nOBSat = pFrom->isOrdered; if( pWInfo->wctrlFlags & WHERE_DISTINCTBY ){ if( pFrom->isOrdered==pWInfo->pOrderBy->nExpr ){ pWInfo->eDistinct = WHERE_DISTINCT_ORDERED; } }else{ pWInfo->revMask = pFrom->revLoop; if( pWInfo->nOBSat<=0 ){ pWInfo->nOBSat = 0; if( nLoop>0 ){ u32 wsFlags = pFrom->aLoop[nLoop-1]->wsFlags; if( (wsFlags & WHERE_ONEROW)==0 | > > > > | 162215 162216 162217 162218 162219 162220 162221 162222 162223 162224 162225 162226 162227 162228 162229 162230 162231 162232 | pWInfo->bOrderedInnerLoop = 0; if( pWInfo->pOrderBy ){ pWInfo->nOBSat = pFrom->isOrdered; if( pWInfo->wctrlFlags & WHERE_DISTINCTBY ){ if( pFrom->isOrdered==pWInfo->pOrderBy->nExpr ){ pWInfo->eDistinct = WHERE_DISTINCT_ORDERED; } if( pWInfo->pSelect->pOrderBy && pWInfo->nOBSat > pWInfo->pSelect->pOrderBy->nExpr ){ pWInfo->nOBSat = pWInfo->pSelect->pOrderBy->nExpr; } }else{ pWInfo->revMask = pFrom->revLoop; if( pWInfo->nOBSat<=0 ){ pWInfo->nOBSat = 0; if( nLoop>0 ){ u32 wsFlags = pFrom->aLoop[nLoop-1]->wsFlags; if( (wsFlags & WHERE_ONEROW)==0 |
︙ | ︙ | |||
162552 162553 162554 162555 162556 162557 162558 162559 162560 162561 162562 162563 162564 162565 | } #endif p->pExpr = sqlite3ExprDup(pParse->db, pExpr, 0); p->iDataCur = pTabItem->iCursor; p->iIdxCur = iIdxCur; p->iIdxCol = i; p->bMaybeNullRow = bMaybeNullRow; #ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS p->zIdxName = pIdx->zName; #endif pParse->pIdxEpr = p; if( p->pIENext==0 ){ sqlite3ParserAddCleanup(pParse, whereIndexedExprCleanup, pParse); } | > > > | 162630 162631 162632 162633 162634 162635 162636 162637 162638 162639 162640 162641 162642 162643 162644 162645 162646 | } #endif p->pExpr = sqlite3ExprDup(pParse->db, pExpr, 0); p->iDataCur = pTabItem->iCursor; p->iIdxCur = iIdxCur; p->iIdxCol = i; p->bMaybeNullRow = bMaybeNullRow; if( sqlite3IndexAffinityStr(pParse->db, pIdx) ){ p->aff = pIdx->zColAff[i]; } #ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS p->zIdxName = pIdx->zName; #endif pParse->pIdxEpr = p; if( p->pIENext==0 ){ sqlite3ParserAddCleanup(pParse, whereIndexedExprCleanup, pParse); } |
︙ | ︙ | |||
240167 240168 240169 240170 240171 240172 240173 | static void fts5SourceIdFunc( sqlite3_context *pCtx, /* Function call context */ int nArg, /* Number of args */ sqlite3_value **apUnused /* Function arguments */ ){ assert( nArg==0 ); UNUSED_PARAM2(nArg, apUnused); | | | 240248 240249 240250 240251 240252 240253 240254 240255 240256 240257 240258 240259 240260 240261 240262 | static void fts5SourceIdFunc( sqlite3_context *pCtx, /* Function call context */ int nArg, /* Number of args */ sqlite3_value **apUnused /* Function arguments */ ){ assert( nArg==0 ); UNUSED_PARAM2(nArg, apUnused); sqlite3_result_text(pCtx, "fts5: 2023-03-17 12:25:10 c5bd0ea3b5b2f3ed8e971c5fd6e85e8f06d8055d74df65612c3794138306e6ba", -1, SQLITE_TRANSIENT); } /* ** Return true if zName is the extension on one of the shadow tables used ** by this module. */ static int fts5ShadowName(const char *zName){ |
︙ | ︙ |
Changes to extsrc/sqlite3.h.
︙ | ︙ | |||
142 143 144 145 146 147 148 | ** been edited in any way since it was last checked in, then the last ** four hexadecimal digits of the hash may be modified. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ | | | | | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | ** been edited in any way since it was last checked in, then the last ** four hexadecimal digits of the hash may be modified. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.41.2" #define SQLITE_VERSION_NUMBER 3041002 #define SQLITE_SOURCE_ID "2023-03-17 12:25:10 c5bd0ea3b5b2f3ed8e971c5fd6e85e8f06d8055d74df65612c3794138306e6ba" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
︙ | ︙ |
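The header chunk above bumps SQLITE_VERSION to "3.41.2" and SQLITE_VERSION_NUMBER to 3041002. The numeric macro follows SQLite's documented encoding, X*1000000 + Y*1000 + Z, which this stdlib-only sketch reproduces (the helper names are illustrative, not part of SQLite's API):

```c
#include <assert.h>

/* Encode a version triple X.Y.Z the way SQLITE_VERSION_NUMBER does:
** X*1000000 + Y*1000 + Z, so 3.41.2 encodes to 3041002. */
int version_number(int major, int minor, int patch){
  return major*1000000 + minor*1000 + patch;
}

/* Recover the patch level from an encoded version number. */
int version_patch(int vnum){
  return vnum % 1000;
}
```

This is why a release check-in only needs to touch the two macros plus the SQLITE_SOURCE_ID string: 3041002 and "3.41.2" name the same release.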
Changes to skins/README.md.
︙ | ︙ | |||
19 20 21 22 23 24 25 | called "skins/newskin" below but you should use a new original name, of course.) 2. Add files skins/newskin/css.txt, skins/newskin/details.txt, skins/newskin/footer.txt, skins/newskin/header.txt, and skins/newskin/js.txt. Be sure to "fossil add" these files. | | | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 | called "skins/newskin" below but you should use a new original name, of course.) 2. Add files skins/newskin/css.txt, skins/newskin/details.txt, skins/newskin/footer.txt, skins/newskin/header.txt, and skins/newskin/js.txt. Be sure to "fossil add" these files. 3. Go to the tools/ directory and rerun "tclsh makemake.tcl". This step rebuilds the various makefiles so that they have dependencies on the skin files you just installed. 4. Edit the BuiltinSkin[] array near the top of the src/skins.c source file so that it describes and references the "newskin" skin. 5. Type "make" to rebuild. |
︙ | ︙ |
Changes to skins/ardoise/css.txt.
︙ | ︙ | |||
305 306 307 308 309 310 311 312 313 314 315 316 317 318 | display: inline-block; box-sizing: border-box; text-decoration: none; text-align: center; white-space: nowrap; cursor: pointer } @media (min-width:550px) { .container { width: 95% } .column, .columns { margin-left: 4% | > > > > > | 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 | display: inline-block; box-sizing: border-box; text-decoration: none; text-align: center; white-space: nowrap; cursor: pointer } input[type=submit]:disabled { color: rgb(70,70,70); background-color: rgb(153,153,153); } @media (min-width:550px) { .container { width: 95% } .column, .columns { margin-left: 4% |
︙ | ︙ | |||
1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 | .u-cf { content: ""; display: table; clear: both } div.forumSel { background-color: #3a3a3a; } .debug { background-color: #330; border: 2px solid #aa0; } .capsumOff { | > > > | 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 | .u-cf { content: ""; display: table; clear: both } div.forumSel { background-color: #3a3a3a; } body.forum .forumPosts.fileage a:visited { color: rgb(72, 144, 224); } .debug { background-color: #330; border: 2px solid #aa0; } .capsumOff { |
︙ | ︙ |
Changes to skins/blitz/css.txt.
︙ | ︙ | |||
561 562 563 564 565 566 567 568 569 570 571 572 573 574 | input[type="submit"]:hover, input[type="submit"]:focus { color: white !important; background-color: #648898; border-color: #648898; } /* Forms ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ input[type="email"], input[type="number"], input[type="search"], input[type="text"], | > > > > > | 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | input[type="submit"]:hover, input[type="submit"]:focus { color: white !important; background-color: #648898; border-color: #648898; } input[type="submit"]:disabled { color: rgb(128,128,128); background-color: rgb(153,153,153); } /* Forms ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ input[type="email"], input[type="number"], input[type="search"], input[type="text"], |
︙ | ︙ | |||
1112 1113 1114 1115 1116 1117 1118 | } span.timelineComment { padding: 0px 5px; } | | | 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 | } span.timelineComment { padding: 0px 5px; } /* Login/Logout ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ table.login_out { } table.login_out .login_out_label { font-weight: 700; text-align: right; |
︙ | ︙ | |||
1264 1265 1266 1267 1268 1269 1270 | .mainmenu:after, .row:after, .u-cf { content: ""; display: table; clear: both; } | > > > > | 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 | .mainmenu:after, .row:after, .u-cf { content: ""; display: table; clear: both; } body.forum .forumPosts.fileage a:visited { color: #648999; } |
Changes to skins/darkmode/css.txt.
︙ | ︙ | |||
124 125 126 127 128 129 130 131 132 133 134 135 136 137 | input[type=button]:hover, input[type=reset]:hover, input[type=submit]:hover { background-color: #FF4500f0; color: rgba(24,24,24,0.8); outline: 0 } .button:focus, button:focus, input[type=button]:focus, input[type=reset]:focus, input[type=submit]:focus { outline: 2px outset #333; border-color: #888; | > > > > | 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 | input[type=button]:hover, input[type=reset]:hover, input[type=submit]:hover { background-color: #FF4500f0; color: rgba(24,24,24,0.8); outline: 0 } input[type=submit]:disabled { color: #363636; background-color: #707070; } .button:focus, button:focus, input[type=button]:focus, input[type=reset]:focus, input[type=submit]:focus { outline: 2px outset #333; border-color: #888; |
︙ | ︙ | |||
554 555 556 557 558 559 560 | } body.forum .debug { background-color: #FF4500f0; color: rgba(24,24,24,0.8); } | | > > > > > > > > > > > > | 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 | } body.forum .debug { background-color: #FF4500f0; color: rgba(24,24,24,0.8); } body.forum .forumPosts.fileage tr:hover { background-color: #333; color: rgba(24,24,24,0.8); } body.forum .forumPosts.fileage tr:hover { background-color: #333; color: rgba(24,24,24,0.8); } body.forum .forumPosts.fileage tr:hover > td:nth-child(1), body.forum .forumPosts.fileage tr:hover > td:nth-child(3) { color: #ffffffe0; } body.forum .forumPostBody > div blockquote { border: 1px inset; padding: 0 0.5em; } body.forum .forumPosts.fileage a:visited { color: rgba(98, 150, 205, 0.9); } body.report table.report tr td { color: black } body.report table.report a { color: blue } body.tkt td.tktDspValue { color: black } body.tkt td.tktDspValue a { color: blue } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { background-color: #442800; } |
Changes to skins/eagle/css.txt.
︙ | ︙ | |||
396 397 398 399 400 401 402 403 404 405 406 407 408 409 | border: 1px solid white; } div.forumSel { background-color: #808080; } div.forumObs { color: white; } .fileage td { font-family: "courier new"; } div.filetreeline:hover { | > > > | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 | border: 1px solid white; } div.forumSel { background-color: #808080; } div.forumObs { color: white; } body.forum .forumPosts.fileage a:visited { color: rgba(176,176,176,1.0); } .fileage td { font-family: "courier new"; } div.filetreeline:hover { |
︙ | ︙ |
Changes to skins/xekri/css.txt.
︙ | ︙ | |||
1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 | } div.forumPostBody blockquote { border-width: 1pt; border-style: solid; padding: 0 0.5em; border-radius: 0.25em; } .debug { color: black; } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { | > > > > > > > | 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 | } div.forumPostBody blockquote { border-width: 1pt; border-style: solid; padding: 0 0.5em; border-radius: 0.25em; } body.forum .forumPosts.fileage a { color: #60c0ff; } body.forum .forumPosts.fileage a:visited { color: #40a0ff; } .debug { color: black; } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { |
︙ | ︙ |
Changes to src/alerts.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Logic for email notification, also known as "alerts" or "subscriptions". ** ** Are you looking for the code that reads and writes the internet ** email protocol? That is not here. See the "smtp.c" file instead. ** Yes, the choice of source code filenames is not the greatest, but ** it is not so bad that changing them seems justified. | | | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 | ** ** Logic for email notification, also known as "alerts" or "subscriptions". ** ** Are you looking for the code that reads and writes the internet ** email protocol? That is not here. See the "smtp.c" file instead. ** Yes, the choice of source code filenames is not the greatest, but ** it is not so bad that changing them seems justified. */ #include "config.h" #include "alerts.h" #include <assert.h> #include <time.h> /* ** Maximum size of the subscriberCode blob, in bytes |
︙ | ︙ | |||
57 58 59 60 61 62 63 | @ -- @ CREATE TABLE repository.subscriber( @ subscriberId INTEGER PRIMARY KEY, -- numeric subscriber ID. Internal use @ subscriberCode BLOB DEFAULT (randomblob(32)) UNIQUE, -- UUID for subscriber @ semail TEXT UNIQUE COLLATE nocase,-- email address @ suname TEXT, -- corresponding USER entry @ sverified BOOLEAN DEFAULT true, -- email address verified | | | | 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 | @ -- @ CREATE TABLE repository.subscriber( @ subscriberId INTEGER PRIMARY KEY, -- numeric subscriber ID. Internal use @ subscriberCode BLOB DEFAULT (randomblob(32)) UNIQUE, -- UUID for subscriber @ semail TEXT UNIQUE COLLATE nocase,-- email address @ suname TEXT, -- corresponding USER entry @ sverified BOOLEAN DEFAULT true, -- email address verified @ sdonotcall BOOLEAN, -- true for Do Not Call @ sdigest BOOLEAN, -- true for daily digests only @ ssub TEXT, -- baseline subscriptions @ sctime INTDATE, -- When this entry was created. unixtime @ mtime INTDATE, -- Last change. unixtime @ smip TEXT, -- IP address of last change @ lastContact INT -- Last contact. days since 1970 @ ); @ CREATE INDEX repository.subscriberUname @ ON subscriber(suname) WHERE suname IS NOT NULL; @ @ DROP TABLE IF EXISTS repository.pending_alert; @ -- Email notifications that need to be sent. @ -- @ -- The first character of the eventid determines the event type. @ -- Remaining characters determine the specific event. For example, @ -- 'c4413' means check-in with rid=4413. @ -- |
︙ | ︙ | |||
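The schema chunk above adds subscriber.lastContact, documented as whole days since 1970 rather than a raw unixtime, which keeps the stored value small and makes "stale subscriber" arithmetic trivial. The conversion is plain integer division by 86400 (Unix time ignores leap seconds); a minimal sketch:

```c
#include <assert.h>
#include <time.h>

/* Convert a unixtime into whole days since 1970-01-01,
** the representation the lastContact column uses. */
long days_since_1970(time_t unixtime){
  return (long)(unixtime / 86400);
}
```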
613 614 615 616 617 618 619 | blob_init(&p->out, 0, 0); }else if( fossil_strcmp(p->zDest, "relay")==0 ){ const char *zRelay = 0; emailerGetSetting(p, &zRelay, "email-send-relayhost"); if( zRelay ){ u32 smtpFlags = SMTP_DIRECT; if( mFlags & ALERT_TRACE ) smtpFlags |= SMTP_TRACE_STDOUT; | | > | 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 | blob_init(&p->out, 0, 0); }else if( fossil_strcmp(p->zDest, "relay")==0 ){ const char *zRelay = 0; emailerGetSetting(p, &zRelay, "email-send-relayhost"); if( zRelay ){ u32 smtpFlags = SMTP_DIRECT; if( mFlags & ALERT_TRACE ) smtpFlags |= SMTP_TRACE_STDOUT; p->pSmtp = smtp_session_new(domain_of_addr(p->zFrom), zRelay, smtpFlags); smtp_client_startup(p->pSmtp); } } return p; } /* |
︙ | ︙ |
Changes to src/allrepo.c.
︙ | ︙ | |||
106 107 108 109 110 111 112 113 114 115 116 117 118 119 | ** push Run a "push" on all repositories. Only the --verbose ** option is supported. ** ** rebuild Rebuild on all repositories. The command line options ** supported by the rebuild command itself, if any are ** present, are passed along verbatim. The --force and ** --randomize options are not supported. ** ** repack Look for extra compression in all repositories. ** ** sync Run a "sync" on all repositories. Only the --verbose ** and --unversioned and --share-links options are supported. ** ** set Run the "setting" or "set" commands on all | > > | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | ** push Run a "push" on all repositories. Only the --verbose ** option is supported. ** ** rebuild Rebuild on all repositories. The command line options ** supported by the rebuild command itself, if any are ** present, are passed along verbatim. The --force and ** --randomize options are not supported. ** ** remote Show remote hosts for all repositories. ** ** repack Look for extra compression in all repositories. ** ** sync Run a "sync" on all repositories. Only the --verbose ** and --unversioned and --share-links options are supported. ** ** set Run the "setting" or "set" commands on all |
︙ | ︙ |
Changes to src/backlink.c.
︙ | ︙ | |||
250 251 252 253 254 255 256 | char *zTarget = blob_buffer(target); int nTarget = blob_size(target); backlink_create(p, zTarget, nTarget); return 1; } | | | > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > | | | | | | | | | | | | | | | | | | | | | | | | | 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 | char *zTarget = blob_buffer(target); int nTarget = blob_size(target); backlink_create(p, zTarget, nTarget); return 1; } /* No-op routines for the rendering callbacks that we do not need */ static void mkdn_noop_prolog(Blob *b, void *v){ return; } static void (*mkdn_noop_epilog)(Blob*, void*) = mkdn_noop_prolog; static void mkdn_noop_footnotes(Blob *b1, const Blob *b2, void *v){ return; } static void mkdn_noop_blockcode(Blob *b1, Blob *b2, void *v){ return; } static void (*mkdn_noop_blockquote)(Blob*, Blob*, void*) = mkdn_noop_blockcode; static void (*mkdn_noop_blockhtml)(Blob*, Blob*, void*) = mkdn_noop_blockcode; static void mkdn_noop_header(Blob *b1, Blob *b2, int i, void *v){ return; } static void (*mkdn_noop_hrule)(Blob*, void*) = mkdn_noop_prolog; static void (*mkdn_noop_list)(Blob*, Blob*, int, void*) = mkdn_noop_header; static void (*mkdn_noop_listitem)(Blob*, Blob*, int, void*) = mkdn_noop_header; static void (*mkdn_noop_paragraph)(Blob*, Blob*, void*) = mkdn_noop_blockcode; static void mkdn_noop_table(Blob *b1, Blob *b2, Blob *b3, void *v){ return; } static void (*mkdn_noop_table_cell)(Blob*, Blob*, int, void*) = mkdn_noop_header; static void (*mkdn_noop_table_row)(Blob*, Blob*, int, void*) = mkdn_noop_header; static void mkdn_noop_footnoteitm(Blob *b1, const Blob *b2, int i1, int i2, void *v){ return; } static int mkdn_noop_autolink(Blob *b1, Blob *b2, enum mkd_autolink e, void *v){ return 1; } static int mkdn_noop_codespan(Blob *b1, Blob *b2, int i, void *v){ return 1; } static int mkdn_noop_emphasis(Blob *b1, Blob *b2, char c, void *v){ return 1; } static int (*mkdn_noop_dbl_emphas)(Blob*, Blob*, char, void*) = mkdn_noop_emphasis; static int mkdn_noop_image(Blob *b1, Blob *b2, Blob *b3, Blob *b4, void *v){ return 1; } static int mkdn_noop_linebreak(Blob *b1, void *v){ return 1; } static int mkdn_noop_r_html_tag(Blob *b1, Blob *b2, void *v){ return 1; } static int (*mkdn_noop_tri_emphas)(Blob*, Blob*, char, void*) = mkdn_noop_emphasis; static int mkdn_noop_footnoteref(Blob *b1, const Blob *b2, const Blob *b3, int i1, int i2, void *v){ return 1; } /* ** Scan markdown text and add self-hyperlinks to the BACKLINK table. */ void markdown_extract_links( char *zInputText, Backlink *p ){ struct mkd_renderer html_renderer = { /* prolog */ mkdn_noop_prolog, /* epilog */ mkdn_noop_epilog, /* footnotes */ mkdn_noop_footnotes, /* blockcode */ mkdn_noop_blockcode, /* blockquote */ mkdn_noop_blockquote, /* blockhtml */ mkdn_noop_blockhtml, /* header */ mkdn_noop_header, /* hrule */ mkdn_noop_hrule, /* list */ mkdn_noop_list, /* listitem */ mkdn_noop_listitem, /* paragraph */ mkdn_noop_paragraph, /* table */ mkdn_noop_table, /* table_cell */ mkdn_noop_table_cell, /* table_row */ mkdn_noop_table_row, /* footnoteitm*/ mkdn_noop_footnoteitm, /* autolink */ mkdn_noop_autolink, /* codespan */ mkdn_noop_codespan, /* dbl_emphas */ mkdn_noop_dbl_emphas, /* emphasis */ mkdn_noop_emphasis, /* image */ mkdn_noop_image, /* linebreak */ mkdn_noop_linebreak, /* link */ backlink_md_link, /* r_html_tag */ mkdn_noop_r_html_tag, /* tri_emphas */ mkdn_noop_tri_emphas, /* footnoteref*/ mkdn_noop_footnoteref, 0, /* entity */ 0, /* normal_text */ "*_", /* emphasis characters */ 0 /* client data */ }; Blob out, in;
︙ | ︙ |
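The backlink.c chunk above replaces NULL renderer slots with explicit mkdn_noop_* callbacks, so the markdown engine can invoke every slot without NULL checks; only the link callback (backlink_md_link) does real work. A stdlib-only sketch of this null-object pattern with invented names (Fossil's real struct mkd_renderer has many more slots):

```c
#include <assert.h>

/* A tiny renderer with two callback slots; every slot is always callable. */
typedef struct Renderer Renderer;
struct Renderer {
  int (*on_text)(const char*, int*);
  int (*on_link)(const char*, int*);
};

/* No-op callback: accepts its input, changes nothing, reports success. */
int cb_noop(const char *z, int *pnLink){ (void)z; (void)pnLink; return 1; }

/* Real callback: counts links, the way backlink_md_link() records them. */
int cb_count_link(const char *z, int *pnLink){ (void)z; (*pnLink)++; return 1; }

/* The engine calls both slots unconditionally -- no NULL checks needed. */
int render(const Renderer *p, const char *z, int *pnLink){
  return p->on_text(z, pnLink) && p->on_link(z, pnLink);
}

int count_links_demo(void){
  Renderer r = { cb_noop, cb_count_link };
  int nLink = 0;
  render(&r, "[home](/index)", &nLink);
  return nLink;
}
```

Sharing one no-op per signature (as the diff does by aliasing function pointers like mkdn_noop_epilog to mkdn_noop_prolog) keeps the boilerplate small.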
Changes to src/db.c.
︙ | ︙ | |||
2726 2727 2728 2729 2730 2731 2732 | db_multi_exec( "UPDATE user SET cap='s', pw=%Q" " WHERE login=%Q", fossil_random_password(10), zUser ); if( !setupUserOnly ){ db_multi_exec( "INSERT OR IGNORE INTO user(login,pw,cap,info)" | | | 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 | db_multi_exec( "UPDATE user SET cap='s', pw=%Q" " WHERE login=%Q", fossil_random_password(10), zUser ); if( !setupUserOnly ){ db_multi_exec( "INSERT OR IGNORE INTO user(login,pw,cap,info)" " VALUES('anonymous',hex(randomblob(8)),'hz','Anon');" "INSERT OR IGNORE INTO user(login,pw,cap,info)" " VALUES('nobody','','gjorz','Nobody');" "INSERT OR IGNORE INTO user(login,pw,cap,info)" " VALUES('developer','','ei','Dev');" "INSERT OR IGNORE INTO user(login,pw,cap,info)" " VALUES('reader','','kptw','Reader');" ); |
︙ | ︙ |
Changes to src/diff.c.
︙ | ︙ | |||
2884 2885 2886 2887 2888 2889 2890 | ** NULL then the compile-time default is used (which gets propagated ** to JS-side state by certain pages). */ int diff_context_lines(DiffConfig *pCfg){ const int dflt = 5; if(pCfg!=0){ int n = pCfg->nContext; | | | | 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 | ** NULL then the compile-time default is used (which gets propagated ** to JS-side state by certain pages). */ int diff_context_lines(DiffConfig *pCfg){ const int dflt = 5; if(pCfg!=0){ int n = pCfg->nContext; if( n==0 && (pCfg->diffFlags & DIFF_CONTEXT_EX)==0 ) n = dflt; return n<0 ? 0x7ffffff : n; }else{ return dflt; } } /* ** Extract the width of columns for side-by-side diff. Supply an |
︙ | ︙ | |||
3145 3146 3147 3148 3149 3150 3151 | } /* Undocumented and unsupported flags used for development ** debugging and analysis: */ if( find_option("debug",0,0)!=0 ) diffFlags |= DIFF_DEBUG; if( find_option("raw",0,0)!=0 ) diffFlags |= DIFF_RAW; } | | | 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 | } /* Undocumented and unsupported flags used for development ** debugging and analysis: */ if( find_option("debug",0,0)!=0 ) diffFlags |= DIFF_DEBUG; if( find_option("raw",0,0)!=0 ) diffFlags |= DIFF_RAW; } if( (z = find_option("context","c",1))!=0 && (f = atoi(z))!=0 ){ pCfg->nContext = f; diffFlags |= DIFF_CONTEXT_EX; } if( (z = find_option("width","W",1))!=0 && (f = atoi(z))>0 ){ pCfg->wColumn = f; } if( find_option("linenum","n",0)!=0 ) diffFlags |= DIFF_LINENO; |
︙ | ︙ |
Changes to src/diffcmd.c.
︙ | ︙ | |||
366 367 368 369 370 371 372 | */ void diff_begin(DiffConfig *pCfg){ if( (pCfg->diffFlags & DIFF_BROWSER)!=0 ){ tempDiffFilename = fossil_temp_filename(); tempDiffFilename = sqlite3_mprintf("%z.html", tempDiffFilename); diffOut = fossil_freopen(tempDiffFilename,"wb",stdout); if( diffOut==0 ){ | | | 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 | */ void diff_begin(DiffConfig *pCfg){ if( (pCfg->diffFlags & DIFF_BROWSER)!=0 ){ tempDiffFilename = fossil_temp_filename(); tempDiffFilename = sqlite3_mprintf("%z.html", tempDiffFilename); diffOut = fossil_freopen(tempDiffFilename,"wb",stdout); if( diffOut==0 ){ fossil_fatal("unable to create temporary file \"%s\"", tempDiffFilename); } #ifndef _WIN32 signal(SIGINT, diff_www_interrupt); #else SetConsoleCtrlHandler(diff_console_ctrl_handler, TRUE); #endif |
︙ | ︙ | |||
1074 1075 1076 1077 1078 1079 1080 | ** as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** -b|--browser Show the diff output in a web-browser ** --by Shorthand for "--browser -y" ** -ci|--checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program. Overrides "diff-command" | | > | 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 | ** as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** -b|--browser Show the diff output in a web-browser ** --by Shorthand for "--browser -y" ** -ci|--checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program. Overrides "diff-command" ** -c|--context N Show N lines of context around each change, with ** negative N meaning show all content ** --diff-binary BOOL Include binary files with external commands ** --exec-abs-paths Force absolute path names on external commands ** --exec-rel-paths Force relative path names on external commands ** -r|--from VERSION Select VERSION as source for the diff ** -w|--ignore-all-space Ignore white space when comparing lines ** -i|--internal Use internal diff logic ** --json Output formatted as JSON |
︙ | ︙ |
Changes to src/file.c.
︙ | ︙ | |||
57 58 59 60 61 62 63 64 65 66 67 68 69 70 | ** the target pathname of the symbolic link. ** ** RepoFILE Like SymFILE if allow-symlinks is true, or like ** ExtFILE if allow-symlinks is false. In other words, ** symbolic links are only recognized as something different ** from files or directories if allow-symlinks is true. */ #define ExtFILE 0 /* Always follow symlinks */ #define RepoFILE 1 /* Follow symlinks if and only if allow-symlinks is OFF */ #define SymFILE 2 /* Never follow symlinks */ #include <dirent.h> #if defined(_WIN32) # define DIR _WDIR | > | 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 | ** the target pathname of the symbolic link. ** ** RepoFILE Like SymFILE if allow-symlinks is true, or like ** ExtFILE if allow-symlinks is false. In other words, ** symbolic links are only recognized as something different ** from files or directories if allow-symlinks is true. */ #include <stdlib.h> #define ExtFILE 0 /* Always follow symlinks */ #define RepoFILE 1 /* Follow symlinks if and only if allow-symlinks is OFF */ #define SymFILE 2 /* Never follow symlinks */ #include <dirent.h> #if defined(_WIN32) # define DIR _WDIR |
︙ | ︙ |
Changes to src/finfo.c.
︙ | ︙ | |||
270 271 272 273 274 275 276 | ** Usage: %fossil cat FILENAME ... ?OPTIONS? ** ** Print on standard output the content of one or more files as they exist ** in the repository. The version currently checked out is shown by default. ** Other versions may be specified using the -r option. ** ** Options: | > | | > > > > > > > > > | > | 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 | ** Usage: %fossil cat FILENAME ... ?OPTIONS? ** ** Print on standard output the content of one or more files as they exist ** in the repository. The version currently checked out is shown by default. ** Other versions may be specified using the -r option. ** ** Options: ** -o|--out OUTFILE For exactly one given FILENAME, write to OUTFILE ** -R|--repository REPO Extract artifacts from repository REPO ** -r VERSION The specific check-in containing the file ** ** See also: [[finfo]] */ void cat_cmd(void){ int i; Blob content, fname; const char *zRev; const char *zFileName; db_find_and_open_repository(0, 0); zRev = find_option("r","r",1); zFileName = find_option("out","o",1); /* We should be done with options.. */ verify_all_options(); if ( zFileName && g.argc>3 ){ fossil_fatal("output file can only be given when retrieving a single file"); } for(i=2; i<g.argc; i++){ file_tree_name(g.argv[i], &fname, 0, 1); blob_zero(&content); historical_blob(zRev, blob_str(&fname), &content, 1); if ( g.argc==3 && zFileName ){ blob_write_to_file(&content, zFileName); }else{ blob_write_to_file(&content, "-"); } blob_reset(&fname); blob_reset(&content); } } /* Values for the debug= query parameter to finfo */ #define FINFO_DEBUG_MLINK 0x01 |
︙ | ︙ |
Changes to src/forum.c.
︙ | ︙ | |||
1390 1391 1392 1393 1394 1395 1396 | ** ** n=N The number of threads to show on each page ** x=X Skip the first X threads ** s=Y Search for term Y. */ void forum_main_page(void){ Stmt q; | | > > | 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 | ** ** n=N The number of threads to show on each page ** x=X Skip the first X threads ** s=Y Search for term Y. */ void forum_main_page(void){ Stmt q; int iLimit = 0, iOfst, iCnt; int srchFlags; const int isSearch = P("s")!=0; char const *zLimit = 0; login_check_credentials(); srchFlags = search_restrict(SRCH_FORUM); if( !g.perm.RdForum ){ login_needed(g.anon.RdForum); return; } style_set_current_feature("forum"); |
︙ | ︙ | |||
1421 1422 1423 1424 1425 1426 1427 | if( (srchFlags & SRCH_FORUM)!=0 ){ if( search_screen(SRCH_FORUM, 0) ){ style_submenu_element("Recent Threads","%R/forum"); style_finish_page(); return; } } | > > > | > > > > > > > > > > | 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 | if( (srchFlags & SRCH_FORUM)!=0 ){ if( search_screen(SRCH_FORUM, 0) ){ style_submenu_element("Recent Threads","%R/forum"); style_finish_page(); return; } } cookie_read_parameter("n","forum-n"); zLimit = P("n"); if( zLimit!=0 ){ iLimit = atoi(zLimit); if( iLimit>=0 && P("udc")!=0 ){ cookie_write_parameter("n","forum-n",0); } } if( iLimit<=0 ){ cgi_replace_query_parameter("n", fossil_strdup("25")) /*for the sake of Max, below*/; iLimit = 25; } style_submenu_entry("n","Max:",4,0); iOfst = atoi(PD("x","0")); iCnt = 0; if( db_table_exists("repository","forumpost") ){ db_prepare(&q, "WITH thread(age,duration,cnt,root,last) AS (" " SELECT" " julianday('now') - max(fmtime)," |
︙ | ︙ |
Changes to src/main.c.
︙ | ︙ | |||
134 135 136 137 138 139 140 141 142 143 144 145 146 147 | void *xPostEval; /* Optional, called after Tcl_Eval*(). */ void *pPostContext; /* Optional, provided to xPostEval(). */ }; #endif struct Global { int argc; char **argv; /* Command-line arguments to the program */ char *nameOfExe; /* Full path of executable. */ const char *zErrlog; /* Log errors to this file, if not NULL */ const char *zPhase; /* Phase of operation, for use by the error log ** and for deriving $canonical_page TH1 variable */ int isConst; /* True if the output is unchanging & cacheable */ const char *zVfsName; /* The VFS to use for database connections */ sqlite3 *db; /* The connection to the databases */ | > | 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 | void *xPostEval; /* Optional, called after Tcl_Eval*(). */ void *pPostContext; /* Optional, provided to xPostEval(). */ }; #endif struct Global { int argc; char **argv; /* Command-line arguments to the program */ char **argvOrig; /* Original g.argv prior to removing options */ char *nameOfExe; /* Full path of executable. */ const char *zErrlog; /* Log errors to this file, if not NULL */ const char *zPhase; /* Phase of operation, for use by the error log ** and for deriving $canonical_page TH1 variable */ int isConst; /* True if the output is unchanging & cacheable */ const char *zVfsName; /* The VFS to use for database connections */ sqlite3 *db; /* The connection to the databases */ |
︙ | ︙ | |||
442 443 444 445 446 447 448 | /* Maintenance reminder: we do not stop at a "--" flag here, ** instead delegating that to find_option(). Doing it here ** introduces some weird corner cases, as covered in forum thread ** 4382bbc66757c39f. e.g. (fossil -U -- --args ...) is handled ** differently when we stop at "--" here. */ if( fossil_strcmp(z, "args")==0 ) break; } | | > > > > | 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 | /* Maintenance reminder: we do not stop at a "--" flag here, ** instead delegating that to find_option(). Doing it here ** introduces some weird corner cases, as covered in forum thread ** 4382bbc66757c39f. e.g. (fossil -U -- --args ...) is handled ** differently when we stop at "--" here. */ if( fossil_strcmp(z, "args")==0 ) break; } if( (int)i>=g.argc-1 ){ g.argvOrig = fossil_malloc( sizeof(char*)*(g.argc+1) ); memcpy(g.argvOrig, g.argv, sizeof(g.argv[0])*(g.argc+1)); return; } zFileName = g.argv[i+1]; if( strcmp(zFileName,"-")==0 ){ inFile = stdin; }else if( !file_isfile(zFileName, ExtFILE) ){ fossil_fatal("Not an ordinary file: \"%s\"", zFileName); }else{ |
︙ | ︙ | |||
465 466 467 468 469 470 471 | } inFile = NULL; blob_to_utf8_no_bom(&file, 1); z = blob_str(&file); for(k=0, nLine=1; z[k]; k++) if( z[k]=='\n' ) nLine++; if( nLine>100000000 ) fossil_fatal("too many command-line arguments"); nArg = g.argc + nLine*2; | | | 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 | } inFile = NULL; blob_to_utf8_no_bom(&file, 1); z = blob_str(&file); for(k=0, nLine=1; z[k]; k++) if( z[k]=='\n' ) nLine++; if( nLine>100000000 ) fossil_fatal("too many command-line arguments"); nArg = g.argc + nLine*2; newArgv = fossil_malloc( sizeof(char*)*nArg*2 + 2); for(j=0; j<i; j++) newArgv[j] = g.argv[j]; blob_rewind(&file); while( nLine-->0 && (n = blob_line(&file, &line))>0 ){ /* Reminder: ^^^ nLine check avoids that embedded NUL bytes in the ** --args file causes nLine to be less than blob_line() will end ** up reporting, as such a miscount leads to an illegal memory |
︙ | ︙ | |||
508 509 510 511 512 513 514 515 516 517 518 519 520 521 | } } i += 2; while( (int)i<g.argc ) newArgv[j++] = g.argv[i++]; newArgv[j] = 0; g.argc = j; g.argv = newArgv; } #ifdef FOSSIL_ENABLE_TCL /* ** Make a deep copy of the provided argument array and return it. */ static char **copy_args(int argc, char **argv){ | > > | 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 | } } i += 2; while( (int)i<g.argc ) newArgv[j++] = g.argv[i++]; newArgv[j] = 0; g.argc = j; g.argv = newArgv; g.argvOrig = &g.argv[j+1]; memcpy(g.argvOrig, g.argv, sizeof(g.argv[0])*(j+1)); } #ifdef FOSSIL_ENABLE_TCL /* ** Make a deep copy of the provided argument array and return it. */ static char **copy_args(int argc, char **argv){ |
︙ | ︙ |
Changes to src/markdown.c.
︙ | ︙ | |||
110 111 112 113 114 115 116 | #define MKD_CELL_ALIGN_DEFAULT 0 #define MKD_CELL_ALIGN_LEFT 1 #define MKD_CELL_ALIGN_RIGHT 2 #define MKD_CELL_ALIGN_CENTER 3 /* LEFT | RIGHT */ #define MKD_CELL_ALIGN_MASK 3 #define MKD_CELL_HEAD 4 | < < < < < < < < < < < < < < | 110 111 112 113 114 115 116 117 118 119 120 121 122 123 | #define MKD_CELL_ALIGN_DEFAULT 0 #define MKD_CELL_ALIGN_LEFT 1 #define MKD_CELL_ALIGN_RIGHT 2 #define MKD_CELL_ALIGN_CENTER 3 /* LEFT | RIGHT */ #define MKD_CELL_ALIGN_MASK 3 #define MKD_CELL_HEAD 4 #endif /* INTERFACE */ #define BLOB_COUNT(pBlob,el_type) (blob_size(pBlob)/sizeof(el_type)) #define COUNT_FOOTNOTES(pBlob) BLOB_COUNT(pBlob,struct footnote) #define CAST_AS_FOOTNOTES(pBlob) ((struct footnote*)blob_buffer(pBlob)) |
︙ | ︙ |
Changes to src/rebuild.c.
︙ | ︙ | |||
603 604 605 606 607 608 609 | ** ** fossil rebuild --compress-only ** ** The name for this command is stolen from the "git repack" command that ** does approximately the same thing in Git. */ void repack_command(void){ | | | > > > > > | | | > > > > > > | > > > > > > | > | < < < < > > | < < > > > | 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 | ** ** fossil rebuild --compress-only ** ** The name for this command is stolen from the "git repack" command that ** does approximately the same thing in Git. */ void repack_command(void){ i64 nByte = 0; int nDelta = 0; int runVacuum = 0; verify_all_options(); if( g.argc==3 ){ db_open_repository(g.argv[2]); }else if( g.argc==2 ){ db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); if( g.argc!=2 ){ usage("?REPOSITORY-FILENAME?"); } db_close(1); db_open_repository(g.zRepositoryName); }else{ usage("?REPOSITORY-FILENAME?"); } db_unprotect(PROTECT_ALL); nByte = extra_deltification(&nDelta); if( nDelta>0 ){ if( nDelta==1 ){ fossil_print("1 new delta saves %,lld bytes\n", nByte); }else{ fossil_print("%d new deltas save %,lld bytes\n", nDelta, nByte); } runVacuum = 1; }else{ fossil_print("no new compression opportunities found\n"); } if( runVacuum ){ fossil_print("Vacuuming the database... "); fflush(stdout); db_multi_exec("VACUUM"); fossil_print("done\n"); } } /* ** COMMAND: rebuild ** ** Usage: %fossil rebuild ?REPOSITORY? ?OPTIONS? |
︙ | ︙ |
Changes to src/schema.c.
︙ | ︙ | |||
389 390 391 392 393 394 395 | @ @ -- Assignments of tags to artifacts. Note that we allow tags to @ -- have values assigned to them. So we are not really dealing with @ -- tags here. These are really properties. But we are going to @ -- keep calling them tags because in many cases the value is ignored. @ -- @ CREATE TABLE tagxref( | | > | > | > | 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 | @ @ -- Assignments of tags to artifacts. Note that we allow tags to @ -- have values assigned to them. So we are not really dealing with @ -- tags here. These are really properties. But we are going to @ -- keep calling them tags because in many cases the value is ignored. @ -- @ CREATE TABLE tagxref( @ tagid INTEGER REFERENCES tag, -- The tag being added, removed, @ -- or propagated @ tagtype INTEGER, -- 0:-,cancel 1:+,single 2:*,propagate @ srcid INTEGER REFERENCES blob, -- Artifact tag originates from, or @ -- 0 for propagated tags @ origid INTEGER REFERENCES blob, -- Artifact holding propagated tag @ -- (any artifact type with a P-card) @ value TEXT, -- Value of the tag. Might be NULL. @ mtime TIMESTAMP, -- Time of addition or removal. Julian day @ rid INTEGER REFERENCE blob, -- Artifact tag is applied to @ UNIQUE(rid, tagid) @ ); @ CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime); @ |
︙ | ︙ |
Changes to src/security_audit.c.
︙ | ︙ | |||
98 99 100 101 102 103 104 105 | const char *zAnonCap; /* Capabilities of user "anonymous" and "nobody" */ const char *zDevCap; /* Capabilities of user group "developer" */ const char *zReadCap; /* Capabilities of user group "reader" */ const char *zPubPages; /* GLOB pattern for public pages */ const char *zSelfCap; /* Capabilities of self-registered users */ int hasSelfReg = 0; /* True if able to self-register */ const char *zPublicUrl; /* Canonical access URL */ char *z; | > | | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | const char *zAnonCap; /* Capabilities of user "anonymous" and "nobody" */ const char *zDevCap; /* Capabilities of user group "developer" */ const char *zReadCap; /* Capabilities of user group "reader" */ const char *zPubPages; /* GLOB pattern for public pages */ const char *zSelfCap; /* Capabilities of self-registered users */ int hasSelfReg = 0; /* True if able to self-register */ const char *zPublicUrl; /* Canonical access URL */ Blob cmd; char *z; int n, i; CapabilityString *pCap; char **azCSP; /* Parsed content security policy */ login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; |
︙ | ︙ | |||
689 690 691 692 693 694 695 696 697 698 699 700 701 702 | @ <blockquote><pre> @ INSERT INTO private SELECT rid FROM blob WHERE content IS NULL; @ </pre></blockquote> @ </p> table_of_public_phantoms(); @ </li> } @ </ol> style_finish_page(); } /* ** WEBPAGE: takeitprivate | > > > > > > > > > > > | 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 | @ <blockquote><pre> @ INSERT INTO private SELECT rid FROM blob WHERE content IS NULL; @ </pre></blockquote> @ </p> table_of_public_phantoms(); @ </li> } blob_init(&cmd, 0, 0); for(i=0; g.argvOrig[i]!=0; i++){ blob_append_escaped_arg(&cmd, g.argvOrig[i], 0); } @ <li><p> @ The command that generated this page: @ <blockquote> @ <tt>%h(blob_str(&cmd))</tt> @ </blockquote></li> blob_zero(&cmd); @ </ol> style_finish_page(); } /* ** WEBPAGE: takeitprivate |
︙ | ︙ |
Changes to src/smtp.c.
︙ | ︙ | |||
575 576 577 578 579 580 581 | }while( bMore ); if( iCode!=250 ) return 1; return 0; } /* ** The input is a base email address of the form "local@domain". | | > | | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 | }while( bMore ); if( iCode!=250 ) return 1; return 0; } /* ** The input is a base email address of the form "local@domain". ** Return a pointer to just the "domain" part, or 0 if the string ** contains no "@". */ const char *domain_of_addr(const char *z){ while( z[0] && z[0]!='@' ) z++; if( z[0]==0 ) return 0; return z+1; } /* |
︙ | ︙ | |||
621 622 623 624 625 626 627 | zRelay = find_option("relayhost",0,1); verify_all_options(); if( g.argc<5 ) usage("EMAIL FROM TO ..."); blob_read_from_file(&body, g.argv[2], ExtFILE); zFrom = g.argv[3]; nTo = g.argc-4; azTo = (const char**)g.argv+4; | | | | 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 | zRelay = find_option("relayhost",0,1); verify_all_options(); if( g.argc<5 ) usage("EMAIL FROM TO ..."); blob_read_from_file(&body, g.argv[2], ExtFILE); zFrom = g.argv[3]; nTo = g.argc-4; azTo = (const char**)g.argv+4; zFromDomain = domain_of_addr(zFrom); if( zRelay!=0 && zRelay[0]!= 0) { smtpFlags |= SMTP_DIRECT; zToDomain = zRelay; }else{ zToDomain = domain_of_addr(azTo[0]); } p = smtp_session_new(zFromDomain, zToDomain, smtpFlags, smtpPort); if( p->zErr ){ fossil_fatal("%s", p->zErr); } fossil_print("Connection to \"%s\"\n", p->zHostname); smtp_client_startup(p); |
︙ | ︙ |
Changes to src/style.c.
︙ | ︙ | |||
1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 | @ g.zRepositoryName = %h(g.zRepositoryName)<br /> @ load_average() = %f(load_average())<br /> #ifndef _WIN32 @ RSS = %.2f(fossil_rss()/1000000.0) MB</br /> #endif @ cgi_csrf_safe(0) = %d(cgi_csrf_safe(0))<br /> @ fossil_exe_id() = %h(fossil_exe_id())<br /> @ <hr /> P("HTTP_USER_AGENT"); P("SERVER_SOFTWARE"); cgi_print_all(showAll, 0); if( showAll && blob_size(&g.httpHeader)>0 ){ @ <hr /> @ <pre> | > > > > > > > > > > | 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 | @ g.zRepositoryName = %h(g.zRepositoryName)<br /> @ load_average() = %f(load_average())<br /> #ifndef _WIN32 @ RSS = %.2f(fossil_rss()/1000000.0) MB</br /> #endif @ cgi_csrf_safe(0) = %d(cgi_csrf_safe(0))<br /> @ fossil_exe_id() = %h(fossil_exe_id())<br /> if( g.perm.Admin ){ int k; for(k=0; g.argvOrig[k]; k++){ Blob t; blob_init(&t, 0, 0); blob_append_escaped_arg(&t, g.argvOrig[k], 0); @ argv[%d(k)] = %h(blob_str(&t))<br /> blob_zero(&t); } } @ <hr /> P("HTTP_USER_AGENT"); P("SERVER_SOFTWARE"); cgi_print_all(showAll, 0); if( showAll && blob_size(&g.httpHeader)>0 ){ @ <hr /> @ <pre> |
︙ | ︙ |
Changes to src/style.chat.css.
︙ | ︙ | |||
35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | border: 1px solid rgba(0,0,0,0.2); box-shadow: 0.2em 0.2em 0.2em rgba(0, 0, 0, 0.29); padding: 0.25em 0.5em; margin-top: 0; min-width: 9em /*avoid unsightly "underlap" with the neighboring .message-widget-tab element*/; white-space: normal; } body.chat .message-widget-content.wide { /* Special case for when embedding content which we really want to expand, namely iframes. */ width: 98%; } body.chat .message-widget-content label[for] { margin-left: 0.25em; | > > | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | border: 1px solid rgba(0,0,0,0.2); box-shadow: 0.2em 0.2em 0.2em rgba(0, 0, 0, 0.29); padding: 0.25em 0.5em; margin-top: 0; min-width: 9em /*avoid unsightly "underlap" with the neighboring .message-widget-tab element*/; white-space: normal; word-break: break-word /* so that full hashes wrap on narrow screens */; } body.chat .message-widget-content.wide { /* Special case for when embedding content which we really want to expand, namely iframes. */ width: 98%; } body.chat .message-widget-content label[for] { margin-left: 0.25em; |
︙ | ︙ |
Changes to src/tar.c.
︙ | ︙ | |||
702 703 704 705 706 707 708 | zName[n] = 0; *pzName = fossil_strdup(&zName[n+1]); return zName; } /* ** WEBPAGE: tarball | | | | | | | | > > > | | | | < | | > | | | 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 | zName[n] = 0; *pzName = fossil_strdup(&zName[n+1]); return zName; } /* ** WEBPAGE: tarball ** URL: /tarball/[VERSION/]NAME.tar.gz ** ** Generate a compressed tarball for the check-in specified by VERSION. ** The tarball is called NAME.tar.gz and has a top-level directory called ** NAME. ** ** The optional VERSION element defaults to "trunk" per the r= rules below. ** All of the following URLs are equivalent: ** ** /tarball/release/xyz.tar.gz ** /tarball?r=release&name=xyz.tar.gz ** /tarball/xyz.tar.gz?r=release ** /tarball?name=release/xyz.tar.gz ** ** Query parameters: ** ** name=[CKIN/]NAME The optional CKIN component of the name= parameter ** identifies the check-in from which the tarball is ** constructed. If CKIN is omitted and there is no ** r= query parameter, then use "trunk". NAME is the ** name of the download file. The top-level directory ** in the generated tarball is called NAME with the ** file extension removed. ** ** r=TAG TAG identifies the check-in that is turned into a ** compressed tarball. The default value is "trunk". ** If r= is omitted and if the name= query parameter ** contains one "/" character then the part of the ** name= value before the / becomes the TAG and the ** part of the name= value after the / is the download ** filename. If no check-in is specified by either ** name= or r=, then "trunk" is used. ** ** in=PATTERN Only include files that match the comma-separated ** list of GLOB patterns in PATTERN, as with ex= ** ** ex=PATTERN Omit any file that matches PATTERN. PATTERN is a ** comma-separated list of GLOB patterns, where each ** pattern can optionally be quoted using ".."
or '..'. |
︙ | ︙ |
Changes to src/unversioned.c.
︙ | ︙ | |||
278 279 280 281 282 283 284 285 286 287 288 289 290 291 | ** --glob PATTERN Remove files that match ** --like PATTERN Remove files that match ** ** sync ?URL? Synchronize the state of all unversioned files with ** the remote repository URL. The most recent version ** of each file is propagated to all repositories and ** all prior versions are permanently forgotten. ** ** Options: ** -v|--verbose Extra diagnostic output ** -n|--dry-run Show what would have happened ** ** touch FILE ... Update the TIMESTAMP on all of the listed files ** | > | 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 | ** --glob PATTERN Remove files that match ** --like PATTERN Remove files that match ** ** sync ?URL? Synchronize the state of all unversioned files with ** the remote repository URL. The most recent version ** of each file is propagated to all repositories and ** all prior versions are permanently forgotten. ** The remote account requires the 'y' capability. ** ** Options: ** -v|--verbose Extra diagnostic output ** -n|--dry-run Show what would have happened ** ** touch FILE ... Update the TIMESTAMP on all of the listed files ** |
︙ | ︙ | |||
462 463 464 465 466 467 468 | zNoContent ); } } db_finalize(&q); sqlite3_free(zPattern); }else if( memcmp(zCmd, "revert", nCmd)==0 ){ | | | 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 | zNoContent ); } } db_finalize(&q); sqlite3_free(zPattern); }else if( memcmp(zCmd, "revert", nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED|SYNC_UV_REVERT); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( memcmp(zCmd, "remove", nCmd)==0 || memcmp(zCmd, "rm", nCmd)==0 || memcmp(zCmd, "delete", nCmd)==0 ){ int i; |
︙ | ︙ |
Changes to src/winhttp.c.
︙ | ︙ | |||
568 569 570 571 572 573 574 575 576 577 578 579 580 581 | HANDLE hStoppedEvent; WSADATA wd; DualSocket ds; int idCnt = 0; int iPort = mnPort; Blob options; wchar_t zTmpPath[MAX_PATH]; const char *zSkin; #if USE_SEE const char *zSavedKey = 0; size_t savedKeySize = 0; #endif blob_zero(&options); | > > | 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 | HANDLE hStoppedEvent; WSADATA wd; DualSocket ds; int idCnt = 0; int iPort = mnPort; Blob options; wchar_t zTmpPath[MAX_PATH]; char *zTempSubDirPath; const char *zTempSubDir = "fossil"; const char *zSkin; #if USE_SEE const char *zSavedKey = 0; size_t savedKeySize = 0; #endif blob_zero(&options); |
︙ | ︙ | |||
659 660 661 662 663 664 665 666 667 668 669 670 671 672 | fossil_fatal("unable to open listening socket on any" " port in the range %d..%d", mnPort, mxPort); } } if( !GetTempPathW(MAX_PATH, zTmpPath) ){ fossil_panic("unable to get path to the temporary directory."); } if( g.fHttpTrace ){ zTempPrefix = mprintf("httptrace"); }else{ zTempPrefix = mprintf("%sfossil_server_P%d", fossil_unicode_to_utf8(zTmpPath), iPort); } fossil_print("Temporary files: %s*\n", zTempPrefix); | > > > > > > | 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 | fossil_fatal("unable to open listening socket on any" " port in the range %d..%d", mnPort, mxPort); } } if( !GetTempPathW(MAX_PATH, zTmpPath) ){ fossil_panic("unable to get path to the temporary directory."); } /* Use a subdirectory for temp files (can then be excluded from virus scan) */ zTempSubDirPath = mprintf("%s%s\\",fossil_path_to_utf8(zTmpPath),zTempSubDir); if ( !file_mkdir(zTempSubDirPath, ExtFILE, 0) || file_isdir(zTempSubDirPath, ExtFILE)==1 ){ wcscpy(zTmpPath, fossil_utf8_to_path(zTempSubDirPath, 1)); } if( g.fHttpTrace ){ zTempPrefix = mprintf("httptrace"); }else{ zTempPrefix = mprintf("%sfossil_server_P%d", fossil_unicode_to_utf8(zTmpPath), iPort); } fossil_print("Temporary files: %s*\n", zTempPrefix); |
︙ | ︙ |
Changes to src/xfer.c.
︙ | ︙ | |||
352 353 354 355 356 357 358 | goto end_accept_unversioned_file; } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ | > > | > | 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 | goto end_accept_unversioned_file; } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ if( !isWriter ){ blob_appendf(&pXfer->err, "Write permissions for unversioned files missing"); goto end_accept_unversioned_file; } /* Make sure we have a valid g.rcvid marker */ content_rcvid_init(0); /* Check to see if current content really should be overwritten. Ideally, ** a uvfile card should never have been sent unless the overwrite should ** occur. But do not trust the sender. Double-check. |
︙ | ︙ | |||
1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 | ** Server accepts an unversioned file from the client. */ if( blob_eq(&xfer.aToken[0], "uvfile") ){ xfer_accept_unversioned_file(&xfer, g.perm.WrUnver); if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* gimme HASH ** | > | 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 | ** Server accepts an unversioned file from the client. */ if( blob_eq(&xfer.aToken[0], "uvfile") ){ xfer_accept_unversioned_file(&xfer, g.perm.WrUnver); if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) fossil_print("%%%%%%%% xfer.err: '%s'\n", blob_str(&xfer.err)); nErr++; break; } }else /* gimme HASH ** |
︙ | ︙ |
Changes to src/zip.c.
︙ | ︙ | |||
862 863 864 865 866 867 868 | archive_cmd(ARCHIVE_SQLAR); } /* ** WEBPAGE: sqlar ** WEBPAGE: zip ** | > > > > > | | > < < > | | | | | | | | > > > | | > | | | | > | | 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 | archive_cmd(ARCHIVE_SQLAR); } /* ** WEBPAGE: sqlar ** WEBPAGE: zip ** ** URLs: ** ** /zip/[VERSION/]NAME.zip ** /sqlar/[VERSION/]NAME.sqlar ** ** Generate a ZIP Archive or an SQL Archive for the check-in specified by ** VERSION. The archive is called NAME.zip or NAME.sqlar and has a top-level ** directory called NAME. ** ** The optional VERSION element defaults to "trunk" per the r= rules below. ** All of the following URLs are equivalent: ** ** /zip/release/xyz.zip ** /zip?r=release&name=xyz.zip ** /zip/xyz.zip?r=release ** /zip?name=release/xyz.zip ** ** Query parameters: ** ** name=[CKIN/]NAME The optional CKIN component of the name= parameter ** identifies the check-in from which the archive is ** constructed. If CKIN is omitted and there is no ** r= query parameter, then use "trunk". NAME is the ** name of the download file. The top-level directory ** in the generated archive is called NAME with the ** file extension removed. ** ** r=TAG TAG identifies the check-in that is turned into an ** SQL or ZIP archive. The default value is "trunk". ** If r= is omitted and if the name= query parameter ** contains one "/" character then the part of the ** name= value before the / becomes the TAG and the ** part of the name= value after the / is the download ** filename. If no check-in is specified by either ** name= or r=, then "trunk" is used. ** ** in=PATTERN Only include files that match the comma-separated ** list of GLOB patterns in PATTERN, as with ex= ** ** ex=PATTERN Omit any file that matches PATTERN.
PATTERN is a ** comma-separated list of GLOB patterns, where each ** pattern can optionally be quoted using ".." or '..'. |
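As an illustration only of how a comma-separated GLOB list like the `in=`/`ex=` parameters accept might filter filenames (Fossil implements this in C with SQLite-style GLOB matching, and the quoting rules above are not handled here), a rough Python sketch using `fnmatch`:

```python
from fnmatch import fnmatchcase

def glob_list_match(name, patterns):
    """Return True if name matches any GLOB pattern in a
    comma-separated list, conceptually like in=/ex=."""
    return any(fnmatchcase(name, p.strip()) for p in patterns.split(","))

files = ["src/main.c", "doc/guide.pdf", "doc/notes.md", "Makefile"]

# in=src/*,Makefile combined with ex=*.pdf
included = [f for f in files
            if glob_list_match(f, "src/*,Makefile")
            and not glob_list_match(f, "*.pdf")]
print(included)  # → ['src/main.c', 'Makefile']
```

Note that, like SQLite GLOB, `fnmatchcase` is case-sensitive; real archive generation also applies these filters to paths within the check-in, not to a flat file list.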
︙ | ︙ |
Changes to www/build.wiki.
︙ | ︙ | |||
109 110 111 112 113 114 115 | out wherever they may be found, so that is typically all you need to do.</p> <p>For more advanced use cases, see the [./ssl.wiki#openssl-bin|OpenSSL discussion in the "TLS and Fossil" document].</p> <li><p> | | | | > > > | 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | out wherever they may be found, so that is typically all you need to do.</p> <p>For more advanced use cases, see the [./ssl.wiki#openssl-bin|OpenSSL discussion in the "TLS and Fossil" document].</p> <li><p> To build a statically linked binary, you can <i>try</i> adding the <b>--static</b> option, but [https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead | it may well not work]. If your platform of choice is affected by this, the simplest workaround we're aware of is to build a Fossil container, then [./containers.md#static | extract the static executable from it]. <li><p> To enable the native [./th1.md#tclEval | Tcl integration feature], add the <b>--with-tcl=1</b> and <b>--with-tcl-private-stubs=1</b> options. <li><p> Other configuration options can be seen by running
︙ | ︙ |
Changes to www/caps/index.md.
|
| | | > > > > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 | # Administering User Capabilities (a.k.a. Permissions) Fossil includes a powerful [role-based access control system][rbac] which affects which users have which capabilities(^Some parts of the Fossil code call these “permissions” instead, but since there is [a clear and present risk of confusion](#webonly) with operating system level file permissions in this context, we avoid using that term for Fossil’s RBAC capability flags in these pages.) within a given [served][svr] Fossil repository. We call this the “caps” system for short. Fossil stores a user’s caps as an unordered string of ASCII characters, one capability per character, [currently](./impl.md#choices) limited to [alphanumerics][an]. Caps are case-sensitive: “**A**” and “**a**” are different user capabilities. This is a complex topic, so some sub-topics have their own documents:
︙ | ︙ |
Changes to www/changes.wiki.
1 2 3 4 | <title>Change Log</title> <h2 id='v2_21'>Changes for version 2.21 (2023-02-25)</h2> * Users can request a password reset. This feature is disabled by default. Use | > > > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | <title>Change Log</title> <h2 id='v2_22'>Changes for version 2.22 (pending)</h2> <h2 id='v2_21'>Changes for version 2.21 (2023-02-25)</h2> * Users can request a password reset. This feature is disabled by default. Use the new [/help?cmd=self-pw-reset|self-pw-reset property] to enable it. New web pages [/help?cmd=/resetpw|/resetpw] and [/help?cmd=/reqpwreset|/reqpwreset] added. * Add the [/help?cmd=repack|fossil repack] command (together with [/help?cmd=all|fossil all repack]) as a convenient way to optimize the size of one or all of the repositories on a system. * Add the ability to put text descriptions on ticket report formats. * Upgrade the test-find-pivot command to the [/help/merge-base|merge-base command].
︙ | ︙ |
Changes to www/containers.md.
︙ | ︙ | |||
8 9 10 11 12 13 14 | [Docker]: https://www.docker.com/ [OCI]: https://opencontainers.org/ ## 1. Quick Start | | | | 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | [Docker]: https://www.docker.com/ [OCI]: https://opencontainers.org/ ## 1. Quick Start Fossil ships a `Dockerfile` at the top of its source tree, [here][DF], which you can build like so: ``` $ docker build -t fossil . ``` If the image built successfully, you can create a container from it and test that it runs: |
︙ | ︙ | |||
53 54 55 56 57 58 59 60 61 62 63 64 65 66 | Contrast the raw “`docker`” commands above, which create an _unversioned_ image called `fossil:latest` and from that a container simply called `fossil`. The unversioned names are more convenient for interactive use, while the versioned ones are good for CI/CD type applications since they avoid a conflict with past versions; it lets you keep old containers around for quick roll-backs while replacing them with fresh ones. ## 2. <a id="storage"></a>Repository Storage Options If you want the container to serve an existing repository, there are at least two right ways to do it. | > > | 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | Contrast the raw “`docker`” commands above, which create an _unversioned_ image called `fossil:latest` and from that a container simply called `fossil`. The unversioned names are more convenient for interactive use, while the versioned ones are good for CI/CD type applications since they avoid a conflict with past versions; it lets you keep old containers around for quick roll-backs while replacing them with fresh ones. [DF]: /file/Dockerfile ## 2. <a id="storage"></a>Repository Storage Options If you want the container to serve an existing repository, there are at least two right ways to do it. |
︙ | ︙ | |||
348 349 350 351 352 353 354 | Our 2-stage build process uses Alpine Linux only as a build host. Once we’ve got everything reduced to the two key static binaries — Fossil and BusyBox — we throw all the rest of it away. A secondary benefit falls out of this process for free: it’s arguably the easiest way to build a purely static Fossil binary for Linux. Most | | > > | 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 | Our 2-stage build process uses Alpine Linux only as a build host. Once we’ve got everything reduced to the two key static binaries — Fossil and BusyBox — we throw all the rest of it away. A secondary benefit falls out of this process for free: it’s arguably the easiest way to build a purely static Fossil binary for Linux. Most modern Linux distros make this [surprisingly difficult][lsl], but Alpine’s back-to-basics nature makes static builds work the way they used to, back in the day. If that’s all you’re after, you can do so as easily as this: ``` $ docker build -t fossil . $ docker create --name fossil-static-tmp fossil $ docker cp fossil-static-tmp:/jail/bin/fossil . $ docker container rm fossil-static-tmp ``` The resulting binary is the single largest file inside that container, at about 6 MiB. (It’s built stripped.) [lsl]: https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead ## 5. <a id="args"></a>Container Build Arguments ### <a id="pkg-vers"></a> 5.1 Package Versions You can override the default versions of Fossil and BusyBox that get |
︙ | ︙ | |||
725 726 727 728 729 730 731 | We’ll assume your Fossil repository stores something called “`myproject`” within `~/museum/myproject/repo.fossil`, named according to the reasons given [above](#repo-inside). We’ll make consistent use of this naming scheme in the examples below so that you will be able to replace the “`myproject`” element of the various file and path names. | > > > | > > > | > | < | 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 | We’ll assume your Fossil repository stores something called “`myproject`” within `~/museum/myproject/repo.fossil`, named according to the reasons given [above](#repo-inside). We’ll make consistent use of this naming scheme in the examples below so that you will be able to replace the “`myproject`” element of the various file and path names. If you use [the stock `Dockerfile`][DF] to generate your base image, `nspawn` won’t recognize it as containing an OS unless you put a line like this into the first stage: ``` COPY containers/os-release /etc/os-release ``` That will let you produce a `systemd` “machine” via the OCI image: ``` $ make container $ docker container export $(make container-version) | machinectl import-tar - myproject ``` Next, create `/etc/systemd/nspawn/myproject.nspawn`: ---- ``` [Exec] WorkingDirectory=/jail Parameters=bin/fossil server \ |
︙ | ︙ |
Changes to www/fossil-v-git.wiki.
︙ | ︙ | |||
356 357 358 359 360 361 362 | embedded into Fossil itself. Fossil's build system and test suite are largely based on Tcl.⁵ All of this is quite portable. About half of Git's code is POSIX C, and about a third is POSIX shell code. This is largely why the so-called "Git for Windows" distributions (both [https://git-scm.com/download/win|first-party] and [https://gitforwindows.org/|third-party]) are actually an | | > | | 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 | embedded into Fossil itself. Fossil's build system and test suite are largely based on Tcl.⁵ All of this is quite portable. About half of Git's code is POSIX C, and about a third is POSIX shell code. This is largely why the so-called "Git for Windows" distributions (both [https://git-scm.com/download/win|first-party] and [https://gitforwindows.org/|third-party]) are actually an [https://www.msys2.org/wiki/Home/|MSYS POSIX portability environment] bundled with all of the Git stuff, because it would be too painful to port Git natively to Windows. Git is a foreign citizen on Windows, speaking to it only through a translator.⁶ While Fossil does lean toward POSIX norms when given a choice — LF-only line endings are treated as first-class citizens over CR+LF, for example — the Windows build of Fossil is truly native. The third-party extensions to Git tend to follow this same pattern. [https://docs.gitlab.com/ee/install/install_methods.html#microsoft-windows | GitLab isn't portable to Windows at all], for example. For that matter, GitLab isn't even officially supported on macOS, the BSDs, or uncommon Linuxes! We have many users who regularly build and run Fossil on all of these systems. <h3 id="vs-linux">2.5 Linux vs. SQLite</h3> |
︙ | ︙ |
Changes to www/glossary.md.
︙ | ︙ | |||
197 198 199 200 201 202 203 | move right 0.1 line dotted right until even with previous line.end move right 0.05 box invis "clones of Fossil itself, SQLite, etc." ljust ``` [asdis]: /help?cmd=autosync | | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | | | | | > > > > > > | < < < < | | | > > > > > > > > > > > > > > > > > > | | | 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 | move right 0.1 line dotted right until even with previous line.end move right 0.05 box invis "clones of Fossil itself, SQLite, etc." ljust ``` [asdis]: /help?cmd=autosync [backup]: ./backup.md [CAP]: ./cap-theorem.md [cloned]: /help?cmd=clone [pull]: /help?cmd=pull [push]: /help?cmd=push [svrcmd]: /help?cmd=server [sync]: /help?cmd=sync [repository]: #repo [repositories]: #repo ## Version / Revision / Hash / UUID <a id="version" name="hash"></a> These terms all mean the same thing: a long hexadecimal [SHA hash value](./hashes.md) that uniquely identifies a particular [check-in](#ci). We’ve listed the alternatives in decreasing preference order: * **Version** and **revision** are near-synonyms in common usage. Fossil’s code and documentation use both interchangeably because Fossil was created to manage the development of the SQLite project, which formerly used [CVS], the Concurrent Versions System. 
CVS in turn started out as a front-end to [RCS], the Revision Control System, but even though CVS uses “version” in its name, it numbers check-ins using a system derived from RCS’s scheme, which it calls “Revisions” in user-facing output. Fossil inherits this confusion honestly. * **Hash** refers to the [SHA1 or SHA3-256 hash](./hashes.md) of the content of the checked-in data, uniquely identifying that version of the managed files. It is a strictly correct synonym, used more often in low-level contexts than the term “version.” * **UUID** is a deprecated term still found in many parts of the Fossil internals and (decreasingly) its documentation. The problem with using this as a synonym for a Fossil-managed version of the managed files is that there are [standards][UUID] defining the format of a “UUID,” none of which Fossil follows, not even the [version 4][ruuid] (random) format, the type of UUID closest in meaning and usage to a Fossil hash.(^A pre-Fossil 2.0 style SHA1 hash is 160 bits, not the 128 bits many people expect for a proper UUID, and even if you truncate it to 128 bits to create a “good enough” version prefix, the 6 bits reserved in the UUID format for the variant code cannot make a correct declaration except by a random 1:64 chance. The SHA3-256 option allowed in Fossil 2.0 and higher doesn’t help with this confusion, making a Fossil version hash twice as large as a proper UUID. Alas, the term will never be fully obliterated from use since there are columns in the Fossil repository format that use the obsolete term; we cannot change this without breaking backwards compatibility.) You will find all of these synonyms used in the Fossil documentation. Some day we may settle on a single term, but it doesn’t seem likely. 
[CVS]: https://en.wikipedia.org/wiki/Concurrent_Versions_System [hash]: #version [RCS]: https://en.wikipedia.org/wiki/Revision_Control_System [ruuid]: https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_(random) [snfs]: https://en.wikipedia.org/wiki/Snapshot_(computer_storage)#File_systems [UUID]: https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_(random) [version]: #version ## Check-in <a id="check-in" name="ci"></a> A [version] of the project’s files that have been committed to the [repository]; as such, it is sometimes called a “commit” instead. A check-in is a snapshot of the project at an instant in time, as seen from a single [check-out’s](#co) perspective. It is sometimes styled “`CHECKIN`”, especially in command documentation where any [valid check-in name][ciname] can be used. * There is a harmless conflation of terms here: any of the various synonyms for [version] may be used where “check-in” is more accurate, and vice versa, because there is a 1:1 relation between them. A check-in *has* a version, but a version suffices to uniquely look up a particular commit.[^snapshot] * Combining both sets of synonyms results in a list of terms that is confusing to new Fossil users, but it’s easy enough to internalize the concepts. [Committing][commit] creates a *commit.* It may also be said to create a checked-in *version* of a particular *revision* of the project’s files, thus creating an immutable *snapshot* of the project’s state at the time of the commit. Fossil users find each of these different words for the same concept useful for expressive purposes among ourselves, but to Fossil proper, they all mean the same thing. * Check-ins are immutable. * Check-ins exist only inside the repository. Contrast a [check-out](#co). * Check-ins may have [one or more names][ciname], but only the [hash] is globally unique, across all time; we call it the check-in’s canonical name. 
The other names are either imprecise, contextual, or change their meaning over time and across [repositories]. [^snapshot]: You may sometimes see the term “snapshot” used as a synonym for a check-in or the version number identifying said check-in. We must warn against this usage because there is a potential confusion here: [the `stash` command][stash] uses the term “snapshot,” as does [the `undo` system][undo] to make a distinction with check-ins. Nevertheless, there is a conceptual overlap here between Fossil and systems that do use the term “snapshot,” the primary distinction being that Fossil will capture only changes to files you’ve [added][add] to the [repository], not to everything in [the check-out directory](#co) at the time of the snapshot. (Thus [the `extras` command][extras].) Contrast a snapshot taken by a virtual machine system or a [snapshotting file system][snfs], which captures changes to everything on the managed storage volume. [add]: /help?cmd=add [ciname]: ./checkin_names.wiki [extras]: /help?cmd=extras [stash]: /help?cmd=stash [undo]: /help?cmd=undo ## Check-out <a id="check-out" name="co"></a> A set of files extracted from a [repository] that represent a particular [check-in](#ci) of the [project](#project). * Unlike a check-in, a check-out is mutable. It may start out as a version of a particular check-in extracted from the repository, but the user is then free to make modifications to the checked-out files. Once those changes are formally [committed][commit], they become a new immutable check-in, derived from its parent check-in. * You can switch from one check-in to another within a check-out directory by passing those names to [the `fossil update` command][update]. * Check-outs relate to repositories in a one-to-many fashion: it is common to have a single repo clone on a machine but to have it [open] in [multiple working directories][mwd]. 
Check-out directories are associated with the repos they were created from by settings stored in the check-out directory. This is in the `.fslckout` file on POSIX type systems, but for historical compatibility reasons, it’s called `_FOSSIL_` by native Windows builds of Fossil. (Contrast the Cygwin and WSL Fossil binaries, which use POSIX file naming rules.) * In the same way that one cannot extract files from a zip archive |
︙ | ︙ | |||
370 371 372 373 374 375 376 377 | [edoc]: ./embeddeddoc.wiki [fef]: ./fileedit-page.md [fshr]: ./selfhost.wiki [wiki]: ./wikitheory.wiki <div style="height:50em" id="this-space-intentionally-left-blank"></div> | > > > > > > > > > > > > > > > | 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 | [edoc]: ./embeddeddoc.wiki [fef]: ./fileedit-page.md [fshr]: ./selfhost.wiki [wiki]: ./wikitheory.wiki ## <a id="cap"></a>Capability Fossil includes a powerful [role-based access control system][rbac] which affects which users have permission to do certain things within a given [repository]. You can read more about this complex topic [here](./caps/). Some people — and indeed certain parts of Fossil’s own code — use the term “permissions” instead, but since [operating system file permissions also play into this](./caps/#webonly), we prefer the term “capabilities” (or “caps” for short) when talking about Fossil’s RBAC system to avoid confusion here. [rbac]: https://en.wikipedia.org/wiki/Role-based_access_control <div style="height:50em" id="this-space-intentionally-left-blank"></div>
Changes to www/index.wiki.
︙ | ︙ | |||
84 85 86 87 88 89 90 | 8. <b>Free and Open-Source</b> - [../COPYRIGHT-BSD2.txt|2-clause BSD license]. <hr> <h3>Latest Release: 2.21 ([/timeline?c=version-2.21|2023-02-25])</h3> * [/uv/download.html|Download] | | | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | 8. <b>Free and Open-Source</b> - [../COPYRIGHT-BSD2.txt|2-clause BSD license]. <hr> <h3>Latest Release: 2.21 ([/timeline?c=version-2.21|2023-02-25])</h3> * [/uv/download.html|Download] * [./changes.wiki#v2_21|Change Summary] * [/timeline?p=version-2.21&bt=version-2.20&y=ci|Check-ins in version 2.21] * [/timeline?df=version-2.21&y=ci|Check-ins derived from the 2.21 release] * [/timeline?t=release|Timeline of all past releases] <hr> <h3>Quick Start</h3> |
︙ | ︙ |
Changes to www/mkindex.tcl.
︙ | ︙ | |||
21 22 23 24 25 26 27 | backup.md {Backing Up a Remote Fossil Repository} blame.wiki {The Annotate/Blame Algorithm Of Fossil} blockchain.md {Is Fossil A Blockchain?} branching.wiki {Branching, Forking, Merging, and Tagging} bugtheory.wiki {Bug Tracking In Fossil} build.wiki {Compiling and Installing Fossil} cap-theorem.md {Fossil and the CAP Theorem} | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | backup.md {Backing Up a Remote Fossil Repository} blame.wiki {The Annotate/Blame Algorithm Of Fossil} blockchain.md {Is Fossil A Blockchain?} branching.wiki {Branching, Forking, Merging, and Tagging} bugtheory.wiki {Bug Tracking In Fossil} build.wiki {Compiling and Installing Fossil} cap-theorem.md {Fossil and the CAP Theorem} caps/ {Administering User Capabilities (a.k.a. Permissions)} caps/admin-v-setup.md {Differences Between Setup and Admin Users} caps/ref.html {User Capability Reference} cgi.wiki {CGI Script Configuration Options} changes.wiki {Fossil Changelog} chat.md {Fossil Chat} checkin_names.wiki {Check-in And Version Names} checkin.wiki {Check-in Checklist} |
︙ | ︙ |
Changes to www/permutedindex.html.
︙ | ︙ | |||
22 23 24 25 26 27 28 | <li> <a href='https://fossil-scm.org/fossil-book/'>Fossil book</a> </ul> <h2 id="pindex">Other Documents:</h2> <ul> <li><a href="tech_overview.wiki">A Technical Overview Of The Design And Implementation Of Fossil</a></li> <li><a href="serverext.wiki">Adding Extensions To A Fossil Server Using CGI Scripts</a></li> <li><a href="adding_code.wiki">Adding New Features To Fossil</a></li> | | | 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 | <li> <a href='https://fossil-scm.org/fossil-book/'>Fossil book</a> </ul> <h2 id="pindex">Other Documents:</h2> <ul> <li><a href="tech_overview.wiki">A Technical Overview Of The Design And Implementation Of Fossil</a></li> <li><a href="serverext.wiki">Adding Extensions To A Fossil Server Using CGI Scripts</a></li> <li><a href="adding_code.wiki">Adding New Features To Fossil</a></li> <li><a href="caps/">Administering User Capabilities (a.k.a. Permissions)</a></li> <li><a href="backup.md">Backing Up a Remote Fossil Repository</a></li> <li><a href="whyusefossil.wiki">Benefits Of Version Control</a></li> <li><a href="branching.wiki">Branching, Forking, Merging, and Tagging</a></li> <li><a href="bugtheory.wiki">Bug Tracking In Fossil</a></li> <li><a href="cgi.wiki">CGI Script Configuration Options</a></li> <li><a href="serverext.wiki">CGI Server Extensions</a></li> <li><a href="checkin_names.wiki">Check-in And Version Names</a></li> |
︙ | ︙ |
Changes to www/server/windows/service.md.
︙ | ︙ | |||
44 45 46 47 48 49 50 51 52 53 54 55 56 57 | If you wish to serve a directory of repositories, the `fossil winsrv` command requires a slightly different set of options vs. `fossil server`: ``` fossil winsrv create --repository D:/Path/to/Repos --repolist ``` ### <a id='PowerShell'></a>Advanced service installation using PowerShell As great as `fossil winsrv` is, it does not have a one-to-one reflection of all of the `fossil server` [options](/help?cmd=server). When you need to use some of the more advanced options, such as `--https`, `--skin`, or `--extroot`, you will need to use PowerShell to configure and install the Windows service. | > > > > > > > > > > > > | 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 | If you wish to serve a directory of repositories, the `fossil winsrv` command requires a slightly different set of options vs. `fossil server`: ``` fossil winsrv create --repository D:/Path/to/Repos --repolist ``` ### Choice of Directory Considerations When files may be locked by virus scanning while the Fossil server is in use, it is prudent to exempt the directory it uses for temporary files from such scanning. Ordinarily, this will be a subdirectory named "fossil" in the temporary directory given by the Windows GetTempPath(...) API, [namely](https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-gettemppathw#remarks) the value of the first existing environment variable from `%TMP%`, `%TEMP%`, `%USERPROFILE%`, and `%SystemRoot%`; you can look up their actual values on your system by accessing the `/test_env` webpage. Excluding this subdirectory will avoid certain rare failures where the fossil.exe process is unable to use the directory normally during a scan. ### <a id='PowerShell'></a>Advanced service installation using PowerShell As great as `fossil winsrv` is, it does not have a one-to-one reflection of all of the `fossil server` [options](/help?cmd=server). 
When you need to use some of the more advanced options, such as `--https`, `--skin`, or `--extroot`, you will need to use PowerShell to configure and install the Windows service. |
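The environment-variable fallback order described in "Choice of Directory Considerations" above can be sketched as follows. This is an illustration of the documented GetTempPath() lookup order, not code that Fossil itself runs:

```python
import os

def windows_temp_path(env=os.environ):
    """Approximate the documented GetTempPath() lookup: return the
    value of the first existing environment variable among
    %TMP%, %TEMP%, %USERPROFILE%, %SystemRoot%."""
    for var in ("TMP", "TEMP", "USERPROFILE", "SystemRoot"):
        value = env.get(var)
        if value:
            return value
    return None

# Example with a hypothetical environment:
print(windows_temp_path({"TEMP": r"C:\Users\svc\AppData\Local\Temp"}))
# → C:\Users\svc\AppData\Local\Temp
```

A service account often has different values for these variables than your interactive login does, which is why checking `/test_env` from the running server is the reliable way to find the directory to exempt.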
︙ | ︙ |