Difference From release To trunk
2024-03-28
10:40  Make the shunned/unshunned list legible (not blue on black) in the xekri skin. (Leaf check-in: dbbc4800 user: stephan tags: trunk)
10:16  Fix skin/xekri/css.txt's div.sectionmenu margin-top to avoid truncating the top of the menu. Visible in /ci/CHECKIN_ID pages. (check-in: 004b433b user: stephan tags: trunk)
2023-11-02
19:37  For the "fossil sync" command, if the -v option is repeated, the HTTP_VERBOSE flag is set on the http_exchange() call, resulting in additional debugging output for the wire protocol. (check-in: 80896224 user: drh tags: trunk)
12:44  Check whether markdown paragraphs contain lists. Fixes an issue reported in b598ac56defddb2a. (Closed-Leaf check-in: 25028896 user: preben tags: markdown-multiple-sublists)
2023-11-01
18:56  Version 2.23 (check-in: 47362306 user: drh tags: trunk, release, version-2.23)
14:13  Update the built-in SQLite to version 3.44.0. (check-in: 72e14351 user: drh tags: trunk)
Changes to Dockerfile.
︙
New text (lines 79-91):

## ---------------------------------------------------------------------
## RUN!
## ---------------------------------------------------------------------
ENV PATH "/bin"
EXPOSE 8080/tcp
USER fossil
ENTRYPOINT [ "fossil", "server" ]
CMD [ \
     "--create", \
     "--jsmode", "bundled", \
     "--user", "admin", \
     "museum/repo.fossil" ]
Changes to VERSION.
New text (line 1):

2.24
Changes to auto.def.
︙
New text (lines 453-468):

      }
    }
  }
  if {$found} {
    define FOSSIL_ENABLE_SSL
    define-append EXTRA_CFLAGS $cflags
    define-append EXTRA_LDFLAGS $ldflags
    define-append CFLAGS $cflags
    define-append LDFLAGS $ldflags
    if {[info exists ssllibs]} {
      define-append LIBS $ssllibs
    } else {
      define-append LIBS -lssl -lcrypto
    }
    if {[info exists ::zlib_lib]} {
      define-append LIBS $::zlib_lib
︙
New text (lines 655-669):

  }
  set version $tclconfig(TCL_VERSION)$tclconfig(TCL_PATCH_LEVEL)
  msg-result "Found Tcl $version at $tclconfig(TCL_PREFIX)"
  if {!$tclprivatestubs} {
    define-append LIBS $libs
  }
  define-append EXTRA_CFLAGS $cflags
  define-append CFLAGS $cflags
  if {[info exists zlibpath] && $zlibpath eq "tree"} {
    #
    # NOTE: When using zlib in the source tree, prevent Tcl from
    #       pulling in the system one.
    #
    set tclconfig(TCL_LD_FLAGS) [string map [list -lz ""] \
      $tclconfig(TCL_LD_FLAGS)]
︙
Changes to autosetup/autosetup-find-tclsh.
New text (lines 1-17):

#!/bin/sh
# Looks for a suitable tclsh or jimsh in the PATH
# If not found, builds a bootstrap jimsh from source
# Prefer $autosetup_tclsh if is set in the environment
d=`dirname "$0"`
{ "$d/jimsh0" "$d/autosetup-test-tclsh"; } 2>/dev/null && exit 0
PATH="$PATH:$d"; export PATH
for tclsh in $autosetup_tclsh jimsh tclsh tclsh8.5 tclsh8.6; do
  { $tclsh "$d/autosetup-test-tclsh"; } 2>/dev/null && exit 0
done
echo 1>&2 "No installed jimsh or tclsh, building local bootstrap jimsh0"
for cc in ${CC_FOR_BUILD:-cc} gcc; do
  { $cc -o "$d/jimsh0" "$d/jimsh0.c"; } >/dev/null 2>&1 || continue
  "$d/jimsh0" "$d/autosetup-test-tclsh" && exit 0
done
echo 1>&2 "No working C compiler found. Tried ${CC_FOR_BUILD:-cc} and gcc."
echo false
Changes to extsrc/pikchr.c.
New text (lines 1-10):

/* This file is automatically generated by Lemon from input grammar
** source file "pikchr.y".
*/
/*
** Zero-Clause BSD license:
**
** Copyright (C) 2020-09-01 by D. Richard Hipp <drh@sqlite.org>
**
** Permission to use, copy, modify, and/or distribute this software for
** any purpose with or without fee is hereby granted.
︙
New text (lines 318-332):

  PPoint with;             /* Position constraint from WITH clause */
  char eWith;              /* Type of heading point on WITH clause */
  char cw;                 /* True for clockwise arc */
  char larrow;             /* Arrow at beginning (<- or <->) */
  char rarrow;             /* Arrow at end (-> or <->) */
  char bClose;             /* True if "close" is seen */
  char bChop;              /* True if "chop" is seen */
  char bAltAutoFit;        /* Always send both h and w into xFit() */
  unsigned char nTxt;      /* Number of text values */
  unsigned mProp;          /* Masks of properties set so far */
  unsigned mCalc;          /* Values computed from other constraints */
  PToken aTxt[5];          /* Text with .eCode holding TP flags */
  int iLayer;              /* Rendering order */
  int inDir, outDir;       /* Entry and exit directions */
  int nPath;               /* Number of path points */
︙
New text (lines 490-504):

static void pik_behind(Pik*,PObj*);
static PObj *pik_assert(Pik*,PNum,PToken*,PNum);
static PObj *pik_position_assert(Pik*,PPoint*,PToken*,PPoint*);
static PNum pik_dist(PPoint*,PPoint*);
static void pik_add_macro(Pik*,PToken *pId,PToken *pCode);

#line 523 "pikchr.c"
/**************** End of %include directives **********************************/
/* These constants specify the various numeric values for terminal symbols.
***************** Begin token definitions *************************************/
#ifndef T_ID
#define T_ID 1
#define T_EDGEPT 2
#define T_OF 3
︙
New text (lines 634-665):

**    zero the stack is dynamically sized using realloc()
**    pik_parserARG_SDECL     A static variable declaration for the %extra_argument
**    pik_parserARG_PDECL     A parameter declaration for the %extra_argument
**    pik_parserARG_PARAM     Code to pass %extra_argument as a subroutine parameter
**    pik_parserARG_STORE     Code to store %extra_argument into yypParser
**    pik_parserARG_FETCH     Code to extract %extra_argument from yypParser
**    pik_parserCTX_*         As pik_parserARG_ except for %extra_context
**    YYREALLOC          Name of the realloc() function to use
**    YYFREE             Name of the free() function to use
**    YYDYNSTACK         True if stack space should be extended on heap
**    YYERRORSYMBOL      is the code number of the error symbol.  If not
**                       defined, then do no error processing.
**    YYNSTATE           the combined number of states.
**    YYNRULE            the number of rules in the grammar
**    YYNTOKEN           Number of terminal symbols
**    YY_MAX_SHIFT       Maximum value for shift actions
**    YY_MIN_SHIFTREDUCE Minimum value for shift-reduce actions
**    YY_MAX_SHIFTREDUCE Maximum value for shift-reduce actions
**    YY_ERROR_ACTION    The yy_action[] code for syntax error
**    YY_ACCEPT_ACTION   The yy_action[] code for accept
**    YY_NO_ACTION       The yy_action[] code for no-op
**    YY_MIN_REDUCE      Minimum value for reduce actions
**    YY_MAX_REDUCE      Maximum value for reduce actions
**    YY_MIN_DSTRCTR     Minimum symbol value that has a destructor
**    YY_MAX_DSTRCTR     Maximum symbol value that has a destructor
*/
#ifndef INTERFACE
# define INTERFACE 1
#endif
/************* Begin control #defines *****************************************/
#define YYCODETYPE unsigned char
#define YYNOCODE 136
︙
New text (lines 679-745):

#define YYSTACKDEPTH 100
#endif
#define pik_parserARG_SDECL
#define pik_parserARG_PDECL
#define pik_parserARG_PARAM
#define pik_parserARG_FETCH
#define pik_parserARG_STORE
#define YYREALLOC realloc
#define YYFREE free
#define YYDYNSTACK 0
#define pik_parserCTX_SDECL Pik *p;
#define pik_parserCTX_PDECL ,Pik *p
#define pik_parserCTX_PARAM ,p
#define pik_parserCTX_FETCH Pik *p=yypParser->p;
#define pik_parserCTX_STORE yypParser->p=p;
#define YYFALLBACK 1
#define YYNSTATE             164
#define YYNRULE              156
#define YYNRULE_WITH_ACTION  116
#define YYNTOKEN             100
#define YY_MAX_SHIFT         163
#define YY_MIN_SHIFTREDUCE   287
#define YY_MAX_SHIFTREDUCE   442
#define YY_ERROR_ACTION      443
#define YY_ACCEPT_ACTION     444
#define YY_NO_ACTION         445
#define YY_MIN_REDUCE        446
#define YY_MAX_REDUCE        601
#define YY_MIN_DSTRCTR       100
#define YY_MAX_DSTRCTR       103
/************* End control #defines *******************************************/
#define YY_NLOOKAHEAD ((int)(sizeof(yy_lookahead)/sizeof(yy_lookahead[0])))

/* Define the yytestcase() macro to be a no-op if is not already defined
** otherwise.
**
** Applications can choose to define yytestcase() in the %include section
** to a macro that can assist in verifying code coverage.  For production
** code the yytestcase() macro should be turned off.  But it is useful
** for testing.
*/
#ifndef yytestcase
# define yytestcase(X)
#endif

/* Macro to determine if stack space has the ability to grow using
** heap memory.
*/
#if YYSTACKDEPTH<=0 || YYDYNSTACK
# define YYGROWABLESTACK 1
#else
# define YYGROWABLESTACK 0
#endif

/* Guarantee a minimum number of initial stack slots.
*/
#if YYSTACKDEPTH<=0
# undef YYSTACKDEPTH
# define YYSTACKDEPTH 2  /* Need a minimum stack size */
#endif

/* Next are the tables used to determine what action to take based on the
** current state and lookahead token.  These tables are used to implement
** functions that take a state number and lookahead value and return an
** action integer.
**
︙
New text (lines 1276-1292):

  int yyhwm;                    /* High-water mark of the stack */
#endif
#ifndef YYNOERRORRECOVERY
  int yyerrcnt;                 /* Shifts left before out of the error */
#endif
  pik_parserARG_SDECL                /* A place to hold %extra_argument */
  pik_parserCTX_SDECL                /* A place to hold %extra_context */
  yyStackEntry *yystackEnd;           /* Last entry in the stack */
  yyStackEntry *yystack;              /* The parser stack */
  yyStackEntry yystk0[YYSTACKDEPTH];  /* Initial stack space */
};
typedef struct yyParser yyParser;

#include <assert.h>
#ifndef NDEBUG
#include <stdio.h>
static FILE *yyTraceFILE = 0;
︙
New text (lines 1622-1701):

 /* 153 */ "edge ::= RIGHT",
 /* 154 */ "edge ::= LEFT",
 /* 155 */ "object ::= objectname",
};
#endif /* NDEBUG */

#if YYGROWABLESTACK
/*
** Try to increase the size of the parser stack.  Return the number
** of errors.  Return 0 on success.
*/
static int yyGrowStack(yyParser *p){
  int oldSize = 1 + (int)(p->yystackEnd - p->yystack);
  int newSize;
  int idx;
  yyStackEntry *pNew;

  newSize = oldSize*2 + 100;
  idx = (int)(p->yytos - p->yystack);
  if( p->yystack==p->yystk0 ){
    pNew = YYREALLOC(0, newSize*sizeof(pNew[0]));
    if( pNew==0 ) return 1;
    memcpy(pNew, p->yystack, oldSize*sizeof(pNew[0]));
  }else{
    pNew = YYREALLOC(p->yystack, newSize*sizeof(pNew[0]));
    if( pNew==0 ) return 1;
  }
  p->yystack = pNew;
  p->yytos = &p->yystack[idx];
#ifndef NDEBUG
  if( yyTraceFILE ){
    fprintf(yyTraceFILE,"%sStack grows from %d to %d entries.\n",
            yyTracePrompt, oldSize, newSize);
  }
#endif
  p->yystackEnd = &p->yystack[newSize-1];
  return 0;
}
#endif /* YYGROWABLESTACK */

#if !YYGROWABLESTACK
/* For builds that do no have a growable stack, yyGrowStack always
** returns an error.
*/
# define yyGrowStack(X) 1
#endif

/* Datatype of the argument to the memory allocated passed as the
** second argument to pik_parserAlloc() below.  This can be changed by
** putting an appropriate #define in the %include section of the input
** grammar.
*/
#ifndef YYMALLOCARGTYPE
# define YYMALLOCARGTYPE size_t
#endif

/* Initialize a new parser that has already been allocated.
*/
void pik_parserInit(void *yypRawParser pik_parserCTX_PDECL){
  yyParser *yypParser = (yyParser*)yypRawParser;
  pik_parserCTX_STORE
#ifdef YYTRACKMAXSTACKDEPTH
  yypParser->yyhwm = 0;
#endif
  yypParser->yystack = yypParser->yystk0;
  yypParser->yystackEnd = &yypParser->yystack[YYSTACKDEPTH-1];
#ifndef YYNOERRORRECOVERY
  yypParser->yyerrcnt = -1;
#endif
  yypParser->yytos = yypParser->yystack;
  yypParser->yystack[0].stateno = 0;
  yypParser->yystack[0].major = 0;
}

#ifndef pik_parser_ENGINEALWAYSONSTACK
/*
** This function allocates a new parser.
** The only argument is a pointer to a function which works like
** malloc.
︙
New text (lines 1743-1768):

** Note: during a reduce, the only symbols destroyed are those
** which appear on the RHS of the rule, but which are *not* used
** inside the C code.
*/
/********* Begin destructor definitions ***************************************/
    case 100: /* statement_list */
{
#line 511 "pikchr.y"
pik_elist_free(p,(yypminor->yy235));
#line 1777 "pikchr.c"
}
      break;
    case 101: /* statement */
    case 102: /* unnamed_statement */
    case 103: /* basetype */
{
#line 513 "pikchr.y"
pik_elem_free(p,(yypminor->yy162));
#line 1786 "pikchr.c"
}
      break;
/********* End destructor definitions *****************************************/
    default:  break;   /* If no destructor action specified: do nothing */
  }
}
︙
New text (lines 1788-1821):

}

/*
** Clear all secondary memory allocations from the parser
*/
void pik_parserFinalize(void *p){
  yyParser *pParser = (yyParser*)p;

  /* In-lined version of calling yy_pop_parser_stack() for each
  ** element left in the stack */
  yyStackEntry *yytos = pParser->yytos;
  while( yytos>pParser->yystack ){
#ifndef NDEBUG
    if( yyTraceFILE ){
      fprintf(yyTraceFILE,"%sPopping %s\n",
        yyTracePrompt,
        yyTokenName[yytos->major]);
    }
#endif
    if( yytos->major>=YY_MIN_DSTRCTR ){
      yy_destructor(pParser, yytos->major, &yytos->minor);
    }
    yytos--;
  }

#if YYGROWABLESTACK
  if( pParser->yystack!=pParser->yystk0 ) YYFREE(pParser->yystack);
#endif
}

#ifndef pik_parser_ENGINEALWAYSONSTACK
/*
** Deallocate and destroy a parser.  Destructors are called for
** all stack elements before shutting the parser down.
︙
New text (lines 1989-2006):

    fprintf(yyTraceFILE,"%sStack Overflow!\n",yyTracePrompt);
  }
#endif
  while( yypParser->yytos>yypParser->yystack ) yy_pop_parser_stack(yypParser);
  /* Here code is inserted which will execute if the parser
  ** stack every overflows */
/******** Begin %stack_overflow code ******************************************/
#line 545 "pikchr.y"
pik_error(p, 0, "parser stack overflow");
#line 2024 "pikchr.c"
/******** End %stack_overflow code ********************************************/
  pik_parserARG_STORE /* Suppress warning about unused %extra_argument var */
  pik_parserCTX_STORE
}

/*
** Print tracing information for a SHIFT action
︙
New text (lines 2036-2062):

  yypParser->yytos++;
#ifdef YYTRACKMAXSTACKDEPTH
  if( (int)(yypParser->yytos - yypParser->yystack)>yypParser->yyhwm ){
    yypParser->yyhwm++;
    assert( yypParser->yyhwm ==
            (int)(yypParser->yytos - yypParser->yystack) );
  }
#endif
  yytos = yypParser->yytos;
  if( yytos>yypParser->yystackEnd ){
    if( yyGrowStack(yypParser) ){
      yypParser->yytos--;
      yyStackOverflow(yypParser);
      return;
    }
    yytos = yypParser->yytos;
    assert( yytos <= yypParser->yystackEnd );
  }
  if( yyNewState > YY_MAX_SHIFT ){
    yyNewState += YY_MIN_REDUCE - YY_MIN_SHIFTREDUCE;
  }
  yytos->stateno = yyNewState;
  yytos->major = yyMajor;
  yytos->minor.yy0 = yyMinor;
  yyTraceShift(yypParser, yyNewState, "Shift");
}

/* For rule J, yyRuleInfoLhs[J] contains the symbol on the left-hand side
︙
New text (lines 2417 and following):

**      {  ... }           // User supplied code
**      #line <lineno> <thisfile>
**      break;
*/
/********** Begin reduce actions **********************************************/
        YYMINORTYPE yylhsminor;
      case 0: /* document ::= statement_list */
#line 549 "pikchr.y"
{pik_render(p,yymsp[0].minor.yy235);}
#line 2451 "pikchr.c"
        break;
      case 1: /* statement_list ::= statement */
#line 552 "pikchr.y"
{ yylhsminor.yy235 = pik_elist_append(p,0,yymsp[0].minor.yy162); }
#line 2456 "pikchr.c"
  yymsp[0].minor.yy235 = yylhsminor.yy235;
        break;
      case 2: /* statement_list ::= statement_list EOL statement */
#line 554 "pikchr.y"
{ yylhsminor.yy235 = pik_elist_append(p,yymsp[-2].minor.yy235,yymsp[0].minor.yy162); }
#line 2462 "pikchr.c"
  yymsp[-2].minor.yy235 = yylhsminor.yy235;
        break;
      case 3: /* statement ::= */
#line 557 "pikchr.y"
{ yymsp[1].minor.yy162 = 0; }
#line 2468 "pikchr.c"
        break;
      case 4: /* statement ::= direction */
#line 558 "pikchr.y"
{ pik_set_direction(p,yymsp[0].minor.yy0.eCode);  yylhsminor.yy162=0; }
#line 2473 "pikchr.c"
  yymsp[0].minor.yy162 = yylhsminor.yy162;
        break;
      case 5: /* statement ::= lvalue ASSIGN rvalue */
#line 559 "pikchr.y"
{pik_set_var(p,&yymsp[-2].minor.yy0,yymsp[0].minor.yy21,&yymsp[-1].minor.yy0); yylhsminor.yy162=0;}
#line 2479 "pikchr.c"
  yymsp[-2].minor.yy162 = yylhsminor.yy162;
        break;
      case 6: /* statement ::= PLACENAME COLON unnamed_statement */
#line 561 "pikchr.y"
{ yylhsminor.yy162 = yymsp[0].minor.yy162;  pik_elem_setname(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0); }
#line 2485 "pikchr.c"
  yymsp[-2].minor.yy162 = yylhsminor.yy162;
        break;
      case 7: /* statement ::= PLACENAME COLON position */
#line 563 "pikchr.y"
{ yylhsminor.yy162 = pik_elem_new(p,0,0,0);
  if(yylhsminor.yy162){ yylhsminor.yy162->ptAt = yymsp[0].minor.yy63; pik_elem_setname(p,yylhsminor.yy162,&yymsp[-2].minor.yy0); }}
#line 2492 "pikchr.c"
  yymsp[-2].minor.yy162 = yylhsminor.yy162;
        break;
      case 8: /* statement ::= unnamed_statement */
#line 565 "pikchr.y"
{yylhsminor.yy162 = yymsp[0].minor.yy162;}
#line 2498 "pikchr.c"
  yymsp[0].minor.yy162 = yylhsminor.yy162;
        break;
      case 9: /* statement ::= print prlist */
#line 566 "pikchr.y"
{pik_append(p,"<br>\n",5); yymsp[-1].minor.yy162=0;}
#line 2504 "pikchr.c"
        break;
      case 10: /* statement ::= ASSERT LP expr EQ expr RP */
#line 571 "pikchr.y"
{yymsp[-5].minor.yy162=pik_assert(p,yymsp[-3].minor.yy21,&yymsp[-2].minor.yy0,yymsp[-1].minor.yy21);}
#line 2509 "pikchr.c"
        break;
      case 11: /* statement ::= ASSERT LP position EQ position RP */
#line 573 "pikchr.y"
{yymsp[-5].minor.yy162=pik_position_assert(p,&yymsp[-3].minor.yy63,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy63);}
#line 2514 "pikchr.c"
        break;
      case 12: /* statement ::= DEFINE ID CODEBLOCK */
#line 574 "pikchr.y"
{yymsp[-2].minor.yy162=0; pik_add_macro(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0);}
#line 2519 "pikchr.c"
        break;
      case 13: /* rvalue ::= PLACENAME */
#line 585 "pikchr.y"
{yylhsminor.yy21 = pik_lookup_color(p,&yymsp[0].minor.yy0);}
#line 2524 "pikchr.c"
  yymsp[0].minor.yy21 = yylhsminor.yy21;
        break;
      case 14: /* pritem ::= FILL */
      case 15: /* pritem ::= COLOR */ yytestcase(yyruleno==15);
      case 16: /* pritem ::= THICKNESS */ yytestcase(yyruleno==16);
#line 590 "pikchr.y"
{pik_append_num(p,"",pik_value(p,yymsp[0].minor.yy0.z,yymsp[0].minor.yy0.n,0));}
#line 2532 "pikchr.c"
        break;
      case 17: /* pritem ::= rvalue */
#line 593 "pikchr.y"
{pik_append_num(p,"",yymsp[0].minor.yy21);}
#line 2537 "pikchr.c"
        break;
      case 18: /* pritem ::= STRING */
#line 594 "pikchr.y"
{pik_append_text(p,yymsp[0].minor.yy0.z+1,yymsp[0].minor.yy0.n-2,0);}
#line 2542 "pikchr.c"
        break;
      case 19: /* prsep ::= COMMA */
#line 595 "pikchr.y"
{pik_append(p, " ", 1);}
#line 2547 "pikchr.c"
        break;
      case 20: /* unnamed_statement ::= basetype attribute_list */
#line 598 "pikchr.y"
{yylhsminor.yy162 = yymsp[-1].minor.yy162; pik_after_adding_attributes(p,yylhsminor.yy162);}
#line 2552 "pikchr.c"
  yymsp[-1].minor.yy162 = yylhsminor.yy162;
        break;
      case 21: /* basetype ::= CLASSNAME */
#line 600 "pikchr.y"
{yylhsminor.yy162 = pik_elem_new(p,&yymsp[0].minor.yy0,0,0); }
#line 2558 "pikchr.c"
  yymsp[0].minor.yy162 = yylhsminor.yy162;
        break;
      case 22: /* basetype ::= STRING textposition */
#line 602 "pikchr.y"
{yymsp[-1].minor.yy0.eCode = yymsp[0].minor.yy188; yylhsminor.yy162 = pik_elem_new(p,0,&yymsp[-1].minor.yy0,0); }
#line 2564 "pikchr.c"
  yymsp[-1].minor.yy162 = yylhsminor.yy162;
        break;
      case 23: /* basetype ::= LB savelist statement_list RB */
#line 604 "pikchr.y"
{ p->list = yymsp[-2].minor.yy235; yymsp[-3].minor.yy162 = pik_elem_new(p,0,0,yymsp[-1].minor.yy235);
  if(yymsp[-3].minor.yy162) yymsp[-3].minor.yy162->errTok = yymsp[0].minor.yy0; }
#line 2570 "pikchr.c"
        break;
      case 24: /* savelist ::= */
#line 609 "pikchr.y"
{yymsp[1].minor.yy235 = p->list; p->list = 0;}
#line 2575 "pikchr.c"
        break;
      case 25: /* relexpr ::= expr */
#line 616 "pikchr.y"
{yylhsminor.yy72.rAbs = yymsp[0].minor.yy21; yylhsminor.yy72.rRel = 0;}
#line 2580 "pikchr.c"
  yymsp[0].minor.yy72 = yylhsminor.yy72;
        break;
      case 26: /* relexpr ::= expr PERCENT */
#line 617 "pikchr.y"
{yylhsminor.yy72.rAbs = 0; yylhsminor.yy72.rRel = yymsp[-1].minor.yy21/100;}
#line 2586 "pikchr.c"
  yymsp[-1].minor.yy72 = yylhsminor.yy72;
        break;
      case 27: /* optrelexpr ::= */
#line 619 "pikchr.y"
{yymsp[1].minor.yy72.rAbs = 0; yymsp[1].minor.yy72.rRel = 1.0;}
#line 2592 "pikchr.c"
        break;
      case 28: /* attribute_list ::= relexpr alist */
#line 621 "pikchr.y"
{pik_add_direction(p,0,&yymsp[-1].minor.yy72);}
#line 2597 "pikchr.c"
        break;
      case 29: /* attribute ::= numproperty relexpr */
#line 625 "pikchr.y"
{ pik_set_numprop(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy72); }
#line 2602 "pikchr.c"
        break;
      case 30: /* attribute ::= dashproperty expr */
#line 626 "pikchr.y"
{ pik_set_dashed(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy21); }
#line 2607 "pikchr.c"
        break;
      case 31: /* attribute ::= dashproperty */
#line 627 "pikchr.y"
{ pik_set_dashed(p,&yymsp[0].minor.yy0,0); }
#line 2612 "pikchr.c"
        break;
      case 32: /* attribute ::= colorproperty rvalue */
#line 628 "pikchr.y"
{ pik_set_clrprop(p,&yymsp[-1].minor.yy0,yymsp[0].minor.yy21); }
#line 2617 "pikchr.c"
        break;
      case 33: /* attribute ::= go direction optrelexpr */
#line 629 "pikchr.y"
{ pik_add_direction(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy72);}
#line 2622 "pikchr.c"
        break;
      case 34: /* attribute ::= go direction even position */
#line 630 "pikchr.y"
{pik_evenwith(p,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy63);}
#line 2627 "pikchr.c"
        break;
      case 35: /* attribute ::= CLOSE */
#line 631 "pikchr.y"
{ pik_close_path(p,&yymsp[0].minor.yy0); }
#line 2632 "pikchr.c"
        break;
      case 36: /* attribute ::= CHOP */
#line 632 "pikchr.y"
{ p->cur->bChop = 1; }
#line 2637 "pikchr.c"
        break;
      case 37: /* attribute ::= FROM position */
#line 633 "pikchr.y"
{ pik_set_from(p,p->cur,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy63); }
#line 2642 "pikchr.c"
        break;
      case 38: /* attribute ::= TO position */
#line 634 "pikchr.y"
{ pik_add_to(p,p->cur,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy63); }
#line 2647 "pikchr.c"
        break;
      case 39: /* attribute ::= THEN */
#line 635 "pikchr.y"
{ pik_then(p, &yymsp[0].minor.yy0, p->cur); }
#line 2652 "pikchr.c"
        break;
      case 40: /* attribute ::= THEN optrelexpr HEADING expr */
      case 42: /* attribute ::= GO optrelexpr HEADING expr */ yytestcase(yyruleno==42);
#line 637 "pikchr.y"
{pik_move_hdg(p,&yymsp[-2].minor.yy72,&yymsp[-1].minor.yy0,yymsp[0].minor.yy21,0,&yymsp[-3].minor.yy0);}
#line 2658 "pikchr.c"
        break;
      case 41: /* attribute ::= THEN optrelexpr EDGEPT */
      case 43: /* attribute ::= GO optrelexpr EDGEPT */ yytestcase(yyruleno==43);
#line 638 "pikchr.y"
{pik_move_hdg(p,&yymsp[-1].minor.yy72,0,0,&yymsp[0].minor.yy0,&yymsp[-2].minor.yy0);}
#line 2664 "pikchr.c"
        break;
      case 44: /* attribute ::= AT position */
#line 643 "pikchr.y"
{ pik_set_at(p,0,&yymsp[0].minor.yy63,&yymsp[-1].minor.yy0); }
#line 2669 "pikchr.c"
        break;
      case 45: /* attribute ::= SAME */
#line 645 "pikchr.y"
{pik_same(p,0,&yymsp[0].minor.yy0);}
#line 2674 "pikchr.c"
        break;
      case 46: /* attribute ::= SAME AS object */
#line 646 "pikchr.y"
{pik_same(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0);}
#line 2679 "pikchr.c"
        break;
      case 47: /* attribute ::= STRING textposition */
#line 647 "pikchr.y"
{pik_add_txt(p,&yymsp[-1].minor.yy0,yymsp[0].minor.yy188);}
#line 2684 "pikchr.c"
        break;
      case 48: /* attribute ::= FIT */
#line 648 "pikchr.y"
{pik_size_to_fit(p,&yymsp[0].minor.yy0,3); }
#line 2689 "pikchr.c"
        break;
      case 49: /* attribute ::= BEHIND object */
#line 649 "pikchr.y"
{pik_behind(p,yymsp[0].minor.yy162);}
#line 2694 "pikchr.c"
        break;
      case 50: /* withclause ::= DOT_E edge AT position */
      case 51: /* withclause ::= edge AT position */ yytestcase(yyruleno==51);
#line 657 "pikchr.y"
{ pik_set_at(p,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy63,&yymsp[-1].minor.yy0); }
#line 2700 "pikchr.c"
        break;
      case 52: /* numproperty ::= HEIGHT|WIDTH|RADIUS|DIAMETER|THICKNESS */
#line 661 "pikchr.y"
{yylhsminor.yy0 = yymsp[0].minor.yy0;}
#line 2705 "pikchr.c"
  yymsp[0].minor.yy0 = yylhsminor.yy0;
        break;
      case 53: /* boolproperty ::= CW */
#line 672 "pikchr.y"
{p->cur->cw = 1;}
#line 2711 "pikchr.c"
        break;
      case 54: /* boolproperty ::= CCW */
#line 673 "pikchr.y"
{p->cur->cw = 0;}
#line 2716 "pikchr.c"
        break;
      case 55: /* boolproperty ::= LARROW */
#line 674 "pikchr.y"
{p->cur->larrow=1; p->cur->rarrow=0; }
#line 2721 "pikchr.c"
        break;
      case 56: /* boolproperty ::= RARROW */
#line 675 "pikchr.y"
{p->cur->larrow=0; p->cur->rarrow=1; }
#line 2726 "pikchr.c"
        break;
      case 57: /* boolproperty ::= LRARROW */
#line 676 "pikchr.y"
{p->cur->larrow=1; p->cur->rarrow=1; }
#line 2731 "pikchr.c"
        break;
      case 58: /* boolproperty ::= INVIS */
#line 677 "pikchr.y"
{p->cur->sw = -0.00001;}
#line 2736 "pikchr.c"
        break;
      case 59: /* boolproperty ::= THICK */
#line 678 "pikchr.y"
{p->cur->sw *= 1.5;}
#line 2741 "pikchr.c"
        break;
      case 60: /* boolproperty ::= THIN */
#line 679 "pikchr.y"
{p->cur->sw *= 0.67;}
#line 2746 "pikchr.c"
        break;
      case 61: /* boolproperty ::= SOLID */
#line 680 "pikchr.y"
{p->cur->sw =
pik_value(p,"thickness",9,0); p->cur->dotted = p->cur->dashed = 0.0;} #line 2752 "pikchr.c" break; case 62: /* textposition ::= */ #line 683 "pikchr.y" {yymsp[1].minor.yy188 = 0;} #line 2757 "pikchr.c" break; case 63: /* textposition ::= textposition CENTER|LJUST|RJUST|ABOVE|BELOW|ITALIC|BOLD|MONO|ALIGNED|BIG|SMALL */ #line 686 "pikchr.y" {yylhsminor.yy188 = (short int)pik_text_position(yymsp[-1].minor.yy188,&yymsp[0].minor.yy0);} #line 2762 "pikchr.c" yymsp[-1].minor.yy188 = yylhsminor.yy188; break; case 64: /* position ::= expr COMMA expr */ #line 689 "pikchr.y" {yylhsminor.yy63.x=yymsp[-2].minor.yy21; yylhsminor.yy63.y=yymsp[0].minor.yy21;} #line 2768 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 65: /* position ::= place PLUS expr COMMA expr */ #line 691 "pikchr.y" {yylhsminor.yy63.x=yymsp[-4].minor.yy63.x+yymsp[-2].minor.yy21; yylhsminor.yy63.y=yymsp[-4].minor.yy63.y+yymsp[0].minor.yy21;} #line 2774 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 66: /* position ::= place MINUS expr COMMA expr */ #line 692 "pikchr.y" {yylhsminor.yy63.x=yymsp[-4].minor.yy63.x-yymsp[-2].minor.yy21; yylhsminor.yy63.y=yymsp[-4].minor.yy63.y-yymsp[0].minor.yy21;} #line 2780 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 67: /* position ::= place PLUS LP expr COMMA expr RP */ #line 694 "pikchr.y" {yylhsminor.yy63.x=yymsp[-6].minor.yy63.x+yymsp[-3].minor.yy21; yylhsminor.yy63.y=yymsp[-6].minor.yy63.y+yymsp[-1].minor.yy21;} #line 2786 "pikchr.c" yymsp[-6].minor.yy63 = yylhsminor.yy63; break; case 68: /* position ::= place MINUS LP expr COMMA expr RP */ #line 696 "pikchr.y" {yylhsminor.yy63.x=yymsp[-6].minor.yy63.x-yymsp[-3].minor.yy21; yylhsminor.yy63.y=yymsp[-6].minor.yy63.y-yymsp[-1].minor.yy21;} #line 2792 "pikchr.c" yymsp[-6].minor.yy63 = yylhsminor.yy63; break; case 69: /* position ::= LP position COMMA position RP */ #line 697 "pikchr.y" {yymsp[-4].minor.yy63.x=yymsp[-3].minor.yy63.x; yymsp[-4].minor.yy63.y=yymsp[-1].minor.yy63.y;} 
#line 2798 "pikchr.c" break; case 70: /* position ::= LP position RP */ #line 698 "pikchr.y" {yymsp[-2].minor.yy63=yymsp[-1].minor.yy63;} #line 2803 "pikchr.c" break; case 71: /* position ::= expr between position AND position */ #line 700 "pikchr.y" {yylhsminor.yy63 = pik_position_between(yymsp[-4].minor.yy21,yymsp[-2].minor.yy63,yymsp[0].minor.yy63);} #line 2808 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 72: /* position ::= expr LT position COMMA position GT */ #line 702 "pikchr.y" {yylhsminor.yy63 = pik_position_between(yymsp[-5].minor.yy21,yymsp[-3].minor.yy63,yymsp[-1].minor.yy63);} #line 2814 "pikchr.c" yymsp[-5].minor.yy63 = yylhsminor.yy63; break; case 73: /* position ::= expr ABOVE position */ #line 703 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.y += yymsp[-2].minor.yy21;} #line 2820 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 74: /* position ::= expr BELOW position */ #line 704 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.y -= yymsp[-2].minor.yy21;} #line 2826 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 75: /* position ::= expr LEFT OF position */ #line 705 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.x -= yymsp[-3].minor.yy21;} #line 2832 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 76: /* position ::= expr RIGHT OF position */ #line 706 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.x += yymsp[-3].minor.yy21;} #line 2838 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 77: /* position ::= expr ON HEADING EDGEPT OF position */ #line 708 "pikchr.y" {yylhsminor.yy63 = pik_position_at_hdg(yymsp[-5].minor.yy21,&yymsp[-2].minor.yy0,yymsp[0].minor.yy63);} #line 2844 "pikchr.c" yymsp[-5].minor.yy63 = yylhsminor.yy63; break; case 78: /* position ::= expr HEADING EDGEPT OF position */ #line 710 "pikchr.y" {yylhsminor.yy63 = 
pik_position_at_hdg(yymsp[-4].minor.yy21,&yymsp[-2].minor.yy0,yymsp[0].minor.yy63);} #line 2850 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 79: /* position ::= expr EDGEPT OF position */ #line 712 "pikchr.y" {yylhsminor.yy63 = pik_position_at_hdg(yymsp[-3].minor.yy21,&yymsp[-2].minor.yy0,yymsp[0].minor.yy63);} #line 2856 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 80: /* position ::= expr ON HEADING expr FROM position */ #line 714 "pikchr.y" {yylhsminor.yy63 = pik_position_at_angle(yymsp[-5].minor.yy21,yymsp[-2].minor.yy21,yymsp[0].minor.yy63);} #line 2862 "pikchr.c" yymsp[-5].minor.yy63 = yylhsminor.yy63; break; case 81: /* position ::= expr HEADING expr FROM position */ #line 716 "pikchr.y" {yylhsminor.yy63 = pik_position_at_angle(yymsp[-4].minor.yy21,yymsp[-2].minor.yy21,yymsp[0].minor.yy63);} #line 2868 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 82: /* place ::= edge OF object */ #line 728 "pikchr.y" {yylhsminor.yy63 = pik_place_of_elem(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0);} #line 2874 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 83: /* place2 ::= object */ #line 729 "pikchr.y" {yylhsminor.yy63 = pik_place_of_elem(p,yymsp[0].minor.yy162,0);} #line 2880 "pikchr.c" yymsp[0].minor.yy63 = yylhsminor.yy63; break; case 84: /* place2 ::= object DOT_E edge */ #line 730 "pikchr.y" {yylhsminor.yy63 = pik_place_of_elem(p,yymsp[-2].minor.yy162,&yymsp[0].minor.yy0);} #line 2886 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 85: /* place2 ::= NTH VERTEX OF object */ #line 731 "pikchr.y" {yylhsminor.yy63 = pik_nth_vertex(p,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy0,yymsp[0].minor.yy162);} #line 2892 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 86: /* object ::= nth */ #line 743 "pikchr.y" {yylhsminor.yy162 = pik_find_nth(p,0,&yymsp[0].minor.yy0);} #line 2898 "pikchr.c" yymsp[0].minor.yy162 = yylhsminor.yy162; break; case 87: /* object ::= nth OF|IN object */ 
#line 744 "pikchr.y" {yylhsminor.yy162 = pik_find_nth(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0);} #line 2904 "pikchr.c" yymsp[-2].minor.yy162 = yylhsminor.yy162; break; case 88: /* objectname ::= THIS */ #line 746 "pikchr.y" {yymsp[0].minor.yy162 = p->cur;} #line 2910 "pikchr.c" break; case 89: /* objectname ::= PLACENAME */ #line 747 "pikchr.y" {yylhsminor.yy162 = pik_find_byname(p,0,&yymsp[0].minor.yy0);} #line 2915 "pikchr.c" yymsp[0].minor.yy162 = yylhsminor.yy162; break; case 90: /* objectname ::= objectname DOT_U PLACENAME */ #line 749 "pikchr.y" {yylhsminor.yy162 = pik_find_byname(p,yymsp[-2].minor.yy162,&yymsp[0].minor.yy0);} #line 2921 "pikchr.c" yymsp[-2].minor.yy162 = yylhsminor.yy162; break; case 91: /* nth ::= NTH CLASSNAME */ #line 751 "pikchr.y" {yylhsminor.yy0=yymsp[0].minor.yy0; yylhsminor.yy0.eCode = pik_nth_value(p,&yymsp[-1].minor.yy0); } #line 2927 "pikchr.c" yymsp[-1].minor.yy0 = yylhsminor.yy0; break; case 92: /* nth ::= NTH LAST CLASSNAME */ #line 752 "pikchr.y" {yylhsminor.yy0=yymsp[0].minor.yy0; yylhsminor.yy0.eCode = -pik_nth_value(p,&yymsp[-2].minor.yy0); } #line 2933 "pikchr.c" yymsp[-2].minor.yy0 = yylhsminor.yy0; break; case 93: /* nth ::= LAST CLASSNAME */ #line 753 "pikchr.y" {yymsp[-1].minor.yy0=yymsp[0].minor.yy0; yymsp[-1].minor.yy0.eCode = -1;} #line 2939 "pikchr.c" break; case 94: /* nth ::= LAST */ #line 754 "pikchr.y" {yylhsminor.yy0=yymsp[0].minor.yy0; yylhsminor.yy0.eCode = -1;} #line 2944 "pikchr.c" yymsp[0].minor.yy0 = yylhsminor.yy0; break; case 95: /* nth ::= NTH LB RB */ #line 755 "pikchr.y" {yylhsminor.yy0=yymsp[-1].minor.yy0; yylhsminor.yy0.eCode = pik_nth_value(p,&yymsp[-2].minor.yy0);} #line 2950 "pikchr.c" yymsp[-2].minor.yy0 = yylhsminor.yy0; break; case 96: /* nth ::= NTH LAST LB RB */ #line 756 "pikchr.y" {yylhsminor.yy0=yymsp[-1].minor.yy0; yylhsminor.yy0.eCode = -pik_nth_value(p,&yymsp[-3].minor.yy0);} #line 2956 "pikchr.c" yymsp[-3].minor.yy0 = yylhsminor.yy0; break; case 97: /* nth ::= LAST LB RB */ 
#line 757 "pikchr.y" {yymsp[-2].minor.yy0=yymsp[-1].minor.yy0; yymsp[-2].minor.yy0.eCode = -1; } #line 2962 "pikchr.c" break; case 98: /* expr ::= expr PLUS expr */ #line 759 "pikchr.y" {yylhsminor.yy21=yymsp[-2].minor.yy21+yymsp[0].minor.yy21;} #line 2967 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 99: /* expr ::= expr MINUS expr */ #line 760 "pikchr.y" {yylhsminor.yy21=yymsp[-2].minor.yy21-yymsp[0].minor.yy21;} #line 2973 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 100: /* expr ::= expr STAR expr */ #line 761 "pikchr.y" {yylhsminor.yy21=yymsp[-2].minor.yy21*yymsp[0].minor.yy21;} #line 2979 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 101: /* expr ::= expr SLASH expr */ #line 762 "pikchr.y" { if( yymsp[0].minor.yy21==0.0 ){ pik_error(p, &yymsp[-1].minor.yy0, "division by zero"); yylhsminor.yy21 = 0.0; } else{ yylhsminor.yy21 = yymsp[-2].minor.yy21/yymsp[0].minor.yy21; } } #line 2988 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 102: /* expr ::= MINUS expr */ #line 766 "pikchr.y" {yymsp[-1].minor.yy21=-yymsp[0].minor.yy21;} #line 2994 "pikchr.c" break; case 103: /* expr ::= PLUS expr */ #line 767 "pikchr.y" {yymsp[-1].minor.yy21=yymsp[0].minor.yy21;} #line 2999 "pikchr.c" break; case 104: /* expr ::= LP expr RP */ #line 768 "pikchr.y" {yymsp[-2].minor.yy21=yymsp[-1].minor.yy21;} #line 3004 "pikchr.c" break; case 105: /* expr ::= LP FILL|COLOR|THICKNESS RP */ #line 769 "pikchr.y" {yymsp[-2].minor.yy21=pik_get_var(p,&yymsp[-1].minor.yy0);} #line 3009 "pikchr.c" break; case 106: /* expr ::= NUMBER */ #line 770 "pikchr.y" {yylhsminor.yy21=pik_atof(&yymsp[0].minor.yy0);} #line 3014 "pikchr.c" yymsp[0].minor.yy21 = yylhsminor.yy21; break; case 107: /* expr ::= ID */ #line 771 "pikchr.y" {yylhsminor.yy21=pik_get_var(p,&yymsp[0].minor.yy0);} #line 3020 "pikchr.c" yymsp[0].minor.yy21 = yylhsminor.yy21; break; case 108: /* expr ::= FUNC1 LP expr RP */ #line 772 "pikchr.y" {yylhsminor.yy21 = 
pik_func(p,&yymsp[-3].minor.yy0,yymsp[-1].minor.yy21,0.0);} #line 3026 "pikchr.c" yymsp[-3].minor.yy21 = yylhsminor.yy21; break; case 109: /* expr ::= FUNC2 LP expr COMMA expr RP */ #line 773 "pikchr.y" {yylhsminor.yy21 = pik_func(p,&yymsp[-5].minor.yy0,yymsp[-3].minor.yy21,yymsp[-1].minor.yy21);} #line 3032 "pikchr.c" yymsp[-5].minor.yy21 = yylhsminor.yy21; break; case 110: /* expr ::= DIST LP position COMMA position RP */ #line 774 "pikchr.y" {yymsp[-5].minor.yy21 = pik_dist(&yymsp[-3].minor.yy63,&yymsp[-1].minor.yy63);} #line 3038 "pikchr.c" break; case 111: /* expr ::= place2 DOT_XY X */ #line 775 "pikchr.y" {yylhsminor.yy21 = yymsp[-2].minor.yy63.x;} #line 3043 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 112: /* expr ::= place2 DOT_XY Y */ #line 776 "pikchr.y" {yylhsminor.yy21 = yymsp[-2].minor.yy63.y;} #line 3049 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 113: /* expr ::= object DOT_L numproperty */ case 114: /* expr ::= object DOT_L dashproperty */ yytestcase(yyruleno==114); case 115: /* expr ::= object DOT_L colorproperty */ yytestcase(yyruleno==115); #line 777 "pikchr.y" {yylhsminor.yy21=pik_property_of(yymsp[-2].minor.yy162,&yymsp[0].minor.yy0);} #line 3057 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; default: /* (116) lvalue ::= ID */ yytestcase(yyruleno==116); /* (117) lvalue ::= FILL */ yytestcase(yyruleno==117); /* (118) lvalue ::= COLOR */ yytestcase(yyruleno==118); /* (119) lvalue ::= THICKNESS */ yytestcase(yyruleno==119); |
︙ | ︙ | |||
3096 3097 3098 3099 3100 3101 3102 | int yymajor, /* The major type of the error token */ pik_parserTOKENTYPE yyminor /* The minor type of the error token */ ){ pik_parserARG_FETCH pik_parserCTX_FETCH #define TOKEN yyminor /************ Begin %syntax_error code ****************************************/ | | | | 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 | int yymajor, /* The major type of the error token */ pik_parserTOKENTYPE yyminor /* The minor type of the error token */ ){ pik_parserARG_FETCH pik_parserCTX_FETCH #define TOKEN yyminor /************ Begin %syntax_error code ****************************************/ #line 537 "pikchr.y" if( TOKEN.z && TOKEN.z[0] ){ pik_error(p, &TOKEN, "syntax error"); }else{ pik_error(p, 0, "syntax error"); } UNUSED_PARAMETER(yymajor); #line 3168 "pikchr.c" /************ End %syntax_error code ******************************************/ pik_parserARG_STORE /* Suppress warning about unused %extra_argument variable */ pik_parserCTX_STORE } /* ** The following is executed when the parser accepts |
︙ | ︙ | |||
3225 3226 3227 3228 3229 3230 3231 | #ifdef YYTRACKMAXSTACKDEPTH if( (int)(yypParser->yytos - yypParser->yystack)>yypParser->yyhwm ){ yypParser->yyhwm++; assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack)); } #endif | < < < < < < < | 3257 3258 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 | #ifdef YYTRACKMAXSTACKDEPTH if( (int)(yypParser->yytos - yypParser->yystack)>yypParser->yyhwm ){ yypParser->yyhwm++; assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack)); } #endif if( yypParser->yytos>=yypParser->yystackEnd ){ if( yyGrowStack(yypParser) ){ yyStackOverflow(yypParser); break; } } } yyact = yy_reduce(yypParser,yyruleno,yymajor,yyminor pik_parserCTX_PARAM); }else if( yyact <= YY_MAX_SHIFTREDUCE ){ yy_shift(yypParser,yyact,(YYCODETYPE)yymajor,yyminor); #ifndef YYNOERRORRECOVERY yypParser->yyerrcnt--; #endif |
︙ | ︙ | |||
3380 3381 3382 3383 3384 3385 3386 | assert( iToken<(int)(sizeof(yyFallback)/sizeof(yyFallback[0])) ); return yyFallback[iToken]; #else (void)iToken; return 0; #endif } | | | 3405 3406 3407 3408 3409 3410 3411 3412 3413 3414 3415 3416 3417 3418 3419 | assert( iToken<(int)(sizeof(yyFallback)/sizeof(yyFallback[0])) ); return yyFallback[iToken]; #else (void)iToken; return 0; #endif } #line 782 "pikchr.y" /* Chart of the 148 official CSS color names with their ** corresponding RGB values thru Color Module Level 4: ** https://developer.mozilla.org/en-US/docs/Web/CSS/color_value ** |
︙ | ︙ | |||
3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 3585 3586 3587 3588 | { "charwid", 0.08 }, { "circlerad", 0.25 }, { "color", 0.0 }, { "cylht", 0.5 }, { "cylrad", 0.075 }, { "cylwid", 0.75 }, { "dashwid", 0.05 }, { "dotrad", 0.015 }, { "ellipseht", 0.5 }, { "ellipsewid", 0.75 }, { "fileht", 0.75 }, { "filerad", 0.15 }, { "filewid", 0.5 }, { "fill", -1.0 }, | > > | 3600 3601 3602 3603 3604 3605 3606 3607 3608 3609 3610 3611 3612 3613 3614 3615 | { "charwid", 0.08 }, { "circlerad", 0.25 }, { "color", 0.0 }, { "cylht", 0.5 }, { "cylrad", 0.075 }, { "cylwid", 0.75 }, { "dashwid", 0.05 }, { "diamondht", 0.75 }, { "diamondwid", 1.0 }, { "dotrad", 0.015 }, { "ellipseht", 0.5 }, { "ellipsewid", 0.75 }, { "fileht", 0.75 }, { "filerad", 0.15 }, { "filewid", 0.5 }, { "fill", -1.0 }, |
︙ | ︙ | |||
3955 3956 3957 3958 3959 3960 3961 3962 3963 3964 3965 3966 3967 3968 | pik_append_dis(p," r=\"", r, "\""); pik_append_style(p,pObj,2); pik_append(p,"\" />\n", -1); } pik_append_txt(p, pObj, 0); } /* Methods for the "ellipse" class */ static void ellipseInit(Pik *p, PObj *pObj){ pObj->w = pik_value(p, "ellipsewid",10,0); pObj->h = pik_value(p, "ellipseht",9,0); } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 3982 3983 3984 3985 3986 3987 3988 3989 3990 3991 3992 3993 3994 3995 3996 3997 3998 3999 4000 4001 4002 4003 4004 4005 4006 4007 4008 4009 4010 4011 4012 4013 4014 4015 4016 4017 4018 4019 4020 4021 4022 4023 4024 4025 4026 4027 4028 4029 4030 4031 4032 4033 4034 4035 4036 4037 4038 4039 4040 4041 4042 4043 4044 4045 4046 4047 4048 4049 4050 | pik_append_dis(p," r=\"", r, "\""); pik_append_style(p,pObj,2); pik_append(p,"\" />\n", -1); } pik_append_txt(p, pObj, 0); } /* Methods for the "diamond" class */ static void diamondInit(Pik *p, PObj *pObj){ pObj->w = pik_value(p, "diamondwid",10,0); pObj->h = pik_value(p, "diamondht",9,0); pObj->bAltAutoFit = 1; } /* Return offset from the center of the box to the compass point ** given by parameter cp */ static PPoint diamondOffset(Pik *p, PObj *pObj, int cp){ PPoint pt = cZeroPoint; PNum w2 = 0.5*pObj->w; PNum w4 = 0.25*pObj->w; PNum h2 = 0.5*pObj->h; PNum h4 = 0.25*pObj->h; switch( cp ){ case CP_C: break; case CP_N: pt.x = 0.0; pt.y = h2; break; case CP_NE: pt.x = w4; pt.y = h4; break; case CP_E: pt.x = w2; pt.y = 0.0; break; case CP_SE: pt.x = w4; pt.y = -h4; break; case CP_S: pt.x = 0.0; pt.y = -h2; break; case CP_SW: pt.x = -w4; pt.y = -h4; break; case CP_W: pt.x = -w2; pt.y = 0.0; break; case CP_NW: pt.x = -w4; pt.y = h4; break; default: assert(0); } UNUSED_PARAMETER(p); return pt; } static void diamondFit(Pik *p, PObj *pObj, PNum w, PNum h){ if( pObj->w<=0 ) pObj->w = w*1.5; if( pObj->h<=0 ) pObj->h = h*1.5; if( pObj->w>0 && pObj->h>0 ){ PNum x 
= pObj->w*h/pObj->h + w; PNum y = pObj->h*x/pObj->w; pObj->w = x; pObj->h = y; } UNUSED_PARAMETER(p); } static void diamondRender(Pik *p, PObj *pObj){ PNum w2 = 0.5*pObj->w; PNum h2 = 0.5*pObj->h; PPoint pt = pObj->ptAt; if( pObj->sw>=0.0 ){ pik_append_xy(p,"<path d=\"M", pt.x-w2,pt.y); pik_append_xy(p,"L", pt.x,pt.y-h2); pik_append_xy(p,"L", pt.x+w2,pt.y); pik_append_xy(p,"L", pt.x,pt.y+h2); pik_append(p,"Z\" ",-1); pik_append_style(p,pObj,3); pik_append(p,"\" />\n", -1); } pik_append_txt(p, pObj, 0); } /* Methods for the "ellipse" class */ static void ellipseInit(Pik *p, PObj *pObj){ pObj->w = pik_value(p, "ellipsewid",10,0); pObj->h = pik_value(p, "ellipseht",9,0); } |
︙ | ︙ | |||
4339 4340 4341 4342 4343 4344 4345 4346 4347 4348 4349 4350 4351 4352 | /* xNumProp */ 0, /* xCheck */ 0, /* xChop */ boxChop, /* xOffset */ cylinderOffset, /* xFit */ cylinderFit, /* xRender */ cylinderRender }, { /* name */ "dot", /* isline */ 0, /* eJust */ 0, /* xInit */ dotInit, /* xNumProp */ dotNumProp, /* xCheck */ dotCheck, /* xChop */ circleChop, | > > > > > > > > > > > | 4421 4422 4423 4424 4425 4426 4427 4428 4429 4430 4431 4432 4433 4434 4435 4436 4437 4438 4439 4440 4441 4442 4443 4444 4445 | /* xNumProp */ 0, /* xCheck */ 0, /* xChop */ boxChop, /* xOffset */ cylinderOffset, /* xFit */ cylinderFit, /* xRender */ cylinderRender }, { /* name */ "diamond", /* isline */ 0, /* eJust */ 0, /* xInit */ diamondInit, /* xNumProp */ 0, /* xCheck */ 0, /* xChop */ boxChop, /* xOffset */ diamondOffset, /* xFit */ diamondFit, /* xRender */ diamondRender }, { /* name */ "dot", /* isline */ 0, /* eJust */ 0, /* xInit */ dotInit, /* xNumProp */ dotNumProp, /* xCheck */ dotCheck, /* xChop */ circleChop, |
︙ | ︙ | |||
5106 5107 5108 5109 5110 5111 5112 | int iErrCol; /* Column of the error token on its line */ int iStart; /* Start position of the error context */ int iEnd; /* End position of the error context */ int iLineno; /* Line number of the error */ int iFirstLineno; /* Line number of start of error context */ int i; /* Loop counter */ int iBump = 0; /* Bump the location of the error cursor */ | | | 5199 5200 5201 5202 5203 5204 5205 5206 5207 5208 5209 5210 5211 5212 5213 | int iErrCol; /* Column of the error token on its line */ int iStart; /* Start position of the error context */ int iEnd; /* End position of the error context */ int iLineno; /* Line number of the error */ int iFirstLineno; /* Line number of start of error context */ int i; /* Loop counter */ int iBump = 0; /* Bump the location of the error cursor */ char zLineno[24]; /* Buffer in which to generate line numbers */ iErrPt = (int)(pErr->z - p->sIn.z); if( iErrPt>=(int)p->sIn.n ){ iErrPt = p->sIn.n-1; iBump = 1; }else{ while( iErrPt>0 && (p->sIn.z[iErrPt]=='\n' || p->sIn.z[iErrPt]=='\r') ){ |
︙ | ︙ | |||
6267 6268 6269 6270 6271 6272 6273 | pik_error(0, pFit, "no text to fit to"); return; } if( pObj->type->xFit==0 ) return; pik_bbox_init(&bbox); pik_compute_layout_settings(p); pik_append_txt(p, pObj, &bbox); | > | > > > | | 6360 6361 6362 6363 6364 6365 6366 6367 6368 6369 6370 6371 6372 6373 6374 6375 6376 6377 6378 6379 | pik_error(0, pFit, "no text to fit to"); return; } if( pObj->type->xFit==0 ) return; pik_bbox_init(&bbox); pik_compute_layout_settings(p); pik_append_txt(p, pObj, &bbox); if( (eWhich & 1)!=0 || pObj->bAltAutoFit ){ w = (bbox.ne.x - bbox.sw.x) + p->charWidth; }else{ w = 0; } if( (eWhich & 2)!=0 || pObj->bAltAutoFit ){ PNum h1, h2; h1 = (bbox.ne.y - pObj->ptAt.y); h2 = (pObj->ptAt.y - bbox.sw.y); h = 2.0*( h1<h2 ? h2 : h1 ) + 0.5*p->charHeight; }else{ h = 0; } |
︙ | ︙ | |||
8141 8142 8143 8144 8145 8146 8147 | return TCL_OK; } #endif /* PIKCHR_TCL */ | | | 8238 8239 8240 8241 8242 8243 8244 8245 | return TCL_OK; } #endif /* PIKCHR_TCL */ #line 8270 "pikchr.c" |
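The pikchr.c hunks above register a new `diamond` object class: `diamondInit`, `diamondOffset`, `diamondFit`, and `diamondRender` implement it, the `diamondht` (0.75) and `diamondwid` (1.0) entries supply its default size, and a `"diamond"` row is added to the class table. A minimal pikchr sketch exercising the new class might look like the following; the labels are illustrative only:

```pikchr
// Uses the "diamond" class added in this diff.
// Default size comes from diamondwid (1.0) and diamondht (0.75).
box "Start"
arrow
diamond "Valid?" fit   // auto-sized around its text via diamondFit()
arrow "yes" above
box "Proceed"
```

Per the `diamondOffset` method, the N/S/E/W compass points sit on the shape's vertices and the diagonal points (NE, SE, SW, NW) at the edge midpoints, so arrows attach to the diamond's outline rather than its bounding box.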
Changes to extsrc/pikchr.js.
1 2 3 4 5 | var initPikchrModule = (() => { var _scriptDir = typeof document !== 'undefined' && document.currentScript ? document.currentScript.src : undefined; return ( | | < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 | var initPikchrModule = (() => { var _scriptDir = typeof document !== 'undefined' && document.currentScript ? document.currentScript.src : undefined; return ( function(moduleArg = {}) { var Module = moduleArg; var readyPromiseResolve, readyPromiseReject; Module["ready"] = new Promise((resolve, reject) => { readyPromiseResolve = resolve; readyPromiseReject = reject; }); var moduleOverrides = Object.assign({}, Module); var arguments_ = []; |
︙ | ︙ | |||
34 35 36 37 38 39 40 | function locateFile(path) { if (Module["locateFile"]) { return Module["locateFile"](path, scriptDirectory); } return scriptDirectory + path; } | | | | | | | | | | < | < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < > < < < < < < < < | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 | function locateFile(path) { if (Module["locateFile"]) { return Module["locateFile"](path, scriptDirectory); } return scriptDirectory + path; } var read_, readAsync, readBinary; if (ENVIRONMENT_IS_WEB || ENVIRONMENT_IS_WORKER) { if (ENVIRONMENT_IS_WORKER) { scriptDirectory = self.location.href; } else if (typeof document != "undefined" && document.currentScript) { scriptDirectory = document.currentScript.src; } if (_scriptDir) { scriptDirectory = _scriptDir; } if (scriptDirectory.startsWith("blob:")) { scriptDirectory = ""; } else { scriptDirectory = scriptDirectory.substr(0, scriptDirectory.replace(/[?#].*/, "").lastIndexOf("/") + 1); } { read_ = url => { var xhr = new XMLHttpRequest; xhr.open("GET", url, false); xhr.send(null); return xhr.responseText; }; if (ENVIRONMENT_IS_WORKER) { readBinary = url => { var xhr = new XMLHttpRequest; xhr.open("GET", url, false); xhr.responseType = "arraybuffer"; xhr.send(null); return new Uint8Array(/** @type{!ArrayBuffer} */ (xhr.response)); }; } readAsync = (url, onload, onerror) => { var xhr = new XMLHttpRequest; xhr.open("GET", url, true); xhr.responseType = "arraybuffer"; xhr.onload = () => { if (xhr.status == 200 || (xhr.status == 0 && xhr.response)) { onload(xhr.response); 
return; } onerror(); }; xhr.onerror = onerror; xhr.send(null); }; } } else {} var out = Module["print"] || console.log.bind(console); var err = Module["printErr"] || console.error.bind(console); Object.assign(Module, moduleOverrides); moduleOverrides = null; if (Module["arguments"]) arguments_ = Module["arguments"]; if (Module["thisProgram"]) thisProgram = Module["thisProgram"]; if (Module["quit"]) quit_ = Module["quit"]; var wasmBinary; if (Module["wasmBinary"]) wasmBinary = Module["wasmBinary"]; if (typeof WebAssembly != "object") { abort("no native wasm support detected"); } var wasmMemory; var ABORT = false; var EXITSTATUS; var /** @type {!Int8Array} */ HEAP8, /** @type {!Uint8Array} */ HEAPU8, /** @type {!Int16Array} */ HEAP16, /** @type {!Uint16Array} */ HEAPU16, /** @type {!Int32Array} */ HEAP32, /** @type {!Uint32Array} */ HEAPU32, /** @type {!Float32Array} */ HEAPF32, /** @type {!Float64Array} */ HEAPF64; function updateMemoryViews() { var b = wasmMemory.buffer; Module["HEAP8"] = HEAP8 = new Int8Array(b); Module["HEAP16"] = HEAP16 = new Int16Array(b); Module["HEAPU8"] = HEAPU8 = new Uint8Array(b); Module["HEAPU16"] = HEAPU16 = new Uint16Array(b); Module["HEAP32"] = HEAP32 = new Int32Array(b); Module["HEAPU32"] = HEAPU32 = new Uint32Array(b); Module["HEAPF32"] = HEAPF32 = new Float32Array(b); Module["HEAPF64"] = HEAPF64 = new Float64Array(b); } var __ATPRERUN__ = []; var __ATINIT__ = []; var __ATPOSTRUN__ = []; var runtimeInitialized = false; function preRun() { if (Module["preRun"]) { if (typeof Module["preRun"] == "function") Module["preRun"] = [ Module["preRun"] ]; while (Module["preRun"].length) { addOnPreRun(Module["preRun"].shift()); } } |
︙ | ︙ | |||
268 269 270 271 272 273 274 | var runDependencyWatcher = null; var dependenciesFulfilled = null; function addRunDependency(id) { runDependencies++; | < | < < | < | < | < | > | > | < | < | | | | | | | < < | | < | | | | | > > > > | > > > > | | | | > | > > > > > > > | > | > | > | | | < | < | > | | | | | < < | < | > | < | > | | < < < < < < < | < > > | | < < < < | > | < | > | > > > > > > > > | > | | > > < > > | > | > | > > | < > > | > > | < < < | | > > > > > | | > | > | > | > > | > > > > > > > | > > | > > > | | | | > > > > > > | > > > > > > | > > > > > | < | > > > | | | < | < > | < < > | > > | < < > > | < < < | < < | > > | | < < | < | > > > > > > > > | > < < < > > > < < > | > < > > > | | | > < < > | > > > > > | < < > | < > | < < > | | | | | < | | < < < | > > | | > > | > | | > > > > > | < | > > > > | > > | < > > > | > > > > > > > > > > > > > > > > > > | < | < | | | > | > > > > | | > > > > > | < < | | 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 
450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 | var runDependencyWatcher = null; var dependenciesFulfilled = null; function addRunDependency(id) { runDependencies++; Module["monitorRunDependencies"]?.(runDependencies); } function removeRunDependency(id) { runDependencies--; Module["monitorRunDependencies"]?.(runDependencies); if (runDependencies == 0) { if (runDependencyWatcher !== null) { clearInterval(runDependencyWatcher); runDependencyWatcher = null; } if (dependenciesFulfilled) { var callback = dependenciesFulfilled; dependenciesFulfilled = null; callback(); } } } /** @param {string|number=} what */ function abort(what) { Module["onAbort"]?.(what); what = "Aborted(" + what + ")"; err(what); ABORT = true; EXITSTATUS = 1; what += ". Build with -sASSERTIONS for more info."; /** @suppress {checkTypes} */ var e = new WebAssembly.RuntimeError(what); readyPromiseReject(e); throw e; } var dataURIPrefix = "data:application/octet-stream;base64,"; /** * Indicates whether filename is a base64 data URI. 
* @noinline */ var isDataURI = filename => filename.startsWith(dataURIPrefix); var wasmBinaryFile; wasmBinaryFile = "pikchr.wasm"; if (!isDataURI(wasmBinaryFile)) { wasmBinaryFile = locateFile(wasmBinaryFile); } function getBinarySync(file) { if (file == wasmBinaryFile && wasmBinary) { return new Uint8Array(wasmBinary); } if (readBinary) { return readBinary(file); } throw "both async and sync fetching of the wasm failed"; } function getBinaryPromise(binaryFile) { if (!wasmBinary && (ENVIRONMENT_IS_WEB || ENVIRONMENT_IS_WORKER)) { if (typeof fetch == "function") { return fetch(binaryFile, { credentials: "same-origin" }).then(response => { if (!response["ok"]) { throw `failed to load wasm binary file at '${binaryFile}'`; } return response["arrayBuffer"](); }).catch(() => getBinarySync(binaryFile)); } } return Promise.resolve().then(() => getBinarySync(binaryFile)); } function instantiateArrayBuffer(binaryFile, imports, receiver) { return getBinaryPromise(binaryFile).then(binary => WebAssembly.instantiate(binary, imports)).then(instance => instance).then(receiver, reason => { err(`failed to asynchronously prepare wasm: ${reason}`); abort(reason); }); } function instantiateAsync(binary, binaryFile, imports, callback) { if (!binary && typeof WebAssembly.instantiateStreaming == "function" && !isDataURI(binaryFile) && typeof fetch == "function") { return fetch(binaryFile, { credentials: "same-origin" }).then(response => { /** @suppress {checkTypes} */ var result = WebAssembly.instantiateStreaming(response, imports); return result.then(callback, function(reason) { err(`wasm streaming compile failed: ${reason}`); err("falling back to ArrayBuffer instantiation"); return instantiateArrayBuffer(binaryFile, imports, callback); }); }); } return instantiateArrayBuffer(binaryFile, imports, callback); } function createWasm() { var info = { "a": wasmImports }; /** @param {WebAssembly.Module=} module*/ function receiveInstance(instance, module) { wasmExports = instance.exports; 
wasmMemory = wasmExports["d"]; updateMemoryViews(); addOnInit(wasmExports["e"]); removeRunDependency("wasm-instantiate"); return wasmExports; } addRunDependency("wasm-instantiate"); function receiveInstantiationResult(result) { receiveInstance(result["instance"]); } if (Module["instantiateWasm"]) { try { return Module["instantiateWasm"](info, receiveInstance); } catch (e) { err(`Module.instantiateWasm callback failed with error: ${e}`); readyPromiseReject(e); } } instantiateAsync(wasmBinary, wasmBinaryFile, info, receiveInstantiationResult).catch(readyPromiseReject); return {}; } /** @constructor */ function ExitStatus(status) { this.name = "ExitStatus"; this.message = `Program terminated with exit(${status})`; this.status = status; } var callRuntimeCallbacks = callbacks => { while (callbacks.length > 0) { callbacks.shift()(Module); } }; /** * @param {number} ptr * @param {string} type */ function getValue(ptr, type = "i8") { if (type.endsWith("*")) type = "*"; switch (type) { case "i1": return HEAP8[((ptr) >> 0)]; case "i8": return HEAP8[((ptr) >> 0)]; case "i16": return HEAP16[((ptr) >> 1)]; case "i32": return HEAP32[((ptr) >> 2)]; case "i64": abort("to do getValue(i64) use WASM_BIGINT"); case "float": return HEAPF32[((ptr) >> 2)]; case "double": return HEAPF64[((ptr) >> 3)]; case "*": return HEAPU32[((ptr) >> 2)]; default: abort(`invalid type for getValue: ${type}`); } } var noExitRuntime = Module["noExitRuntime"] || true; /** * @param {number} ptr * @param {number} value * @param {string} type */ function setValue(ptr, value, type = "i8") { if (type.endsWith("*")) type = "*"; switch (type) { case "i1": HEAP8[((ptr) >> 0)] = value; break; case "i8": HEAP8[((ptr) >> 0)] = value; break; case "i16": HEAP16[((ptr) >> 1)] = value; break; case "i32": HEAP32[((ptr) >> 2)] = value; break; case "i64": abort("to do setValue(i64) use WASM_BIGINT"); case "float": HEAPF32[((ptr) >> 2)] = value; break; case "double": HEAPF64[((ptr) >> 3)] = value; break; case "*": 
HEAPU32[((ptr) >> 2)] = value; break; default: abort(`invalid type for setValue: ${type}`); } } var UTF8Decoder = typeof TextDecoder != "undefined" ? new TextDecoder("utf8") : undefined; /** * Given a pointer 'idx' to a null-terminated UTF8-encoded string in the given * array that contains uint8 values, returns a copy of that string as a * Javascript String object. * heapOrArray is either a regular array, or a JavaScript typed array view. * @param {number} idx * @param {number=} maxBytesToRead * @return {string} */ var UTF8ArrayToString = (heapOrArray, idx, maxBytesToRead) => { var endIdx = idx + maxBytesToRead; var endPtr = idx; while (heapOrArray[endPtr] && !(endPtr >= endIdx)) ++endPtr; if (endPtr - idx > 16 && heapOrArray.buffer && UTF8Decoder) { return UTF8Decoder.decode(heapOrArray.subarray(idx, endPtr)); } var str = ""; while (idx < endPtr) { var u0 = heapOrArray[idx++]; if (!(u0 & 128)) { str += String.fromCharCode(u0); continue; } var u1 = heapOrArray[idx++] & 63; if ((u0 & 224) == 192) { str += String.fromCharCode(((u0 & 31) << 6) | u1); continue; } var u2 = heapOrArray[idx++] & 63; if ((u0 & 240) == 224) { u0 = ((u0 & 15) << 12) | (u1 << 6) | u2; } else { u0 = ((u0 & 7) << 18) | (u1 << 12) | (u2 << 6) | (heapOrArray[idx++] & 63); } if (u0 < 65536) { str += String.fromCharCode(u0); } else { var ch = u0 - 65536; str += String.fromCharCode(55296 | (ch >> 10), 56320 | (ch & 1023)); } } return str; }; /** * Given a pointer 'ptr' to a null-terminated UTF8-encoded string in the * emscripten HEAP, returns a copy of that string as a Javascript String object. * * @param {number} ptr * @param {number=} maxBytesToRead - An optional length that specifies the * maximum number of bytes to read. You can omit this parameter to scan the * string until the first 0 byte. If maxBytesToRead is passed, and the string * at [ptr, ptr+maxBytesToReadr[ contains a null byte in the middle, then the * string will cut short at that byte index (i.e. 
maxBytesToRead will not * produce a string of exact length [ptr, ptr+maxBytesToRead[) N.B. mixing * frequent uses of UTF8ToString() with and without maxBytesToRead may throw * JS JIT optimizations off, so it is worth to consider consistently using one * @return {string} */ var UTF8ToString = (ptr, maxBytesToRead) => ptr ? UTF8ArrayToString(HEAPU8, ptr, maxBytesToRead) : ""; var ___assert_fail = (condition, filename, line, func) => { abort(`Assertion failed: ${UTF8ToString(condition)}, at: ` + [ filename ? UTF8ToString(filename) : "unknown filename", line, func ? UTF8ToString(func) : "unknown function" ]); }; var abortOnCannotGrowMemory = requestedSize => { abort("OOM"); }; var _emscripten_resize_heap = requestedSize => { var oldSize = HEAPU8.length; requestedSize >>>= 0; abortOnCannotGrowMemory(requestedSize); }; var runtimeKeepaliveCounter = 0; var keepRuntimeAlive = () => noExitRuntime || runtimeKeepaliveCounter > 0; var _proc_exit = code => { EXITSTATUS = code; if (!keepRuntimeAlive()) { Module["onExit"]?.(code); ABORT = true; } quit_(code, new ExitStatus(code)); }; /** @param {boolean|number=} implicit */ var exitJS = (status, implicit) => { EXITSTATUS = status; _proc_exit(status); }; var _exit = exitJS; var getCFunc = ident => { var func = Module["_" + ident]; return func; }; var writeArrayToMemory = (array, buffer) => { HEAP8.set(array, buffer); }; var lengthBytesUTF8 = str => { var len = 0; for (var i = 0; i < str.length; ++i) { var c = str.charCodeAt(i); if (c <= 127) { len++; } else if (c <= 2047) { len += 2; } else if (c >= 55296 && c <= 57343) { len += 4; ++i; } else { len += 3; } } return len; }; var stringToUTF8Array = (str, heap, outIdx, maxBytesToWrite) => { if (!(maxBytesToWrite > 0)) return 0; var startIdx = outIdx; var endIdx = outIdx + maxBytesToWrite - 1; for (var i = 0; i < str.length; ++i) { var u = str.charCodeAt(i); if (u >= 55296 && u <= 57343) { var u1 = str.charCodeAt(++i); u = 65536 + ((u & 1023) << 10) | (u1 & 1023); } if (u <= 127) { 
if (outIdx >= endIdx) break; heap[outIdx++] = u; } else if (u <= 2047) { if (outIdx + 1 >= endIdx) break; heap[outIdx++] = 192 | (u >> 6); heap[outIdx++] = 128 | (u & 63); } else if (u <= 65535) { if (outIdx + 2 >= endIdx) break; heap[outIdx++] = 224 | (u >> 12); heap[outIdx++] = 128 | ((u >> 6) & 63); heap[outIdx++] = 128 | (u & 63); } else { if (outIdx + 3 >= endIdx) break; heap[outIdx++] = 240 | (u >> 18); heap[outIdx++] = 128 | ((u >> 12) & 63); heap[outIdx++] = 128 | ((u >> 6) & 63); heap[outIdx++] = 128 | (u & 63); } } heap[outIdx] = 0; return outIdx - startIdx; }; var stringToUTF8 = (str, outPtr, maxBytesToWrite) => stringToUTF8Array(str, HEAPU8, outPtr, maxBytesToWrite); var stringToUTF8OnStack = str => { var size = lengthBytesUTF8(str) + 1; var ret = stackAlloc(size); stringToUTF8(str, ret, size); return ret; }; /** * @param {string|null=} returnType * @param {Array=} argTypes * @param {Arguments|Array=} args * @param {Object=} opts */ var ccall = (ident, returnType, argTypes, args, opts) => { var toC = { "string": str => { var ret = 0; if (str !== null && str !== undefined && str !== 0) { ret = stringToUTF8OnStack(str); } return ret; }, "array": arr => { var ret = stackAlloc(arr.length); writeArrayToMemory(arr, ret); return ret; |
︙ | ︙ | |||
596 597 598 599 600 601 602 | var ret = func.apply(null, cArgs); function onDone(ret) { if (stack !== 0) stackRestore(stack); return convertReturnValue(ret); } ret = onDone(ret); return ret; | | > > > > | < | | | | | | | | < < | < < | < < | | < > | < < | < | 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 | var ret = func.apply(null, cArgs); function onDone(ret) { if (stack !== 0) stackRestore(stack); return convertReturnValue(ret); } ret = onDone(ret); return ret; }; /** * @param {string=} returnType * @param {Array=} argTypes * @param {Object=} opts */ var cwrap = (ident, returnType, argTypes, opts) => { var numericArgs = !argTypes || argTypes.every(type => type === "number" || type === "boolean"); var numericRet = returnType !== "string"; if (numericRet && numericArgs && !opts) { return getCFunc(ident); } return function() { return ccall(ident, returnType, argTypes, arguments, opts); }; }; var wasmImports = { /** @export */ a: ___assert_fail, /** @export */ b: _emscripten_resize_heap, /** @export */ c: _exit }; var wasmExports = createWasm(); var ___wasm_call_ctors = () => (___wasm_call_ctors = wasmExports["e"])(); var _pikchr = Module["_pikchr"] = (a0, a1, a2, a3, a4) => (_pikchr = Module["_pikchr"] = wasmExports["f"])(a0, a1, a2, a3, a4); var stackSave = () => (stackSave = wasmExports["h"])(); var stackRestore = a0 => (stackRestore = wasmExports["i"])(a0); var stackAlloc = a0 => (stackAlloc = wasmExports["j"])(a0); Module["stackAlloc"] = stackAlloc; Module["stackSave"] = stackSave; Module["stackRestore"] = stackRestore; Module["cwrap"] = cwrap; Module["setValue"] = setValue; Module["getValue"] = getValue; var calledRun; dependenciesFulfilled = function runCaller() { if (!calledRun) run(); if (!calledRun) dependenciesFulfilled = 
runCaller; }; function run() { if (runDependencies > 0) { return; } preRun(); if (runDependencies > 0) { return; } |
︙ | ︙ | |||
697 698 699 700 701 702 703 | Module["preInit"].pop()(); } } run(); | | < < | | 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 | Module["preInit"].pop()(); } } run(); return moduleArg.ready } ); })(); if (typeof exports === 'object' && typeof module === 'object') module.exports = initPikchrModule; else if (typeof define === 'function' && define['amd']) define([], () => initPikchrModule); |
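The `stringToUTF8Array()` helper in the pikchr glue code above implements the standard 1-to-4-byte UTF-8 encoding branches. As an illustration only — the function name below is hypothetical and appears nowhere in the pikchr or emscripten sources — the same encoding rule can be sketched in C:

```c
#include <assert.h>

/* Encode one Unicode code point u as UTF-8 into out[] (room for 4 bytes);
** returns the number of bytes written.  This mirrors the branch structure
** of stringToUTF8Array() in the glue code above. */
static int encode_utf8(unsigned u, unsigned char *out){
  if( u<=0x7F ){                   /* 1 byte: ASCII */
    out[0] = (unsigned char)u;
    return 1;
  }else if( u<=0x7FF ){            /* 2 bytes */
    out[0] = (unsigned char)(0xC0 | (u>>6));
    out[1] = (unsigned char)(0x80 | (u & 0x3F));
    return 2;
  }else if( u<=0xFFFF ){           /* 3 bytes */
    out[0] = (unsigned char)(0xE0 | (u>>12));
    out[1] = (unsigned char)(0x80 | ((u>>6) & 0x3F));
    out[2] = (unsigned char)(0x80 | (u & 0x3F));
    return 3;
  }else{                           /* 4 bytes: supplementary planes */
    out[0] = (unsigned char)(0xF0 | (u>>18));
    out[1] = (unsigned char)(0x80 | ((u>>12) & 0x3F));
    out[2] = (unsigned char)(0x80 | ((u>>6) & 0x3F));
    out[3] = (unsigned char)(0x80 | (u & 0x3F));
    return 4;
  }
}
```

Note that the JS version first merges a UTF-16 surrogate pair (its 55296..57343 branch) into a single code point before applying these branches, which is also why `lengthBytesUTF8()` charges 4 bytes for a surrogate pair.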
Changes to extsrc/pikchr.wasm.
cannot compute difference between binary files
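Going the other direction, `UTF8ArrayToString()` in the glue code above decodes UTF-8 byte sequences from the wasm heap and re-expands supplementary code points into UTF-16 surrogate pairs for JavaScript strings. A minimal C sketch of that decode path (helper names are illustrative, not part of these sources, and well-formed input is assumed just as the glue code assumes it):

```c
#include <assert.h>

/* Decode one UTF-8 sequence starting at p into a code point, mirroring the
** branch structure of UTF8ArrayToString() above.  Stores the sequence
** length in *pn and returns the decoded code point. */
static unsigned decode_utf8(const unsigned char *p, int *pn){
  unsigned u0 = p[0];
  if( !(u0 & 0x80) ){ *pn = 1; return u0; }            /* ASCII fast path */
  unsigned u1 = p[1] & 0x3F;
  if( (u0 & 0xE0)==0xC0 ){                             /* 2-byte sequence */
    *pn = 2; return ((u0 & 0x1F)<<6) | u1;
  }
  unsigned u2 = p[2] & 0x3F;
  if( (u0 & 0xF0)==0xE0 ){                             /* 3-byte sequence */
    *pn = 3; return ((u0 & 0x0F)<<12) | (u1<<6) | u2;
  }
  *pn = 4;                                             /* 4-byte sequence */
  return ((u0 & 0x07)<<18) | (u1<<12) | (u2<<6) | (p[3] & 0x3F);
}

/* Split a supplementary code point into a UTF-16 surrogate pair, as the
** glue code does with String.fromCharCode(55296|(ch>>10), 56320|(ch&1023)). */
static void to_surrogates(unsigned u, unsigned *hi, unsigned *lo){
  unsigned ch = u - 0x10000;
  *hi = 0xD800 | (ch >> 10);
  *lo = 0xDC00 | (ch & 0x3FF);
}
```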
Changes to extsrc/shell.c.
︙ | ︙ | |||
248 249 250 251 252 253 254 | #endif #undef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN #include <windows.h> /* string conversion routines only needed on Win32 */ extern char *sqlite3_win32_unicode_to_utf8(LPCWSTR); | < < > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > | > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | > | 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 
675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 
1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 | #endif #undef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN #include <windows.h> /* string conversion routines only needed on Win32 */ extern char *sqlite3_win32_unicode_to_utf8(LPCWSTR); extern LPWSTR sqlite3_win32_utf8_to_unicode(const char *zText); #endif /* Use console I/O package as a direct INCLUDE. */ #define SQLITE_INTERNAL_LINKAGE static #ifdef SQLITE_SHELL_FIDDLE /* Deselect most features from the console I/O package for Fiddle. */ # define SQLITE_CIO_NO_REDIRECT # define SQLITE_CIO_NO_CLASSIFY # define SQLITE_CIO_NO_TRANSLATE # define SQLITE_CIO_NO_SETMODE #endif /************************* Begin ../ext/consio/console_io.h ******************/ /* ** 2023 November 1 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ******************************************************************************** ** This file exposes various interfaces used for console and other I/O ** by the SQLite project command-line tools. 
These interfaces are used ** at either source conglomeration time, compilation time, or run time. ** This source provides for either inclusion into conglomerated, ** "single-source" forms or separate compilation then linking. ** ** Platform dependencies are "hidden" here by various stratagems so ** that, provided certain conditions are met, the programs using this ** source or object code compiled from it need no explicit conditional ** compilation in their source for their console and stream I/O. ** ** The symbols and functionality exposed here are not a public API. ** This code may change in tandem with other project code as needed. ** ** When this .h file and its companion .c are directly incorporated into ** a source conglomeration (such as shell.c), the preprocessor symbol ** CIO_WIN_WC_XLATE is defined as 0 or 1, reflecting whether console I/O ** translation for Windows is effected for the build. */ #define HAVE_CONSOLE_IO_H 1 #ifndef SQLITE_INTERNAL_LINKAGE # define SQLITE_INTERNAL_LINKAGE extern /* external to translation unit */ # include <stdio.h> #else # define SHELL_NO_SYSINC /* Better yet, modify mkshellc.tcl for this. */ #endif #ifndef SQLITE3_H /* # include "sqlite3.h" */ #endif #ifndef SQLITE_CIO_NO_CLASSIFY /* Define enum for use with following function. */ typedef enum StreamsAreConsole { SAC_NoConsole = 0, SAC_InConsole = 1, SAC_OutConsole = 2, SAC_ErrConsole = 4, SAC_AnyConsole = 0x7 } StreamsAreConsole; /* ** Classify the three standard I/O streams according to whether ** they are connected to a console attached to the process. ** ** Returns the bit-wise OR of SAC_{In,Out,Err}Console values, ** or SAC_NoConsole if none of the streams reaches a console. ** ** This function should be called before any I/O is done with ** the given streams. 
As a side-effect, the given inputs are ** recorded so that later I/O operations on them may be done ** differently than the C library FILE* I/O would be done, ** iff the stream is used for the I/O functions that follow, ** and to support the ones that use an implicit stream. ** ** On some platforms, stream or console mode alteration (aka ** "Setup") may be made which is undone by consoleRestore(). */ SQLITE_INTERNAL_LINKAGE StreamsAreConsole consoleClassifySetup( FILE *pfIn, FILE *pfOut, FILE *pfErr ); /* A usual call for convenience: */ #define SQLITE_STD_CONSOLE_INIT() consoleClassifySetup(stdin,stdout,stderr) /* ** After an initial call to consoleClassifySetup(...), renew ** the same setup it effected. (A call not after is an error.) ** This will restore state altered by consoleRestore(); ** ** Applications which run an inferior (child) process which ** inherits the same I/O streams may call this function after ** such a process exits to guard against console mode changes. */ SQLITE_INTERNAL_LINKAGE void consoleRenewSetup(void); /* ** Undo any side-effects left by consoleClassifySetup(...). ** ** This should be called after consoleClassifySetup() and ** before the process terminates normally. It is suitable ** for use with the atexit() C library procedure. After ** this call, no console I/O should be done until one of ** console{Classify or Renew}Setup(...) is called again. ** ** Applications which run an inferior (child) process that ** inherits the same I/O streams might call this procedure ** before so that said process will have a console setup ** however users have configured it or come to expect. 
*/ SQLITE_INTERNAL_LINKAGE void SQLITE_CDECL consoleRestore( void ); #else /* defined(SQLITE_CIO_NO_CLASSIFY) */ # define consoleClassifySetup(i,o,e) # define consoleRenewSetup() # define consoleRestore() #endif /* defined(SQLITE_CIO_NO_CLASSIFY) */ #ifndef SQLITE_CIO_NO_REDIRECT /* ** Set stream to be used for the functions below which write ** to "the designated X stream", where X is Output or Error. ** Returns the previous value. ** ** Alternatively, pass the special value, invalidFileStream, ** to get the designated stream value without setting it. ** ** Before the designated streams are set, they default to ** those passed to consoleClassifySetup(...), and before ** that is called they default to stdout and stderr. ** ** It is error to close a stream so designated, then, without ** designating another, use the corresponding {o,e}Emit(...). */ SQLITE_INTERNAL_LINKAGE FILE *invalidFileStream; SQLITE_INTERNAL_LINKAGE FILE *setOutputStream(FILE *pf); # ifdef CONSIO_SET_ERROR_STREAM SQLITE_INTERNAL_LINKAGE FILE *setErrorStream(FILE *pf); # endif #else # define setOutputStream(pf) # define setErrorStream(pf) #endif /* !defined(SQLITE_CIO_NO_REDIRECT) */ #ifndef SQLITE_CIO_NO_TRANSLATE /* ** Emit output like fprintf(). If the output is going to the ** console and translation from UTF-8 is necessary, perform ** the needed translation. Otherwise, write formatted output ** to the provided stream almost as-is, possibly with newline ** translation as specified by set{Binary,Text}Mode(). */ SQLITE_INTERNAL_LINKAGE int fPrintfUtf8(FILE *pfO, const char *zFormat, ...); /* Like fPrintfUtf8 except stream is always the designated output. */ SQLITE_INTERNAL_LINKAGE int oPrintfUtf8(const char *zFormat, ...); /* Like fPrintfUtf8 except stream is always the designated error. */ SQLITE_INTERNAL_LINKAGE int ePrintfUtf8(const char *zFormat, ...); /* ** Emit output like fputs(). 
If the output is going to the ** console and translation from UTF-8 is necessary, perform ** the needed translation. Otherwise, write given text to the ** provided stream almost as-is, possibly with newline ** translation as specified by set{Binary,Text}Mode(). */ SQLITE_INTERNAL_LINKAGE int fPutsUtf8(const char *z, FILE *pfO); /* Like fPutsUtf8 except stream is always the designated output. */ SQLITE_INTERNAL_LINKAGE int oPutsUtf8(const char *z); /* Like fPutsUtf8 except stream is always the designated error. */ SQLITE_INTERNAL_LINKAGE int ePutsUtf8(const char *z); /* ** Emit output like fPutsUtf8(), except that the length of the ** accepted char or character sequence is limited by nAccept. ** ** Returns the number of accepted char values. */ #ifdef CONSIO_SPUTB SQLITE_INTERNAL_LINKAGE int fPutbUtf8(FILE *pfOut, const char *cBuf, int nAccept); /* Like fPutbUtf8 except stream is always the designated output. */ #endif SQLITE_INTERNAL_LINKAGE int oPutbUtf8(const char *cBuf, int nAccept); /* Like fPutbUtf8 except stream is always the designated error. */ #ifdef CONSIO_EPUTB SQLITE_INTERNAL_LINKAGE int ePutbUtf8(const char *cBuf, int nAccept); #endif /* ** Collect input like fgets(...) with special provisions for input ** from the console on platforms that require same. Defers to the ** C library fgets() when input is not from the console. Newline ** translation may be done as set by set{Binary,Text}Mode(). As a ** convenience, pfIn==NULL is treated as stdin. */ SQLITE_INTERNAL_LINKAGE char* fGetsUtf8(char *cBuf, int ncMax, FILE *pfIn); /* Like fGetsUtf8 except stream is always the designated input. */ /* SQLITE_INTERNAL_LINKAGE char* iGetsUtf8(char *cBuf, int ncMax); */ #endif /* !defined(SQLITE_CIO_NO_TRANSLATE) */ #ifndef SQLITE_CIO_NO_SETMODE /* ** Set given stream for binary mode, where newline translation is ** not done, or for text mode where, for some platforms, newlines ** are translated to the platform's conventional char sequence. 
** If bFlush true, flush the stream. ** ** An additional side-effect is that if the stream is one passed ** to consoleClassifySetup() as an output, it is flushed first. ** ** Note that binary/text mode has no effect on console I/O ** translation. On all platforms, newline to the console starts ** a new line and CR,LF chars from the console become a newline. */ SQLITE_INTERNAL_LINKAGE void setBinaryMode(FILE *, short bFlush); SQLITE_INTERNAL_LINKAGE void setTextMode(FILE *, short bFlush); #endif #ifdef SQLITE_CIO_PROMPTED_IN typedef struct Prompts { int numPrompts; const char **azPrompts; } Prompts; /* ** Macros for use of a line editor. ** ** The following macros define operations involving use of a ** line-editing library or simple console interaction. ** A "T" argument is a text (char *) buffer or filename. ** A "N" argument is an integer. ** ** SHELL_ADD_HISTORY(T) // Record text as line(s) of history. ** SHELL_READ_HISTORY(T) // Read history from file named by T. ** SHELL_WRITE_HISTORY(T) // Write history to file named by T. ** SHELL_STIFLE_HISTORY(N) // Limit history to N entries. ** ** A console program which does interactive console input is ** expected to call: ** SHELL_READ_HISTORY(T) before collecting such input; ** SHELL_ADD_HISTORY(T) as record-worthy input is taken; ** SHELL_STIFLE_HISTORY(N) after console input ceases; then ** SHELL_WRITE_HISTORY(T) before the program exits. */ /* ** Retrieve a single line of input text from an input stream. ** ** If pfIn is the input stream passed to consoleClassifySetup(), ** and azPrompt is not NULL, then a prompt is issued before the ** line is collected, as selected by the isContinuation flag. ** Array azPrompt[{0,1}] holds the {main,continuation} prompt. ** ** If zBufPrior is not NULL then it is a buffer from a prior ** call to this routine that can be reused, or will be freed. 
** ** The result is stored in space obtained from malloc() and ** must either be freed by the caller or else passed back to ** this function as zBufPrior for reuse. ** ** This function may call upon services of a line-editing ** library to interactively collect line edited input. */ SQLITE_INTERNAL_LINKAGE char * shellGetLine(FILE *pfIn, char *zBufPrior, int nLen, short isContinuation, Prompts azPrompt); #endif /* defined(SQLITE_CIO_PROMPTED_IN) */ /* ** TBD: Define an interface for application(s) to generate ** completion candidates for use by the line-editor. ** ** This may be premature; the CLI is the only application ** that does this. Yet, getting line-editing melded into ** console I/O is desirable because a line-editing library ** may have to establish console operating mode, possibly ** in a way that interferes with the above functionality. */ #if !(defined(SQLITE_CIO_NO_UTF8SCAN)&&defined(SQLITE_CIO_NO_TRANSLATE)) /* Skip over as much z[] input char sequence as is valid UTF-8, ** limited per nAccept char's or whole characters and containing ** no char cn such that ((1<<cn) & ccm)!=0. On return, the ** sequence z:return (inclusive:exclusive) is validated UTF-8. ** Limit: nAccept>=0 => char count, nAccept<0 => character */ SQLITE_INTERNAL_LINKAGE const char* zSkipValidUtf8(const char *z, int nAccept, long ccm); #endif /************************* End ../ext/consio/console_io.h ********************/ /************************* Begin ../ext/consio/console_io.c ******************/ /* ** 2023 November 4 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. 
** ******************************************************************************** ** This file implements various interfaces used for console and stream I/O ** by the SQLite project command-line tools, as explained in console_io.h . ** Functions prefixed by "SQLITE_INTERNAL_LINKAGE" behave as described there. */ #ifndef SQLITE_CDECL # define SQLITE_CDECL #endif #ifndef SHELL_NO_SYSINC # include <stdarg.h> # include <string.h> # include <stdlib.h> # include <limits.h> # include <assert.h> /* # include "sqlite3.h" */ #endif #ifndef HAVE_CONSOLE_IO_H # include "console_io.h" #endif #if defined(_MSC_VER) # pragma warning(disable : 4204) #endif #ifndef SQLITE_CIO_NO_TRANSLATE # if (defined(_WIN32) || defined(WIN32)) && !SQLITE_OS_WINRT # ifndef SHELL_NO_SYSINC # include <io.h> # include <fcntl.h> # undef WIN32_LEAN_AND_MEAN # define WIN32_LEAN_AND_MEAN # include <windows.h> # endif # define CIO_WIN_WC_XLATE 1 /* Use WCHAR Windows APIs for console I/O */ # else # ifndef SHELL_NO_SYSINC # include <unistd.h> # endif # define CIO_WIN_WC_XLATE 0 /* Use plain C library stream I/O at console */ # endif #else # define CIO_WIN_WC_XLATE 0 /* Not exposing translation routines at all */ #endif #if CIO_WIN_WC_XLATE /* Character used to represent a known-incomplete UTF-8 char group (�) */ static WCHAR cBadGroup = 0xfffd; #endif #if CIO_WIN_WC_XLATE static HANDLE handleOfFile(FILE *pf){ int fileDesc = _fileno(pf); union { intptr_t osfh; HANDLE fh; } fid = { (fileDesc>=0)? _get_osfhandle(fileDesc) : (intptr_t)INVALID_HANDLE_VALUE }; return fid.fh; } #endif #ifndef SQLITE_CIO_NO_TRANSLATE typedef struct PerStreamTags { # if CIO_WIN_WC_XLATE HANDLE hx; DWORD consMode; char acIncomplete[4]; # else short reachesConsole; # endif FILE *pf; } PerStreamTags; /* Define NULL-like value for things which can validly be 0. 
*/ # define SHELL_INVALID_FILE_PTR ((FILE *)~0) # if CIO_WIN_WC_XLATE # define SHELL_INVALID_CONS_MODE 0xFFFF0000 # endif # if CIO_WIN_WC_XLATE # define PST_INITIALIZER { INVALID_HANDLE_VALUE, SHELL_INVALID_CONS_MODE, \ {0,0,0,0}, SHELL_INVALID_FILE_PTR } # else # define PST_INITIALIZER { 0, SHELL_INVALID_FILE_PTR } # endif /* Quickly say whether a known output is going to the console. */ # if CIO_WIN_WC_XLATE static short pstReachesConsole(PerStreamTags *ppst){ return (ppst->hx != INVALID_HANDLE_VALUE); } # else # define pstReachesConsole(ppst) 0 # endif # if CIO_WIN_WC_XLATE static void restoreConsoleArb(PerStreamTags *ppst){ if( pstReachesConsole(ppst) ) SetConsoleMode(ppst->hx, ppst->consMode); } # else # define restoreConsoleArb(ppst) # endif /* Say whether FILE* appears to be a console, collect associated info. */ static short streamOfConsole(FILE *pf, /* out */ PerStreamTags *ppst){ # if CIO_WIN_WC_XLATE short rv = 0; DWORD dwCM = SHELL_INVALID_CONS_MODE; HANDLE fh = handleOfFile(pf); ppst->pf = pf; if( INVALID_HANDLE_VALUE != fh ){ rv = (GetFileType(fh) == FILE_TYPE_CHAR && GetConsoleMode(fh,&dwCM)); } ppst->hx = (rv)? fh : INVALID_HANDLE_VALUE; ppst->consMode = dwCM; return rv; # else ppst->pf = pf; ppst->reachesConsole = ( (short)isatty(fileno(pf)) ); return ppst->reachesConsole; # endif } # ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING # define ENABLE_VIRTUAL_TERMINAL_PROCESSING (0x4) # endif # if CIO_WIN_WC_XLATE /* Define console modes for use with the Windows Console API. 
*/ # define SHELL_CONI_MODE \ (ENABLE_ECHO_INPUT | ENABLE_INSERT_MODE | ENABLE_LINE_INPUT | 0x80 \ | ENABLE_QUICK_EDIT_MODE | ENABLE_EXTENDED_FLAGS | ENABLE_PROCESSED_INPUT) # define SHELL_CONO_MODE (ENABLE_PROCESSED_OUTPUT | ENABLE_WRAP_AT_EOL_OUTPUT \ | ENABLE_VIRTUAL_TERMINAL_PROCESSING) # endif typedef struct ConsoleInfo { PerStreamTags pstSetup[3]; PerStreamTags pstDesignated[3]; StreamsAreConsole sacSetup; } ConsoleInfo; static short isValidStreamInfo(PerStreamTags *ppst){ return (ppst->pf != SHELL_INVALID_FILE_PTR); } static ConsoleInfo consoleInfo = { { /* pstSetup */ PST_INITIALIZER, PST_INITIALIZER, PST_INITIALIZER }, { /* pstDesignated[] */ PST_INITIALIZER, PST_INITIALIZER, PST_INITIALIZER }, SAC_NoConsole /* sacSetup */ }; SQLITE_INTERNAL_LINKAGE FILE* invalidFileStream = (FILE *)~0; # if CIO_WIN_WC_XLATE static void maybeSetupAsConsole(PerStreamTags *ppst, short odir){ if( pstReachesConsole(ppst) ){ DWORD cm = odir? SHELL_CONO_MODE : SHELL_CONI_MODE; SetConsoleMode(ppst->hx, cm); } } # else # define maybeSetupAsConsole(ppst,odir) # endif SQLITE_INTERNAL_LINKAGE void consoleRenewSetup(void){ # if CIO_WIN_WC_XLATE int ix = 0; while( ix < 6 ){ PerStreamTags *ppst = (ix<3)? 
&consoleInfo.pstSetup[ix] : &consoleInfo.pstDesignated[ix-3]; maybeSetupAsConsole(ppst, (ix % 3)>0); ++ix; } # endif } SQLITE_INTERNAL_LINKAGE StreamsAreConsole consoleClassifySetup( FILE *pfIn, FILE *pfOut, FILE *pfErr ){ StreamsAreConsole rv = SAC_NoConsole; FILE* apf[3] = { pfIn, pfOut, pfErr }; int ix; for( ix = 2; ix >= 0; --ix ){ PerStreamTags *ppst = &consoleInfo.pstSetup[ix]; if( streamOfConsole(apf[ix], ppst) ){ rv |= (SAC_InConsole<<ix); } consoleInfo.pstDesignated[ix] = *ppst; if( ix > 0 ) fflush(apf[ix]); } consoleInfo.sacSetup = rv; consoleRenewSetup(); return rv; } SQLITE_INTERNAL_LINKAGE void SQLITE_CDECL consoleRestore( void ){ # if CIO_WIN_WC_XLATE static ConsoleInfo *pci = &consoleInfo; if( pci->sacSetup ){ int ix; for( ix=0; ix<3; ++ix ){ if( pci->sacSetup & (SAC_InConsole<<ix) ){ PerStreamTags *ppst = &pci->pstSetup[ix]; SetConsoleMode(ppst->hx, ppst->consMode); } } } # endif } #endif /* !defined(SQLITE_CIO_NO_TRANSLATE) */ #ifdef SQLITE_CIO_INPUT_REDIR /* Say whether given FILE* is among those known, via either ** consoleClassifySetup() or set{Output,Error}Stream, as ** readable, and return an associated PerStreamTags pointer ** if so. Otherwise, return 0. */ static PerStreamTags * isKnownReadable(FILE *pf){ static PerStreamTags *apst[] = { &consoleInfo.pstDesignated[0], &consoleInfo.pstSetup[0], 0 }; int ix = 0; do { if( apst[ix]->pf == pf ) break; } while( apst[++ix] != 0 ); return apst[ix]; } #endif #ifndef SQLITE_CIO_NO_TRANSLATE /* Say whether given FILE* is among those known, via either ** consoleClassifySetup() or set{Output,Error}Stream, as ** writable, and return an associated PerStreamTags pointer ** if so. Otherwise, return 0. 
*/ static PerStreamTags * isKnownWritable(FILE *pf){ static PerStreamTags *apst[] = { &consoleInfo.pstDesignated[1], &consoleInfo.pstDesignated[2], &consoleInfo.pstSetup[1], &consoleInfo.pstSetup[2], 0 }; int ix = 0; do { if( apst[ix]->pf == pf ) break; } while( apst[++ix] != 0 ); return apst[ix]; } static FILE *designateEmitStream(FILE *pf, unsigned chix){ FILE *rv = consoleInfo.pstDesignated[chix].pf; if( pf == invalidFileStream ) return rv; else{ /* Setting a possibly new output stream. */ PerStreamTags *ppst = isKnownWritable(pf); if( ppst != 0 ){ PerStreamTags pst = *ppst; consoleInfo.pstDesignated[chix] = pst; }else streamOfConsole(pf, &consoleInfo.pstDesignated[chix]); } return rv; } SQLITE_INTERNAL_LINKAGE FILE *setOutputStream(FILE *pf){ return designateEmitStream(pf, 1); } # ifdef CONSIO_SET_ERROR_STREAM SQLITE_INTERNAL_LINKAGE FILE *setErrorStream(FILE *pf){ return designateEmitStream(pf, 2); } # endif #endif /* !defined(SQLITE_CIO_NO_TRANSLATE) */ #ifndef SQLITE_CIO_NO_SETMODE # if CIO_WIN_WC_XLATE static void setModeFlushQ(FILE *pf, short bFlush, int mode){ if( bFlush ) fflush(pf); _setmode(_fileno(pf), mode); } # else # define setModeFlushQ(f, b, m) if(b) fflush(f) # endif SQLITE_INTERNAL_LINKAGE void setBinaryMode(FILE *pf, short bFlush){ setModeFlushQ(pf, bFlush, _O_BINARY); } SQLITE_INTERNAL_LINKAGE void setTextMode(FILE *pf, short bFlush){ setModeFlushQ(pf, bFlush, _O_TEXT); } # undef setModeFlushQ #else /* defined(SQLITE_CIO_NO_SETMODE) */ # define setBinaryMode(f, bFlush) do{ if((bFlush)) fflush(f); }while(0) # define setTextMode(f, bFlush) do{ if((bFlush)) fflush(f); }while(0) #endif /* defined(SQLITE_CIO_NO_SETMODE) */ #ifndef SQLITE_CIO_NO_TRANSLATE # if CIO_WIN_WC_XLATE /* Write buffer cBuf as output to stream known to reach console, ** limited to ncTake char's. Return ncTake on success, else 0. 
*/ static int conZstrEmit(PerStreamTags *ppst, const char *z, int ncTake){ int rv = 0; if( z!=NULL ){ int nwc = MultiByteToWideChar(CP_UTF8,0, z,ncTake, 0,0); if( nwc > 0 ){ WCHAR *zw = sqlite3_malloc64(nwc*sizeof(WCHAR)); if( zw!=NULL ){ nwc = MultiByteToWideChar(CP_UTF8,0, z,ncTake, zw,nwc); if( nwc > 0 ){ /* Translation from UTF-8 to UTF-16, then WCHARs out. */ if( WriteConsoleW(ppst->hx, zw,nwc, 0, NULL) ){ rv = ncTake; } } sqlite3_free(zw); } } } return rv; } /* For {f,o,e}PrintfUtf8() when stream is known to reach console. */ static int conioVmPrintf(PerStreamTags *ppst, const char *zFormat, va_list ap){ char *z = sqlite3_vmprintf(zFormat, ap); if( z ){ int rv = conZstrEmit(ppst, z, (int)strlen(z)); sqlite3_free(z); return rv; }else return 0; } # endif /* CIO_WIN_WC_XLATE */ # ifdef CONSIO_GET_EMIT_STREAM static PerStreamTags * getDesignatedEmitStream(FILE *pf, unsigned chix, PerStreamTags *ppst){ PerStreamTags *rv = isKnownWritable(pf); short isValid = (rv!=0)? isValidStreamInfo(rv) : 0; if( rv != 0 && isValid ) return rv; streamOfConsole(pf, ppst); return ppst; } # endif /* Get stream info, either for designated output or error stream when ** chix equals 1 or 2, or for an arbitrary stream when chix == 0. ** In either case, ppst references a caller-owned PerStreamTags ** struct which may be filled in if none of the known writable ** streams is being held by consoleInfo. The ppf parameter is a ** byref output when chix!=0 and a byref input when chix==0. */ static PerStreamTags * getEmitStreamInfo(unsigned chix, PerStreamTags *ppst, /* in/out */ FILE **ppf){ PerStreamTags *ppstTry; FILE *pfEmit; if( chix > 0 ){ ppstTry = &consoleInfo.pstDesignated[chix]; if( !isValidStreamInfo(ppstTry) ){ ppstTry = &consoleInfo.pstSetup[chix]; pfEmit = ppst->pf; }else pfEmit = ppstTry->pf; if( !isValidStreamInfo(ppstTry) ){ pfEmit = (chix > 1)? 
stderr : stdout; ppstTry = ppst; streamOfConsole(pfEmit, ppstTry); } *ppf = pfEmit; }else{ ppstTry = isKnownWritable(*ppf); if( ppstTry != 0 ) return ppstTry; streamOfConsole(*ppf, ppst); return ppst; } return ppstTry; } SQLITE_INTERNAL_LINKAGE int oPrintfUtf8(const char *zFormat, ...){ va_list ap; int rv; FILE *pfOut; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(1, &pst, &pfOut); # else getEmitStreamInfo(1, &pst, &pfOut); # endif assert(zFormat!=0); va_start(ap, zFormat); # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ){ rv = conioVmPrintf(ppst, zFormat, ap); }else{ # endif rv = vfprintf(pfOut, zFormat, ap); # if CIO_WIN_WC_XLATE } # endif va_end(ap); return rv; } SQLITE_INTERNAL_LINKAGE int ePrintfUtf8(const char *zFormat, ...){ va_list ap; int rv; FILE *pfErr; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(2, &pst, &pfErr); # else getEmitStreamInfo(2, &pst, &pfErr); # endif assert(zFormat!=0); va_start(ap, zFormat); # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ){ rv = conioVmPrintf(ppst, zFormat, ap); }else{ # endif rv = vfprintf(pfErr, zFormat, ap); # if CIO_WIN_WC_XLATE } # endif va_end(ap); return rv; } SQLITE_INTERNAL_LINKAGE int fPrintfUtf8(FILE *pfO, const char *zFormat, ...){ va_list ap; int rv; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(0, &pst, &pfO); # else getEmitStreamInfo(0, &pst, &pfO); # endif assert(zFormat!=0); va_start(ap, zFormat); # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ){ maybeSetupAsConsole(ppst, 1); rv = conioVmPrintf(ppst, zFormat, ap); if( 0 == isKnownWritable(ppst->pf) ) restoreConsoleArb(ppst); }else{ # endif rv = vfprintf(pfO, zFormat, ap); # if CIO_WIN_WC_XLATE } # endif va_end(ap); return rv; } SQLITE_INTERNAL_LINKAGE int fPutsUtf8(const char *z, FILE *pfO){ PerStreamTags pst = 
PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(0, &pst, &pfO); # else getEmitStreamInfo(0, &pst, &pfO); # endif assert(z!=0); # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ){ int rv; maybeSetupAsConsole(ppst, 1); rv = conZstrEmit(ppst, z, (int)strlen(z)); if( 0 == isKnownWritable(ppst->pf) ) restoreConsoleArb(ppst); return rv; }else { # endif return (fputs(z, pfO)<0)? 0 : (int)strlen(z); # if CIO_WIN_WC_XLATE } # endif } SQLITE_INTERNAL_LINKAGE int ePutsUtf8(const char *z){ FILE *pfErr; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(2, &pst, &pfErr); # else getEmitStreamInfo(2, &pst, &pfErr); # endif assert(z!=0); # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ) return conZstrEmit(ppst, z, (int)strlen(z)); else { # endif return (fputs(z, pfErr)<0)? 0 : (int)strlen(z); # if CIO_WIN_WC_XLATE } # endif } SQLITE_INTERNAL_LINKAGE int oPutsUtf8(const char *z){ FILE *pfOut; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(1, &pst, &pfOut); # else getEmitStreamInfo(1, &pst, &pfOut); # endif assert(z!=0); # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ) return conZstrEmit(ppst, z, (int)strlen(z)); else { # endif return (fputs(z, pfOut)<0)? 0 : (int)strlen(z); # if CIO_WIN_WC_XLATE } # endif } #endif /* !defined(SQLITE_CIO_NO_TRANSLATE) */ #if !(defined(SQLITE_CIO_NO_UTF8SCAN) && defined(SQLITE_CIO_NO_TRANSLATE)) /* Skip over as much z[] input char sequence as is valid UTF-8, ** limited per nAccept char's or whole characters and containing ** no char cn such that ((1<<cn) & ccm)!=0. On return, the ** sequence z:return (inclusive:exclusive) is validated UTF-8. ** Limit: nAccept>=0 => char count, nAccept<0 => character */ SQLITE_INTERNAL_LINKAGE const char* zSkipValidUtf8(const char *z, int nAccept, long ccm){ int ng = (nAccept<0)? 
-nAccept : 0; const char *pcLimit = (nAccept>=0)? z+nAccept : 0; assert(z!=0); while( (pcLimit)? (z<pcLimit) : (ng-- != 0) ){ char c = *z; if( (c & 0x80) == 0 ){ if( ccm != 0L && c < 0x20 && ((1L<<c) & ccm) != 0 ) return z; ++z; /* ASCII */ }else if( (c & 0xC0) != 0xC0 ) return z; /* not a lead byte */ else{ const char *zt = z+1; /* Got lead byte, look at trail bytes.*/ do{ if( pcLimit && zt >= pcLimit ) return z; else{ char ct = *zt++; if( ct==0 || (zt-z)>4 || (ct & 0xC0)!=0x80 ){ /* Trailing bytes are too few, too many, or invalid. */ return z; } } } while( ((c <<= 1) & 0x40) == 0x40 ); /* Eat lead byte's count. */ z = zt; } } return z; } #endif /*!(defined(SQLITE_CIO_NO_UTF8SCAN)&&defined(SQLITE_CIO_NO_TRANSLATE))*/ #ifndef SQLITE_CIO_NO_TRANSLATE # ifdef CONSIO_SPUTB SQLITE_INTERNAL_LINKAGE int fPutbUtf8(FILE *pfO, const char *cBuf, int nAccept){ assert(pfO!=0); # if CIO_WIN_WC_XLATE PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ PerStreamTags *ppst = getEmitStreamInfo(0, &pst, &pfO); if( pstReachesConsole(ppst) ){ int rv; maybeSetupAsConsole(ppst, 1); rv = conZstrEmit(ppst, cBuf, nAccept); if( 0 == isKnownWritable(ppst->pf) ) restoreConsoleArb(ppst); return rv; }else { # endif return (int)fwrite(cBuf, 1, nAccept, pfO); # if CIO_WIN_WC_XLATE } # endif } # endif SQLITE_INTERNAL_LINKAGE int oPutbUtf8(const char *cBuf, int nAccept){ FILE *pfOut; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ # if CIO_WIN_WC_XLATE PerStreamTags *ppst = getEmitStreamInfo(1, &pst, &pfOut); # else getEmitStreamInfo(1, &pst, &pfOut); # endif # if CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ){ return conZstrEmit(ppst, cBuf, nAccept); }else { # endif return (int)fwrite(cBuf, 1, nAccept, pfOut); # if CIO_WIN_WC_XLATE } # endif } # ifdef CONSIO_EPUTB SQLITE_INTERNAL_LINKAGE int ePutbUtf8(const char *cBuf, int nAccept){ FILE *pfErr; PerStreamTags pst = PST_INITIALIZER; /* for unknown streams */ PerStreamTags *ppst = getEmitStreamInfo(2, &pst, &pfErr); # if 
CIO_WIN_WC_XLATE if( pstReachesConsole(ppst) ){ return conZstrEmit(ppst, cBuf, nAccept); }else { # endif return (int)fwrite(cBuf, 1, nAccept, pfErr); # if CIO_WIN_WC_XLATE } # endif } # endif /* defined(CONSIO_EPUTB) */ SQLITE_INTERNAL_LINKAGE char* fGetsUtf8(char *cBuf, int ncMax, FILE *pfIn){ if( pfIn==0 ) pfIn = stdin; # if CIO_WIN_WC_XLATE if( pfIn == consoleInfo.pstSetup[0].pf && (consoleInfo.sacSetup & SAC_InConsole)!=0 ){ # if CIO_WIN_WC_XLATE==1 # define SHELL_GULP 150 /* Count of WCHARS to be gulped at a time */ WCHAR wcBuf[SHELL_GULP+1]; int lend = 0, noc = 0; if( ncMax > 0 ) cBuf[0] = 0; while( noc < ncMax-8-1 && !lend ){ /* There is room for at least 2 more characters and a 0-terminator. */ int na = (ncMax > SHELL_GULP*4+1 + noc)? SHELL_GULP : (ncMax-1 - noc)/4; # undef SHELL_GULP DWORD nbr = 0; BOOL bRC = ReadConsoleW(consoleInfo.pstSetup[0].hx, wcBuf, na, &nbr, 0); if( bRC && nbr>0 && (wcBuf[nbr-1]&0xF800)==0xD800 ){ /* Last WHAR read is first of a UTF-16 surrogate pair. Grab its mate. */ DWORD nbrx; bRC &= ReadConsoleW(consoleInfo.pstSetup[0].hx, wcBuf+nbr, 1, &nbrx, 0); if( bRC ) nbr += nbrx; } if( !bRC || (noc==0 && nbr==0) ) return 0; if( nbr > 0 ){ int nmb = WideCharToMultiByte(CP_UTF8, 0, wcBuf,nbr,0,0,0,0); if( nmb != 0 && noc+nmb <= ncMax ){ int iseg = noc; nmb = WideCharToMultiByte(CP_UTF8, 0, wcBuf,nbr,cBuf+noc,nmb,0,0); noc += nmb; /* Fixup line-ends as coded by Windows for CR (or "Enter".) ** This is done without regard for any setMode{Text,Binary}() ** call that might have been done on the interactive input. */ if( noc > 0 ){ if( cBuf[noc-1]=='\n' ){ lend = 1; if( noc > 1 && cBuf[noc-2]=='\r' ) cBuf[--noc-1] = '\n'; } } /* Check for ^Z (anywhere in line) too, to act as EOF. */ while( iseg < noc ){ if( cBuf[iseg]=='\x1a' ){ noc = iseg; /* Chop ^Z and anything following. */ lend = 1; /* Counts as end of line too. */ break; } ++iseg; } }else break; /* Drop apparent garbage in. (Could assert.) 
*/ }else break; } /* If got nothing, (after ^Z chop), must be at end-of-file. */ if( noc > 0 ){ cBuf[noc] = 0; return cBuf; }else return 0; # endif }else{ # endif return fgets(cBuf, ncMax, pfIn); # if CIO_WIN_WC_XLATE } # endif } #endif /* !defined(SQLITE_CIO_NO_TRANSLATE) */ #if defined(_MSC_VER) # pragma warning(default : 4204) #endif #undef SHELL_INVALID_FILE_PTR /************************* End ../ext/consio/console_io.c ********************/ #ifndef SQLITE_SHELL_FIDDLE /* From here onward, fgets() is redirected to the console_io library. */ # define fgets(b,n,f) fGetsUtf8(b,n,f) /* * Define macros for emitting output text in various ways: * sputz(s, z) => emit 0-terminated string z to given stream s * sputf(s, f, ...) => emit varargs per format f to given stream s * oputz(z) => emit 0-terminated string z to default stream * oputf(f, ...) => emit varargs per format f to default stream * eputz(z) => emit 0-terminated string z to error stream * eputf(f, ...) => emit varargs per format f to error stream * oputb(b, n) => emit char buffer b[0..n-1] to default stream * * Note that the default stream is whatever has been last set via: * setOutputStream(FILE *pf) * This is normally the stream that CLI normal output goes to. * For the stand-alone CLI, it is stdout with no .output redirect. * * The ?putz(z) forms are required for the Fiddle builds for string literal * output, in aid of enforcing format string to argument correspondence. */ # define sputz(s,z) fPutsUtf8(z,s) # define sputf fPrintfUtf8 # define oputz(z) oPutsUtf8(z) # define oputf oPrintfUtf8 # define eputz(z) ePutsUtf8(z) # define eputf ePrintfUtf8 # define oputb(buf,na) oPutbUtf8(buf,na) #else /* For Fiddle, all console handling and emit redirection is omitted. */ /* These next 3 macros are for emitting formatted output. When complaints * from the WASM build are issued for non-formatted output, (when a mere * string literal is to be emitted, the ?putz(z) forms should be used. 
 * (This permits compile-time checking of format string / argument mismatch.)
 */
# define oputf(fmt, ...) printf(fmt,__VA_ARGS__)
# define eputf(fmt, ...) fprintf(stderr,fmt,__VA_ARGS__)
# define sputf(fp,fmt, ...) fprintf(fp,fmt,__VA_ARGS__)
/* These next 3 macros are for emitting simple string literals. */
# define oputz(z) fputs(z,stdout)
# define eputz(z) fputs(z,stderr)
# define sputz(fp,z) fputs(z,fp)
# define oputb(buf,na) fwrite(buf,1,na,stdout)
#endif

/* True if the timer is enabled */
static int enableTimer = 0;

/* A version of strcmp() that works with NULL values */
static int cli_strcmp(const char *a, const char *b){
︙ | ︙ | |||
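The console_io code above validates console input with zSkipValidUtf8(), which stops at the first byte that cannot begin or continue a well-formed UTF-8 sequence. A minimal standalone sketch of the same lead/continuation-byte arithmetic, omitting the control-character mask (ccm) for brevity; `skip_valid_utf8` is a hypothetical name, not part of the shell sources:

```c
#include <assert.h>

/* Longest valid-UTF-8 prefix of z[0..n-1], returned as a pointer one
** past that prefix. Each 1-bit shifted out of the lead byte demands one
** 10xxxxxx trail byte; sequences longer than 4 bytes are rejected. */
static const char *skip_valid_utf8(const char *z, int n){
  const char *end = z + n;
  while( z<end ){
    unsigned char c = (unsigned char)*z;
    if( (c & 0x80)==0 ){ z++; continue; }       /* ASCII byte */
    if( (c & 0xC0)!=0xC0 ) return z;            /* stray trail byte */
    {
      const char *zt = z+1;
      do{
        if( zt>=end || (zt-z)>=4 ) return z;    /* truncated or too long */
        if( ((unsigned char)*zt & 0xC0)!=0x80 ) return z;
        zt++;
      }while( ((c <<= 1) & 0x40)==0x40 );       /* eat lead byte's count */
      z = zt;
    }
  }
  return z;
}
```

As in zSkipValidUtf8(), the caller learns how much of the buffer is safe to hand to the console writer; anything after the returned pointer is held back until more bytes arrive.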
345 346 347 348 349 350 351 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer ){ sqlite3_int64 iEnd = timeOfDay(); struct rusage sEnd; getrusage(RUSAGE_SELF, &sEnd); | | | | | | 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer ){ sqlite3_int64 iEnd = timeOfDay(); struct rusage sEnd; getrusage(RUSAGE_SELF, &sEnd); sputf(stdout, "Run Time: real %.3f user %f sys %f\n", (iEnd - iBegin)*0.001, timeDiff(&sBegin.ru_utime, &sEnd.ru_utime), timeDiff(&sBegin.ru_stime, &sEnd.ru_stime)); } } #define BEGIN_TIMER beginTimer() #define END_TIMER endTimer() #define HAS_TIMER 1 |
︙ | ︙ | |||
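The timing hunk above prints wall, user, and system time; the user/sys figures are differences of two struct timeval snapshots taken via getrusage(). A minimal sketch of that subtraction (`tv_diff` is a hypothetical name; the shell's equivalent helper is timeDiff()):

```c
#include <assert.h>
#include <sys/time.h>

/* Seconds elapsed from *pStart to *pEnd, combining the seconds and
** microseconds fields of the two struct timeval snapshots. */
static double tv_diff(const struct timeval *pStart, const struct timeval *pEnd){
  return (double)(pEnd->tv_sec - pStart->tv_sec)
       + (double)(pEnd->tv_usec - pStart->tv_usec)*1e-6;
}
```

The wall-clock figure in the hunk is computed separately from timeOfDay() in milliseconds, hence the `*0.001` scaling in the sputf() call.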
424 425 426 427 428 429 430 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer && getProcessTimesAddr){ FILETIME ftCreation, ftExit, ftKernelEnd, ftUserEnd; sqlite3_int64 ftWallEnd = timeOfDay(); getProcessTimesAddr(hProcess,&ftCreation,&ftExit,&ftKernelEnd,&ftUserEnd); | | | | | | 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer && getProcessTimesAddr){ FILETIME ftCreation, ftExit, ftKernelEnd, ftUserEnd; sqlite3_int64 ftWallEnd = timeOfDay(); getProcessTimesAddr(hProcess,&ftCreation,&ftExit,&ftKernelEnd,&ftUserEnd); sputf(stdout, "Run Time: real %.3f user %f sys %f\n", (ftWallEnd - ftWallBegin)*0.001, timeDiff(&ftUserBegin, &ftUserEnd), timeDiff(&ftKernelBegin, &ftKernelEnd)); } } #define BEGIN_TIMER beginTimer() #define END_TIMER endTimer() #define HAS_TIMER hasTimer() |
︙ | ︙ | |||
464 465 466 467 468 469 470 | /* ** Treat stdin as an interactive input if the following variable ** is true. Otherwise, assume stdin is connected to a file or pipe. */ static int stdin_is_interactive = 1; /* | < < < < < < < < < < < < < < < < < < < | | | | 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 | /* ** Treat stdin as an interactive input if the following variable ** is true. Otherwise, assume stdin is connected to a file or pipe. */ static int stdin_is_interactive = 1; /* ** On Windows systems we need to know if standard output is a console ** in order to show that UTF-16 translation is done in the sign-on ** banner. The following variable is true if it is the console. */ static int stdout_is_console = 1; /* ** The following is the open SQLite database. We make a pointer ** to this database a static variable so that it can be accessed ** by the SIGINT handler to interrupt database processing. |
︙ | ︙ | |||
607 608 609 610 611 612 613 | shell_strncpy(dynPrompt.dynamicPrompt, "(..", 4); }else if( dynPrompt.inParenLevel<0 ){ shell_strncpy(dynPrompt.dynamicPrompt, ")x!", 4); }else{ shell_strncpy(dynPrompt.dynamicPrompt, "(x.", 4); dynPrompt.dynamicPrompt[2] = (char)('0'+dynPrompt.inParenLevel); } | | > < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 | shell_strncpy(dynPrompt.dynamicPrompt, "(..", 4); }else if( dynPrompt.inParenLevel<0 ){ shell_strncpy(dynPrompt.dynamicPrompt, ")x!", 4); }else{ shell_strncpy(dynPrompt.dynamicPrompt, "(x.", 4); dynPrompt.dynamicPrompt[2] = (char)('0'+dynPrompt.inParenLevel); } shell_strncpy(dynPrompt.dynamicPrompt+3, continuePrompt+3, PROMPT_LEN_MAX-4); } } return dynPrompt.dynamicPrompt; } #endif /* !defined(SQLITE_OMIT_DYNAPROMPT) */ /* Indicate out-of-memory and exit. */ static void shell_out_of_memory(void){ eputz("Error: out of memory\n"); exit(1); } /* Check a pointer to see if it is NULL. If it is NULL, exit with an ** out-of-memory error. */ static void shell_check_oom(const void *p){ |
︙ | ︙ | |||
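The dynamic-prompt hunk above encodes the open-parenthesis depth into a 3-character prompt prefix: "(.." for deep nesting, ")x!" for a negative (unbalanced) depth, otherwise "(xN" with N the depth digit. A sketch of just that prefix choice; `paren_prefix` is a hypothetical name, and the deep-nesting threshold (>9) is an assumption because that branch's condition lies outside the quoted hunk:

```c
#include <assert.h>
#include <string.h>

/* Write the 3-char (plus NUL) parenthesis-depth prefix into dst[4]. */
static void paren_prefix(char *dst, int inParenLevel){
  if( inParenLevel>9 ){               /* threshold assumed, see lead-in */
    memcpy(dst, "(..", 4);
  }else if( inParenLevel<0 ){
    memcpy(dst, ")x!", 4);
  }else{
    memcpy(dst, "(x.", 4);
    dst[2] = (char)('0'+inParenLevel);
  }
}
```

In the shell, this prefix is followed by the tail of the ordinary continuation prompt, which is why the hunk copies continuePrompt+3 into dynamicPrompt+3.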
883 884 885 886 887 888 889 | static void SQLITE_CDECL iotracePrintf(const char *zFormat, ...){ va_list ap; char *z; if( iotrace==0 ) return; va_start(ap, zFormat); z = sqlite3_vmprintf(zFormat, ap); va_end(ap); | | | | | | | | 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 | static void SQLITE_CDECL iotracePrintf(const char *zFormat, ...){ va_list ap; char *z; if( iotrace==0 ) return; va_start(ap, zFormat); z = sqlite3_vmprintf(zFormat, ap); va_end(ap); sputf(iotrace, "%s", z); sqlite3_free(z); } #endif /* ** Output string zUtf to Out stream as w characters. If w is negative, ** then right-justify the text. W is the width in UTF-8 characters, not ** in bytes. This is different from the %*.*s specification in printf ** since with %*.*s the width is measured in bytes, not characters. */ static void utf8_width_print(int w, const char *zUtf){ int i; int n; int aw = w<0 ? -w : w; if( zUtf==0 ) zUtf = ""; for(i=n=0; zUtf[i]; i++){ if( (zUtf[i]&0xc0)!=0x80 ){ n++; if( n==aw ){ do{ i++; }while( (zUtf[i]&0xc0)==0x80 ); break; } } } if( n>=aw ){ oputf("%.*s", i, zUtf); }else if( w<0 ){ oputf("%*s%s", aw-n, "", zUtf); }else{ oputf("%s%*s", zUtf, aw-n, ""); } } /* ** Determines if a string is a number of not. */ |
︙ | ︙ | |||
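utf8_width_print() above measures field width in characters rather than bytes by counting only bytes that do not match the 10xxxxxx continuation pattern. The same count as a standalone helper (`utf8_char_count` is a hypothetical name, not in the shell sources):

```c
#include <assert.h>

/* Count UTF-8 characters in a NUL-terminated string by counting every
** byte that is not a continuation byte (continuation = 10xxxxxx). */
static int utf8_char_count(const char *z){
  int n = 0;
  for(; *z; z++){
    if( ((unsigned char)*z & 0xC0)!=0x80 ) n++;
  }
  return n;
}
```

This is why the hunk cannot simply use printf's `%*.*s`: that width is in bytes, so multi-byte characters would be padded incorrectly.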
973 974 975 976 977 978 979 | /* ** Return open FILE * if zFile exists, can be opened for read ** and is an ordinary file or a character stream source. ** Otherwise return 0. */ static FILE * openChrSource(const char *zFile){ | | | | | 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 | /* ** Return open FILE * if zFile exists, can be opened for read ** and is an ordinary file or a character stream source. ** Otherwise return 0. */ static FILE * openChrSource(const char *zFile){ #if defined(_WIN32) || defined(WIN32) struct __stat64 x = {0}; # define STAT_CHR_SRC(mode) ((mode & (_S_IFCHR|_S_IFIFO|_S_IFREG))!=0) /* On Windows, open first, then check the stream nature. This order ** is necessary because _stat() and sibs, when checking a named pipe, ** effectively break the pipe as its supplier sees it. */ FILE *rv = fopen(zFile, "rb"); if( rv==0 ) return 0; if( _fstat64(_fileno(rv), &x) != 0 || !STAT_CHR_SRC(x.st_mode)){ fclose(rv); rv = 0; } return rv; #else struct stat x = {0}; |
︙ | ︙ | |||
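openChrSource() above accepts regular files, character devices, and FIFOs as input sources. The hunk shows the Windows side, which must fopen() first and then _fstat64() the open handle, because _stat() on a named pipe disturbs the pipe for its writer. A hedged sketch of the POSIX-side logic (the shell's own POSIX branch is cut off in this hunk; this version also checks the already-open descriptor so the test cannot race with a rename):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>

/* Open zFile for reading only if it is a regular file, a character
** device, or a FIFO; otherwise return 0. */
static FILE *open_chr_source(const char *zFile){
  struct stat x;
  FILE *rv = fopen(zFile, "rb");
  if( rv==0 ) return 0;
  if( fstat(fileno(rv), &x)!=0
   || !(S_ISREG(x.st_mode) || S_ISCHR(x.st_mode) || S_ISFIFO(x.st_mode)) ){
    fclose(rv);
    rv = 0;
  }
  return rv;
}
```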
1036 1037 1038 1039 1040 1041 1042 | if( n>0 && zLine[n-1]=='\n' ){ n--; if( n>0 && zLine[n-1]=='\r' ) n--; zLine[n] = 0; break; } } | < < < < < < < < < < < < < < < < < | 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 | if( n>0 && zLine[n-1]=='\n' ){ n--; if( n>0 && zLine[n-1]=='\r' ) n--; zLine[n] = 0; break; } } return zLine; } /* ** Retrieve a single line of input text. ** ** If in==0 then read from standard input and prompt before each line. |
︙ | ︙ | |||
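local_getline() in the hunk above strips a trailing '\n' and, before it, an optional '\r', so Windows CRLF input and Unix LF input both end up bare. The same trim as a standalone helper (`trim_line_end` is a hypothetical name):

```c
#include <assert.h>
#include <string.h>

/* Remove one trailing "\n" or "\r\n" from zLine, in place. */
static void trim_line_end(char *zLine){
  size_t n = strlen(zLine);
  if( n>0 && zLine[n-1]=='\n' ){
    n--;
    if( n>0 && zLine[n-1]=='\r' ) n--;
    zLine[n] = 0;
  }
}
```

Checking for '\r' only when a '\n' was found keeps a bare '\r' embedded in the line (or a line with no newline at EOF) untouched.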
1079 1080 1081 1082 1083 1084 1085 | char *zPrompt; char *zResult; if( in!=0 ){ zResult = local_getline(zPrior, in); }else{ zPrompt = isContinuation ? CONTINUATION_PROMPT : mainPrompt; #if SHELL_USE_LOCAL_GETLINE | | | 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 | char *zPrompt; char *zResult; if( in!=0 ){ zResult = local_getline(zPrior, in); }else{ zPrompt = isContinuation ? CONTINUATION_PROMPT : mainPrompt; #if SHELL_USE_LOCAL_GETLINE sputz(stdout, zPrompt); fflush(stdout); do{ zResult = local_getline(zPrior, stdin); zPrior = 0; /* ^C trap creates a false EOF, so let "interrupt" thread catch up. */ if( zResult==0 ) sqlite3_sleep(50); }while( zResult==0 && seenInterrupt>0 ); |
︙ | ︙ | |||
1326 1327 1328 1329 1330 1331 1332 | sqlite3_value **apVal ){ double r = sqlite3_value_double(apVal[0]); int n = nVal>=2 ? sqlite3_value_int(apVal[1]) : 26; char z[400]; if( n<1 ) n = 1; if( n>350 ) n = 350; | | | 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 | sqlite3_value **apVal ){ double r = sqlite3_value_double(apVal[0]); int n = nVal>=2 ? sqlite3_value_int(apVal[1]) : 26; char z[400]; if( n<1 ) n = 1; if( n>350 ) n = 350; sqlite3_snprintf(sizeof(z), z, "%#+.*e", n, r); sqlite3_result_text(pCtx, z, -1, SQLITE_TRANSIENT); } /* ** SQL function: shell_module_schema(X) ** |
︙ | ︙ | |||
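The hunk above clamps the requested precision to 1..350 before formatting with "%#+.*e": '+' forces a sign, '#' forces the decimal point, and the clamp keeps the result inside the fixed 400-byte buffer. A sketch using the C library's snprintf (an assumption for portability; the shell uses sqlite3_snprintf, which has the same format semantics here):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format r into z[nz] in exponential notation with a forced sign and
** decimal point, clamping the precision n to the range 1..350. */
static void fmt_double(char *z, size_t nz, double r, int n){
  if( n<1 ) n = 1;
  if( n>350 ) n = 350;
  snprintf(z, nz, "%#+.*e", n, r);
}
```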
4984 4985 4986 4987 4988 4989 4990 | #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Return that member of a generate_series(...) sequence whose 0-based ** index is ix. The 0th member is given by smBase. The sequence members ** progress per ix increment by smStep. */ | | > | | > | > > | | | | 5727 5728 5729 5730 5731 5732 5733 5734 5735 5736 5737 5738 5739 5740 5741 5742 5743 5744 5745 5746 5747 5748 5749 5750 5751 5752 5753 5754 | #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Return that member of a generate_series(...) sequence whose 0-based ** index is ix. The 0th member is given by smBase. The sequence members ** progress per ix increment by smStep. */ static sqlite3_int64 genSeqMember( sqlite3_int64 smBase, sqlite3_int64 smStep, sqlite3_uint64 ix ){ static const sqlite3_uint64 mxI64 = ((sqlite3_uint64)0x7fffffff)<<32 | 0xffffffff; if( ix>=mxI64 ){ /* Get ix into signed i64 range. */ ix -= mxI64; /* With 2's complement ALU, this next can be 1 step, but is split into * 2 for UBSAN's satisfaction (and hypothetical 1's complement ALUs.) */ smBase += (mxI64/2) * smStep; smBase += (mxI64 - mxI64/2) * smStep; } /* Under UBSAN (or on 1's complement machines), must do this last term * in steps to avoid the dreaded (and harmless) signed multiply overlow. */ if( ix>=2 ){ sqlite3_int64 ix2 = (sqlite3_int64)ix/2; smBase += ix2*smStep; ix -= ix2; |
︙ | ︙ | |||
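genSeqMember() above computes base + ix*step without tripping UBSAN's signed-overflow check on a single large multiply: it first folds half of the index into the base, then multiplies the remainder. A compact sketch of that split (`seq_member` is a hypothetical name; the final multiply mirrors the function's tail, which falls outside the quoted hunk, and as in the shell the caller is responsible for the true result fitting in int64_t):

```c
#include <assert.h>
#include <stdint.h>

/* base + ix*step, with the ix>=2 case split in half so neither partial
** multiply overflows when the overall result is representable. */
static int64_t seq_member(int64_t base, int64_t step, uint64_t ix){
  if( ix>=2 ){
    int64_t ix2 = (int64_t)(ix/2);
    base += ix2*step;
    ix -= (uint64_t)ix2;
  }
  return base + (int64_t)ix*step;
}
```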
13057 13058 13059 13060 13061 13062 13063 | #endif /* Copy the entire schema of database [db] into [dbm]. */ if( rc==SQLITE_OK ){ sqlite3_stmt *pSql = 0; rc = idxPrintfPrepareStmt(pNew->db, &pSql, pzErrmsg, "SELECT sql FROM sqlite_schema WHERE name NOT LIKE 'sqlite_%%'" | | | 13804 13805 13806 13807 13808 13809 13810 13811 13812 13813 13814 13815 13816 13817 13818 | #endif /* Copy the entire schema of database [db] into [dbm]. */ if( rc==SQLITE_OK ){ sqlite3_stmt *pSql = 0; rc = idxPrintfPrepareStmt(pNew->db, &pSql, pzErrmsg, "SELECT sql FROM sqlite_schema WHERE name NOT LIKE 'sqlite_%%'" " AND sql NOT LIKE 'CREATE VIRTUAL %%' ORDER BY rowid" ); while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pSql) ){ const char *zSql = (const char*)sqlite3_column_text(pSql, 0); if( zSql ) rc = sqlite3_exec(pNew->dbm, zSql, 0, 0, pzErrmsg); } idxFinalize(&rc, pSql); } |
︙ | ︙ | |||
13258 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 13270 13271 | sqlite3_free(p); } } #endif /* ifndef SQLITE_OMIT_VIRTUALTABLE */ /************************* End ../ext/expert/sqlite3expert.c ********************/ #if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_ENABLE_DBPAGE_VTAB) #define SQLITE_SHELL_HAVE_RECOVER 1 #else #define SQLITE_SHELL_HAVE_RECOVER 0 #endif #if SQLITE_SHELL_HAVE_RECOVER | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 
      sqlite3_free(p);
    }
  }
#endif /* ifndef SQLITE_OMIT_VIRTUALTABLE */

/************************* End ../ext/expert/sqlite3expert.c ********************/

/************************* Begin ../ext/intck/sqlite3intck.h ******************/
/*
** 2024-02-08
**
** The author disclaims copyright to this source code.  In place of
** a legal notice, here is a blessing:
**
**   May you do good and not evil.
**   May you find forgiveness for yourself and forgive others.
**   May you share freely, never taking more than you give.
**
*************************************************************************
*/

/*
** Incremental Integrity-Check Extension
** -------------------------------------
**
** This module contains code to check whether or not an SQLite database
** is well-formed or corrupt. This is the same task as performed by SQLite's
** built-in "PRAGMA integrity_check" command. This module differs from
** "PRAGMA integrity_check" in that:
**
**   +  It is less thorough - this module does not detect certain types
**      of corruption that are detected by the PRAGMA command. However,
**      it does detect all kinds of corruption that are likely to cause
**      errors in SQLite applications.
**
**   +  It is slower. Sometimes up to three times slower.
**
**   +  It allows integrity-check operations to be split into multiple
**      transactions, so that the database does not need to be read-locked
**      for the duration of the integrity-check.
**
** One way to use the API to run integrity-check on the "main" database
** of handle db is:
**
**   int rc = SQLITE_OK;
**   sqlite3_intck *p = 0;
**
**   sqlite3_intck_open(db, "main", &p);
**   while( SQLITE_OK==sqlite3_intck_step(p) ){
**     const char *zMsg = sqlite3_intck_message(p);
**     if( zMsg ) printf("corruption: %s\n", zMsg);
**   }
**   rc = sqlite3_intck_error(p, &zErr);
**   if( rc!=SQLITE_OK ){
**     printf("error occurred (rc=%d), (errmsg=%s)\n", rc, zErr);
**   }
**   sqlite3_intck_close(p);
**
** Usually, the sqlite3_intck object opens a read transaction within the
** first call to sqlite3_intck_step() and holds it open until the
** integrity-check is complete. However, if sqlite3_intck_unlock() is
** called, the read transaction is ended and a new read transaction opened
** by the subsequent call to sqlite3_intck_step().
*/

#ifndef _SQLITE_INTCK_H
#define _SQLITE_INTCK_H

/* #include "sqlite3.h" */

#ifdef __cplusplus
extern "C" {
#endif

/*
** An ongoing incremental integrity-check operation is represented by an
** opaque pointer of the following type.
*/
typedef struct sqlite3_intck sqlite3_intck;

/*
** Open a new incremental integrity-check object. If successful, populate
** output variable (*ppOut) with the new object handle and return SQLITE_OK.
** Or, if an error occurs, set (*ppOut) to NULL and return an SQLite error
** code (e.g. SQLITE_NOMEM).
**
** The integrity-check will be conducted on database zDb (which must be "main",
** "temp", or the name of an attached database) of database handle db. Once
** this function has been called successfully, the caller should not use
** database handle db until the integrity-check object has been destroyed
** using sqlite3_intck_close().
*/
int sqlite3_intck_open(
  sqlite3 *db,                    /* Database handle */
  const char *zDb,                /* Database name ("main", "temp" etc.) */
  sqlite3_intck **ppOut           /* OUT: New sqlite3_intck handle */
);

/*
** Close and release all resources associated with a handle opened by an
** earlier call to sqlite3_intck_open(). The results of using an
** integrity-check handle after it has been passed to this function are
** undefined.
*/
void sqlite3_intck_close(sqlite3_intck *pCk);

/*
** Do the next step of the integrity-check operation specified by the handle
** passed as the only argument. This function returns SQLITE_DONE if the
** integrity-check operation is finished, or an SQLite error code if
** an error occurs, or SQLITE_OK if no error occurs but the integrity-check
** is not finished. It is not considered an error if database corruption
** is encountered.
**
** Following a successful call to sqlite3_intck_step() (one that returns
** SQLITE_OK), sqlite3_intck_message() returns a non-NULL value if
** corruption was detected in the db.
**
** If an error occurs and a value other than SQLITE_OK or SQLITE_DONE is
** returned, then the integrity-check handle is placed in an error state.
** In this state all subsequent calls to sqlite3_intck_step() or
** sqlite3_intck_unlock() will immediately return the same error. The
** sqlite3_intck_error() method may be used to obtain an English language
** error message in this case.
*/
int sqlite3_intck_step(sqlite3_intck *pCk);

/*
** If the previous call to sqlite3_intck_step() encountered corruption
** within the database, then this function returns a pointer to a buffer
** containing a nul-terminated string describing the corruption in
** English. If the previous call to sqlite3_intck_step() did not encounter
** corruption, or if there was no previous call, this function returns
** NULL.
*/
const char *sqlite3_intck_message(sqlite3_intck *pCk);

/*
** Close any read-transaction opened by an earlier call to
** sqlite3_intck_step(). Any subsequent call to sqlite3_intck_step() will
** open a new transaction. Return SQLITE_OK if successful, or an SQLite error
** code otherwise.
**
** If an error occurs, then the integrity-check handle is placed in an error
** state. In this state all subsequent calls to sqlite3_intck_step() or
** sqlite3_intck_unlock() will immediately return the same error. The
** sqlite3_intck_error() method may be used to obtain an English language
** error message in this case.
*/
int sqlite3_intck_unlock(sqlite3_intck *pCk);

/*
** If an error has occurred in an earlier call to sqlite3_intck_step()
** or sqlite3_intck_unlock(), then this method returns the associated
** SQLite error code. Additionally, if pzErr is not NULL, then (*pzErr)
** may be set to point to a nul-terminated string containing an English
** language error message. Or, if no error message is available, to
** NULL.
**
** If no error has occurred within sqlite3_intck_step() or
** sqlite_intck_unlock() calls on the handle passed as the first argument,
** then SQLITE_OK is returned and (*pzErr) set to NULL.
*/
int sqlite3_intck_error(sqlite3_intck *pCk, const char **pzErr);

/*
** This API is used for testing only. It returns the full-text of an SQL
** statement used to test object zObj, which may be a table or index.
** The returned buffer is valid until the next call to either this function
** or sqlite3_intck_close() on the same sqlite3_intck handle.
*/
const char *sqlite3_intck_test_sql(sqlite3_intck *pCk, const char *zObj);

#ifdef __cplusplus
}  /* end of the 'extern "C"' block */
#endif

#endif /* ifndef _SQLITE_INTCK_H */

/************************* End ../ext/intck/sqlite3intck.h ********************/
/************************* Begin ../ext/intck/sqlite3intck.c ******************/
/*
** 2024-02-08
**
** The author disclaims copyright to this source code.  In place of
** a legal notice, here is a blessing:
**
**   May you do good and not evil.
**   May you find forgiveness for yourself and forgive others.
**   May you share freely, never taking more than you give.
**
*************************************************************************
*/

/* #include "sqlite3intck.h" */
#include <string.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/*
** nKeyVal:
**   The number of values that make up the 'key' for the current pCheck
**   statement.
**
** rc:
**   Error code returned by most recent sqlite3_intck_step() or
**   sqlite3_intck_unlock() call. This is set to SQLITE_DONE when
**   the integrity-check operation is finished.
**
** zErr:
**   If the object has entered the error state, this is the error message.
**   Is freed using sqlite3_free() when the object is deleted.
**
** zTestSql:
**   The value returned by the most recent call to sqlite3_intck_testsql().
**   Each call to testsql() frees the previous zTestSql value (using
**   sqlite3_free()) and replaces it with the new value it will return.
*/
struct sqlite3_intck {
  sqlite3 *db;
  const char *zDb;                /* Copy of zDb parameter to _open() */
  char *zObj;                     /* Current object. Or NULL. */
  sqlite3_stmt *pCheck;           /* Current check statement */
  char *zKey;
  int nKeyVal;

  char *zMessage;
  int bCorruptSchema;

  int rc;                         /* Error code */
  char *zErr;                     /* Error message */
  char *zTestSql;                 /* Returned by sqlite3_intck_test_sql() */
};

/*
** Some error has occurred while using database p->db. Save the error message
** and error code currently held by the database handle in p->rc and p->zErr.
*/
static void intckSaveErrmsg(sqlite3_intck *p){
  p->rc = sqlite3_errcode(p->db);
  sqlite3_free(p->zErr);
  p->zErr = sqlite3_mprintf("%s", sqlite3_errmsg(p->db));
}

/*
** If the handle passed as the first argument is already in the error state,
** then this function is a no-op (returns NULL immediately). Otherwise, if an
** error occurs within this function, it leaves an error in said handle.
**
** Otherwise, this function attempts to prepare SQL statement zSql and
** return the resulting statement handle to the user.
*/
static sqlite3_stmt *intckPrepare(sqlite3_intck *p, const char *zSql){
  sqlite3_stmt *pRet = 0;
  if( p->rc==SQLITE_OK ){
    p->rc = sqlite3_prepare_v2(p->db, zSql, -1, &pRet, 0);
    if( p->rc!=SQLITE_OK ){
      intckSaveErrmsg(p);
      assert( pRet==0 );
    }
  }
  return pRet;
}

/*
** If the handle passed as the first argument is already in the error state,
** then this function is a no-op (returns NULL immediately). Otherwise, if an
** error occurs within this function, it leaves an error in said handle.
**
** Otherwise, this function treats argument zFmt as a printf() style format
** string. It formats it according to the trailing arguments and then
** attempts to prepare the results and return the resulting prepared
** statement.
*/
static sqlite3_stmt *intckPrepareFmt(sqlite3_intck *p, const char *zFmt, ...){
  sqlite3_stmt *pRet = 0;
  va_list ap;
  char *zSql = 0;
  va_start(ap, zFmt);
  zSql = sqlite3_vmprintf(zFmt, ap);
  if( p->rc==SQLITE_OK && zSql==0 ){
    p->rc = SQLITE_NOMEM;
  }
  pRet = intckPrepare(p, zSql);
  sqlite3_free(zSql);
  va_end(ap);
  return pRet;
}

/*
** Finalize SQL statement pStmt. If an error occurs and the handle passed
** as the first argument does not already contain an error, store the
** error in the handle.
*/
static void intckFinalize(sqlite3_intck *p, sqlite3_stmt *pStmt){
  int rc = sqlite3_finalize(pStmt);
  if( p->rc==SQLITE_OK && rc!=SQLITE_OK ){
    intckSaveErrmsg(p);
  }
}

/*
** If there is already an error in handle p, return it. Otherwise, call
** sqlite3_step() on the statement handle and return that value.
*/
static int intckStep(sqlite3_intck *p, sqlite3_stmt *pStmt){
  if( p->rc ) return p->rc;
  return sqlite3_step(pStmt);
}

/*
** Execute SQL statement zSql. There is no way to obtain any results
** returned by the statement. This function uses the sqlite3_intck error
** code convention.
*/
static void intckExec(sqlite3_intck *p, const char *zSql){
  sqlite3_stmt *pStmt = 0;
  pStmt = intckPrepare(p, zSql);
  intckStep(p, pStmt);
  intckFinalize(p, pStmt);
}

/*
** A wrapper around sqlite3_mprintf() that uses the sqlite3_intck error
** code convention.
*/
static char *intckMprintf(sqlite3_intck *p, const char *zFmt, ...){
  va_list ap;
  char *zRet = 0;
  va_start(ap, zFmt);
  zRet = sqlite3_vmprintf(zFmt, ap);
  if( p->rc==SQLITE_OK ){
    if( zRet==0 ){
      p->rc = SQLITE_NOMEM;
    }
  }else{
    sqlite3_free(zRet);
    zRet = 0;
  }
  return zRet;
}

/*
** This is used by sqlite3_intck_unlock() to save the vector key value
** required to restart the current pCheck query as a nul-terminated string
** in p->zKey.
*/
static void intckSaveKey(sqlite3_intck *p){
  int ii;
  char *zSql = 0;
  sqlite3_stmt *pStmt = 0;
  sqlite3_stmt *pXinfo = 0;
  const char *zDir = 0;

  assert( p->pCheck );
  assert( p->zKey==0 );

  pXinfo = intckPrepareFmt(p,
      "SELECT group_concat(desc, '') FROM %Q.sqlite_schema s, "
      "pragma_index_xinfo(%Q, %Q) "
      "WHERE s.type='index' AND s.name=%Q",
      p->zDb, p->zObj, p->zDb, p->zObj
  );
  if( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pXinfo) ){
    zDir = (const char*)sqlite3_column_text(pXinfo, 0);
  }

  if( zDir==0 ){
    /* Object is a table, not an index. This is the easy case, as there are
    ** no DESC columns or NULL values in a primary key.  */
    const char *zSep = "SELECT '(' || ";
    for(ii=0; ii<p->nKeyVal; ii++){
      zSql = intckMprintf(p, "%z%squote(?)", zSql, zSep);
      zSep = " || ', ' || ";
    }
    zSql = intckMprintf(p, "%z || ')'", zSql);
  }else{
    /* Object is an index. */
    assert( p->nKeyVal>1 );
    for(ii=p->nKeyVal; ii>0; ii--){
      int bLastIsDesc = zDir[ii-1]=='1';
      int bLastIsNull = sqlite3_column_type(p->pCheck, ii)==SQLITE_NULL;
      const char *zLast = sqlite3_column_name(p->pCheck, ii);
      char *zLhs = 0;
      char *zRhs = 0;
      char *zWhere = 0;

      if( bLastIsNull ){
        if( bLastIsDesc ) continue;
        zWhere = intckMprintf(p, "'%s IS NOT NULL'", zLast);
      }else{
        const char *zOp = bLastIsDesc ? "<" : ">";
        zWhere = intckMprintf(p, "'%s %s ' || quote(?%d)", zLast, zOp, ii);
      }

      if( ii>1 ){
        const char *zLhsSep = "";
        const char *zRhsSep = "";
        int jj;
        for(jj=0; jj<ii-1; jj++){
          const char *zAlias = (const char*)sqlite3_column_name(p->pCheck,jj+1);
          zLhs = intckMprintf(p, "%z%s%s", zLhs, zLhsSep, zAlias);
          zRhs = intckMprintf(p, "%z%squote(?%d)", zRhs, zRhsSep, jj+1);
          zLhsSep = ",";
          zRhsSep = " || ',' || ";
        }
        zWhere = intckMprintf(p,
            "'(%z) IS (' || %z || ') AND ' || %z",
            zLhs, zRhs, zWhere
        );
      }
      zWhere = intckMprintf(p, "'WHERE ' || %z", zWhere);

      zSql = intckMprintf(p, "%z%s(quote( %z ) )",
          zSql,
          (zSql==0 ? "VALUES" : ",\n "),
          zWhere
      );
    }
    zSql = intckMprintf(p,
        "WITH wc(q) AS (\n%z\n)"
        "SELECT 'VALUES' || group_concat('(' || q || ')', ',\n ') FROM wc"
        , zSql
    );
  }

  pStmt = intckPrepare(p, zSql);
  if( p->rc==SQLITE_OK ){
    for(ii=0; ii<p->nKeyVal; ii++){
      sqlite3_bind_value(pStmt, ii+1, sqlite3_column_value(p->pCheck, ii+1));
    }
    if( SQLITE_ROW==sqlite3_step(pStmt) ){
      p->zKey = intckMprintf(p,"%s",(const char*)sqlite3_column_text(pStmt, 0));
    }
    intckFinalize(p, pStmt);
  }
  sqlite3_free(zSql);
  intckFinalize(p, pXinfo);
}

/*
** Find the next database object (table or index) to check. If successful,
** set sqlite3_intck.zObj to point to a nul-terminated buffer containing
** the object's name before returning.
*/
static void intckFindObject(sqlite3_intck *p){
  sqlite3_stmt *pStmt = 0;
  char *zPrev = p->zObj;
  p->zObj = 0;

  assert( p->rc==SQLITE_OK );
  assert( p->pCheck==0 );

  pStmt = intckPrepareFmt(p,
      "WITH tables(table_name) AS ("
      " SELECT name"
      " FROM %Q.sqlite_schema WHERE (type='table' OR type='index') AND rootpage"
      " UNION ALL "
      " SELECT 'sqlite_schema'"
      ")"
      "SELECT table_name FROM tables "
      "WHERE ?1 IS NULL OR table_name%s?1 "
      "ORDER BY 1"
      , p->zDb, (p->zKey ? ">=" : ">")
  );

  if( p->rc==SQLITE_OK ){
    sqlite3_bind_text(pStmt, 1, zPrev, -1, SQLITE_TRANSIENT);
    if( sqlite3_step(pStmt)==SQLITE_ROW ){
      p->zObj = intckMprintf(p,"%s",(const char*)sqlite3_column_text(pStmt, 0));
    }
  }
  intckFinalize(p, pStmt);

  /* If this is a new object, ensure the previous key value is cleared. */
  if( sqlite3_stricmp(p->zObj, zPrev) ){
    sqlite3_free(p->zKey);
    p->zKey = 0;
  }
  sqlite3_free(zPrev);
}

/*
** Return the size in bytes of the first token in nul-terminated buffer z.
** For the purposes of this call, a token is either:
**
**   * a quoted SQL string,
**   * a contiguous series of ascii alphabet characters, or
**   * any other single byte.
*/
static int intckGetToken(const char *z){
  char c = z[0];
  int iRet = 1;
  if( c=='\'' || c=='"' || c=='`' ){
    while( 1 ){
      if( z[iRet]==c ){
        iRet++;
        if( z[iRet]!=c ) break;
      }
      iRet++;
    }
  }
  else if( c=='[' ){
    while( z[iRet++]!=']' && z[iRet] );
  }
  else if( (c>='A' && c<='Z') || (c>='a' && c<='z') ){
    while( (z[iRet]>='A' && z[iRet]<='Z') || (z[iRet]>='a' && z[iRet]<='z') ){
      iRet++;
    }
  }
  return iRet;
}

/*
** Return true if argument c is an ascii whitespace character.
*/
static int intckIsSpace(char c){
  return (c==' ' || c=='\t' || c=='\n' || c=='\r');
}

/*
** Argument z points to the text of a CREATE INDEX statement. This function
** identifies the part of the text that contains either the index WHERE
** clause (if iCol<0) or the iCol'th column of the index.
**
** If (iCol<0), the identified fragment does not include the "WHERE" keyword,
** only the expression that follows it. If (iCol>=0) then the identified
** fragment does not include any trailing sort-order keywords - "ASC" or
** "DESC".
**
** If the CREATE INDEX statement does not contain the requested field or
** clause, NULL is returned and (*pnByte) is set to 0. Otherwise, a pointer to
** the identified fragment is returned and output parameter (*pnByte) set
** to its size in bytes.
*/
static const char *intckParseCreateIndex(const char *z, int iCol, int *pnByte){
  int iOff = 0;
  int iThisCol = 0;
  int iStart = 0;
  int nOpen = 0;

  const char *zRet = 0;
  int nRet = 0;

  int iEndOfCol = 0;

  /* Skip forward until the first "(" token */
  while( z[iOff]!='(' ){
    iOff += intckGetToken(&z[iOff]);
    if( z[iOff]=='\0' ) return 0;
  }
  assert( z[iOff]=='(' );

  nOpen = 1;
  iOff++;
  iStart = iOff;
  while( z[iOff] ){
    const char *zToken = &z[iOff];
    int nToken = 0;

    /* Check if this is the end of the current column - either a "," or ")"
    ** when nOpen==1.  */
    if( nOpen==1 ){
      if( z[iOff]==',' || z[iOff]==')' ){
        if( iCol==iThisCol ){
          int iEnd = iEndOfCol ? iEndOfCol : iOff;
          nRet = (iEnd - iStart);
          zRet = &z[iStart];
          break;
        }
        iStart = iOff+1;
        while( intckIsSpace(z[iStart]) ) iStart++;
        iThisCol++;
      }
      if( z[iOff]==')' ) break;
    }
    if( z[iOff]=='(' ) nOpen++;
    if( z[iOff]==')' ) nOpen--;

    nToken = intckGetToken(zToken);
    if( (nToken==3 && 0==sqlite3_strnicmp(zToken, "ASC", nToken))
     || (nToken==4 && 0==sqlite3_strnicmp(zToken, "DESC", nToken))
    ){
      iEndOfCol = iOff;
    }else if( 0==intckIsSpace(zToken[0]) ){
      iEndOfCol = 0;
    }

    iOff += nToken;
  }

  /* iStart is now the byte offset of 1 byte past the final ')' in the
  ** CREATE INDEX statement. Try to find a WHERE clause to return.  */
  while( zRet==0 && z[iOff] ){
    int n = intckGetToken(&z[iOff]);
    if( n==5 && 0==sqlite3_strnicmp(&z[iOff], "where", 5) ){
      zRet = &z[iOff+5];
      nRet = (int)strlen(zRet);
    }
    iOff += n;
  }

  /* Trim any whitespace from the start and end of the returned string. */
  if( zRet ){
    while( intckIsSpace(zRet[0]) ){
      nRet--;
      zRet++;
    }
    while( nRet>0 && intckIsSpace(zRet[nRet-1]) ) nRet--;
  }

  *pnByte = nRet;
  return zRet;
}

/*
** User-defined SQL function wrapper for intckParseCreateIndex():
**
**   SELECT parse_create_index(<sql>, <icol>);
*/
static void intckParseCreateIndexFunc(
  sqlite3_context *pCtx,
  int nVal,
  sqlite3_value **apVal
){
  const char *zSql = (const char*)sqlite3_value_text(apVal[0]);
  int idx = sqlite3_value_int(apVal[1]);
  const char *zRes = 0;
  int nRes = 0;

  assert( nVal==2 );
  if( zSql ){
    zRes = intckParseCreateIndex(zSql, idx, &nRes);
  }
  sqlite3_result_text(pCtx, zRes, nRes, SQLITE_TRANSIENT);
}

/*
** Return true if sqlite3_intck.db has automatic indexes enabled, false
** otherwise.
*/
static int intckGetAutoIndex(sqlite3_intck *p){
  int bRet = 0;
  sqlite3_stmt *pStmt = 0;
  pStmt = intckPrepare(p, "PRAGMA automatic_index");
  if( SQLITE_ROW==intckStep(p, pStmt) ){
    bRet = sqlite3_column_int(pStmt, 0);
  }
  intckFinalize(p, pStmt);
  return bRet;
}

/*
** Return true if zObj is an index, or false otherwise.
*/
static int intckIsIndex(sqlite3_intck *p, const char *zObj){
  int bRet = 0;
  sqlite3_stmt *pStmt = 0;
  pStmt = intckPrepareFmt(p,
      "SELECT 1 FROM %Q.sqlite_schema WHERE name=%Q AND type='index'",
      p->zDb, zObj
  );
  if( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){
    bRet = 1;
  }
  intckFinalize(p, pStmt);
  return bRet;
}

/*
** Return a pointer to a nul-terminated buffer containing the SQL statement
** used to check database object zObj (a table or index) for corruption.
** If parameter zPrev is not NULL, then it must be a string containing the
** vector key required to restart the check where it left off last time.
** If pnKeyVal is not NULL, then (*pnKeyVal) is set to the number of
** columns in the vector key value for the specified object.
**
** This function uses the sqlite3_intck error code convention.
*/
static char *intckCheckObjectSql(
  sqlite3_intck *p,               /* Integrity check object */
  const char *zObj,               /* Object (table or index) to scan */
  const char *zPrev,              /* Restart key vector, if any */
  int *pnKeyVal                   /* OUT: Number of key-values for this scan */
){
  char *zRet = 0;
  sqlite3_stmt *pStmt = 0;
  int bAutoIndex = 0;
  int bIsIndex = 0;

  const char *zCommon =
      /* Relation without_rowid also contains just one row. Column "b" is
      ** set to true if the table being examined is a WITHOUT ROWID table,
      ** or false otherwise. */
      ", without_rowid(b) AS ("
      " SELECT EXISTS ("
      " SELECT 1 FROM tabname, pragma_index_list(tab, db) AS l"
      " WHERE origin='pk' "
      " AND NOT EXISTS (SELECT 1 FROM sqlite_schema WHERE name=l.name)"
      " )"
      ")"
      ""
      /* Table idx_cols contains 1 row for each column in each index on the
      ** table being checked. Columns are:
      **
      **   idx_name: Name of the index.
      **   idx_ispk: True if this index is the PK of a WITHOUT ROWID table.
      **   col_name: Name of indexed column, or NULL for index on expression.
      **   col_expr: Indexed expression, including COLLATE clause.
      **   col_alias: Alias used for column in 'intck_wrapper' table.
      */
      ", idx_cols(idx_name, idx_ispk, col_name, col_expr, col_alias) AS ("
      " SELECT l.name, (l.origin=='pk' AND w.b), i.name, COALESCE(("
      " SELECT parse_create_index(sql, i.seqno) FROM "
      " sqlite_schema WHERE name = l.name"
      " ), format('\"%w\"', i.name) || ' COLLATE ' || quote(i.coll)),"
      " 'c' || row_number() OVER ()"
      " FROM "
      " tabname t,"
      " without_rowid w,"
      " pragma_index_list(t.tab, t.db) l,"
      " pragma_index_xinfo(l.name) i"
      " WHERE i.key"
      " UNION ALL"
      " SELECT '', 1, '_rowid_', '_rowid_', 'r1' FROM without_rowid WHERE b=0"
      ")"
      ""
      ""
      /*
      ** For a PK declared as "PRIMARY KEY(a, b) ... WITHOUT ROWID", where
      ** the intck_wrapper aliases of "a" and "b" are "c1" and "c2":
      **
      **   o_pk:   "o.c1, o.c2"
      **   i_pk:   "i.'a', i.'b'"
      **   ...
      **   n_pk:   2
      */
      ", tabpk(db, tab, idx, o_pk, i_pk, q_pk, eq_pk, ps_pk, pk_pk, n_pk) AS ("
      " WITH pkfields(f, a) AS ("
      " SELECT i.col_name, i.col_alias FROM idx_cols i WHERE i.idx_ispk"
      " )"
      " SELECT t.db, t.tab, t.idx, "
      " group_concat(a, ', '), "
      " group_concat('i.'||quote(f), ', '), "
      " group_concat('quote(o.'||a||')', ' || '','' || '), "
      " format('(%s)==(%s)',"
      " group_concat('o.'||a, ', '), "
      " group_concat(format('\"%w\"', f), ', ')"
      " ),"
      " group_concat('%s', ','),"
      " group_concat('quote('||a||')', ', '), "
      " count(*)"
      " FROM tabname t, pkfields"
      ")"
      ""
      ", idx(name, match_expr, partial, partial_alias, idx_ps, idx_idx) AS ("
      " SELECT idx_name,"
      " format('(%s,%s) IS (%s,%s)', "
      " group_concat(i.col_expr, ', '), i_pk,"
      " group_concat('o.'||i.col_alias, ', '), o_pk"
      " ), "
      " parse_create_index("
      " (SELECT sql FROM sqlite_schema WHERE name=idx_name), -1"
      " ),"
      " 'cond' || row_number() OVER ()"
      " , group_concat('%s', ',')"
      " , group_concat('quote('||i.col_alias||')', ', ')"
      " FROM tabpk t, "
      " without_rowid w,"
      " idx_cols i"
      " WHERE i.idx_ispk==0 "
      " GROUP BY idx_name"
      ")"
      ""
      ", wrapper_with(s) AS ("
      " SELECT 'intck_wrapper AS (\n SELECT\n ' || ("
      " WITH f(a, b) AS ("
      " SELECT col_expr, col_alias FROM idx_cols"
      " UNION ALL "
      " SELECT partial, partial_alias FROM idx WHERE partial IS NOT NULL"
      " )"
      " SELECT group_concat(format('%s AS %s', a, b), ',\n ') FROM f"
      " )"
      " || format('\n FROM %Q.%Q ', t.db, t.tab)"
      /* If the object being checked is a table, append "NOT INDEXED".
      ** Otherwise, append "INDEXED BY <index>", and then, if the index
      ** is a partial index " WHERE <condition>".  */
      " || CASE WHEN t.idx IS NULL THEN "
      " 'NOT INDEXED'"
      " ELSE"
      " format('INDEXED BY %Q%s', t.idx, ' WHERE '||i.partial)"
      " END"
      " || '\n)'"
      " FROM tabname t LEFT JOIN idx i ON (i.name=t.idx)"
      ")"
      ""
  ;

  bAutoIndex = intckGetAutoIndex(p);
  if( bAutoIndex ) intckExec(p, "PRAGMA automatic_index = 0");

  bIsIndex = intckIsIndex(p, zObj);
  if( bIsIndex ){
    pStmt = intckPrepareFmt(p,
      /* Table idxname contains a single row. The first column, "db", contains
      ** the name of the db containing the table (e.g. "main") and the second,
      ** "tab", the name of the table itself.  */
      "WITH tabname(db, tab, idx) AS ("
      " SELECT %Q, (SELECT tbl_name FROM %Q.sqlite_schema WHERE name=%Q), %Q "
      ")"
      ""
      ", whereclause(w_c) AS (%s)"
      ""
      "%s" /* zCommon */
      ""
      ", case_statement(c) AS ("
      " SELECT "
      " 'CASE WHEN (' || group_concat(col_alias, ', ') || ', 1) IS (\n' "
      " || ' SELECT ' || group_concat(col_expr, ', ') || ', 1 FROM '"
      " || format('%%Q.%%Q NOT INDEXED WHERE %%s\n', t.db, t.tab, p.eq_pk)"
      " || ' )\n THEN NULL\n '"
      " || 'ELSE format(''surplus entry ('"
      " || group_concat('%%s', ',') || ',' || p.ps_pk"
      " || ') in index ' || t.idx || ''', ' "
      " || group_concat('quote('||i.col_alias||')', ', ') || ', ' || p.pk_pk"
      " || ')'"
      " || '\n END AS error_message'"
      " FROM tabname t, tabpk p, idx_cols i WHERE i.idx_name=t.idx"
      ")"
      ""
      ", thiskey(k, n) AS ("
      " SELECT group_concat(i.col_alias, ', ') || ', ' || p.o_pk, "
      " count(*) + p.n_pk "
      " FROM tabpk p, idx_cols i WHERE i.idx_name=p.idx"
      ")"
      ""
      ", main_select(m, n) AS ("
      " SELECT format("
      " 'WITH %%s\n' ||"
      " ', idx_checker AS (\n' ||"
      " ' SELECT %%s,\n' ||"
      " ' %%s\n' || "
      " ' FROM intck_wrapper AS o\n' ||"
      " ')\n',"
      " ww.s, c, t.k"
      " ), t.n"
      " FROM case_statement, wrapper_with ww, thiskey t"
      ")"

      "SELECT m || "
      " group_concat('SELECT * FROM idx_checker ' || w_c, ' UNION ALL '), n"
      " FROM "
      "main_select, whereclause "
      , p->zDb, p->zDb, zObj, zObj
      , zPrev ? zPrev : "VALUES('')", zCommon
    );
  }else{
    pStmt = intckPrepareFmt(p,
      /* Table tabname contains a single row. The first column, "db", contains
      ** the name of the db containing the table (e.g. "main") and the second,
      ** "tab", the name of the table itself.  */
      "WITH tabname(db, tab, idx, prev) AS (SELECT %Q, %Q, NULL, %Q)"
      ""
      "%s" /* zCommon */

      /* expr(e) contains one row for each index on table zObj. Value e
      ** is set to an expression that evaluates to NULL if the required
      ** entry is present in the index, or an error message otherwise.  */
      ", expr(e, p) AS ("
      " SELECT format('CASE WHEN EXISTS \n"
      " (SELECT 1 FROM %%Q.%%Q AS i INDEXED BY %%Q WHERE %%s%%s)\n"
      " THEN NULL\n"
      " ELSE format(''entry (%%s,%%s) missing from index %%s'', %%s, %%s)\n"
      " END\n'"
      " , t.db, t.tab, i.name, i.match_expr, ' AND (' || partial || ')',"
      " i.idx_ps, t.ps_pk, i.name, i.idx_idx, t.pk_pk),"
      " CASE WHEN partial IS NULL THEN NULL ELSE i.partial_alias END"
      " FROM tabpk t, idx i"
      ")"

      ", numbered(ii, cond, e) AS ("
      " SELECT 0, 'n.ii=0', 'NULL'"
      " UNION ALL "
      " SELECT row_number() OVER (),"
      " '(n.ii='||row_number() OVER ()||COALESCE(' AND '||p||')', ')'), e"
      " FROM expr"
      ")"

      ", counter_with(w) AS ("
      " SELECT 'WITH intck_counter(ii) AS (\n ' || "
      " group_concat('SELECT '||ii, ' UNION ALL\n ') "
      " || '\n)' FROM numbered"
      ")"
      ""
      ", case_statement(c) AS ("
      " SELECT 'CASE ' || "
      " group_concat(format('\n WHEN %%s THEN (%%s)', cond, e), '') ||"
      " '\nEND AS error_message'"
      " FROM numbered"
      ")"
      ""

      /* This table contains a single row consisting of a single value -
      ** the text of an SQL expression that may be used by the main SQL
      ** statement to output an SQL literal that can be used to resume
      ** the scan if it is suspended. e.g. for a rowid table, an expression
      ** like:
      **
      **     format('(%d,%d)', _rowid_, n.ii)
      */
      ", thiskey(k, n) AS ("
      " SELECT o_pk || ', ii', n_pk+1 FROM tabpk"
      ")"
      ""
      ", whereclause(w_c) AS ("
      " SELECT CASE WHEN prev!='' THEN "
      " '\nWHERE (' || o_pk ||', n.ii) > ' || prev"
      " ELSE ''"
      " END"
      " FROM tabpk, tabname"
      ")"
      ""
      ", main_select(m, n) AS ("
      " SELECT format("
      " '%%s, %%s\nSELECT %%s,\n%%s\nFROM intck_wrapper AS o"
      ", intck_counter AS n%%s\nORDER BY %%s', "
      " w, ww.s, c, thiskey.k, whereclause.w_c, t.o_pk"
      " ), thiskey.n"
      " FROM case_statement, tabpk t, counter_with, "
      " wrapper_with ww, thiskey, whereclause"
      ")"

      "SELECT m, n FROM main_select",
      p->zDb, zObj, zPrev, zCommon
    );
  }

  while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){
    zRet = intckMprintf(p, "%s", (const char*)sqlite3_column_text(pStmt, 0));
    if( pnKeyVal ){
      *pnKeyVal = sqlite3_column_int(pStmt, 1);
    }
  }
  intckFinalize(p, pStmt);

  if( bAutoIndex ) intckExec(p, "PRAGMA automatic_index = 1");
  return zRet;
}

/*
** Open a new integrity-check object.
*/
int sqlite3_intck_open(
  sqlite3 *db,                    /* Database handle to operate on */
  const char *zDbArg,             /* "main", "temp" etc. */
  sqlite3_intck **ppOut           /* OUT: New integrity-check handle */
){
  sqlite3_intck *pNew = 0;
  int rc = SQLITE_OK;
  const char *zDb = zDbArg ? zDbArg : "main";
  int nDb = (int)strlen(zDb);

  pNew = (sqlite3_intck*)sqlite3_malloc(sizeof(*pNew) + nDb + 1);
  if( pNew==0 ){
    rc = SQLITE_NOMEM;
  }else{
    memset(pNew, 0, sizeof(*pNew));
    pNew->db = db;
    pNew->zDb = (const char*)&pNew[1];
    memcpy(&pNew[1], zDb, nDb+1);
    rc = sqlite3_create_function(db, "parse_create_index",
        2, SQLITE_UTF8, 0, intckParseCreateIndexFunc, 0, 0
    );
    if( rc!=SQLITE_OK ){
      sqlite3_intck_close(pNew);
      pNew = 0;
    }
  }

  *ppOut = pNew;
  return rc;
}

/*
** Free the integrity-check object.
*/
void sqlite3_intck_close(sqlite3_intck *p){
  if( p ){
    sqlite3_finalize(p->pCheck);
    sqlite3_create_function(
        p->db, "parse_create_index", 1, SQLITE_UTF8, 0, 0, 0, 0
    );
    sqlite3_free(p->zObj);
    sqlite3_free(p->zKey);
    sqlite3_free(p->zTestSql);
    sqlite3_free(p->zErr);
    sqlite3_free(p->zMessage);
    sqlite3_free(p);
  }
}

/*
** Step the integrity-check object.
*/
int sqlite3_intck_step(sqlite3_intck *p){
  if( p->rc==SQLITE_OK ){

    if( p->zMessage ){
      sqlite3_free(p->zMessage);
      p->zMessage = 0;
    }

    if( p->bCorruptSchema ){
      p->rc = SQLITE_DONE;
    }else if( p->pCheck==0 ){
      intckFindObject(p);
      if( p->rc==SQLITE_OK ){
        if( p->zObj ){
          char *zSql = 0;
          zSql = intckCheckObjectSql(p, p->zObj, p->zKey, &p->nKeyVal);
          p->pCheck = intckPrepare(p, zSql);
          sqlite3_free(zSql);
          sqlite3_free(p->zKey);
          p->zKey = 0;
        }else{
          p->rc = SQLITE_DONE;
        }
      }else if( p->rc==SQLITE_CORRUPT ){
        p->rc = SQLITE_OK;
        p->zMessage = intckMprintf(p, "%s",
            "corruption found while reading database schema"
        );
        p->bCorruptSchema = 1;
      }
    }

    if( p->pCheck ){
      assert( p->rc==SQLITE_OK );
      if( sqlite3_step(p->pCheck)==SQLITE_ROW ){
        /* Normal case, do nothing. */
      }else{
        intckFinalize(p, p->pCheck);
        p->pCheck = 0;
        p->nKeyVal = 0;
        if( p->rc==SQLITE_CORRUPT ){
          p->rc = SQLITE_OK;
          p->zMessage = intckMprintf(p,
              "corruption found while scanning database object %s", p->zObj
          );
        }
      }
    }
  }

  return p->rc;
}

/*
** Return a message describing the corruption encountered by the most recent
** call to sqlite3_intck_step(), or NULL if no corruption was encountered.
*/
const char *sqlite3_intck_message(sqlite3_intck *p){
  assert( p->pCheck==0 || p->zMessage==0 );
  if( p->zMessage ){
    return p->zMessage;
  }
  if( p->pCheck ){
    return (const char*)sqlite3_column_text(p->pCheck, 0);
  }
  return 0;
}

/*
** Return the error code and message.
*/
int sqlite3_intck_error(sqlite3_intck *p, const char **pzErr){
  if( pzErr ) *pzErr = p->zErr;
  return (p->rc==SQLITE_DONE ?
SQLITE_OK : p->rc); } /* ** Close any read transaction the integrity-check object is holding open ** on the database. */ int sqlite3_intck_unlock(sqlite3_intck *p){ if( p->rc==SQLITE_OK && p->pCheck ){ assert( p->zKey==0 && p->nKeyVal>0 ); intckSaveKey(p); intckFinalize(p, p->pCheck); p->pCheck = 0; } return p->rc; } /* ** Return the SQL statement used to check object zObj. Or, if zObj is ** NULL, the current SQL statement. */ const char *sqlite3_intck_test_sql(sqlite3_intck *p, const char *zObj){ sqlite3_free(p->zTestSql); if( zObj ){ p->zTestSql = intckCheckObjectSql(p, zObj, 0, 0); }else{ if( p->zObj ){ p->zTestSql = intckCheckObjectSql(p, p->zObj, p->zKey, 0); }else{ sqlite3_free(p->zTestSql); p->zTestSql = 0; } } return p->zTestSql; } /************************* End ../ext/intck/sqlite3intck.c ********************/ #if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_ENABLE_DBPAGE_VTAB) #define SQLITE_SHELL_HAVE_RECOVER 1 #else #define SQLITE_SHELL_HAVE_RECOVER 0 #endif #if SQLITE_SHELL_HAVE_RECOVER |
︙ | ︙ | |||
14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 | iOff += nPointer; /* Load the "byte of payload including overflow" field */ if( bNextPage || iOff>pCsr->nPage ){ bNextPage = 1; }else{ iOff += dbdataGetVarintU32(&pCsr->aPage[iOff], &nPayload); } /* If this is a leaf intkey cell, load the rowid */ if( bHasRowid && !bNextPage && iOff<pCsr->nPage ){ iOff += dbdataGetVarint(&pCsr->aPage[iOff], &pCsr->iIntkey); } | > | 15968 15969 15970 15971 15972 15973 15974 15975 15976 15977 15978 15979 15980 15981 15982 | iOff += nPointer; /* Load the "byte of payload including overflow" field */ if( bNextPage || iOff>pCsr->nPage ){ bNextPage = 1; }else{ iOff += dbdataGetVarintU32(&pCsr->aPage[iOff], &nPayload); if( nPayload>0x7fffff00 ) nPayload &= 0x3fff; } /* If this is a leaf intkey cell, load the rowid */ if( bHasRowid && !bNextPage && iOff<pCsr->nPage ){ iOff += dbdataGetVarint(&pCsr->aPage[iOff], &pCsr->iIntkey); } |
︙ | ︙ | |||
17419 17420 17421 17422 17423 17424 17425 17426 17427 17428 17429 17430 17431 17432 | u8 scanstatsOn; /* True to display scan stats before each finalize */ u8 openMode; /* SHELL_OPEN_NORMAL, _APPENDVFS, or _ZIPFILE */ u8 doXdgOpen; /* Invoke start/open/xdg-open in output_reset() */ u8 nEqpLevel; /* Depth of the EQP output graph */ u8 eTraceType; /* SHELL_TRACE_* value for type of trace */ u8 bSafeMode; /* True to prohibit unsafe operations */ u8 bSafeModePersist; /* The long-term value of bSafeMode */ ColModeOpts cmOpts; /* Option values affecting columnar mode output */ unsigned statsOn; /* True to display memory stats before each finalize */ unsigned mEqpLines; /* Mask of vertical lines in the EQP output graph */ int inputNesting; /* Track nesting level of .read and other redirects */ int outCount; /* Revert to stdout when reaching zero */ int cnt; /* Number of records displayed so far */ int lineno; /* Line number of last line read from in */ | > | 19285 19286 19287 19288 19289 19290 19291 19292 19293 19294 19295 19296 19297 19298 19299 | u8 scanstatsOn; /* True to display scan stats before each finalize */ u8 openMode; /* SHELL_OPEN_NORMAL, _APPENDVFS, or _ZIPFILE */ u8 doXdgOpen; /* Invoke start/open/xdg-open in output_reset() */ u8 nEqpLevel; /* Depth of the EQP output graph */ u8 eTraceType; /* SHELL_TRACE_* value for type of trace */ u8 bSafeMode; /* True to prohibit unsafe operations */ u8 bSafeModePersist; /* The long-term value of bSafeMode */ u8 eRestoreState; /* See comments above doAutoDetectRestore() */ ColModeOpts cmOpts; /* Option values affecting columnar mode output */ unsigned statsOn; /* True to display memory stats before each finalize */ unsigned mEqpLines; /* Mask of vertical lines in the EQP output graph */ int inputNesting; /* Track nesting level of .read and other redirects */ int outCount; /* Revert to stdout when reaching zero */ int cnt; /* Number of records displayed so far */ int lineno; /* Line number of last line read from in */ |
︙ | ︙ | |||
17612 17613 17614 17615 17616 17617 17618 | /* ** A callback for the sqlite3_log() interface. */ static void shellLog(void *pArg, int iErrCode, const char *zMsg){ ShellState *p = (ShellState*)pArg; if( p->pLog==0 ) return; | | | | | < | 19479 19480 19481 19482 19483 19484 19485 19486 19487 19488 19489 19490 19491 19492 19493 19494 19495 19496 19497 19498 19499 19500 19501 19502 19503 19504 19505 19506 19507 19508 19509 19510 19511 19512 19513 19514 19515 19516 19517 19518 19519 19520 19521 19522 19523 19524 19525 19526 19527 19528 19529 | /* ** A callback for the sqlite3_log() interface. */ static void shellLog(void *pArg, int iErrCode, const char *zMsg){ ShellState *p = (ShellState*)pArg; if( p->pLog==0 ) return; sputf(p->pLog, "(%d) %s\n", iErrCode, zMsg); fflush(p->pLog); } /* ** SQL function: shell_putsnl(X) ** ** Write the text X to the screen (or whatever output is being directed) ** adding a newline at the end, and then return X. */ static void shellPutsFunc( sqlite3_context *pCtx, int nVal, sqlite3_value **apVal ){ /* Unused: (ShellState*)sqlite3_user_data(pCtx); */ (void)nVal; oputf("%s\n", sqlite3_value_text(apVal[0])); sqlite3_result_value(pCtx, apVal[0]); } /* ** If in safe mode, print an error message described by the arguments ** and exit immediately. */ static void failIfSafeMode( ShellState *p, const char *zErrMsg, ... ){ if( p->bSafeMode ){ va_list ap; char *zMsg; va_start(ap, zErrMsg); zMsg = sqlite3_vmprintf(zErrMsg, ap); va_end(ap); eputf("line %d: %s\n", p->lineno, zMsg); exit(1); } } /* ** SQL function: edit(VALUE) ** edit(VALUE,EDITOR) |
︙ | ︙ | |||
17817 17818 17819 17820 17821 17822 17823 | memcpy(p->colSeparator, p->colSepPrior, sizeof(p->colSeparator)); memcpy(p->rowSeparator, p->rowSepPrior, sizeof(p->rowSeparator)); } /* ** Output the given string as a hex-encoded blob (eg. X'1234' ) */ | | | | 19683 19684 19685 19686 19687 19688 19689 19690 19691 19692 19693 19694 19695 19696 19697 19698 19699 19700 19701 19702 19703 19704 19705 19706 19707 19708 19709 19710 19711 19712 19713 19714 | memcpy(p->colSeparator, p->colSepPrior, sizeof(p->colSeparator)); memcpy(p->rowSeparator, p->rowSepPrior, sizeof(p->rowSeparator)); } /* ** Output the given string as a hex-encoded blob (eg. X'1234' ) */ static void output_hex_blob(const void *pBlob, int nBlob){ int i; unsigned char *aBlob = (unsigned char*)pBlob; char *zStr = sqlite3_malloc(nBlob*2 + 1); shell_check_oom(zStr); for(i=0; i<nBlob; i++){ static const char aHex[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }; zStr[i*2] = aHex[ (aBlob[i] >> 4) ]; zStr[i*2+1] = aHex[ (aBlob[i] & 0x0F) ]; } zStr[i*2] = '\0'; oputf("X'%s'", zStr); sqlite3_free(zStr); } /* ** Find a string that is not found anywhere in z[]. Return a pointer ** to that string. ** |
︙ | ︙ | |||
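The output_hex_blob() routine in the hunk above renders a blob as an SQL X'..' literal, two hex digits per byte from a nibble lookup table. A minimal standalone sketch of the same encoding (hypothetical helper name; it fills a caller-supplied buffer, where the shell writes straight to its output stream):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical standalone mirror of output_hex_blob(): render nBlob
** bytes as an SQL blob literal X'....' into zOut.  The caller must
** supply at least nBlob*2 + 4 bytes of output space. */
static void hex_blob_literal(const void *pBlob, int nBlob, char *zOut){
  static const char aHex[] = "0123456789abcdef";
  const unsigned char *a = (const unsigned char*)pBlob;
  char *z = zOut;
  int i;
  *z++ = 'X';
  *z++ = '\'';
  for(i=0; i<nBlob; i++){
    *z++ = aHex[a[i]>>4];    /* high nibble first */
    *z++ = aHex[a[i]&0x0f];  /* then low nibble */
  }
  *z++ = '\'';
  *z = '\0';
}
```

So a two-byte blob {0x12, 0xab} comes out as the literal X'12ab', which SQLite parses back to the identical blob.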
17864 17865 17866 17867 17868 17869 17870 | } /* ** Output the given string as a quoted string using SQL quoting conventions. ** ** See also: output_quoted_escaped_string() */ | | > > | > | | | | | > | > > > | > > | > | | | | | | | | | | | > | > > > > > > > > > > > > > > > > > > > > | | > > > > > | | > > > | < < | < < > | | > > | | < < > > > > | < > | | > | > | | > > > > > > > | | | | > > > > > > > > | | > > > > > > > > > > > | < < < < < < < < < < < < | < > | | | | | | | | | | 19730 19731 19732 19733 19734 19735 19736 19737 19738 19739 19740 19741 19742 19743 19744 19745 19746 19747 19748 19749 19750 19751 19752 19753 19754 19755 19756 19757 19758 19759 19760 19761 19762 19763 19764 19765 19766 19767 19768 19769 19770 19771 19772 19773 19774 19775 19776 19777 19778 19779 19780 19781 19782 19783 19784 19785 19786 19787 19788 19789 19790 19791 19792 19793 19794 19795 19796 19797 19798 19799 19800 19801 19802 19803 19804 19805 19806 19807 19808 19809 19810 19811 19812 19813 19814 19815 19816 19817 19818 19819 19820 19821 19822 19823 19824 19825 19826 19827 19828 19829 19830 19831 19832 19833 19834 19835 19836 19837 19838 19839 19840 19841 19842 19843 19844 19845 19846 19847 19848 19849 19850 19851 19852 19853 19854 19855 19856 19857 19858 19859 19860 19861 19862 19863 19864 19865 19866 19867 19868 19869 19870 19871 19872 19873 19874 19875 19876 19877 19878 19879 19880 19881 19882 19883 19884 19885 19886 19887 19888 19889 19890 19891 19892 19893 19894 19895 19896 19897 19898 19899 19900 19901 19902 19903 19904 19905 19906 19907 19908 19909 19910 19911 19912 19913 19914 19915 19916 19917 19918 19919 19920 19921 19922 19923 19924 19925 19926 19927 19928 19929 19930 19931 19932 19933 19934 19935 19936 19937 19938 19939 19940 19941 19942 19943 19944 19945 19946 19947 19948 19949 19950 19951 19952 19953 19954 19955 19956 19957 19958 19959 19960 19961 19962 19963 19964 19965 19966 19967 19968 19969 19970 19971 19972 19973 19974 19975 19976 19977 19978 19979 19980 19981 19982 
19983 19984 19985 19986 19987 19988 19989 19990 19991 | } /* ** Output the given string as a quoted string using SQL quoting conventions. ** ** See also: output_quoted_escaped_string() */ static void output_quoted_string(const char *z){ int i; char c; #ifndef SQLITE_SHELL_FIDDLE FILE *pfO = setOutputStream(invalidFileStream); setBinaryMode(pfO, 1); #endif if( z==0 ) return; for(i=0; (c = z[i])!=0 && c!='\''; i++){} if( c==0 ){ oputf("'%s'",z); }else{ oputz("'"); while( *z ){ for(i=0; (c = z[i])!=0 && c!='\''; i++){} if( c=='\'' ) i++; if( i ){ oputf("%.*s", i, z); z += i; } if( c=='\'' ){ oputz("'"); continue; } if( c==0 ){ break; } z++; } oputz("'"); } #ifndef SQLITE_SHELL_FIDDLE setTextMode(pfO, 1); #else setTextMode(stdout, 1); #endif } /* ** Output the given string as a quoted string using SQL quoting conventions. ** Additionallly , escape the "\n" and "\r" characters so that they do not ** get corrupted by end-of-line translation facilities in some operating ** systems. ** ** This is like output_quoted_string() but with the addition of the \r\n ** escape mechanism. 
*/ static void output_quoted_escaped_string(const char *z){ int i; char c; #ifndef SQLITE_SHELL_FIDDLE FILE *pfO = setOutputStream(invalidFileStream); setBinaryMode(pfO, 1); #endif for(i=0; (c = z[i])!=0 && c!='\'' && c!='\n' && c!='\r'; i++){} if( c==0 ){ oputf("'%s'",z); }else{ const char *zNL = 0; const char *zCR = 0; int nNL = 0; int nCR = 0; char zBuf1[20], zBuf2[20]; for(i=0; z[i]; i++){ if( z[i]=='\n' ) nNL++; if( z[i]=='\r' ) nCR++; } if( nNL ){ oputz("replace("); zNL = unused_string(z, "\\n", "\\012", zBuf1); } if( nCR ){ oputz("replace("); zCR = unused_string(z, "\\r", "\\015", zBuf2); } oputz("'"); while( *z ){ for(i=0; (c = z[i])!=0 && c!='\n' && c!='\r' && c!='\''; i++){} if( c=='\'' ) i++; if( i ){ oputf("%.*s", i, z); z += i; } if( c=='\'' ){ oputz("'"); continue; } if( c==0 ){ break; } z++; if( c=='\n' ){ oputz(zNL); continue; } oputz(zCR); } oputz("'"); if( nCR ){ oputf(",'%s',char(13))", zCR); } if( nNL ){ oputf(",'%s',char(10))", zNL); } } #ifndef SQLITE_SHELL_FIDDLE setTextMode(pfO, 1); #else setTextMode(stdout, 1); #endif } /* ** Find earliest of chars within s specified in zAny. ** With ns == ~0, is like strpbrk(s,zAny) and s must be 0-terminated. */ static const char *anyOfInStr(const char *s, const char *zAny, size_t ns){ const char *pcFirst = 0; if( ns == ~(size_t)0 ) ns = strlen(s); while(*zAny){ const char *pc = (const char*)memchr(s, *zAny&0xff, ns); if( pc ){ pcFirst = pc; ns = pcFirst - s; } ++zAny; } return pcFirst; } /* ** Output the given string as a quoted according to C or TCL quoting rules. */ static void output_c_string(const char *z){ char c; static const char *zq = "\""; static long ctrlMask = ~0L; static const char *zDQBSRO = "\"\\\x7f"; /* double-quote, backslash, rubout */ char ace[3] = "\\?"; char cbsSay; oputz(zq); while( *z!=0 ){ const char *pcDQBSRO = anyOfInStr(z, zDQBSRO, ~(size_t)0); const char *pcPast = zSkipValidUtf8(z, INT_MAX, ctrlMask); const char *pcEnd = (pcDQBSRO && pcDQBSRO < pcPast)? 
pcDQBSRO : pcPast; if( pcEnd > z ) oputb(z, (int)(pcEnd-z)); if( (c = *pcEnd)==0 ) break; ++pcEnd; switch( c ){ case '\\': case '"': cbsSay = (char)c; break; case '\t': cbsSay = 't'; break; case '\n': cbsSay = 'n'; break; case '\r': cbsSay = 'r'; break; case '\f': cbsSay = 'f'; break; default: cbsSay = 0; break; } if( cbsSay ){ ace[1] = cbsSay; oputz(ace); }else if( !isprint(c&0xff) ){ oputf("\\%03o", c&0xff); }else{ ace[1] = (char)c; oputz(ace+1); } z = pcEnd; } oputz(zq); } /* ** Output the given string as a quoted according to JSON quoting rules. */ static void output_json_string(const char *z, i64 n){ char c; static const char *zq = "\""; static long ctrlMask = ~0L; static const char *zDQBS = "\"\\"; const char *pcLimit; char ace[3] = "\\?"; char cbsSay; if( z==0 ) z = ""; pcLimit = z + ((n<0)? strlen(z) : (size_t)n); oputz(zq); while( z < pcLimit ){ const char *pcDQBS = anyOfInStr(z, zDQBS, pcLimit-z); const char *pcPast = zSkipValidUtf8(z, (int)(pcLimit-z), ctrlMask); const char *pcEnd = (pcDQBS && pcDQBS < pcPast)? pcDQBS : pcPast; if( pcEnd > z ){ oputb(z, (int)(pcEnd-z)); z = pcEnd; } if( z >= pcLimit ) break; c = *(z++); switch( c ){ case '"': case '\\': cbsSay = (char)c; break; case '\b': cbsSay = 'b'; break; case '\f': cbsSay = 'f'; break; case '\n': cbsSay = 'n'; break; case '\r': cbsSay = 'r'; break; case '\t': cbsSay = 't'; break; default: cbsSay = 0; break; } if( cbsSay ){ ace[1] = cbsSay; oputz(ace); }else if( c<=0x1f ){ oputf("u%04x", c); }else{ ace[1] = (char)c; oputz(ace+1); } } oputz(zq); } /* ** Output the given string with characters that are special to ** HTML escaped. 
*/ static void output_html_string(const char *z){ int i; if( z==0 ) z = ""; while( *z ){ for(i=0; z[i] && z[i]!='<' && z[i]!='&' && z[i]!='>' && z[i]!='\"' && z[i]!='\''; i++){} if( i>0 ){ oputf("%.*s",i,z); } if( z[i]=='<' ){ oputz("<"); }else if( z[i]=='&' ){ oputz("&"); }else if( z[i]=='>' ){ oputz(">"); }else if( z[i]=='\"' ){ oputz("""); }else if( z[i]=='\'' ){ oputz("'"); }else{ break; } z += i + 1; } } |
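The output_json_string() routine in this hunk escapes double-quote and backslash with a backslash, uses the conventional two-character escapes for the common control characters, and falls back to a \u00XX sequence for other control bytes. A hedged standalone sketch of those rules (hypothetical buffer-filling helper; the shell streams the bytes out instead):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of output_json_string()'s escaping rules.  zOut must be large
** enough for the worst case (6 bytes per input byte, plus the nul). */
static void json_escape(const char *zIn, char *zOut){
  char *z = zOut;
  unsigned char c;
  while( (c = (unsigned char)*zIn++)!=0 ){
    switch( c ){
      case '"':  z += sprintf(z, "\\\""); break;
      case '\\': z += sprintf(z, "\\\\"); break;
      case '\b': z += sprintf(z, "\\b");  break;
      case '\f': z += sprintf(z, "\\f");  break;
      case '\n': z += sprintf(z, "\\n");  break;
      case '\r': z += sprintf(z, "\\r");  break;
      case '\t': z += sprintf(z, "\\t");  break;
      default:
        if( c<=0x1f ){
          z += sprintf(z, "\\u%04x", c);  /* other control bytes */
        }else{
          *z++ = (char)c;                 /* pass through verbatim */
        }
        break;
    }
  }
  *z = '\0';
}
```

Note the shell variant also validates UTF-8 runs with zSkipValidUtf8() before copying them through; that check is omitted from this sketch.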
︙ | ︙ | |||
18093 18094 18095 18096 18097 18098 18099 | /* ** Output a single term of CSV. Actually, p->colSeparator is used for ** the separator, which may or may not be a comma. p->nullValue is ** the null value. Strings are quoted if necessary. The separator ** is only issued if bSep is true. */ static void output_csv(ShellState *p, const char *z, int bSep){ | < | | | | | 20015 20016 20017 20018 20019 20020 20021 20022 20023 20024 20025 20026 20027 20028 20029 20030 20031 20032 20033 20034 20035 20036 20037 20038 20039 20040 20041 20042 20043 20044 20045 20046 20047 20048 20049 | /* ** Output a single term of CSV. Actually, p->colSeparator is used for ** the separator, which may or may not be a comma. p->nullValue is ** the null value. Strings are quoted if necessary. The separator ** is only issued if bSep is true. */ static void output_csv(ShellState *p, const char *z, int bSep){ if( z==0 ){ oputf("%s",p->nullValue); }else{ unsigned i; for(i=0; z[i]; i++){ if( needCsvQuote[((unsigned char*)z)[i]] ){ i = 0; break; } } if( i==0 || strstr(z, p->colSeparator)!=0 ){ char *zQuoted = sqlite3_mprintf("\"%w\"", z); shell_check_oom(zQuoted); oputz(zQuoted); sqlite3_free(zQuoted); }else{ oputz(z); } } if( bSep ){ oputz(p->colSeparator); } } /* ** This routine runs when the user presses Ctrl-C */ static void interrupt_handler(int NotUsed){ |
︙ | ︙ | |||
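The output_csv() routine above quotes a field only when it contains a byte flagged in the shell's needCsvQuote[] table or an occurrence of the configured column separator. A small sketch of just that decision, assuming (as RFC-4180-style CSV does) that double-quote, CR, and LF are among the quote-triggering bytes; `csv_needs_quote` is a hypothetical name:

```c
#include <assert.h>
#include <string.h>

/* Hedged mirror of output_csv()'s quoting decision: quote the field if
** it contains a double-quote, CR, LF, or the column separator string.
** (The shell consults its per-byte needCsvQuote[] table instead of the
** fixed set used here.) */
static int csv_needs_quote(const char *z, const char *zSep){
  if( strpbrk(z, "\"\r\n") ) return 1;        /* special bytes present */
  if( zSep[0] && strstr(z, zSep) ) return 1;  /* embedded separator */
  return 0;
}
```

When quoting is needed, the shell then delegates to sqlite3_mprintf("\"%w\"", z), which doubles any embedded double-quotes.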
18222 18223 18224 18225 18226 18227 18228 | }; int i; const char *az[4]; az[0] = zA1; az[1] = zA2; az[2] = zA3; az[3] = zA4; | | | | | | | | 20143 20144 20145 20146 20147 20148 20149 20150 20151 20152 20153 20154 20155 20156 20157 20158 20159 20160 20161 20162 20163 20164 20165 20166 20167 20168 20169 20170 20171 20172 20173 20174 20175 20176 20177 20178 20179 20180 20181 20182 | }; int i; const char *az[4]; az[0] = zA1; az[1] = zA2; az[2] = zA3; az[3] = zA4; oputf("authorizer: %s", azAction[op]); for(i=0; i<4; i++){ oputz(" "); if( az[i] ){ output_c_string(az[i]); }else{ oputz("NULL"); } } oputz("\n"); if( p->bSafeMode ) (void)safeModeAuth(pClientData, op, zA1, zA2, zA3, zA4); return SQLITE_OK; } #endif /* ** Print a schema statement. Part of MODE_Semi and MODE_Pretty output. ** ** This routine converts some CREATE TABLE statements for shadow tables ** in FTS3/4/5 into CREATE TABLE IF NOT EXISTS statements. ** ** If the schema statement in z[] contains a start-of-comment and if ** sqlite3_complete() returns false, try to terminate the comment before ** printing the result. https://sqlite.org/forum/forumpost/d7be961c5c */ static void printSchemaLine(const char *z, const char *zTail){ char *zToFree = 0; if( z==0 ) return; if( zTail==0 ) return; if( zTail[0]==';' && (strstr(z, "/*")!=0 || strstr(z,"--")!=0) ){ const char *zOrig = z; static const char *azTerm[] = { "", "*/", "\n" }; int i; |
︙ | ︙ | |||
18269 18270 18271 18272 18273 18274 18275 | z = zNew; break; } sqlite3_free(zNew); } } if( sqlite3_strglob("CREATE TABLE ['\"]*", z)==0 ){ | | | | | | 20190 20191 20192 20193 20194 20195 20196 20197 20198 20199 20200 20201 20202 20203 20204 20205 20206 20207 20208 20209 20210 20211 20212 20213 | z = zNew; break; } sqlite3_free(zNew); } } if( sqlite3_strglob("CREATE TABLE ['\"]*", z)==0 ){ oputf("CREATE TABLE IF NOT EXISTS %s%s", z+13, zTail); }else{ oputf("%s%s", z, zTail); } sqlite3_free(zToFree); } static void printSchemaLineN(char *z, int n, const char *zTail){ char c = z[n]; z[n] = 0; printSchemaLine(z, zTail); z[n] = c; } /* ** Return true if string z[] has nothing but whitespace and comments to the ** end of the first line. */ |
︙ | ︙ | |||
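The printSchemaLine() hunk above rewrites schema statements whose table name is quoted (the FTS3/4/5 shadow-table pattern) into CREATE TABLE IF NOT EXISTS form, skipping the 13-byte "CREATE TABLE " prefix. A standalone sketch of that rewrite (hypothetical helper; plain strncmp plus a quote check stands in for the shell's sqlite3_strglob("CREATE TABLE ['\"]*", z) test):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the printSchemaLine() rewrite rule: statements of the form
** CREATE TABLE '...' or CREATE TABLE "..." are re-emitted with an
** IF NOT EXISTS clause so that a .dump script can be replayed safely. */
static void schema_line(const char *z, char *zOut){
  if( strncmp(z, "CREATE TABLE ", 13)==0
   && (z[13]=='\'' || z[13]=='"')
  ){
    sprintf(zOut, "CREATE TABLE IF NOT EXISTS %s", z+13);
  }else{
    sprintf(zOut, "%s", z);  /* everything else passes through */
  }
}
```

Unquoted names pass through untouched, which matches the shell: only shadow tables (which their parent virtual table may have created already) need the IF NOT EXISTS guard.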
18306 18307 18308 18309 18310 18311 18312 | */ static void eqp_append(ShellState *p, int iEqpId, int p2, const char *zText){ EQPGraphRow *pNew; i64 nText; if( zText==0 ) return; nText = strlen(zText); if( p->autoEQPtest ){ | | | 20227 20228 20229 20230 20231 20232 20233 20234 20235 20236 20237 20238 20239 20240 20241 | */ static void eqp_append(ShellState *p, int iEqpId, int p2, const char *zText){ EQPGraphRow *pNew; i64 nText; if( zText==0 ) return; nText = strlen(zText); if( p->autoEQPtest ){ oputf("%d,%d,%s\n", iEqpId, p2, zText); } pNew = sqlite3_malloc64( sizeof(*pNew) + nText ); shell_check_oom(pNew); pNew->iEqpId = iEqpId; pNew->iParentId = p2; memcpy(pNew->zText, zText, nText+1); pNew->pNext = 0; |
︙ | ︙ | |||
18354 18355 18356 18357 18358 18359 18360 | static void eqp_render_level(ShellState *p, int iEqpId){ EQPGraphRow *pRow, *pNext; i64 n = strlen(p->sGraph.zPrefix); char *z; for(pRow = eqp_next_row(p, iEqpId, 0); pRow; pRow = pNext){ pNext = eqp_next_row(p, iEqpId, pRow); z = pRow->zText; | < | | | | | | | | | | | | | | | | 20275 20276 20277 20278 20279 20280 20281 20282 20283 20284 20285 20286 20287 20288 20289 20290 20291 20292 20293 20294 20295 20296 20297 20298 20299 20300 20301 20302 20303 20304 20305 20306 20307 20308 20309 20310 20311 20312 20313 20314 20315 20316 20317 20318 20319 20320 20321 20322 20323 20324 20325 20326 20327 20328 20329 20330 20331 20332 20333 20334 20335 20336 20337 20338 20339 20340 20341 20342 20343 20344 20345 20346 20347 20348 20349 20350 20351 20352 20353 20354 20355 20356 20357 20358 20359 20360 20361 20362 20363 20364 20365 20366 20367 20368 20369 20370 20371 20372 20373 20374 | static void eqp_render_level(ShellState *p, int iEqpId){ EQPGraphRow *pRow, *pNext; i64 n = strlen(p->sGraph.zPrefix); char *z; for(pRow = eqp_next_row(p, iEqpId, 0); pRow; pRow = pNext){ pNext = eqp_next_row(p, iEqpId, pRow); z = pRow->zText; oputf("%s%s%s\n", p->sGraph.zPrefix, pNext ? "|--" : "`--", z); if( n<(i64)sizeof(p->sGraph.zPrefix)-7 ){ memcpy(&p->sGraph.zPrefix[n], pNext ? "| " : " ", 4); eqp_render_level(p, pRow->iEqpId); p->sGraph.zPrefix[n] = 0; } } } /* ** Display and reset the EXPLAIN QUERY PLAN data */ static void eqp_render(ShellState *p, i64 nCycle){ EQPGraphRow *pRow = p->sGraph.pRow; if( pRow ){ if( pRow->zText[0]=='-' ){ if( pRow->pNext==0 ){ eqp_reset(p); return; } oputf("%s\n", pRow->zText+3); p->sGraph.pRow = pRow->pNext; sqlite3_free(pRow); }else if( nCycle>0 ){ oputf("QUERY PLAN (cycles=%lld [100%%])\n", nCycle); }else{ oputz("QUERY PLAN\n"); } p->sGraph.zPrefix[0] = 0; eqp_render_level(p, 0); eqp_reset(p); } } #ifndef SQLITE_OMIT_PROGRESS_CALLBACK /* ** Progress handler callback. 
*/ static int progress_handler(void *pClientData) { ShellState *p = (ShellState*)pClientData; p->nProgress++; if( p->nProgress>=p->mxProgress && p->mxProgress>0 ){ oputf("Progress limit reached (%u)\n", p->nProgress); if( p->flgProgress & SHELL_PROGRESS_RESET ) p->nProgress = 0; if( p->flgProgress & SHELL_PROGRESS_ONCE ) p->mxProgress = 0; return 1; } if( (p->flgProgress & SHELL_PROGRESS_QUIET)==0 ){ oputf("Progress %u\n", p->nProgress); } return 0; } #endif /* SQLITE_OMIT_PROGRESS_CALLBACK */ /* ** Print N dashes */ static void print_dashes(int N){ const char zDash[] = "--------------------------------------------------"; const int nDash = sizeof(zDash) - 1; while( N>nDash ){ oputz(zDash); N -= nDash; } oputf("%.*s", N, zDash); } /* ** Print a markdown or table-style row separator using ascii-art */ static void print_row_separator( ShellState *p, int nArg, const char *zSep ){ int i; if( nArg>0 ){ oputz(zSep); print_dashes(p->actualWidth[0]+2); for(i=1; i<nArg; i++){ oputz(zSep); print_dashes(p->actualWidth[i]+2); } oputz(zSep); } oputz("\n"); } /* ** This is the callback routine that the shell ** invokes for each row of a query result. */ static int shell_callback( |
︙ | ︙ | |||
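eqp_render_level() in the hunk above draws the EXPLAIN QUERY PLAN tree by printing "|--" before a node that has a following sibling, "`--" before the last one, and growing the shared prefix buffer as it recurses. A hedged sketch of the same scheme over a hypothetical fixed tree (the shell walks its EQPGraphRow list, and the exact padding strings here are an assumption):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Minimal tree node for the sketch. */
typedef struct Node {
  const char *zText;
  int nChild;
  const struct Node *aChild;
} Node;

/* Append an ASCII-art rendering of p's subtree to zOut, mutating and
** restoring zPrefix exactly as eqp_render_level() does. */
static void render(const Node *p, char *zPrefix, char *zOut){
  int i;
  for(i=0; i<p->nChild; i++){
    int bLast = (i==p->nChild-1);
    size_t n = strlen(zPrefix);
    sprintf(zOut+strlen(zOut), "%s%s%s\n",
            zPrefix, bLast ? "`--" : "|--", p->aChild[i].zText);
    strcat(zPrefix, bLast ? "   " : "|  ");  /* extend prefix */
    render(&p->aChild[i], zPrefix, zOut);
    zPrefix[n] = 0;                          /* restore on the way out */
  }
}
```

Rendering a root with children A (which has child A1) and B yields the familiar shape: A gets "|--" because B follows it, A1 and B get "`--", and A1 inherits the "|  " continuation bar.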
18470 18471 18472 18473 18474 18475 18476 | case MODE_Line: { int w = 5; if( azArg==0 ) break; for(i=0; i<nArg; i++){ int len = strlen30(azCol[i] ? azCol[i] : ""); if( len>w ) w = len; } | | | | | 20390 20391 20392 20393 20394 20395 20396 20397 20398 20399 20400 20401 20402 20403 20404 20405 20406 20407 | case MODE_Line: { int w = 5; if( azArg==0 ) break; for(i=0; i<nArg; i++){ int len = strlen30(azCol[i] ? azCol[i] : ""); if( len>w ) w = len; } if( p->cnt++>0 ) oputz(p->rowSeparator); for(i=0; i<nArg; i++){ oputf("%*s = %s%s", w, azCol[i], azArg[i] ? azArg[i] : p->nullValue, p->rowSeparator); } break; } case MODE_ScanExp: case MODE_Explain: { static const int aExplainWidth[] = {4, 13, 4, 4, 4, 13, 2, 13}; static const int aExplainMap[] = {0, 1, 2, 3, 4, 5, 6, 7 }; |
︙ | ︙ | |||
18500 18501 18502 18503 18504 18505 18506 | iIndent = 3; } if( nArg>nWidth ) nArg = nWidth; /* If this is the first row seen, print out the headers */ if( p->cnt++==0 ){ for(i=0; i<nArg; i++){ | | | | | | | | | | | 20420 20421 20422 20423 20424 20425 20426 20427 20428 20429 20430 20431 20432 20433 20434 20435 20436 20437 20438 20439 20440 20441 20442 20443 20444 20445 20446 20447 20448 20449 20450 20451 20452 20453 20454 20455 20456 20457 20458 20459 20460 20461 20462 20463 20464 20465 20466 20467 20468 20469 20470 20471 20472 20473 20474 20475 20476 20477 20478 20479 20480 20481 20482 | iIndent = 3; } if( nArg>nWidth ) nArg = nWidth; /* If this is the first row seen, print out the headers */ if( p->cnt++==0 ){ for(i=0; i<nArg; i++){ utf8_width_print(aWidth[i], azCol[ aMap[i] ]); oputz(i==nArg-1 ? "\n" : " "); } for(i=0; i<nArg; i++){ print_dashes(aWidth[i]); oputz(i==nArg-1 ? "\n" : " "); } } /* If there is no data, exit early. */ if( azArg==0 ) break; for(i=0; i<nArg; i++){ const char *zSep = " "; int w = aWidth[i]; const char *zVal = azArg[ aMap[i] ]; if( i==nArg-1 ) w = 0; if( zVal && strlenChar(zVal)>w ){ w = strlenChar(zVal); zSep = " "; } if( i==iIndent && p->aiIndent && p->pStmt ){ if( p->iIndent<p->nIndent ){ oputf("%*.s", p->aiIndent[p->iIndent], ""); } p->iIndent++; } utf8_width_print(w, zVal ? zVal : p->nullValue); oputz(i==nArg-1 ? "\n" : zSep); } break; } case MODE_Semi: { /* .schema and .fullschema output */ printSchemaLine(azArg[0], ";\n"); break; } case MODE_Pretty: { /* .schema and .fullschema with --indent */ char *z; int j; int nParen = 0; char cEnd = 0; char c; int nLine = 0; assert( nArg==1 ); if( azArg[0]==0 ) break; if( sqlite3_strlike("CREATE VIEW%", azArg[0], 0)==0 || sqlite3_strlike("CREATE TRIG%", azArg[0], 0)==0 ){ oputf("%s;\n", azArg[0]); break; } z = sqlite3_mprintf("%s", azArg[0]); shell_check_oom(z); j = 0; for(i=0; IsSpace(z[i]); i++){} for(; (c = z[i])!=0; i++){ |
︙ | ︙ | |||
18581 18582 18583 18584 18585 18586 18587 | }else if( c=='-' && z[i+1]=='-' ){ cEnd = '\n'; }else if( c=='(' ){ nParen++; }else if( c==')' ){ nParen--; if( nLine>0 && nParen==0 && j>0 ){ | | | | < | | < | < < | < | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 20501 20502 20503 20504 20505 20506 20507 20508 20509 20510 20511 20512 20513 20514 20515 20516 20517 20518 20519 20520 20521 20522 20523 20524 20525 20526 20527 20528 20529 20530 20531 20532 20533 20534 20535 20536 20537 20538 20539 20540 20541 20542 20543 20544 20545 20546 20547 20548 20549 20550 20551 20552 20553 20554 20555 20556 20557 20558 20559 20560 20561 20562 20563 20564 20565 20566 20567 20568 20569 20570 20571 20572 20573 20574 20575 20576 20577 20578 20579 20580 20581 20582 20583 20584 20585 20586 20587 20588 20589 20590 20591 20592 20593 20594 20595 20596 20597 20598 20599 20600 20601 20602 20603 20604 20605 20606 20607 20608 20609 20610 20611 20612 20613 20614 20615 20616 20617 20618 20619 20620 20621 20622 20623 20624 20625 20626 20627 20628 20629 20630 20631 20632 20633 20634 20635 20636 20637 20638 20639 20640 20641 20642 20643 20644 20645 20646 20647 20648 20649 20650 20651 20652 20653 20654 20655 20656 20657 20658 20659 20660 20661 20662 20663 20664 20665 20666 20667 20668 20669 20670 20671 20672 20673 20674 20675 20676 20677 20678 20679 20680 20681 20682 20683 20684 20685 20686 20687 20688 20689 20690 20691 20692 20693 20694 20695 20696 20697 20698 20699 20700 20701 20702 20703 20704 20705 20706 20707 20708 20709 20710 20711 20712 20713 20714 20715 20716 20717 20718 20719 20720 20721 20722 20723 20724 20725 20726 20727 20728 20729 20730 20731 20732 20733 20734 20735 20736 20737 20738 20739 20740 20741 20742 20743 20744 20745 20746 20747 20748 20749 20750 20751 20752 20753 20754 20755 20756 20757 20758 20759 | }else if( c=='-' && z[i+1]=='-' ){ cEnd = '\n'; }else if( c=='(' ){ nParen++; }else if( c==')' ){ 
        nParen--;
        if( nLine>0 && nParen==0 && j>0 ){
          printSchemaLineN(z, j, "\n");
          j = 0;
        }
      }
      z[j++] = c;
      if( nParen==1 && cEnd==0
       && (c=='(' || c=='\n' || (c==',' && !wsToEol(z+i+1)))
      ){
        if( c=='\n' ) j--;
        printSchemaLineN(z, j, "\n  ");
        j = 0;
        nLine++;
        while( IsSpace(z[i+1]) ){ i++; }
      }
    }
    z[j] = 0;
  }
  printSchemaLine(z, ";\n");
  sqlite3_free(z);
  break;
}
case MODE_List: {
  if( p->cnt++==0 && p->showHeader ){
    for(i=0; i<nArg; i++){
      oputf("%s%s",azCol[i], i==nArg-1 ? p->rowSeparator : p->colSeparator);
    }
  }
  if( azArg==0 ) break;
  for(i=0; i<nArg; i++){
    char *z = azArg[i];
    if( z==0 ) z = p->nullValue;
    oputz(z);
    oputz((i<nArg-1)? p->colSeparator : p->rowSeparator);
  }
  break;
}
case MODE_Html: {
  if( p->cnt++==0 && p->showHeader ){
    oputz("<TR>");
    for(i=0; i<nArg; i++){
      oputz("<TH>");
      output_html_string(azCol[i]);
      oputz("</TH>\n");
    }
    oputz("</TR>\n");
  }
  if( azArg==0 ) break;
  oputz("<TR>");
  for(i=0; i<nArg; i++){
    oputz("<TD>");
    output_html_string(azArg[i] ? azArg[i] : p->nullValue);
    oputz("</TD>\n");
  }
  oputz("</TR>\n");
  break;
}
case MODE_Tcl: {
  if( p->cnt++==0 && p->showHeader ){
    for(i=0; i<nArg; i++){
      output_c_string(azCol[i] ? azCol[i] : "");
      if(i<nArg-1) oputz(p->colSeparator);
    }
    oputz(p->rowSeparator);
  }
  if( azArg==0 ) break;
  for(i=0; i<nArg; i++){
    output_c_string(azArg[i] ? azArg[i] : p->nullValue);
    if(i<nArg-1) oputz(p->colSeparator);
  }
  oputz(p->rowSeparator);
  break;
}
case MODE_Csv: {
  setBinaryMode(p->out, 1);
  if( p->cnt++==0 && p->showHeader ){
    for(i=0; i<nArg; i++){
      output_csv(p, azCol[i] ? azCol[i] : "", i<nArg-1);
    }
    oputz(p->rowSeparator);
  }
  if( nArg>0 ){
    for(i=0; i<nArg; i++){
      output_csv(p, azArg[i], i<nArg-1);
    }
    oputz(p->rowSeparator);
  }
  setTextMode(p->out, 1);
  break;
}
case MODE_Insert: {
  if( azArg==0 ) break;
  oputf("INSERT INTO %s",p->zDestTable);
  if( p->showHeader ){
    oputz("(");
    for(i=0; i<nArg; i++){
      if( i>0 ) oputz(",");
      if( quoteChar(azCol[i]) ){
        char *z = sqlite3_mprintf("\"%w\"", azCol[i]);
        shell_check_oom(z);
        oputz(z);
        sqlite3_free(z);
      }else{
        oputf("%s", azCol[i]);
      }
    }
    oputz(")");
  }
  p->cnt++;
  for(i=0; i<nArg; i++){
    oputz(i>0 ? "," : " VALUES(");
    if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){
      oputz("NULL");
    }else if( aiType && aiType[i]==SQLITE_TEXT ){
      if( ShellHasFlag(p, SHFLG_Newlines) ){
        output_quoted_string(azArg[i]);
      }else{
        output_quoted_escaped_string(azArg[i]);
      }
    }else if( aiType && aiType[i]==SQLITE_INTEGER ){
      oputz(azArg[i]);
    }else if( aiType && aiType[i]==SQLITE_FLOAT ){
      char z[50];
      double r = sqlite3_column_double(p->pStmt, i);
      sqlite3_uint64 ur;
      memcpy(&ur,&r,sizeof(r));
      if( ur==0x7ff0000000000000LL ){
        oputz("9.0e+999");
      }else if( ur==0xfff0000000000000LL ){
        oputz("-9.0e+999");
      }else{
        sqlite3_int64 ir = (sqlite3_int64)r;
        if( r==(double)ir ){
          sqlite3_snprintf(50,z,"%lld.0", ir);
        }else{
          sqlite3_snprintf(50,z,"%!.20g", r);
        }
        oputz(z);
      }
    }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){
      const void *pBlob = sqlite3_column_blob(p->pStmt, i);
      int nBlob = sqlite3_column_bytes(p->pStmt, i);
      output_hex_blob(pBlob, nBlob);
    }else if( isNumber(azArg[i], 0) ){
      oputz(azArg[i]);
    }else if( ShellHasFlag(p, SHFLG_Newlines) ){
      output_quoted_string(azArg[i]);
    }else{
      output_quoted_escaped_string(azArg[i]);
    }
  }
  oputz(");\n");
  break;
}
case MODE_Json: {
  if( azArg==0 ) break;
  if( p->cnt==0 ){
    fputs("[{", p->out);
  }else{
    fputs(",\n{", p->out);
  }
  p->cnt++;
  for(i=0; i<nArg; i++){
    output_json_string(azCol[i], -1);
    oputz(":");
    if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){
      oputz("null");
    }else if( aiType && aiType[i]==SQLITE_FLOAT ){
      char z[50];
      double r = sqlite3_column_double(p->pStmt, i);
      sqlite3_uint64 ur;
      memcpy(&ur,&r,sizeof(r));
      if( ur==0x7ff0000000000000LL ){
        oputz("9.0e+999");
      }else if( ur==0xfff0000000000000LL ){
        oputz("-9.0e+999");
      }else{
        sqlite3_snprintf(50,z,"%!.20g", r);
        oputz(z);
      }
    }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){
      const void *pBlob = sqlite3_column_blob(p->pStmt, i);
      int nBlob = sqlite3_column_bytes(p->pStmt, i);
      output_json_string(pBlob, nBlob);
    }else if( aiType && aiType[i]==SQLITE_TEXT ){
      output_json_string(azArg[i], -1);
    }else{
      oputz(azArg[i]);
    }
    if( i<nArg-1 ){
      oputz(",");
    }
  }
  oputz("}");
  break;
}
case MODE_Quote: {
  if( azArg==0 ) break;
  if( p->cnt==0 && p->showHeader ){
    for(i=0; i<nArg; i++){
      if( i>0 ) fputs(p->colSeparator, p->out);
      output_quoted_string(azCol[i]);
    }
    fputs(p->rowSeparator, p->out);
  }
  p->cnt++;
  for(i=0; i<nArg; i++){
    if( i>0 ) fputs(p->colSeparator, p->out);
    if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){
      oputz("NULL");
    }else if( aiType && aiType[i]==SQLITE_TEXT ){
      output_quoted_string(azArg[i]);
    }else if( aiType && aiType[i]==SQLITE_INTEGER ){
      oputz(azArg[i]);
    }else if( aiType && aiType[i]==SQLITE_FLOAT ){
      char z[50];
      double r = sqlite3_column_double(p->pStmt, i);
      sqlite3_snprintf(50,z,"%!.20g", r);
      oputz(z);
    }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){
      const void *pBlob = sqlite3_column_blob(p->pStmt, i);
      int nBlob = sqlite3_column_bytes(p->pStmt, i);
      output_hex_blob(pBlob, nBlob);
    }else if( isNumber(azArg[i], 0) ){
      oputz(azArg[i]);
    }else{
      output_quoted_string(azArg[i]);
    }
  }
  fputs(p->rowSeparator, p->out);
  break;
}
case MODE_Ascii: {
  if( p->cnt++==0 && p->showHeader ){
    for(i=0; i<nArg; i++){
      if( i>0 ) oputz(p->colSeparator);
      oputz(azCol[i] ? azCol[i] : "");
    }
    oputz(p->rowSeparator);
  }
  if( azArg==0 ) break;
  for(i=0; i<nArg; i++){
    if( i>0 ) oputz(p->colSeparator);
    oputz(azArg[i] ? azArg[i] : p->nullValue);
  }
  oputz(p->rowSeparator);
  break;
}
case MODE_EQP: {
  eqp_append(p, atoi(azArg[0]), atoi(azArg[1]), azArg[3]);
  break;
}
}
︙
          "INSERT INTO [_shell$self]\n"
          "  VALUES('run','PRAGMA integrity_check','ok');\n"
          "INSERT INTO selftest(tno,op,cmd,ans)"
          " SELECT rowid*10,op,cmd,ans FROM [_shell$self];\n"
          "DROP TABLE [_shell$self];"
          ,0,0,&zErrMsg);
    if( zErrMsg ){
      eputf("SELFTEST initialization failure: %s\n", zErrMsg);
      sqlite3_free(zErrMsg);
    }
    sqlite3_exec(p->db, "RELEASE selftest_init",0,0,0);
  }
/*
︙
  int rc;
  int nResult;
  int i;
  const char *z;
  rc = sqlite3_prepare_v2(p->db, zSelect, -1, &pSelect, 0);
  if( rc!=SQLITE_OK || !pSelect ){
    char *zContext = shell_error_context(zSelect, p->db);
    oputf("/**** ERROR: (%d) %s *****/\n%s",
          rc, sqlite3_errmsg(p->db), zContext);
    sqlite3_free(zContext);
    if( (rc&0xff)!=SQLITE_CORRUPT ) p->nErr++;
    return rc;
  }
  rc = sqlite3_step(pSelect);
  nResult = sqlite3_column_count(pSelect);
  while( rc==SQLITE_ROW ){
    z = (const char*)sqlite3_column_text(pSelect, 0);
    oputf("%s", z);
    for(i=1; i<nResult; i++){
      oputf(",%s", sqlite3_column_text(pSelect, i));
    }
    if( z==0 ) z = "";
    while( z[0] && (z[0]!='-' || z[1]!='-') ) z++;
    if( z[0] ){
      oputz("\n;\n");
    }else{
      oputz(";\n");
    }
    rc = sqlite3_step(pSelect);
  }
  rc = sqlite3_finalize(pSelect);
  if( rc!=SQLITE_OK ){
    oputf("/**** ERROR: (%d) %s *****/\n", rc, sqlite3_errmsg(p->db));
    if( (rc&0xff)!=SQLITE_CORRUPT ) p->nErr++;
  }
  return rc;
}

/*
** Allocate space and save off string indicating current error.
︙
  return zErr;
}

#ifdef __linux__
/*
** Attempt to display I/O stats on Linux using /proc/PID/io
*/
static void displayLinuxIoStats(void){
  FILE *in;
  char z[200];
  sqlite3_snprintf(sizeof(z), z, "/proc/%d/io", getpid());
  in = fopen(z, "rb");
  if( in==0 ) return;
  while( fgets(z, sizeof(z), in)!=0 ){
    static const struct {
︙
    { "write_bytes: ",           "Bytes written to storage:" },
    { "cancelled_write_bytes: ", "Cancelled write bytes:" },
    };
    int i;
    for(i=0; i<ArraySize(aTrans); i++){
      int n = strlen30(aTrans[i].zPattern);
      if( cli_strncmp(aTrans[i].zPattern, z, n)==0 ){
        oputf("%-36s %s", aTrans[i].zDesc, &z[n]);
        break;
      }
    }
  }
  fclose(in);
}
#endif

/*
** Display a single line of status using 64-bit values.
*/
static void displayStatLine(
  char *zLabel,        /* Label for this one line */
  char *zFormat,       /* Format for the result */
  int iStatusCtrl,     /* Which status to display */
  int bReset           /* True to reset the stats */
){
  sqlite3_int64 iCur = -1;
  sqlite3_int64 iHiwtr = -1;
  int i, nPercent;
  char zLine[200];
  sqlite3_status64(iStatusCtrl, &iCur, &iHiwtr, bReset);
  for(i=0, nPercent=0; zFormat[i]; i++){
    if( zFormat[i]=='%' ) nPercent++;
  }
  if( nPercent>1 ){
    sqlite3_snprintf(sizeof(zLine), zLine, zFormat, iCur, iHiwtr);
  }else{
    sqlite3_snprintf(sizeof(zLine), zLine, zFormat, iHiwtr);
  }
  oputf("%-36s %s\n", zLabel, zLine);
}

/*
** Display memory stats.
*/
static int display_stats(
  sqlite3 *db,         /* Database to query */
  ShellState *pArg,    /* Pointer to ShellState */
  int bReset           /* True to reset the stats */
){
  int iCur;
  int iHiwtr;
  if( pArg==0 || pArg->out==0 ) return 0;

  if( pArg->pStmt && pArg->statsOn==2 ){
    int nCol, i, x;
    sqlite3_stmt *pStmt = pArg->pStmt;
    char z[100];
    nCol = sqlite3_column_count(pStmt);
    oputf("%-36s %d\n", "Number of output columns:", nCol);
    for(i=0; i<nCol; i++){
      sqlite3_snprintf(sizeof(z),z,"Column %d %nname:", i, &x);
      oputf("%-36s %s\n", z, sqlite3_column_name(pStmt,i));
#ifndef SQLITE_OMIT_DECLTYPE
      sqlite3_snprintf(30, z+x, "declared type:");
      oputf("%-36s %s\n", z, sqlite3_column_decltype(pStmt, i));
#endif
#ifdef SQLITE_ENABLE_COLUMN_METADATA
      sqlite3_snprintf(30, z+x, "database name:");
      oputf("%-36s %s\n", z, sqlite3_column_database_name(pStmt,i));
      sqlite3_snprintf(30, z+x, "table name:");
      oputf("%-36s %s\n", z, sqlite3_column_table_name(pStmt,i));
      sqlite3_snprintf(30, z+x, "origin name:");
      oputf("%-36s %s\n", z, sqlite3_column_origin_name(pStmt,i));
#endif
    }
  }

  if( pArg->statsOn==3 ){
    if( pArg->pStmt ){
      iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP,bReset);
      oputf("VM-steps: %d\n", iCur);
    }
    return 0;
  }

  displayStatLine("Memory Used:",
     "%lld (max %lld) bytes", SQLITE_STATUS_MEMORY_USED, bReset);
  displayStatLine("Number of Outstanding Allocations:",
     "%lld (max %lld)", SQLITE_STATUS_MALLOC_COUNT, bReset);
  if( pArg->shellFlgs & SHFLG_Pagecache ){
    displayStatLine("Number of Pcache Pages Used:",
       "%lld (max %lld) pages", SQLITE_STATUS_PAGECACHE_USED, bReset);
  }
  displayStatLine("Number of Pcache Overflow Bytes:",
     "%lld (max %lld) bytes", SQLITE_STATUS_PAGECACHE_OVERFLOW, bReset);
  displayStatLine("Largest Allocation:",
     "%lld bytes", SQLITE_STATUS_MALLOC_SIZE, bReset);
  displayStatLine("Largest Pcache Allocation:",
     "%lld bytes", SQLITE_STATUS_PAGECACHE_SIZE, bReset);
#ifdef YYTRACKMAXSTACKDEPTH
  displayStatLine("Deepest Parser Stack:",
     "%lld (max %lld)", SQLITE_STATUS_PARSER_STACK, bReset);
#endif

  if( db ){
    if( pArg->shellFlgs & SHFLG_Lookaside ){
      iHiwtr = iCur = -1;
      sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED,
                        &iCur, &iHiwtr, bReset);
      oputf("Lookaside Slots Used: %d (max %d)\n", iCur, iHiwtr);
      sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_HIT,
                        &iCur, &iHiwtr, bReset);
      oputf("Successful lookaside attempts: %d\n", iHiwtr);
      sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE,
                        &iCur, &iHiwtr, bReset);
      oputf("Lookaside failures due to size: %d\n", iHiwtr);
      sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL,
                        &iCur, &iHiwtr, bReset);
      oputf("Lookaside failures due to OOM: %d\n", iHiwtr);
    }
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &iCur, &iHiwtr, bReset);
    oputf("Pager Heap Usage: %d bytes\n", iCur);
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_HIT, &iCur, &iHiwtr, 1);
    oputf("Page cache hits: %d\n", iCur);
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_MISS, &iCur, &iHiwtr, 1);
    oputf("Page cache misses: %d\n", iCur);
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_WRITE, &iCur, &iHiwtr, 1);
    oputf("Page cache writes: %d\n", iCur);
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_SPILL, &iCur, &iHiwtr, 1);
    oputf("Page cache spills: %d\n", iCur);
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &iCur, &iHiwtr, bReset);
    oputf("Schema Heap Usage: %d bytes\n", iCur);
    iHiwtr = iCur = -1;
    sqlite3_db_status(db, SQLITE_DBSTATUS_STMT_USED, &iCur, &iHiwtr, bReset);
    oputf("Statement Heap/Lookaside Usage: %d bytes\n", iCur);
  }

  if( pArg->pStmt ){
    int iHit, iMiss;
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP,
                               bReset);
    oputf("Fullscan Steps: %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_SORT, bReset);
    oputf("Sort Operations: %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_AUTOINDEX,bReset);
    oputf("Autoindex Inserts: %d\n", iCur);
    iHit = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FILTER_HIT,
                               bReset);
    iMiss = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FILTER_MISS,
                               bReset);
    if( iHit || iMiss ){
      oputf("Bloom filter bypass taken: %d/%d\n", iHit, iHit+iMiss);
    }
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP, bReset);
    oputf("Virtual Machine Steps: %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_REPREPARE,bReset);
    oputf("Reprepare operations: %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_RUN, bReset);
    oputf("Number of times run: %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_MEMUSED, bReset);
    oputf("Memory used by prepared stmt: %d\n", iCur);
  }

#ifdef __linux__
  displayLinuxIoStats();
#endif

  /* Do not remove this machine readable comment: extra-stats-output-here */

  return 0;
}
︙
  ShellState *pArg            /* Pointer to ShellState */
){
#ifndef SQLITE_ENABLE_STMT_SCANSTATUS
  UNUSED_PARAMETER(db);
  UNUSED_PARAMETER(pArg);
#else
  if( pArg->scanstatsOn==3 ){
    const char *zSql =
      " SELECT addr, opcode, p1, p2, p3, p4, p5, comment, nexec,"
      " round(ncycle*100.0 / (sum(ncycle) OVER ()), 2)||'%' AS cycles"
      " FROM bytecode(?)";
    int rc = SQLITE_OK;
    sqlite3_stmt *pStmt = 0;
    rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
︙
#define BOX_234  "\342\224\254"  /* U+252c -,- */
#define BOX_124  "\342\224\264"  /* U+2534 -'- */
#define BOX_1234 "\342\224\274"  /* U+253c -|- */

/* Draw horizontal line N characters long using unicode box
** characters
*/
static void print_box_line(int N){
  const char zDash[] =
      BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24
      BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24;
  const int nDash = sizeof(zDash) - 1;
  N *= 3;
  while( N>nDash ){
    oputz(zDash);
    N -= nDash;
  }
  oputf("%.*s", N, zDash);
}

/*
** Draw a horizontal separator for a MODE_Box table.
*/
static void print_box_row_separator(
  ShellState *p,
  int nArg,
  const char *zSep1,
  const char *zSep2,
  const char *zSep3
){
  int i;
  if( nArg>0 ){
    oputz(zSep1);
    print_box_line(p->actualWidth[0]+2);
    for(i=1; i<nArg; i++){
      oputz(zSep2);
      print_box_line(p->actualWidth[i]+2);
    }
    oputz(zSep3);
  }
  oputz("\n");
}

/*
** z[] is a line of text that is to be displayed the .mode box or table or
** similar tabular formats.  z[] might contain control characters such
** as \n, \t, \f, or \r.
**
︙
  int bw = p->cmOpts.bWordWrap;
  const char *zEmpty = "";
  const char *zShowNull = p->nullValue;

  rc = sqlite3_step(pStmt);
  if( rc!=SQLITE_ROW ) return;
  nColumn = sqlite3_column_count(pStmt);
  if( nColumn==0 ) goto columnar_end;
  nAlloc = nColumn*4;
  if( nAlloc<=0 ) nAlloc = 1;
  azData = sqlite3_malloc64( nAlloc*sizeof(char*) );
  shell_check_oom(azData);
  azNextLine = sqlite3_malloc64( nColumn*sizeof(char*) );
  shell_check_oom(azNextLine);
  memset((void*)azNextLine, 0, nColumn*sizeof(char*) );
︙
    z = azData[i];
    if( z==0 ) z = (char*)zEmpty;
    n = strlenChar(z);
    j = i%nColumn;
    if( n>p->actualWidth[j] ) p->actualWidth[j] = n;
  }
  if( seenInterrupt ) goto columnar_end;
  switch( p->cMode ){
    case MODE_Column: {
      colSep = "  ";
      rowSep = "\n";
      if( p->showHeader ){
        for(i=0; i<nColumn; i++){
          w = p->actualWidth[i];
          if( p->colWidth[i]<0 ) w = -w;
          utf8_width_print(w, azData[i]);
          fputs(i==nColumn-1?"\n":"  ", p->out);
        }
        for(i=0; i<nColumn; i++){
          print_dashes(p->actualWidth[i]);
          fputs(i==nColumn-1?"\n":"  ", p->out);
        }
      }
      break;
    }
    case MODE_Table: {
      colSep = " | ";
      rowSep = " |\n";
      print_row_separator(p, nColumn, "+");
      fputs("| ", p->out);
      for(i=0; i<nColumn; i++){
        w = p->actualWidth[i];
        n = strlenChar(azData[i]);
        oputf("%*s%s%*s", (w-n)/2, "", azData[i], (w-n+1)/2, "");
        oputz(i==nColumn-1?" |\n":" | ");
      }
      print_row_separator(p, nColumn, "+");
      break;
    }
    case MODE_Markdown: {
      colSep = " | ";
      rowSep = " |\n";
      fputs("| ", p->out);
      for(i=0; i<nColumn; i++){
        w = p->actualWidth[i];
        n = strlenChar(azData[i]);
        oputf("%*s%s%*s", (w-n)/2, "", azData[i], (w-n+1)/2, "");
        oputz(i==nColumn-1?" |\n":" | ");
      }
      print_row_separator(p, nColumn, "|");
      break;
    }
    case MODE_Box: {
      colSep = " " BOX_13 " ";
      rowSep = " " BOX_13 "\n";
      print_box_row_separator(p, nColumn, BOX_23, BOX_234, BOX_34);
      oputz(BOX_13 " ");
      for(i=0; i<nColumn; i++){
        w = p->actualWidth[i];
        n = strlenChar(azData[i]);
        oputf("%*s%s%*s%s", (w-n)/2, "", azData[i], (w-n+1)/2, "",
              i==nColumn-1?" "BOX_13"\n":" "BOX_13" ");
      }
      print_box_row_separator(p, nColumn, BOX_123, BOX_1234, BOX_134);
      break;
    }
  }
  for(i=nColumn, j=0; i<nTotal; i++, j++){
    if( j==0 && p->cMode!=MODE_Column ){
      oputz(p->cMode==MODE_Box?BOX_13" ":"| ");
    }
    z = azData[i];
    if( z==0 ) z = p->nullValue;
    w = p->actualWidth[j];
    if( p->colWidth[j]<0 ) w = -w;
    utf8_width_print(w, z);
    if( j==nColumn-1 ){
      oputz(rowSep);
      if( bMultiLineRowExists && abRowDiv[i/nColumn-1] && i+1<nTotal ){
        if( p->cMode==MODE_Table ){
          print_row_separator(p, nColumn, "+");
        }else if( p->cMode==MODE_Box ){
          print_box_row_separator(p, nColumn, BOX_123, BOX_1234, BOX_134);
        }else if( p->cMode==MODE_Column ){
          oputz("\n");
        }
      }
      j = -1;
      if( seenInterrupt ) goto columnar_end;
    }else{
      oputz(colSep);
    }
  }
  if( p->cMode==MODE_Table ){
    print_row_separator(p, nColumn, "+");
  }else if( p->cMode==MODE_Box ){
    print_box_row_separator(p, nColumn, BOX_12, BOX_124, BOX_14);
  }
columnar_end:
  if( seenInterrupt ){
    oputz("Interrupt\n");
  }
  nData = (nRow+1)*nColumn;
  for(i=0; i<nData; i++){
    z = azData[i];
    if( z!=zEmpty && z!=zShowNull ) free(azData[i]);
  }
  sqlite3_free(azData);
︙
  char **pzErr
){
  int rc = SQLITE_OK;
  sqlite3expert *p = pState->expert.pExpert;
  assert( p );
  assert( bCancel || pzErr==0 || *pzErr==0 );
  if( bCancel==0 ){
    int bVerbose = pState->expert.bVerbose;
    rc = sqlite3_expert_analyze(p, pzErr);
    if( rc==SQLITE_OK ){
      int nQuery = sqlite3_expert_count(p);
      int i;
      if( bVerbose ){
        const char *zCand = sqlite3_expert_report(p,0,EXPERT_REPORT_CANDIDATES);
        oputz("-- Candidates -----------------------------\n");
        oputf("%s\n", zCand);
      }
      for(i=0; i<nQuery; i++){
        const char *zSql = sqlite3_expert_report(p, i, EXPERT_REPORT_SQL);
        const char *zIdx = sqlite3_expert_report(p, i, EXPERT_REPORT_INDEXES);
        const char *zEQP = sqlite3_expert_report(p, i, EXPERT_REPORT_PLAN);
        if( zIdx==0 ) zIdx = "(no new indexes)\n";
        if( bVerbose ){
          oputf("-- Query %d --------------------------------\n",i+1);
          oputf("%s\n\n", zSql);
        }
        oputf("%s\n", zIdx);
        oputf("%s\n", zEQP);
      }
    }
  }
  sqlite3_expert_destroy(p);
  pState->expert.pExpert = 0;
  return rc;
}
︙
    if( z[0]=='-' && z[1]=='-' ) z++;
    n = strlen30(z);
    if( n>=2 && 0==cli_strncmp(z, "-verbose", n) ){
      pState->expert.bVerbose = 1;
    }
    else if( n>=2 && 0==cli_strncmp(z, "-sample", n) ){
      if( i==(nArg-1) ){
        eputf("option requires an argument: %s\n", z);
        rc = SQLITE_ERROR;
      }else{
        iSample = (int)integerValue(azArg[++i]);
        if( iSample<0 || iSample>100 ){
          eputf("value out of range: %s\n", azArg[i]);
          rc = SQLITE_ERROR;
        }
      }
    }
    else{
      eputf("unknown option: %s\n", z);
      rc = SQLITE_ERROR;
    }
  }

  if( rc==SQLITE_OK ){
    pState->expert.pExpert = sqlite3_expert_new(pState->db, &zErr);
    if( pState->expert.pExpert==0 ){
      eputf("sqlite3_expert_new: %s\n", zErr ? zErr : "out of memory");
      rc = SQLITE_ERROR;
    }else{
      sqlite3_expert_config(
          pState->expert.pExpert, EXPERT_CONFIG_SAMPLE, iSample
      );
    }
  }
︙
  zSql = azArg[2];
  if( zTable==0 ) return 0;
  if( zType==0 ) return 0;
  dataOnly = (p->shellFlgs & SHFLG_DumpDataOnly)!=0;
  noSys = (p->shellFlgs & SHFLG_DumpNoSys)!=0;
  if( cli_strcmp(zTable, "sqlite_sequence")==0 && !noSys ){
    if( !dataOnly ) oputz("DELETE FROM sqlite_sequence;\n");
  }else if( sqlite3_strglob("sqlite_stat?", zTable)==0 && !noSys ){
    if( !dataOnly ) oputz("ANALYZE sqlite_schema;\n");
  }else if( cli_strncmp(zTable, "sqlite_", 7)==0 ){
    return 0;
  }else if( dataOnly ){
    /* no-op */
  }else if( cli_strncmp(zSql, "CREATE VIRTUAL TABLE", 20)==0 ){
    char *zIns;
    if( !p->writableSchema ){
      oputz("PRAGMA writable_schema=ON;\n");
      p->writableSchema = 1;
    }
    zIns = sqlite3_mprintf(
       "INSERT INTO sqlite_schema(type,name,tbl_name,rootpage,sql)"
       "VALUES('table','%q','%q',0,'%q');",
       zTable, zTable, zSql);
    shell_check_oom(zIns);
    oputf("%s\n", zIns);
    sqlite3_free(zIns);
    return 0;
  }else{
    printSchemaLine(zSql, ";\n");
  }

  if( cli_strcmp(zType, "table")==0 ){
    ShellText sSelect;
    ShellText sTable;
    char **azCol;
    int i;
︙
    savedDestTable = p->zDestTable;
    savedMode = p->mode;
    p->zDestTable = sTable.z;
    p->mode = p->cMode = MODE_Insert;
    rc = shell_exec(p, sSelect.z, 0);
    if( (rc&0xff)==SQLITE_CORRUPT ){
      oputz("/****** CORRUPTION ERROR *******/\n");
      toggleSelectOrder(p->db);
      shell_exec(p, sSelect.z, 0);
      toggleSelectOrder(p->db);
    }
    p->zDestTable = savedDestTable;
    p->mode = savedMode;
    freeText(&sTable);
︙
){
  int rc;
  char *zErr = 0;
  rc = sqlite3_exec(p->db, zQuery, dump_callback, p, &zErr);
  if( rc==SQLITE_CORRUPT ){
    char *zQ2;
    int len = strlen30(zQuery);
    oputz("/****** CORRUPTION ERROR *******/\n");
    if( zErr ){
      oputf("/****** %s ******/\n", zErr);
      sqlite3_free(zErr);
      zErr = 0;
    }
    zQ2 = malloc( len+100 );
    if( zQ2==0 ) return rc;
    sqlite3_snprintf(len+100, zQ2, "%s ORDER BY rowid DESC", zQuery);
    rc = sqlite3_exec(p->db, zQ2, dump_callback, p, &zErr);
    if( rc ){
      oputf("/****** ERROR: %s ******/\n", zErr);
    }else{
      rc = SQLITE_CORRUPT;
    }
    sqlite3_free(zErr);
    free(zQ2);
  }
  return rc;
︙
#endif
#ifndef SQLITE_OMIT_TEST_CONTROL
  ",imposter INDEX TABLE    Create imposter table TABLE on index INDEX",
#endif
  ".indexes ?TABLE?         Show names of indexes",
  "                           If TABLE is specified, only show indexes for",
  "                           tables matching TABLE using the LIKE operator.",
  ".intck ?STEPS_PER_UNLOCK?  Run an incremental integrity check on the db",
#ifdef SQLITE_ENABLE_IOTRACE
  ",iotrace FILE            Enable I/O diagnostic logging to FILE",
#endif
  ".limit ?LIMIT? ?VAL?     Display or change the value of an SQLITE_LIMIT",
  ".lint OPTIONS            Report potential schema issues.",
  "     Options:",
  "        fkey-indexes     Find missing foreign key indexes",
︙
          break;
        default:
          hh &= ~HH_Summary;
          break;
      }
      if( ((hw^hh)&HH_Undoc)==0 ){
        if( (hh&HH_Summary)!=0 ){
          sputf(out, ".%s\n", azHelp[i]+1);
          ++n;
        }else if( (hw&HW_SummaryOnly)==0 ){
          sputf(out, "%s\n", azHelp[i]);
        }
      }
    }
  }else{
    /* Seek documented commands for which zPattern is an exact prefix */
    zPat = sqlite3_mprintf(".%s*", zPattern);
    shell_check_oom(zPat);
    for(i=0; i<ArraySize(azHelp); i++){
      if( sqlite3_strglob(zPat, azHelp[i])==0 ){
        sputf(out, "%s\n", azHelp[i]);
        j = i+1;
        n++;
      }
    }
    sqlite3_free(zPat);
    if( n ){
      if( n==1 ){
        /* when zPattern is a prefix of exactly one command, then include
        ** the details of that command, which should begin at offset j */
        while( j<ArraySize(azHelp)-1 && azHelp[j][0]==' ' ){
          sputf(out, "%s\n", azHelp[j]);
          j++;
        }
      }
      return n;
    }
    /* Look for documented commands that contain zPattern anywhere.
    ** Show complete text of all documented commands that match. */
    zPat = sqlite3_mprintf("%%%s%%", zPattern);
    shell_check_oom(zPat);
    for(i=0; i<ArraySize(azHelp); i++){
      if( azHelp[i][0]==',' ){
        while( i<ArraySize(azHelp)-1 && azHelp[i+1][0]==' ' ) ++i;
        continue;
      }
      if( azHelp[i][0]=='.' ) j = i;
      if( sqlite3_strlike(zPat, azHelp[i], 0)==0 ){
        sputf(out, "%s\n", azHelp[j]);
        while( j<ArraySize(azHelp)-1 && azHelp[j+1][0]==' ' ){
          j++;
          sputf(out, "%s\n", azHelp[j]);
        }
        i = j;
        n++;
      }
    }
    sqlite3_free(zPat);
  }
︙
  long nIn;
  size_t nRead;
  char *pBuf;
  int rc;
  if( in==0 ) return 0;
  rc = fseek(in, 0, SEEK_END);
  if( rc!=0 ){
    eputf("Error: '%s' not seekable\n", zName);
    fclose(in);
    return 0;
  }
  nIn = ftell(in);
  rewind(in);
  pBuf = sqlite3_malloc64( nIn+1 );
  if( pBuf==0 ){
    eputz("Error: out of memory\n");
    fclose(in);
    return 0;
  }
  nRead = fread(pBuf, nIn, 1, in);
  fclose(in);
  if( nRead!=1 ){
    sqlite3_free(pBuf);
    eputf("Error: cannot read '%s'\n", zName);
    return 0;
  }
  pBuf[nIn] = 0;
  if( pnByte ) *pnByte = nIn;
  return pBuf;
︙
  FILE *in;
  const char *zDbFilename = p->pAuxDb->zDbFilename;
  unsigned int x[16];
  char zLine[1000];
  if( zDbFilename ){
    in = fopen(zDbFilename, "r");
    if( in==0 ){
      eputf("cannot open \"%s\" for reading\n", zDbFilename);
      return 0;
    }
    nLine = 0;
  }else{
    in = p->in;
    nLine = p->lineno;
    if( in==0 ) in = stdin;
  }
  *pnData = 0;
  nLine++;
  if( fgets(zLine, sizeof(zLine), in)==0 ) goto readHexDb_error;
  rc = sscanf(zLine, "| size %d pagesize %d", &n, &pgsz);
  if( rc!=2 ) goto readHexDb_error;
  if( n<0 ) goto readHexDb_error;
  if( pgsz<512 || pgsz>65536 || (pgsz&(pgsz-1))!=0 ) goto readHexDb_error;
  n = (n+pgsz-1)&~(pgsz-1);  /* Round n up to the next multiple of pgsz */
  a = sqlite3_malloc( n ? n : 1 );
  shell_check_oom(a);
  memset(a, 0, n);
  if( pgsz<512 || pgsz>65536 || (pgsz & (pgsz-1))!=0 ){
    eputz("invalid pagesize\n");
    goto readHexDb_error;
  }
  for(nLine++; fgets(zLine, sizeof(zLine), in)!=0; nLine++){
    rc = sscanf(zLine, "| page %d offset %d", &j, &k);
    if( rc==2 ){
      iOffset = k;
      continue;
︙
    while( fgets(zLine, sizeof(zLine), p->in)!=0 ){
      nLine++;
      if(cli_strncmp(zLine, "| end ", 6)==0 ) break;
    }
    p->lineno = nLine;
  }
  sqlite3_free(a);
  eputf("Error on line %d of --hexdb input\n", nLine);
  return 0;
}
#endif /* SQLITE_OMIT_DESERIALIZE */

/*
** Scalar function "usleep(X)" invokes sqlite3_sleep(X) and returns X.
*/
︙
    case SHELL_OPEN_UNSPEC:
    case SHELL_OPEN_NORMAL: {
      sqlite3_open_v2(zDbFilename, &p->db,
         SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|p->openFlags, 0);
      break;
    }
  }
  if( p->db==0 || SQLITE_OK!=sqlite3_errcode(p->db) ){
    eputf("Error: unable to open database \"%s\": %s\n",
          zDbFilename, sqlite3_errmsg(p->db));
    if( (openFlags & OPEN_DB_KEEPALIVE)==0 ){
      exit(1);
    }
    sqlite3_close(p->db);
    sqlite3_open(":memory:", &p->db);
    if( p->db==0 || SQLITE_OK!=sqlite3_errcode(p->db) ){
      eputz("Also: unable to open substitute in-memory database.\n");
      exit(1);
    }else{
      eputf("Notice: using substitute in-memory database instead of \"%s\"\n",
            zDbFilename);
    }
  }
  globalDb = p->db;
  sqlite3_db_config(p->db, SQLITE_DBCONFIG_STMT_SCANSTATUS, (int)0, (int*)0);

  /* Reflect the use or absence of --unsafe-testing invocation. */
  {
    int testmode_on = ShellHasFlag(p,SHFLG_TestingMode);
    sqlite3_db_config(p->db, SQLITE_DBCONFIG_TRUSTED_SCHEMA, testmode_on,0);
    sqlite3_db_config(p->db, SQLITE_DBCONFIG_DEFENSIVE, !testmode_on,0);
︙
    if( aData==0 ){
      return;
    }
    rc = sqlite3_deserialize(p->db, "main", aData, nData, nData,
          SQLITE_DESERIALIZE_RESIZEABLE |
          SQLITE_DESERIALIZE_FREEONCLOSE);
    if( rc ){
      eputf("Error: sqlite3_deserialize() returns %d\n", rc);
    }
    if( p->szMax>0 ){
      sqlite3_file_control(p->db, "main", SQLITE_FCNTL_SIZE_LIMIT, &p->szMax);
    }
  }
#endif
}
︙
/*
** Attempt to close the database connection.  Report errors.
*/
void close_db(sqlite3 *db){
  int rc = sqlite3_close(db);
  if( rc ){
    eputf("Error: sqlite3_close() returns %d: %s\n", rc, sqlite3_errmsg(db));
  }
}

#if HAVE_READLINE || HAVE_EDITLINE
/*
** Readline completion callbacks
*/
︙
  if( i>0 && zArg[i]==0 ) return (int)(integerValue(zArg) & 0xffffffff);
  if( sqlite3_stricmp(zArg, "on")==0 || sqlite3_stricmp(zArg,"yes")==0 ){
    return 1;
  }
  if( sqlite3_stricmp(zArg, "off")==0 || sqlite3_stricmp(zArg,"no")==0 ){
    return 0;
  }
  eputf("ERROR: Not a boolean value: \"%s\". Assuming \"no\".\n", zArg);
  return 0;
}

/*
** Set or clear a shell flag according to a boolean value.
*/
static void setOrClearFlag(ShellState *p, unsigned mFlag, const char *zArg){
︙
  }else if( cli_strcmp(zFile, "stderr")==0 ){
    f = stderr;
  }else if( cli_strcmp(zFile, "off")==0 ){
    f = 0;
  }else{
    f = fopen(zFile, bTextMode ? "w" : "wb");
    if( f==0 ){
      eputf("Error: cannot open \"%s\"\n", zFile);
    }
  }
  return f;
}

#ifndef SQLITE_OMIT_TRACE
/*
︙
){
  ShellState *p = (ShellState*)pArg;
  sqlite3_stmt *pStmt;
  const char *zSql;
  i64 nSql;
  if( p->traceOut==0 ) return 0;
  if( mType==SQLITE_TRACE_CLOSE ){
    sputz(p->traceOut, "-- closing database connection\n");
    return 0;
  }
  if( mType!=SQLITE_TRACE_ROW && pX!=0 && ((const char*)pX)[0]=='-' ){
    zSql = (const char*)pX;
  }else{
    pStmt = (sqlite3_stmt*)pP;
    switch( p->eTraceType ){
︙
  if( zSql==0 ) return 0;
  nSql = strlen(zSql);
  if( nSql>1000000000 ) nSql = 1000000000;
  while( nSql>0 && zSql[nSql-1]==';' ){ nSql--; }
  switch( mType ){
    case SQLITE_TRACE_ROW:
    case SQLITE_TRACE_STMT: {
      sputf(p->traceOut, "%.*s;\n", (int)nSql, zSql);
      break;
    }
    case SQLITE_TRACE_PROFILE: {
      sqlite3_int64 nNanosec = pX ? *(sqlite3_int64*)pX : 0;
      sputf(p->traceOut, "%.*s; -- %lld ns\n", (int)nSql, zSql, nNanosec);
      break;
    }
  }
  return 0;
}
#endif
︙
       || (c==EOF && pc==cQuote)
      ){
        do{ p->n--; }while( p->z[p->n]!=cQuote );
        p->cTerm = c;
        break;
      }
      if( pc==cQuote && c!='\r' ){
        eputf("%s:%d: unescaped %c character\n", p->zFile, p->nLine, cQuote);
      }
      if( c==EOF ){
        eputf("%s:%d: unterminated %c-quoted field\n",
              p->zFile, startLine, cQuote);
        p->cTerm = c;
        break;
      }
      import_append_char(p, c);
      ppc = pc;
      pc = c;
    }
︙
22030 22031 22032 22033 22034 22035 22036 | int cnt = 0; const int spinRate = 10000; zQuery = sqlite3_mprintf("SELECT * FROM \"%w\"", zTable); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ | | | < | | < | 23925 23926 23927 23928 23929 23930 23931 23932 23933 23934 23935 23936 23937 23938 23939 23940 23941 23942 23943 23944 23945 23946 23947 23948 23949 23950 23951 23952 23953 23954 23955 23956 23957 | int cnt = 0; const int spinRate = 10000; zQuery = sqlite3_mprintf("SELECT * FROM \"%w\"", zTable); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ eputf("Error %d: %s on [%s]\n", sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), zQuery); goto end_data_xfer; } n = sqlite3_column_count(pQuery); zInsert = sqlite3_malloc64(200 + nTable + n*3); shell_check_oom(zInsert); sqlite3_snprintf(200+nTable,zInsert, "INSERT OR IGNORE INTO \"%s\" VALUES(?", zTable); i = strlen30(zInsert); for(j=1; j<n; j++){ memcpy(zInsert+i, ",?", 2); i += 2; } memcpy(zInsert+i, ");", 3); rc = sqlite3_prepare_v2(newDb, zInsert, -1, &pInsert, 0); if( rc ){ eputf("Error %d: %s on [%s]\n", sqlite3_extended_errcode(newDb), sqlite3_errmsg(newDb), zInsert); goto end_data_xfer; } for(k=0; k<2; k++){ while( (rc = sqlite3_step(pQuery))==SQLITE_ROW ){ for(i=0; i<n; i++){ switch( sqlite3_column_type(pQuery, i) ){ case SQLITE_NULL: { |
︙
22085 22086 22087 22088 22089 22090 22091 | SQLITE_STATIC); break; } } } /* End for */ rc = sqlite3_step(pInsert); if( rc!=SQLITE_OK && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){ | | | | | 23978 23979 23980 23981 23982 23983 23984 23985 23986 23987 23988 23989 23990 23991 23992 23993 23994 23995 23996 23997 23998 23999 24000 24001 24002 24003 24004 24005 24006 24007 24008 24009 24010 | SQLITE_STATIC); break; } } } /* End for */ rc = sqlite3_step(pInsert); if( rc!=SQLITE_OK && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){ eputf("Error %d: %s\n", sqlite3_extended_errcode(newDb), sqlite3_errmsg(newDb)); } sqlite3_reset(pInsert); cnt++; if( (cnt%spinRate)==0 ){ printf("%c\b", "|/-\\"[(cnt/spinRate)%4]); fflush(stdout); } } /* End while */ if( rc==SQLITE_DONE ) break; sqlite3_finalize(pQuery); sqlite3_free(zQuery); zQuery = sqlite3_mprintf("SELECT * FROM \"%w\" ORDER BY rowid DESC;", zTable); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ eputf("Warning: cannot step \"%s\" backwards", zTable); break; } } /* End for(k=0...) */ end_data_xfer: sqlite3_finalize(pQuery); sqlite3_finalize(pInsert); |
︙
22140 22141 22142 22143 22144 22145 22146 | char *zErrMsg = 0; zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_schema" " WHERE %s ORDER BY rowid ASC", zWhere); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ | | | < | | | | | < | | | | | < > > > > > > > > > > > > | 24033 24034 24035 24036 24037 24038 24039 24040 24041 24042 24043 24044 24045 24046 24047 24048 24049 24050 24051 24052 24053 24054 24055 24056 24057 24058 24059 24060 24061 24062 24063 24064 24065 24066 24067 24068 24069 24070 24071 24072 24073 24074 24075 24076 24077 24078 24079 24080 24081 24082 24083 24084 24085 24086 24087 24088 24089 24090 24091 24092 24093 24094 24095 24096 24097 24098 24099 24100 24101 24102 24103 24104 24105 24106 24107 24108 24109 24110 24111 24112 24113 24114 24115 24116 24117 24118 24119 24120 24121 24122 24123 24124 24125 24126 24127 24128 24129 24130 24131 24132 24133 24134 24135 24136 24137 24138 24139 24140 | char *zErrMsg = 0; zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_schema" " WHERE %s ORDER BY rowid ASC", zWhere); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ eputf("Error: (%d) %s on [%s]\n", sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), zQuery); goto end_schema_xfer; } while( (rc = sqlite3_step(pQuery))==SQLITE_ROW ){ zName = sqlite3_column_text(pQuery, 0); zSql = sqlite3_column_text(pQuery, 1); if( zName==0 || zSql==0 ) continue; if( sqlite3_stricmp((char*)zName, "sqlite_sequence")!=0 ){ sputf(stdout, "%s... 
", zName); fflush(stdout); sqlite3_exec(newDb, (const char*)zSql, 0, 0, &zErrMsg); if( zErrMsg ){ eputf("Error: %s\nSQL: [%s]\n", zErrMsg, zSql); sqlite3_free(zErrMsg); zErrMsg = 0; } } if( xForEach ){ xForEach(p, newDb, (const char*)zName); } sputz(stdout, "done\n"); } if( rc!=SQLITE_DONE ){ sqlite3_finalize(pQuery); sqlite3_free(zQuery); zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_schema" " WHERE %s ORDER BY rowid DESC", zWhere); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ eputf("Error: (%d) %s on [%s]\n", sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), zQuery); goto end_schema_xfer; } while( sqlite3_step(pQuery)==SQLITE_ROW ){ zName = sqlite3_column_text(pQuery, 0); zSql = sqlite3_column_text(pQuery, 1); if( zName==0 || zSql==0 ) continue; if( sqlite3_stricmp((char*)zName, "sqlite_sequence")==0 ) continue; sputf(stdout, "%s... ", zName); fflush(stdout); sqlite3_exec(newDb, (const char*)zSql, 0, 0, &zErrMsg); if( zErrMsg ){ eputf("Error: %s\nSQL: [%s]\n", zErrMsg, zSql); sqlite3_free(zErrMsg); zErrMsg = 0; } if( xForEach ){ xForEach(p, newDb, (const char*)zName); } sputz(stdout, "done\n"); } } end_schema_xfer: sqlite3_finalize(pQuery); sqlite3_free(zQuery); } /* ** Open a new database file named "zNewDb". Try to recover as much information ** as possible out of the main database (which might be corrupt) and write it ** into zNewDb. 
*/ static void tryToClone(ShellState *p, const char *zNewDb){ int rc; sqlite3 *newDb = 0; if( access(zNewDb,0)==0 ){ eputf("File \"%s\" already exists.\n", zNewDb); return; } rc = sqlite3_open(zNewDb, &newDb); if( rc ){ eputf("Cannot create output database: %s\n", sqlite3_errmsg(newDb)); }else{ sqlite3_exec(p->db, "PRAGMA writable_schema=ON;", 0, 0, 0); sqlite3_exec(newDb, "BEGIN EXCLUSIVE;", 0, 0, 0); tryToCloneSchema(p, newDb, "type='table'", tryToCloneData); tryToCloneSchema(p, newDb, "type!='table'", 0); sqlite3_exec(newDb, "COMMIT;", 0, 0, 0); sqlite3_exec(p->db, "PRAGMA writable_schema=OFF;", 0, 0, 0); } close_db(newDb); } #ifndef SQLITE_SHELL_FIDDLE /* ** Change the output stream (file or pipe or console) to something else. */ static void output_redir(ShellState *p, FILE *pfNew){ if( p->out != stdout ) eputz("Output already redirected.\n"); else{ p->out = pfNew; setOutputStream(pfNew); } } /* ** Change the output file back to stdout. ** ** If the p->doXdgOpen flag is set, that means the output was being ** redirected to a temporary file named by p->zTempFile. In that case, ** launch start/open/xdg-open on that temporary file. |
︙
22253 22254 22255 22256 22257 22258 22259 | "open"; #else "xdg-open"; #endif char *zCmd; zCmd = sqlite3_mprintf("%s %s", zXdgOpenCmd, p->zTempFile); if( system(zCmd) ){ | | > > > > > | 24155 24156 24157 24158 24159 24160 24161 24162 24163 24164 24165 24166 24167 24168 24169 24170 24171 24172 24173 24174 24175 24176 24177 24178 24179 24180 24181 24182 24183 24184 24185 24186 24187 24188 24189 | "open"; #else "xdg-open"; #endif char *zCmd; zCmd = sqlite3_mprintf("%s %s", zXdgOpenCmd, p->zTempFile); if( system(zCmd) ){ eputf("Failed: [%s]\n", zCmd); }else{ /* Give the start/open/xdg-open command some time to get ** going before we continue, and potential delete the ** p->zTempFile data file out from under it */ sqlite3_sleep(2000); } sqlite3_free(zCmd); outputModePop(p); p->doXdgOpen = 0; } #endif /* !defined(SQLITE_NOHAVE_SYSTEM) */ } p->outfile[0] = 0; p->out = stdout; setOutputStream(stdout); } #else # define output_redir(SS,pfO) # define output_reset(SS) #endif /* ** Run an SQL command and return the single integer result. */ static int db_int(sqlite3 *db, const char *zSql){ sqlite3_stmt *pStmt; int res = 0; |
︙
22339 22340 22341 22342 22343 22344 22345 | unsigned char aHdr[100]; open_db(p, 0); if( p->db==0 ) return 1; rc = sqlite3_prepare_v2(p->db, "SELECT data FROM sqlite_dbpage(?1) WHERE pgno=1", -1, &pStmt, 0); if( rc ){ | | | | | | | | | | | | | | | | 24246 24247 24248 24249 24250 24251 24252 24253 24254 24255 24256 24257 24258 24259 24260 24261 24262 24263 24264 24265 24266 24267 24268 24269 24270 24271 24272 24273 24274 24275 24276 24277 24278 24279 24280 24281 24282 24283 24284 24285 24286 24287 24288 24289 24290 24291 24292 24293 24294 24295 24296 24297 24298 24299 24300 24301 24302 24303 24304 24305 24306 24307 24308 24309 24310 24311 24312 24313 24314 24315 24316 24317 24318 24319 24320 24321 | unsigned char aHdr[100]; open_db(p, 0); if( p->db==0 ) return 1; rc = sqlite3_prepare_v2(p->db, "SELECT data FROM sqlite_dbpage(?1) WHERE pgno=1", -1, &pStmt, 0); if( rc ){ eputf("error: %s\n", sqlite3_errmsg(p->db)); sqlite3_finalize(pStmt); return 1; } sqlite3_bind_text(pStmt, 1, zDb, -1, SQLITE_STATIC); if( sqlite3_step(pStmt)==SQLITE_ROW && sqlite3_column_bytes(pStmt,0)>100 ){ const u8 *pb = sqlite3_column_blob(pStmt,0); shell_check_oom(pb); memcpy(aHdr, pb, 100); sqlite3_finalize(pStmt); }else{ eputz("unable to read database header\n"); sqlite3_finalize(pStmt); return 1; } i = get2byteInt(aHdr+16); if( i==1 ) i = 65536; oputf("%-20s %d\n", "database page size:", i); oputf("%-20s %d\n", "write format:", aHdr[18]); oputf("%-20s %d\n", "read format:", aHdr[19]); oputf("%-20s %d\n", "reserved bytes:", aHdr[20]); for(i=0; i<ArraySize(aField); i++){ int ofst = aField[i].ofst; unsigned int val = get4byteInt(aHdr + ofst); oputf("%-20s %u", aField[i].zName, val); switch( ofst ){ case 56: { if( val==1 ) oputz(" (utf8)"); if( val==2 ) oputz(" (utf16le)"); if( val==3 ) oputz(" (utf16be)"); } } oputz("\n"); } if( zDb==0 ){ zSchemaTab = sqlite3_mprintf("main.sqlite_schema"); }else if( cli_strcmp(zDb,"temp")==0 ){ zSchemaTab = sqlite3_mprintf("%s", "sqlite_temp_schema"); }else{ 
zSchemaTab = sqlite3_mprintf("\"%w\".sqlite_schema", zDb); } for(i=0; i<ArraySize(aQuery); i++){ char *zSql = sqlite3_mprintf(aQuery[i].zSql, zSchemaTab); int val = db_int(p->db, zSql); sqlite3_free(zSql); oputf("%-20s %d\n", aQuery[i].zName, val); } sqlite3_free(zSchemaTab); sqlite3_file_control(p->db, zDb, SQLITE_FCNTL_DATA_VERSION, &iDataVersion); oputf("%-20s %u\n", "data version", iDataVersion); return 0; } #endif /* SQLITE_SHELL_HAVE_RECOVER */ /* ** Print the current sqlite3_errmsg() value to stderr and return 1. */ static int shellDatabaseError(sqlite3 *db){ const char *zErr = sqlite3_errmsg(db); eputf("Error: %s\n", zErr); return 1; } /* ** Compare the pattern in zGlob[] against the text in z[]. Return TRUE ** if they match and FALSE (0) if they do not match. ** |
︙
*/
static int lintFkeyIndexes(
  ShellState *pState,             /* Current shell tool state */
  char **azArg,                   /* Array of arguments passed to dot command */
  int nArg                        /* Number of entries in azArg[] */
){
  sqlite3 *db = pState->db;       /* Database handle to query "main" db of */
  int bVerbose = 0;               /* If -verbose is present */
  int bGroupByParent = 0;         /* If -groupbyparent is present */
  int i;                          /* To iterate through azArg[] */
  const char *zIndent = "";       /* How much to indent CREATE INDEX by */
  int rc;                         /* Return code */
  sqlite3_stmt *pSql = 0;         /* Compiled version of SQL statement below */
︙
      bVerbose = 1;
    }
    else if( n>1 && sqlite3_strnicmp("-groupbyparent", azArg[i], n)==0 ){
      bGroupByParent = 1;
      zIndent = "    ";
    }
    else{
      eputf("Usage: %s %s ?-verbose? ?-groupbyparent?\n", azArg[0], azArg[1]);
      return SQLITE_ERROR;
    }
  }

  /* Register the fkey_collate_clause() SQL function */
  rc = sqlite3_create_function(db, "fkey_collate_clause", 4, SQLITE_UTF8,
      0, shellFkeyCollateClause, 0, 0
︙
22763 22764 22765 22766 22767 22768 22769 | res = zPlan!=0 && ( 0==sqlite3_strglob(zGlob, zPlan) || 0==sqlite3_strglob(zGlobIPK, zPlan)); } rc = sqlite3_finalize(pExplain); if( rc!=SQLITE_OK ) break; if( res<0 ){ | | | | | | | | | | | | < < | < < < < < | | 24667 24668 24669 24670 24671 24672 24673 24674 24675 24676 24677 24678 24679 24680 24681 24682 24683 24684 24685 24686 24687 24688 24689 24690 24691 24692 24693 24694 24695 24696 24697 24698 24699 24700 24701 24702 24703 24704 24705 24706 24707 24708 24709 24710 24711 24712 24713 24714 24715 24716 24717 24718 24719 24720 24721 24722 24723 24724 24725 24726 24727 24728 24729 24730 24731 24732 24733 24734 24735 24736 24737 24738 24739 24740 24741 24742 24743 24744 24745 24746 24747 24748 24749 24750 24751 24752 24753 24754 24755 24756 24757 24758 24759 | res = zPlan!=0 && ( 0==sqlite3_strglob(zGlob, zPlan) || 0==sqlite3_strglob(zGlobIPK, zPlan)); } rc = sqlite3_finalize(pExplain); if( rc!=SQLITE_OK ) break; if( res<0 ){ eputz("Error: internal error"); break; }else{ if( bGroupByParent && (bVerbose || res==0) && (zPrev==0 || sqlite3_stricmp(zParent, zPrev)) ){ oputf("-- Parent table %s\n", zParent); sqlite3_free(zPrev); zPrev = sqlite3_mprintf("%s", zParent); } if( res==0 ){ oputf("%s%s --> %s\n", zIndent, zCI, zTarget); }else if( bVerbose ){ oputf("%s/* no extra indexes required for %s -> %s */\n", zIndent, zFrom, zTarget ); } } } sqlite3_free(zPrev); if( rc!=SQLITE_OK ){ eputf("%s\n", sqlite3_errmsg(db)); } rc2 = sqlite3_finalize(pSql); if( rc==SQLITE_OK && rc2!=SQLITE_OK ){ rc = rc2; eputf("%s\n", sqlite3_errmsg(db)); } }else{ eputf("%s\n", sqlite3_errmsg(db)); } return rc; } /* ** Implementation of ".lint" dot command. */ static int lintDotCommand( ShellState *pState, /* Current shell tool state */ char **azArg, /* Array of arguments passed to dot command */ int nArg /* Number of entries in azArg[] */ ){ int n; n = (nArg>=2 ? 
strlen30(azArg[1]) : 0); if( n<1 || sqlite3_strnicmp(azArg[1], "fkey-indexes", n) ) goto usage; return lintFkeyIndexes(pState, azArg, nArg); usage: eputf("Usage %s sub-command ?switches...?\n", azArg[0]); eputz("Where sub-commands are:\n"); eputz(" fkey-indexes\n"); return SQLITE_ERROR; } static void shellPrepare( sqlite3 *db, int *pRc, const char *zSql, sqlite3_stmt **ppStmt ){ *ppStmt = 0; if( *pRc==SQLITE_OK ){ int rc = sqlite3_prepare_v2(db, zSql, -1, ppStmt, 0); if( rc!=SQLITE_OK ){ eputf("sql error: %s (%d)\n", sqlite3_errmsg(db), sqlite3_errcode(db)); *pRc = rc; } } } /* ** Create a prepared statement using printf-style arguments for the SQL. */ static void shellPreparePrintf( sqlite3 *db, int *pRc, sqlite3_stmt **ppStmt, const char *zFmt, ... ){ *ppStmt = 0; |
︙
    }else{
      shellPrepare(db, pRc, z, ppStmt);
      sqlite3_free(z);
    }
  }
}

/*
** Finalize the prepared statement created using shellPreparePrintf().
*/
static void shellFinalize(
  int *pRc,
  sqlite3_stmt *pStmt
){
  if( pStmt ){
    sqlite3 *db = sqlite3_db_handle(pStmt);
    int rc = sqlite3_finalize(pStmt);
    if( *pRc==SQLITE_OK ){
      if( rc!=SQLITE_OK ){
        eputf("SQL error: %s\n", sqlite3_errmsg(db));
      }
      *pRc = rc;
    }
  }
}

#if !defined SQLITE_OMIT_VIRTUALTABLE
/* Reset the prepared statement created using shellPreparePrintf().
**
** This routine could be marked "static".  But it is not always used,
** depending on compile-time options.  By omitting the "static", we avoid
** nuisance compiler warnings about "defined but not used".
*/
void shellReset(
  int *pRc,
  sqlite3_stmt *pStmt
){
  int rc = sqlite3_reset(pStmt);
  if( *pRc==SQLITE_OK ){
    if( rc!=SQLITE_OK ){
      sqlite3 *db = sqlite3_db_handle(pStmt);
      eputf("SQL error: %s\n", sqlite3_errmsg(db));
    }
    *pRc = rc;
  }
}
#endif /* !defined SQLITE_OMIT_VIRTUALTABLE */

#if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_HAVE_ZLIB)
︙
*/
static int arErrorMsg(ArCommand *pAr, const char *zFmt, ...){
  va_list ap;
  char *z;
  va_start(ap, zFmt);
  z = sqlite3_vmprintf(zFmt, ap);
  va_end(ap);
  eputf("Error: %s\n", z);
  if( pAr->fromCmdLine ){
    eputz("Use \"-A\" for more help\n");
  }else{
    eputz("Use \".archive --help\" for more help\n");
  }
  sqlite3_free(z);
  return SQLITE_ERROR;
}

/*
** Values for ArCommand.eCmd.
︙
    { "dryrun",       'n', AR_SWITCH_DRYRUN,     0 },
    { "glob",         'g', AR_SWITCH_GLOB,       0 },
  };
  int nSwitch = sizeof(aSwitch) / sizeof(struct ArSwitch);
  struct ArSwitch *pEnd = &aSwitch[nSwitch];

  if( nArg<=1 ){
    eputz("Wrong number of arguments.  Usage:\n");
    return arUsage(stderr);
  }else{
    char *z = azArg[1];
    if( z[0]!='-' ){
      /* Traditional style [tar] invocation */
      int i;
      int iArg = 2;
︙
          }
          if( arProcessSwitch(pAr, pMatch->eSwitch, zArg) ) return SQLITE_ERROR;
        }
      }
    }
  }
  if( pAr->eCmd==0 ){
    eputz("Required argument missing.  Usage:\n");
    return arUsage(stderr);
  }
  return SQLITE_OK;
}

/*
** This function assumes that all arguments within the ArCommand.azArg[]
︙
      z[n] = '\0';
      sqlite3_bind_text(pTest, j, z, -1, SQLITE_STATIC);
      if( SQLITE_ROW==sqlite3_step(pTest) ){
        bOk = 1;
      }
      shellReset(&rc, pTest);
      if( rc==SQLITE_OK && bOk==0 ){
        eputf("not found in archive: %s\n", z);
        rc = SQLITE_ERROR;
      }
    }
    shellFinalize(&rc, pTest);
  }
  return rc;
}
︙
23277 23278 23279 23280 23281 23282 23283 | rc = arCheckEntries(pAr); arWhereClause(&rc, pAr, &zWhere); shellPreparePrintf(pAr->db, &rc, &pSql, zSql, azCols[pAr->bVerbose], pAr->zSrcTable, zWhere); if( pAr->bDryRun ){ | | | | < | < < | < | | | 25172 25173 25174 25175 25176 25177 25178 25179 25180 25181 25182 25183 25184 25185 25186 25187 25188 25189 25190 25191 25192 25193 25194 25195 25196 25197 25198 25199 25200 25201 25202 25203 25204 25205 25206 25207 25208 25209 25210 25211 25212 25213 25214 25215 25216 25217 25218 25219 25220 25221 25222 25223 25224 25225 25226 25227 25228 25229 25230 25231 25232 25233 25234 | rc = arCheckEntries(pAr); arWhereClause(&rc, pAr, &zWhere); shellPreparePrintf(pAr->db, &rc, &pSql, zSql, azCols[pAr->bVerbose], pAr->zSrcTable, zWhere); if( pAr->bDryRun ){ oputf("%s\n", sqlite3_sql(pSql)); }else{ while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pSql) ){ if( pAr->bVerbose ){ oputf("%s % 10d %s %s\n", sqlite3_column_text(pSql, 0), sqlite3_column_int(pSql, 1), sqlite3_column_text(pSql, 2),sqlite3_column_text(pSql, 3)); }else{ oputf("%s\n", sqlite3_column_text(pSql, 0)); } } } shellFinalize(&rc, pSql); sqlite3_free(zWhere); return rc; } /* ** Implementation of .ar "Remove" command. */ static int arRemoveCommand(ArCommand *pAr){ int rc = 0; char *zSql = 0; char *zWhere = 0; if( pAr->nArg ){ /* Verify that args actually exist within the archive before proceeding. ** And formulate a WHERE clause to match them. 
*/ rc = arCheckEntries(pAr); arWhereClause(&rc, pAr, &zWhere); } if( rc==SQLITE_OK ){ zSql = sqlite3_mprintf("DELETE FROM %s WHERE %s;", pAr->zSrcTable, zWhere); if( pAr->bDryRun ){ oputf("%s\n", zSql); }else{ char *zErr = 0; rc = sqlite3_exec(pAr->db, "SAVEPOINT ar;", 0, 0, 0); if( rc==SQLITE_OK ){ rc = sqlite3_exec(pAr->db, zSql, 0, 0, &zErr); if( rc!=SQLITE_OK ){ sqlite3_exec(pAr->db, "ROLLBACK TO ar; RELEASE ar;", 0, 0, 0); }else{ rc = sqlite3_exec(pAr->db, "RELEASE ar;", 0, 0, 0); } } if( zErr ){ sputf(stdout, "ERROR: %s\n", zErr); /* stdout? */ sqlite3_free(zErr); } } } sqlite3_free(zWhere); sqlite3_free(zSql); return rc; |
︙
23393 23394 23395 23396 23397 23398 23399 | ** only for the directories. This is because the timestamps for ** extracted directories must be reset after they are populated (as ** populating them changes the timestamp). */ for(i=0; i<2; i++){ j = sqlite3_bind_parameter_index(pSql, "$dirOnly"); sqlite3_bind_int(pSql, j, i); if( pAr->bDryRun ){ | | | | | | 25284 25285 25286 25287 25288 25289 25290 25291 25292 25293 25294 25295 25296 25297 25298 25299 25300 25301 25302 25303 25304 25305 25306 25307 25308 25309 25310 25311 25312 25313 25314 25315 25316 25317 25318 25319 25320 25321 25322 25323 25324 25325 25326 25327 25328 | ** only for the directories. This is because the timestamps for ** extracted directories must be reset after they are populated (as ** populating them changes the timestamp). */ for(i=0; i<2; i++){ j = sqlite3_bind_parameter_index(pSql, "$dirOnly"); sqlite3_bind_int(pSql, j, i); if( pAr->bDryRun ){ oputf("%s\n", sqlite3_sql(pSql)); }else{ while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pSql) ){ if( i==0 && pAr->bVerbose ){ oputf("%s\n", sqlite3_column_text(pSql, 0)); } } } shellReset(&rc, pSql); } shellFinalize(&rc, pSql); } sqlite3_free(zDir); sqlite3_free(zWhere); return rc; } /* ** Run the SQL statement in zSql. Or if doing a --dryrun, merely print it out. */ static int arExecSql(ArCommand *pAr, const char *zSql){ int rc; if( pAr->bDryRun ){ oputf("%s\n", zSql); rc = SQLITE_OK; }else{ char *zErr = 0; rc = sqlite3_exec(pAr->db, zSql, 0, 0, &zErr); if( zErr ){ sputf(stdout, "ERROR: %s\n", zErr); sqlite3_free(zErr); } } return rc; } |
︙
23598 23599 23600 23601 23602 23603 23604 | || cmd.eCmd==AR_CMD_REMOVE || cmd.eCmd==AR_CMD_UPDATE ){ flags = SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE; }else{ flags = SQLITE_OPEN_READONLY; } cmd.db = 0; if( cmd.bDryRun ){ | | | < | < | | 25489 25490 25491 25492 25493 25494 25495 25496 25497 25498 25499 25500 25501 25502 25503 25504 25505 25506 25507 25508 25509 25510 25511 25512 25513 25514 25515 25516 25517 25518 25519 25520 25521 25522 | || cmd.eCmd==AR_CMD_REMOVE || cmd.eCmd==AR_CMD_UPDATE ){ flags = SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE; }else{ flags = SQLITE_OPEN_READONLY; } cmd.db = 0; if( cmd.bDryRun ){ oputf("-- open database '%s'%s\n", cmd.zFile, eDbType==SHELL_OPEN_APPENDVFS ? " using 'apndvfs'" : ""); } rc = sqlite3_open_v2(cmd.zFile, &cmd.db, flags, eDbType==SHELL_OPEN_APPENDVFS ? "apndvfs" : 0); if( rc!=SQLITE_OK ){ eputf("cannot open file: %s (%s)\n", cmd.zFile, sqlite3_errmsg(cmd.db)); goto end_ar_command; } sqlite3_fileio_init(cmd.db, 0, 0); sqlite3_sqlar_init(cmd.db, 0, 0); sqlite3_create_function(cmd.db, "shell_putsnl", 1, SQLITE_UTF8, cmd.p, shellPutsFunc, 0, 0); } if( cmd.zSrcTable==0 && cmd.bZip==0 && cmd.eCmd!=AR_CMD_HELP ){ if( cmd.eCmd!=AR_CMD_CREATE && sqlite3_table_column_metadata(cmd.db,0,"sqlar","name",0,0,0,0,0) ){ eputz("database does not contain an 'sqlar' table\n"); rc = SQLITE_ERROR; goto end_ar_command; } cmd.zSrcTable = sqlite3_mprintf("sqlar"); } switch( cmd.eCmd ){ |
︙
/*
** This function is used as a callback by the recover extension. Simply
** print the supplied SQL statement to stdout.
*/
static int recoverSqlCb(void *pCtx, const char *zSql){
  ShellState *pState = (ShellState*)pCtx;
  sputf(pState->out, "%s;\n", zSql);
  return SQLITE_OK;
}

/*
** This function is called to recover data from the database. A script
** to construct a new database containing all recovered data is output
** on stream pState->out.
*/
︙
23720 23721 23722 23723 23724 23725 23726 | i++; zLAF = azArg[i]; }else if( n<=10 && memcmp("-no-rowids", z, n)==0 ){ bRowids = 0; } else{ | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 25609 25610 25611 25612 25613 25614 25615 25616 25617 25618 25619 25620 25621 25622 25623 25624 25625 25626 25627 25628 25629 25630 25631 25632 25633 25634 25635 25636 25637 25638 25639 25640 25641 25642 25643 25644 25645 25646 25647 25648 25649 25650 25651 25652 25653 25654 25655 25656 25657 25658 25659 25660 25661 25662 25663 25664 25665 25666 25667 25668 25669 25670 25671 25672 25673 25674 25675 25676 25677 25678 25679 25680 25681 25682 25683 25684 25685 25686 25687 25688 25689 25690 25691 25692 25693 25694 25695 25696 25697 25698 25699 25700 25701 | i++; zLAF = azArg[i]; }else if( n<=10 && memcmp("-no-rowids", z, n)==0 ){ bRowids = 0; } else{ eputf("unexpected option: %s\n", azArg[i]); showHelp(pState->out, azArg[0]); return 1; } } p = sqlite3_recover_init_sql( pState->db, "main", recoverSqlCb, (void*)pState ); sqlite3_recover_config(p, 789, (void*)zRecoveryDb); /* Debug use only */ sqlite3_recover_config(p, SQLITE_RECOVER_LOST_AND_FOUND, (void*)zLAF); sqlite3_recover_config(p, SQLITE_RECOVER_ROWIDS, (void*)&bRowids); sqlite3_recover_config(p, SQLITE_RECOVER_FREELIST_CORRUPT,(void*)&bFreelist); sqlite3_recover_run(p); if( sqlite3_recover_errcode(p)!=SQLITE_OK ){ const char *zErr = sqlite3_recover_errmsg(p); int errCode = sqlite3_recover_errcode(p); eputf("sql error: %s (%d)\n", zErr, errCode); } rc = sqlite3_recover_finish(p); return rc; } #endif /* SQLITE_SHELL_HAVE_RECOVER */ /* ** Implementation of ".intck STEPS_PER_UNLOCK" command. 
*/ static int intckDatabaseCmd(ShellState *pState, i64 nStepPerUnlock){ sqlite3_intck *p = 0; int rc = SQLITE_OK; rc = sqlite3_intck_open(pState->db, "main", &p); if( rc==SQLITE_OK ){ i64 nStep = 0; i64 nError = 0; const char *zErr = 0; while( SQLITE_OK==sqlite3_intck_step(p) ){ const char *zMsg = sqlite3_intck_message(p); if( zMsg ){ oputf("%s\n", zMsg); nError++; } nStep++; if( nStepPerUnlock && (nStep % nStepPerUnlock)==0 ){ sqlite3_intck_unlock(p); } } rc = sqlite3_intck_error(p, &zErr); if( zErr ){ eputf("%s\n", zErr); } sqlite3_intck_close(p); oputf("%lld steps, %lld errors\n", nStep, nError); } return rc; } /* * zAutoColumn(zCol, &db, ?) => Maybe init db, add column zCol to it. * zAutoColumn(0, &db, ?) => (db!=0) Form columns spec for CREATE TABLE, * close db and set it to 0, and return the columns spec, to later * be sqlite3_free()'ed by the caller. * The return is 0 when either: * (a) The db was not initialized and zCol==0 (There are no columns.) * (b) zCol!=0 (Column was added, db initialized as needed.) * The 3rd argument, pRenamed, references an out parameter. If the * pointer is non-zero, its referent will be set to a summary of renames * done if renaming was necessary, or set to 0 if none was done. The out * string (if any) must be sqlite3_free()'ed by the caller. */ #ifdef SHELL_DEBUG #define rc_err_oom_die(rc) \ if( rc==SQLITE_NOMEM ) shell_check_oom(0); \ else if(!(rc==SQLITE_OK||rc==SQLITE_DONE)) \ eputf("E:%d\n",rc), assert(0) #else static void rc_err_oom_die(int rc){ if( rc==SQLITE_NOMEM ) shell_check_oom(0); assert(rc==SQLITE_OK||rc==SQLITE_DONE); } #endif |
︙ | ︙
23904 23905 23906 23907 23908 23909 23910 23911 23912 23913 23914 23915 23916 23917 | if( *pDb==0 ){ if( SQLITE_OK!=sqlite3_open(zCOL_DB, pDb) ) return 0; #ifdef SHELL_COLFIX_DB if(*zCOL_DB!=':') sqlite3_exec(*pDb,"drop table if exists ColNames;" "drop view if exists RepeatedNames;",0,0,0); #endif rc = sqlite3_exec(*pDb, zTabMake, 0, 0, 0); rc_err_oom_die(rc); } assert(*pDb!=0); rc = sqlite3_prepare_v2(*pDb, zTabFill, -1, &pStmt, 0); rc_err_oom_die(rc); rc = sqlite3_bind_text(pStmt, 1, zColNew, -1, 0); | > | 25827 25828 25829 25830 25831 25832 25833 25834 25835 25836 25837 25838 25839 25840 25841 | if( *pDb==0 ){ if( SQLITE_OK!=sqlite3_open(zCOL_DB, pDb) ) return 0; #ifdef SHELL_COLFIX_DB if(*zCOL_DB!=':') sqlite3_exec(*pDb,"drop table if exists ColNames;" "drop view if exists RepeatedNames;",0,0,0); #endif #undef SHELL_COLFIX_DB rc = sqlite3_exec(*pDb, zTabMake, 0, 0, 0); rc_err_oom_die(rc); } assert(*pDb!=0); rc = sqlite3_prepare_v2(*pDb, zTabFill, -1, &pStmt, 0); rc_err_oom_die(rc); rc = sqlite3_bind_text(pStmt, 1, zColNew, -1, 0); |
︙ | ︙
23963 23964 23965 23966 23967 23968 23969 23970 23971 23972 23973 23974 23975 23976 | } sqlite3_finalize(pStmt); sqlite3_close(*pDb); *pDb = 0; return zColsSpec; } } /* ** If an input line begins with "." then invoke this routine to ** process that line. ** ** Return 1 on error, 2 to exit, and 0 otherwise. */ | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 25887 25888 25889 25890 25891 25892 25893 25894 25895 25896 25897 25898 25899 25900 25901 25902 25903 25904 25905 25906 25907 25908 25909 25910 25911 25912 25913 25914 25915 25916 25917 25918 25919 25920 25921 25922 25923 25924 25925 25926 25927 25928 25929 25930 25931 25932 25933 25934 25935 25936 25937 25938 25939 25940 25941 25942 25943 25944 25945 25946 25947 25948 25949 25950 25951 25952 25953 25954 25955 25956 | } sqlite3_finalize(pStmt); sqlite3_close(*pDb); *pDb = 0; return zColsSpec; } } /* ** Check if the sqlite_schema table contains one or more virtual tables. If ** parameter zLike is not NULL, then it is an SQL expression that the ** sqlite_schema row must also match. If one or more such rows are found, ** print the following warning to the output: ** ** WARNING: Script requires that SQLITE_DBCONFIG_DEFENSIVE be disabled */ static int outputDumpWarning(ShellState *p, const char *zLike){ int rc = SQLITE_OK; sqlite3_stmt *pStmt = 0; shellPreparePrintf(p->db, &rc, &pStmt, "SELECT 1 FROM sqlite_schema o WHERE " "sql LIKE 'CREATE VIRTUAL TABLE%%' AND %s", zLike ? zLike : "true" ); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ oputz("/* WARNING: " "Script requires that SQLITE_DBCONFIG_DEFENSIVE be disabled */\n" ); } shellFinalize(&rc, pStmt); return rc; } /* ** Fault-Simulator state and logic. */ static struct { int iId; /* ID that triggers a simulated fault. 
-1 means "any" */ int iErr; /* The error code to return on a fault */ int iCnt; /* Trigger the fault only if iCnt is already zero */ int iInterval; /* Reset iCnt to this value after each fault */ int eVerbose; /* When to print output */ } faultsim_state = {-1, 0, 0, 0, 0}; /* ** This is the fault-sim callback */ static int faultsim_callback(int iArg){ if( faultsim_state.iId>0 && faultsim_state.iId!=iArg ){ return SQLITE_OK; } if( faultsim_state.iCnt>0 ){ faultsim_state.iCnt--; if( faultsim_state.eVerbose>=2 ){ oputf("FAULT-SIM id=%d no-fault (cnt=%d)\n", iArg, faultsim_state.iCnt); } return SQLITE_OK; } if( faultsim_state.eVerbose>=1 ){ oputf("FAULT-SIM id=%d returns %d\n", iArg, faultsim_state.iErr); } faultsim_state.iCnt = faultsim_state.iInterval; return faultsim_state.iErr; } /* ** If an input line begins with "." then invoke this routine to ** process that line. ** ** Return 1 on error, 2 to exit, and 0 otherwise. */ |
︙ | ︙
24017 24018 24019 24020 24021 24022 24023 | n = strlen30(azArg[0]); c = azArg[0][0]; clearTempFile(p); #ifndef SQLITE_OMIT_AUTHORIZATION if( c=='a' && cli_strncmp(azArg[0], "auth", n)==0 ){ if( nArg!=2 ){ | | | 25997 25998 25999 26000 26001 26002 26003 26004 26005 26006 26007 26008 26009 26010 26011 | n = strlen30(azArg[0]); c = azArg[0][0]; clearTempFile(p); #ifndef SQLITE_OMIT_AUTHORIZATION if( c=='a' && cli_strncmp(azArg[0], "auth", n)==0 ){ if( nArg!=2 ){ eputz("Usage: .auth ON|OFF\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); if( booleanValue(azArg[1]) ){ sqlite3_set_authorizer(p->db, shellAuth, p); }else if( p->bSafeModePersist ){ |
︙ | ︙
24064 24065 24066 24067 24068 24069 24070 | if( cli_strcmp(z, "-append")==0 ){ zVfs = "apndvfs"; }else if( cli_strcmp(z, "-async")==0 ){ bAsync = 1; }else { | | | | | | | | | < | | 26044 26045 26046 26047 26048 26049 26050 26051 26052 26053 26054 26055 26056 26057 26058 26059 26060 26061 26062 26063 26064 26065 26066 26067 26068 26069 26070 26071 26072 26073 26074 26075 26076 26077 26078 26079 26080 26081 26082 26083 26084 26085 26086 26087 26088 26089 26090 26091 26092 26093 26094 26095 26096 26097 26098 26099 26100 26101 26102 26103 26104 26105 26106 26107 26108 26109 26110 26111 26112 26113 26114 26115 26116 26117 26118 26119 26120 26121 26122 26123 26124 26125 | if( cli_strcmp(z, "-append")==0 ){ zVfs = "apndvfs"; }else if( cli_strcmp(z, "-async")==0 ){ bAsync = 1; }else { eputf("unknown option: %s\n", azArg[j]); return 1; } }else if( zDestFile==0 ){ zDestFile = azArg[j]; }else if( zDb==0 ){ zDb = zDestFile; zDestFile = azArg[j]; }else{ eputz("Usage: .backup ?DB? ?OPTIONS? FILENAME\n"); return 1; } } if( zDestFile==0 ){ eputz("missing FILENAME argument on .backup\n"); return 1; } if( zDb==0 ) zDb = "main"; rc = sqlite3_open_v2(zDestFile, &pDest, SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, zVfs); if( rc!=SQLITE_OK ){ eputf("Error: cannot open \"%s\"\n", zDestFile); close_db(pDest); return 1; } if( bAsync ){ sqlite3_exec(pDest, "PRAGMA synchronous=OFF; PRAGMA journal_mode=OFF;", 0, 0, 0); } open_db(p, 0); pBackup = sqlite3_backup_init(pDest, "main", p->db, zDb); if( pBackup==0 ){ eputf("Error: %s\n", sqlite3_errmsg(pDest)); close_db(pDest); return 1; } while( (rc = sqlite3_backup_step(pBackup,100))==SQLITE_OK ){} sqlite3_backup_finish(pBackup); if( rc==SQLITE_DONE ){ rc = 0; }else{ eputf("Error: %s\n", sqlite3_errmsg(pDest)); rc = 1; } close_db(pDest); }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ if( c=='b' && n>=3 && cli_strncmp(azArg[0], "bail", n)==0 ){ if( nArg==2 ){ bail_on_error = booleanValue(azArg[1]); }else{ eputz("Usage: .bail on|off\n"); rc = 1; } 
}else /* Undocumented. Legacy only. See "crnl" below */ if( c=='b' && n>=3 && cli_strncmp(azArg[0], "binary", n)==0 ){ if( nArg==2 ){ if( booleanValue(azArg[1]) ){ setBinaryMode(p->out, 1); }else{ setTextMode(p->out, 1); } }else{ eputz("The \".binary\" command is deprecated. Use \".crnl\" instead.\n" "Usage: .binary on|off\n"); rc = 1; } }else /* The undocumented ".breakpoint" command causes a call to the no-op ** routine named test_breakpoint(). */ |
︙ | ︙
24156 24157 24158 24159 24160 24161 24162 | wchar_t *z = sqlite3_win32_utf8_to_unicode(azArg[1]); rc = !SetCurrentDirectoryW(z); sqlite3_free(z); #else rc = chdir(azArg[1]); #endif if( rc ){ | | | | | < | | | | | | | | | | | | < < | | | 26135 26136 26137 26138 26139 26140 26141 26142 26143 26144 26145 26146 26147 26148 26149 26150 26151 26152 26153 26154 26155 26156 26157 26158 26159 26160 26161 26162 26163 26164 26165 26166 26167 26168 26169 26170 26171 26172 26173 26174 26175 26176 26177 26178 26179 26180 26181 26182 26183 26184 26185 26186 26187 26188 26189 26190 26191 26192 26193 26194 26195 26196 26197 26198 26199 26200 26201 26202 26203 26204 26205 26206 26207 26208 26209 26210 26211 26212 26213 26214 26215 26216 26217 26218 26219 26220 26221 26222 26223 26224 26225 26226 26227 26228 26229 26230 26231 26232 26233 26234 26235 26236 26237 26238 26239 26240 26241 26242 26243 26244 26245 26246 26247 26248 26249 26250 26251 26252 26253 26254 26255 26256 26257 26258 26259 26260 26261 26262 26263 26264 26265 26266 26267 26268 26269 26270 26271 26272 26273 26274 26275 26276 26277 26278 26279 26280 26281 26282 26283 26284 26285 26286 26287 26288 26289 26290 26291 26292 26293 26294 26295 26296 | wchar_t *z = sqlite3_win32_utf8_to_unicode(azArg[1]); rc = !SetCurrentDirectoryW(z); sqlite3_free(z); #else rc = chdir(azArg[1]); #endif if( rc ){ eputf("Cannot change to directory \"%s\"\n", azArg[1]); rc = 1; } }else{ eputz("Usage: .cd DIRECTORY\n"); rc = 1; } }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ if( c=='c' && n>=3 && cli_strncmp(azArg[0], "changes", n)==0 ){ if( nArg==2 ){ setOrClearFlag(p, SHFLG_CountChanges, azArg[1]); }else{ eputz("Usage: .changes on|off\n"); rc = 1; } }else #ifndef SQLITE_SHELL_FIDDLE /* Cancel output redirection, if it is currently set (by .testcase) ** Then read the content of the testcase-out.txt file and compare against ** azArg[1]. If there are differences, report an error and exit. 
*/ if( c=='c' && n>=3 && cli_strncmp(azArg[0], "check", n)==0 ){ char *zRes = 0; output_reset(p); if( nArg!=2 ){ eputz("Usage: .check GLOB-PATTERN\n"); rc = 2; }else if( (zRes = readFile("testcase-out.txt", 0))==0 ){ rc = 2; }else if( testcase_glob(azArg[1],zRes)==0 ){ eputf("testcase-%s FAILED\n Expected: [%s]\n Got: [%s]\n", p->zTestcase, azArg[1], zRes); rc = 1; }else{ oputf("testcase-%s ok\n", p->zTestcase); p->nCheck++; } sqlite3_free(zRes); }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ #ifndef SQLITE_SHELL_FIDDLE if( c=='c' && cli_strncmp(azArg[0], "clone", n)==0 ){ failIfSafeMode(p, "cannot run .clone in safe mode"); if( nArg==2 ){ tryToClone(p, azArg[1]); }else{ eputz("Usage: .clone FILENAME\n"); rc = 1; } }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ if( c=='c' && cli_strncmp(azArg[0], "connection", n)==0 ){ if( nArg==1 ){ /* List available connections */ int i; for(i=0; i<ArraySize(p->aAuxDb); i++){ const char *zFile = p->aAuxDb[i].zDbFilename; if( p->aAuxDb[i].db==0 && p->pAuxDb!=&p->aAuxDb[i] ){ zFile = "(not open)"; }else if( zFile==0 ){ zFile = "(memory)"; }else if( zFile[0]==0 ){ zFile = "(temporary-file)"; } if( p->pAuxDb == &p->aAuxDb[i] ){ sputf(stdout, "ACTIVE %d: %s\n", i, zFile); }else if( p->aAuxDb[i].db!=0 ){ sputf(stdout, " %d: %s\n", i, zFile); } } }else if( nArg==2 && IsDigit(azArg[1][0]) && azArg[1][1]==0 ){ int i = azArg[1][0] - '0'; if( p->pAuxDb != &p->aAuxDb[i] && i>=0 && i<ArraySize(p->aAuxDb) ){ p->pAuxDb->db = p->db; p->pAuxDb = &p->aAuxDb[i]; globalDb = p->db = p->pAuxDb->db; p->pAuxDb->db = 0; } }else if( nArg==3 && cli_strcmp(azArg[1], "close")==0 && IsDigit(azArg[2][0]) && azArg[2][1]==0 ){ int i = azArg[2][0] - '0'; if( i<0 || i>=ArraySize(p->aAuxDb) ){ /* No-op */ }else if( p->pAuxDb == &p->aAuxDb[i] ){ eputz("cannot close the active database connection\n"); rc = 1; }else if( p->aAuxDb[i].db ){ session_close_all(p, i); close_db(p->aAuxDb[i].db); p->aAuxDb[i].db = 0; } }else{ eputz("Usage: .connection [close] 
[CONNECTION-NUMBER]\n"); rc = 1; } }else if( c=='c' && n==4 && cli_strncmp(azArg[0], "crnl", n)==0 ){ if( nArg==2 ){ if( booleanValue(azArg[1]) ){ setTextMode(p->out, 1); }else{ setBinaryMode(p->out, 1); } }else{ #if !defined(_WIN32) && !defined(WIN32) eputz("The \".crnl\" is a no-op on non-Windows machines.\n"); #endif eputz("Usage: .crnl on|off\n"); rc = 1; } }else if( c=='d' && n>1 && cli_strncmp(azArg[0], "databases", n)==0 ){ char **azName = 0; int nName = 0; sqlite3_stmt *pStmt; int i; open_db(p, 0); rc = sqlite3_prepare_v2(p->db, "PRAGMA database_list", -1, &pStmt, 0); if( rc ){ eputf("Error: %s\n", sqlite3_errmsg(p->db)); rc = 1; }else{ while( sqlite3_step(pStmt)==SQLITE_ROW ){ const char *zSchema = (const char *)sqlite3_column_text(pStmt,1); const char *zFile = (const char*)sqlite3_column_text(pStmt,2); if( zSchema==0 || zFile==0 ) continue; azName = sqlite3_realloc(azName, (nName+1)*2*sizeof(char*)); shell_check_oom(azName); azName[nName*2] = strdup(zSchema); azName[nName*2+1] = strdup(zFile); nName++; } } sqlite3_finalize(pStmt); for(i=0; i<nName; i++){ int eTxn = sqlite3_txn_state(p->db, azName[i*2]); int bRdonly = sqlite3_db_readonly(p->db, azName[i*2]); const char *z = azName[i*2+1]; oputf("%s: %s %s%s\n", azName[i*2], z && z[0] ? z : "\"\"", bRdonly ? "r/o" : "r/w", eTxn==SQLITE_TXN_NONE ? "" : eTxn==SQLITE_TXN_READ ? " read-txn" : " write-txn"); free(azName[i*2]); free(azName[i*2+1]); } sqlite3_free(azName); }else |
︙ | ︙
24346 24347 24348 24349 24350 24351 24352 | open_db(p, 0); for(ii=0; ii<ArraySize(aDbConfig); ii++){ if( nArg>1 && cli_strcmp(azArg[1], aDbConfig[ii].zName)!=0 ) continue; if( nArg>=3 ){ sqlite3_db_config(p->db, aDbConfig[ii].op, booleanValue(azArg[2]), 0); } sqlite3_db_config(p->db, aDbConfig[ii].op, -1, &v); | | | | | 26322 26323 26324 26325 26326 26327 26328 26329 26330 26331 26332 26333 26334 26335 26336 26337 26338 26339 26340 26341 | open_db(p, 0); for(ii=0; ii<ArraySize(aDbConfig); ii++){ if( nArg>1 && cli_strcmp(azArg[1], aDbConfig[ii].zName)!=0 ) continue; if( nArg>=3 ){ sqlite3_db_config(p->db, aDbConfig[ii].op, booleanValue(azArg[2]), 0); } sqlite3_db_config(p->db, aDbConfig[ii].op, -1, &v); oputf("%19s %s\n", aDbConfig[ii].zName, v ? "on" : "off"); if( nArg>1 ) break; } if( nArg>1 && ii==ArraySize(aDbConfig) ){ eputf("Error: unknown dbconfig \"%s\"\n", azArg[1]); eputz("Enter \".dbconfig\" with no arguments for a list\n"); } }else #if SQLITE_SHELL_HAVE_RECOVER if( c=='d' && n>=3 && cli_strncmp(azArg[0], "dbinfo", n)==0 ){ rc = shell_dbinfo_command(p, nArg, azArg); }else |
︙ | ︙
24381 24382 24383 24384 24385 24386 24387 | |SHFLG_DumpDataOnly|SHFLG_DumpNoSys); for(i=1; i<nArg; i++){ if( azArg[i][0]=='-' ){ const char *z = azArg[i]+1; if( z[0]=='-' ) z++; if( cli_strcmp(z,"preserve-rowids")==0 ){ #ifdef SQLITE_OMIT_VIRTUALTABLE | | | | | 26357 26358 26359 26360 26361 26362 26363 26364 26365 26366 26367 26368 26369 26370 26371 26372 26373 26374 26375 26376 26377 26378 26379 26380 26381 26382 26383 26384 26385 26386 26387 26388 26389 26390 | |SHFLG_DumpDataOnly|SHFLG_DumpNoSys); for(i=1; i<nArg; i++){ if( azArg[i][0]=='-' ){ const char *z = azArg[i]+1; if( z[0]=='-' ) z++; if( cli_strcmp(z,"preserve-rowids")==0 ){ #ifdef SQLITE_OMIT_VIRTUALTABLE eputz("The --preserve-rowids option is not compatible" " with SQLITE_OMIT_VIRTUALTABLE\n"); rc = 1; sqlite3_free(zLike); goto meta_command_exit; #else ShellSetFlag(p, SHFLG_PreserveRowid); #endif }else if( cli_strcmp(z,"newlines")==0 ){ ShellSetFlag(p, SHFLG_Newlines); }else if( cli_strcmp(z,"data-only")==0 ){ ShellSetFlag(p, SHFLG_DumpDataOnly); }else if( cli_strcmp(z,"nosys")==0 ){ ShellSetFlag(p, SHFLG_DumpNoSys); }else { eputf("Unknown option \"%s\" on \".dump\"\n", azArg[i]); rc = 1; sqlite3_free(zLike); goto meta_command_exit; } }else{ /* azArg[i] contains a LIKE pattern. This ".dump" request should ** only dump data for tables for which either the table name matches |
︙ | ︙
24430 24431 24432 24433 24434 24435 24436 24437 24438 24439 24440 | zLike = zExpr; } } } open_db(p, 0); if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){ /* When playing back a "dump", the content might appear in an order ** which causes immediate foreign key constraints to be violated. ** So disable foreign-key constraint enforcement to prevent problems. */ | > | | | 26406 26407 26408 26409 26410 26411 26412 26413 26414 26415 26416 26417 26418 26419 26420 26421 26422 26423 26424 26425 26426 | zLike = zExpr; } } } open_db(p, 0); outputDumpWarning(p, zLike); if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){ /* When playing back a "dump", the content might appear in an order ** which causes immediate foreign key constraints to be violated. ** So disable foreign-key constraint enforcement to prevent problems. */ oputz("PRAGMA foreign_keys=OFF;\n"); oputz("BEGIN TRANSACTION;\n"); } p->writableSchema = 0; p->showHeader = 0; /* Set writable_schema=ON since doing so forces SQLite to initialize ** as much of the schema as it can even if the sqlite_schema table is ** corrupt. */ sqlite3_exec(p->db, "SAVEPOINT dump; PRAGMA writable_schema=ON", 0, 0, 0); |
︙ | ︙
24458 24459 24460 24461 24462 24463 24464 | ); run_schema_dump_query(p,zSql); sqlite3_free(zSql); if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){ zSql = sqlite3_mprintf( "SELECT sql FROM sqlite_schema AS o " "WHERE (%s) AND sql NOT NULL" | | > | | | | 26435 26436 26437 26438 26439 26440 26441 26442 26443 26444 26445 26446 26447 26448 26449 26450 26451 26452 26453 26454 26455 26456 26457 26458 26459 26460 26461 26462 26463 26464 26465 26466 26467 26468 26469 26470 26471 26472 26473 26474 | ); run_schema_dump_query(p,zSql); sqlite3_free(zSql); if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){ zSql = sqlite3_mprintf( "SELECT sql FROM sqlite_schema AS o " "WHERE (%s) AND sql NOT NULL" " AND type IN ('index','trigger','view') " "ORDER BY type COLLATE NOCASE DESC", zLike ); run_table_dump_query(p, zSql); sqlite3_free(zSql); } sqlite3_free(zLike); if( p->writableSchema ){ oputz("PRAGMA writable_schema=OFF;\n"); p->writableSchema = 0; } sqlite3_exec(p->db, "PRAGMA writable_schema=OFF;", 0, 0, 0); sqlite3_exec(p->db, "RELEASE dump;", 0, 0, 0); if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){ oputz(p->nErr?"ROLLBACK; -- due to errors\n":"COMMIT;\n"); } p->showHeader = savedShowHeader; p->shellFlgs = savedShellFlags; }else if( c=='e' && cli_strncmp(azArg[0], "echo", n)==0 ){ if( nArg==2 ){ setOrClearFlag(p, SHFLG_Echo, azArg[1]); }else{ eputz("Usage: .echo on|off\n"); rc = 1; } }else if( c=='e' && cli_strncmp(azArg[0], "eqp", n)==0 ){ if( nArg==2 ){ p->autoEQPtest = 0; |
︙ | ︙
24513 24514 24515 24516 24517 24518 24519 | sqlite3_exec(p->db, "SELECT name FROM sqlite_schema LIMIT 1", 0, 0, 0); sqlite3_exec(p->db, "PRAGMA vdbe_trace=ON;", 0, 0, 0); #endif }else{ p->autoEQP = (u8)booleanValue(azArg[1]); } }else{ | | | 26491 26492 26493 26494 26495 26496 26497 26498 26499 26500 26501 26502 26503 26504 26505 | sqlite3_exec(p->db, "SELECT name FROM sqlite_schema LIMIT 1", 0, 0, 0); sqlite3_exec(p->db, "PRAGMA vdbe_trace=ON;", 0, 0, 0); #endif }else{ p->autoEQP = (u8)booleanValue(azArg[1]); } }else{ eputz("Usage: .eqp off|on|trace|trigger|full\n"); rc = 1; } }else #ifndef SQLITE_SHELL_FIDDLE if( c=='e' && cli_strncmp(azArg[0], "exit", n)==0 ){ if( nArg>1 && (rc = (int)integerValue(azArg[1]))!=0 ) exit(rc); |
︙ | ︙
24552 24553 24554 24555 24556 24557 24558 | p->autoExplain = 1; } }else #ifndef SQLITE_OMIT_VIRTUALTABLE if( c=='e' && cli_strncmp(azArg[0], "expert", n)==0 ){ if( p->bSafeMode ){ | < | | | 26530 26531 26532 26533 26534 26535 26536 26537 26538 26539 26540 26541 26542 26543 26544 26545 | p->autoExplain = 1; } }else #ifndef SQLITE_OMIT_VIRTUALTABLE if( c=='e' && cli_strncmp(azArg[0], "expert", n)==0 ){ if( p->bSafeMode ){ eputf("Cannot run experimental commands such as \"%s\" in safe mode\n", azArg[0]); rc = 1; }else{ open_db(p, 0); expertDotCommand(p, azArg, nArg); } }else #endif |
︙ | ︙
24610 24611 24612 24613 24614 24615 24616 | if( zCmd[0]=='-' && zCmd[1] ){ zCmd++; if( zCmd[0]=='-' && zCmd[1] ) zCmd++; } /* --help lists all file-controls */ if( cli_strcmp(zCmd,"help")==0 ){ | | < | | | | | | 26587 26588 26589 26590 26591 26592 26593 26594 26595 26596 26597 26598 26599 26600 26601 26602 26603 26604 26605 26606 26607 26608 26609 26610 26611 26612 26613 26614 26615 26616 26617 26618 26619 26620 26621 26622 26623 26624 26625 26626 26627 | if( zCmd[0]=='-' && zCmd[1] ){ zCmd++; if( zCmd[0]=='-' && zCmd[1] ) zCmd++; } /* --help lists all file-controls */ if( cli_strcmp(zCmd,"help")==0 ){ oputz("Available file-controls:\n"); for(i=0; i<ArraySize(aCtrl); i++){ oputf(" .filectrl %s %s\n", aCtrl[i].zCtrlName, aCtrl[i].zUsage); } rc = 1; goto meta_command_exit; } /* convert filectrl text option to value. allow any unique prefix ** of the option name, or a numerical value. */ n2 = strlen30(zCmd); for(i=0; i<ArraySize(aCtrl); i++){ if( cli_strncmp(zCmd, aCtrl[i].zCtrlName, n2)==0 ){ if( filectrl<0 ){ filectrl = aCtrl[i].ctrlCode; iCtrl = i; }else{ eputf("Error: ambiguous file-control: \"%s\"\n" "Use \".filectrl --help\" for help\n", zCmd); rc = 1; goto meta_command_exit; } } } if( filectrl<0 ){ eputf("Error: unknown file-control: %s\n" "Use \".filectrl --help\" for help\n", zCmd); }else{ switch(filectrl){ case SQLITE_FCNTL_SIZE_LIMIT: { if( nArg!=2 && nArg!=3 ) break; iRes = nArg==3 ? integerValue(azArg[2]) : -1; sqlite3_file_control(p->db, zSchema, SQLITE_FCNTL_SIZE_LIMIT, &iRes); isOk = 1; |
︙ | ︙
24680 24681 24682 24683 24684 24685 24686 | break; } case SQLITE_FCNTL_TEMPFILENAME: { char *z = 0; if( nArg!=2 ) break; sqlite3_file_control(p->db, zSchema, filectrl, &z); if( z ){ | | | | | | | 26656 26657 26658 26659 26660 26661 26662 26663 26664 26665 26666 26667 26668 26669 26670 26671 26672 26673 26674 26675 26676 26677 26678 26679 26680 26681 26682 26683 26684 26685 26686 26687 26688 26689 26690 26691 26692 26693 26694 26695 26696 26697 26698 26699 26700 26701 26702 26703 26704 26705 26706 26707 26708 26709 26710 26711 | break; } case SQLITE_FCNTL_TEMPFILENAME: { char *z = 0; if( nArg!=2 ) break; sqlite3_file_control(p->db, zSchema, filectrl, &z); if( z ){ oputf("%s\n", z); sqlite3_free(z); } isOk = 2; break; } case SQLITE_FCNTL_RESERVE_BYTES: { int x; if( nArg>=3 ){ x = atoi(azArg[2]); sqlite3_file_control(p->db, zSchema, filectrl, &x); } x = -1; sqlite3_file_control(p->db, zSchema, filectrl, &x); oputf("%d\n", x); isOk = 2; break; } } } if( isOk==0 && iCtrl>=0 ){ oputf("Usage: .filectrl %s %s\n", zCmd,aCtrl[iCtrl].zUsage); rc = 1; }else if( isOk==1 ){ char zBuf[100]; sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", iRes); oputf("%s\n", zBuf); } }else if( c=='f' && cli_strncmp(azArg[0], "fullschema", n)==0 ){ ShellState data; int doStats = 0; memcpy(&data, p, sizeof(data)); data.showHeader = 0; data.cMode = data.mode = MODE_Semi; if( nArg==2 && optionMatch(azArg[1], "indent") ){ data.cMode = data.mode = MODE_Pretty; nArg = 1; } if( nArg!=1 ){ eputz("Usage: .fullschema ?--indent?\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); rc = sqlite3_exec(p->db, "SELECT sql FROM" " (SELECT sql sql, type type, tbl_name tbl_name, name name, rowid x" |
︙ | ︙
24747 24748 24749 24750 24751 24752 24753 | -1, &pStmt, 0); if( rc==SQLITE_OK ){ doStats = sqlite3_step(pStmt)==SQLITE_ROW; sqlite3_finalize(pStmt); } } if( doStats==0 ){ | | | | | | | | | < | 26723 26724 26725 26726 26727 26728 26729 26730 26731 26732 26733 26734 26735 26736 26737 26738 26739 26740 26741 26742 26743 26744 26745 26746 26747 26748 26749 26750 26751 26752 26753 26754 26755 26756 26757 26758 26759 26760 26761 26762 26763 26764 26765 26766 26767 26768 26769 26770 26771 26772 26773 26774 26775 26776 26777 26778 26779 26780 26781 | -1, &pStmt, 0); if( rc==SQLITE_OK ){ doStats = sqlite3_step(pStmt)==SQLITE_ROW; sqlite3_finalize(pStmt); } } if( doStats==0 ){ oputz("/* No STAT tables available */\n"); }else{ oputz("ANALYZE sqlite_schema;\n"); data.cMode = data.mode = MODE_Insert; data.zDestTable = "sqlite_stat1"; shell_exec(&data, "SELECT * FROM sqlite_stat1", 0); data.zDestTable = "sqlite_stat4"; shell_exec(&data, "SELECT * FROM sqlite_stat4", 0); oputz("ANALYZE sqlite_schema;\n"); } }else if( c=='h' && cli_strncmp(azArg[0], "headers", n)==0 ){ if( nArg==2 ){ p->showHeader = booleanValue(azArg[1]); p->shellFlgs |= SHFLG_HeaderSet; }else{ eputz("Usage: .headers on|off\n"); rc = 1; } }else if( c=='h' && cli_strncmp(azArg[0], "help", n)==0 ){ if( nArg>=2 ){ n = showHelp(p->out, azArg[1]); if( n==0 ){ oputf("Nothing matches '%s'\n", azArg[1]); } }else{ showHelp(p->out, 0); } }else #ifndef SQLITE_SHELL_FIDDLE if( c=='i' && cli_strncmp(azArg[0], "import", n)==0 ){ char *zTable = 0; /* Insert data into this table */ char *zSchema = 0; /* Schema of zTable */ char *zFile = 0; /* Name of file to extra content from */ sqlite3_stmt *pStmt = NULL; /* A statement */ int nCol; /* Number of columns in the table */ i64 nByte; /* Number of bytes in an SQL string */ int i, j; /* Loop counters */ int needCommit; /* True to COMMIT or ROLLBACK at end */ int nSep; /* Number of bytes in p->colSeparator[] */ char *zSql = 0; /* An SQL statement */ ImportCtx sCtx; /* Reader context 
*/ char *(SQLITE_CDECL *xRead)(ImportCtx*); /* Func to read one value */ int eVerbose = 0; /* Larger for more console output */ int nSkip = 0; /* Initial lines to skip */ int useOutputMode = 1; /* Use output mode to determine separators */ char *zCreate = 0; /* CREATE TABLE statement text */ |
︙ | ︙
24817 24818 24819 24820 24821 24822 24823 | if( z[0]=='-' && z[1]=='-' ) z++; if( z[0]!='-' ){ if( zFile==0 ){ zFile = z; }else if( zTable==0 ){ zTable = z; }else{ | | | | | < | < | < | | | | | | | | | | < < < < < < < < < < < < | > | > | | | | < < < < > > > > | > > | | > > > > > > > > > > > | | < < | | > > | | > > > > | > > > > | > > | | > > < < | 26792 26793 26794 26795 26796 26797 26798 26799 26800 26801 26802 26803 26804 26805 26806 26807 26808 26809 26810 26811 26812 26813 26814 26815 26816 26817 26818 26819 26820 26821 26822 26823 26824 26825 26826 26827 26828 26829 26830 26831 26832 26833 26834 26835 26836 26837 26838 26839 26840 26841 26842 26843 26844 26845 26846 26847 26848 26849 26850 26851 26852 26853 26854 26855 26856 26857 26858 26859 26860 26861 26862 26863 26864 26865 26866 26867 26868 26869 26870 26871 26872 26873 26874 26875 26876 26877 26878 26879 26880 26881 26882 26883 26884 26885 26886 26887 26888 26889 26890 26891 26892 26893 26894 26895 26896 26897 26898 26899 26900 26901 26902 26903 26904 26905 26906 26907 26908 26909 26910 26911 26912 26913 26914 26915 26916 26917 26918 26919 26920 26921 26922 26923 26924 26925 26926 26927 26928 26929 26930 26931 26932 26933 26934 26935 26936 26937 26938 26939 26940 26941 26942 26943 26944 26945 26946 26947 26948 26949 26950 26951 26952 26953 26954 26955 26956 26957 26958 26959 26960 26961 26962 26963 26964 26965 26966 26967 26968 26969 26970 26971 26972 26973 26974 26975 26976 26977 26978 26979 26980 26981 26982 26983 26984 26985 26986 26987 26988 26989 26990 26991 26992 26993 26994 26995 26996 26997 26998 26999 27000 27001 27002 27003 27004 27005 27006 27007 27008 27009 27010 27011 27012 27013 27014 | if( z[0]=='-' && z[1]=='-' ) z++; if( z[0]!='-' ){ if( zFile==0 ){ zFile = z; }else if( zTable==0 ){ zTable = z; }else{ oputf("ERROR: extra argument: \"%s\". 
Usage:\n", z); showHelp(p->out, "import"); goto meta_command_exit; } }else if( cli_strcmp(z,"-v")==0 ){ eVerbose++; }else if( cli_strcmp(z,"-schema")==0 && i<nArg-1 ){ zSchema = azArg[++i]; }else if( cli_strcmp(z,"-skip")==0 && i<nArg-1 ){ nSkip = integerValue(azArg[++i]); }else if( cli_strcmp(z,"-ascii")==0 ){ sCtx.cColSep = SEP_Unit[0]; sCtx.cRowSep = SEP_Record[0]; xRead = ascii_read_one_field; useOutputMode = 0; }else if( cli_strcmp(z,"-csv")==0 ){ sCtx.cColSep = ','; sCtx.cRowSep = '\n'; xRead = csv_read_one_field; useOutputMode = 0; }else{ oputf("ERROR: unknown option: \"%s\". Usage:\n", z); showHelp(p->out, "import"); goto meta_command_exit; } } if( zTable==0 ){ oputf("ERROR: missing %s argument. Usage:\n", zFile==0 ? "FILE" : "TABLE"); showHelp(p->out, "import"); goto meta_command_exit; } seenInterrupt = 0; open_db(p, 0); if( useOutputMode ){ /* If neither the --csv or --ascii options are specified, then set ** the column and row separator characters from the output mode. */ nSep = strlen30(p->colSeparator); if( nSep==0 ){ eputz("Error: non-null column separator required for import\n"); goto meta_command_exit; } if( nSep>1 ){ eputz("Error: multi-character column separators not allowed" " for import\n"); goto meta_command_exit; } nSep = strlen30(p->rowSeparator); if( nSep==0 ){ eputz("Error: non-null row separator required for import\n"); goto meta_command_exit; } if( nSep==2 && p->mode==MODE_Csv && cli_strcmp(p->rowSeparator,SEP_CrLf)==0 ){ /* When importing CSV (only), if the row separator is set to the ** default output row separator, change it to the default input ** row separator. This avoids having to maintain different input ** and output row separators. 
*/ sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Row); nSep = strlen30(p->rowSeparator); } if( nSep>1 ){ eputz("Error: multi-character row separators not allowed" " for import\n"); goto meta_command_exit; } sCtx.cColSep = (u8)p->colSeparator[0]; sCtx.cRowSep = (u8)p->rowSeparator[0]; } sCtx.zFile = zFile; sCtx.nLine = 1; if( sCtx.zFile[0]=='|' ){ #ifdef SQLITE_OMIT_POPEN eputz("Error: pipes are not supported in this OS\n"); goto meta_command_exit; #else sCtx.in = popen(sCtx.zFile+1, "r"); sCtx.zFile = "<pipe>"; sCtx.xCloser = pclose; #endif }else{ sCtx.in = fopen(sCtx.zFile, "rb"); sCtx.xCloser = fclose; } if( sCtx.in==0 ){ eputf("Error: cannot open \"%s\"\n", zFile); goto meta_command_exit; } if( eVerbose>=2 || (eVerbose>=1 && useOutputMode) ){ char zSep[2]; zSep[1] = 0; zSep[0] = sCtx.cColSep; oputz("Column separator "); output_c_string(zSep); oputz(", row separator "); zSep[0] = sCtx.cRowSep; output_c_string(zSep); oputz("\n"); } sCtx.z = sqlite3_malloc64(120); if( sCtx.z==0 ){ import_cleanup(&sCtx); shell_out_of_memory(); } /* Below, resources must be freed before exit. */ while( (nSkip--)>0 ){ while( xRead(&sCtx) && sCtx.cTerm==sCtx.cColSep ){} } import_append_char(&sCtx, 0); /* To ensure sCtx.z is allocated */ if( sqlite3_table_column_metadata(p->db, zSchema, zTable,0,0,0,0,0,0) ){ /* Table does not exist. Create it. */ sqlite3 *dbCols = 0; char *zRenames = 0; char *zColDefs; zCreate = sqlite3_mprintf("CREATE TABLE \"%w\".\"%w\"", zSchema ? zSchema : "main", zTable); while( xRead(&sCtx) ){ zAutoColumn(sCtx.z, &dbCols, 0); if( sCtx.cTerm!=sCtx.cColSep ) break; } zColDefs = zAutoColumn(0, &dbCols, &zRenames); if( zRenames!=0 ){ sputf((stdin_is_interactive && p->in==stdin)? 
p->out : stderr, "Columns renamed during .import %s due to duplicates:\n" "%s\n", sCtx.zFile, zRenames); sqlite3_free(zRenames); } assert(dbCols==0); if( zColDefs==0 ){ eputf("%s: empty file\n", sCtx.zFile); import_cleanup(&sCtx); rc = 1; goto meta_command_exit; } zCreate = sqlite3_mprintf("%z%z\n", zCreate, zColDefs); if( zCreate==0 ){ import_cleanup(&sCtx); shell_out_of_memory(); } if( eVerbose>=1 ){ oputf("%s\n", zCreate); } rc = sqlite3_exec(p->db, zCreate, 0, 0, 0); sqlite3_free(zCreate); zCreate = 0; if( rc ){ eputf("%s failed:\n%s\n", zCreate, sqlite3_errmsg(p->db)); import_cleanup(&sCtx); rc = 1; goto meta_command_exit; } } zSql = sqlite3_mprintf("SELECT count(*) FROM pragma_table_info(%Q,%Q);", zTable, zSchema); if( zSql==0 ){ import_cleanup(&sCtx); shell_out_of_memory(); } nByte = strlen(zSql); rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); sqlite3_free(zSql); zSql = 0; if( rc ){ if (pStmt) sqlite3_finalize(pStmt); eputf("Error: %s\n", sqlite3_errmsg(p->db)); import_cleanup(&sCtx); rc = 1; goto meta_command_exit; } if( sqlite3_step(pStmt)==SQLITE_ROW ){ nCol = sqlite3_column_int(pStmt, 0); }else{ nCol = 0; } sqlite3_finalize(pStmt); pStmt = 0; if( nCol==0 ) return 0; /* no columns, no error */ zSql = sqlite3_malloc64( nByte*2 + 20 + nCol*2 ); if( zSql==0 ){ import_cleanup(&sCtx); shell_out_of_memory(); } if( zSchema ){ sqlite3_snprintf(nByte+20, zSql, "INSERT INTO \"%w\".\"%w\" VALUES(?", zSchema, zTable); }else{ sqlite3_snprintf(nByte+20, zSql, "INSERT INTO \"%w\" VALUES(?", zTable); } j = strlen30(zSql); for(i=1; i<nCol; i++){ zSql[j++] = ','; zSql[j++] = '?'; } zSql[j++] = ')'; zSql[j] = 0; if( eVerbose>=2 ){ oputf("Insert using: %s\n", zSql); } rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); sqlite3_free(zSql); zSql = 0; if( rc ){ eputf("Error: %s\n", sqlite3_errmsg(p->db)); if (pStmt) sqlite3_finalize(pStmt); import_cleanup(&sCtx); rc = 1; goto meta_command_exit; } needCommit = sqlite3_get_autocommit(p->db); if( needCommit ) 
sqlite3_exec(p->db, "BEGIN", 0, 0, 0); do{ int startLine = sCtx.nLine; for(i=0; i<nCol; i++){ char *z = xRead(&sCtx); /* |
︙ | ︙
25043 25044 25045 25046 25047 25048 25049 | ** (If there are too few fields, it's not valid CSV anyway.) */ if( z==0 && (xRead==csv_read_one_field) && i==nCol-1 && i>0 ){ z = ""; } sqlite3_bind_text(pStmt, i+1, z, -1, SQLITE_TRANSIENT); if( i<nCol-1 && sCtx.cTerm!=sCtx.cColSep ){ | | | | | < | | | < | | | | | | | 27028 27029 27030 27031 27032 27033 27034 27035 27036 27037 27038 27039 27040 27041 27042 27043 27044 27045 27046 27047 27048 27049 27050 27051 27052 27053 27054 27055 27056 27057 27058 27059 27060 27061 27062 27063 27064 27065 27066 27067 27068 27069 27070 27071 27072 27073 27074 27075 27076 27077 27078 27079 27080 27081 27082 27083 27084 27085 27086 27087 27088 27089 27090 27091 27092 27093 27094 27095 27096 27097 | ** (If there are too few fields, it's not valid CSV anyway.) */ if( z==0 && (xRead==csv_read_one_field) && i==nCol-1 && i>0 ){ z = ""; } sqlite3_bind_text(pStmt, i+1, z, -1, SQLITE_TRANSIENT); if( i<nCol-1 && sCtx.cTerm!=sCtx.cColSep ){ eputf("%s:%d: expected %d columns but found %d" " - filling the rest with NULL\n", sCtx.zFile, startLine, nCol, i+1); i += 2; while( i<=nCol ){ sqlite3_bind_null(pStmt, i); i++; } } } if( sCtx.cTerm==sCtx.cColSep ){ do{ xRead(&sCtx); i++; }while( sCtx.cTerm==sCtx.cColSep ); eputf("%s:%d: expected %d columns but found %d - extras ignored\n", sCtx.zFile, startLine, nCol, i); } if( i>=nCol ){ sqlite3_step(pStmt); rc = sqlite3_reset(pStmt); if( rc!=SQLITE_OK ){ eputf("%s:%d: INSERT failed: %s\n", sCtx.zFile, startLine, sqlite3_errmsg(p->db)); sCtx.nErr++; }else{ sCtx.nRow++; } } }while( sCtx.cTerm!=EOF ); import_cleanup(&sCtx); sqlite3_finalize(pStmt); if( needCommit ) sqlite3_exec(p->db, "COMMIT", 0, 0, 0); if( eVerbose>0 ){ oputf("Added %d rows with %d errors using %d lines of input\n", sCtx.nRow, sCtx.nErr, sCtx.nLine-1); } }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ #ifndef SQLITE_UNTESTABLE if( c=='i' && cli_strncmp(azArg[0], "imposter", n)==0 ){ char *zSql; char *zCollist = 0; sqlite3_stmt *pStmt; int 
tnum = 0; int isWO = 0; /* True if making an imposter of a WITHOUT ROWID table */ int lenPK = 0; /* Length of the PRIMARY KEY string for isWO tables */ int i; if( !ShellHasFlag(p,SHFLG_TestingMode) ){ eputf(".%s unavailable without --unsafe-testing\n", "imposter"); rc = 1; goto meta_command_exit; } if( !(nArg==3 || (nArg==2 && sqlite3_stricmp(azArg[1],"off")==0)) ){ eputz("Usage: .imposter INDEX IMPOSTER\n" " .imposter off\n"); /* Also allowed, but not documented: ** ** .imposter TABLE IMPOSTER ** ** where TABLE is a WITHOUT ROWID table. In that case, the ** imposter is another WITHOUT ROWID table with the columns in ** storage order. */ |
︙
25159 25160 25161 25162 25163 25164 25165 | zCollist = sqlite3_mprintf("\"%w\"", zCol); }else{ zCollist = sqlite3_mprintf("%z,\"%w\"", zCollist, zCol); } } sqlite3_finalize(pStmt); if( i==0 || tnum==0 ){ | | | | < | | < | > > > > > > > > > > > > > > > | | 27142 27143 27144 27145 27146 27147 27148 27149 27150 27151 27152 27153 27154 27155 27156 27157 27158 27159 27160 27161 27162 27163 27164 27165 27166 27167 27168 27169 27170 27171 27172 27173 27174 27175 27176 27177 27178 27179 27180 27181 27182 27183 27184 27185 27186 27187 27188 27189 27190 27191 27192 27193 27194 27195 27196 27197 27198 27199 27200 27201 27202 27203 27204 27205 27206 27207 27208 27209 27210 27211 27212 27213 | zCollist = sqlite3_mprintf("\"%w\"", zCol); }else{ zCollist = sqlite3_mprintf("%z,\"%w\"", zCollist, zCol); } } sqlite3_finalize(pStmt); if( i==0 || tnum==0 ){ eputf("no such index: \"%s\"\n", azArg[1]); rc = 1; sqlite3_free(zCollist); goto meta_command_exit; } if( lenPK==0 ) lenPK = 100000; zSql = sqlite3_mprintf( "CREATE TABLE \"%w\"(%s,PRIMARY KEY(%.*s))WITHOUT ROWID", azArg[2], zCollist, lenPK, zCollist); sqlite3_free(zCollist); rc = sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->db, "main", 1, tnum); if( rc==SQLITE_OK ){ rc = sqlite3_exec(p->db, zSql, 0, 0, 0); sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->db, "main", 0, 0); if( rc ){ eputf("Error in [%s]: %s\n", zSql, sqlite3_errmsg(p->db)); }else{ sputf(stdout, "%s;\n", zSql); sputf(stdout, "WARNING: writing to an imposter table will corrupt" " the \"%s\" %s!\n", azArg[1], isWO ? 
"table" : "index"); } }else{ eputf("SQLITE_TESTCTRL_IMPOSTER returns %d\n", rc); rc = 1; } sqlite3_free(zSql); }else #endif /* !defined(SQLITE_OMIT_TEST_CONTROL) */ if( c=='i' && cli_strncmp(azArg[0], "intck", n)==0 ){ i64 iArg = 0; if( nArg==2 ){ iArg = integerValue(azArg[1]); if( iArg==0 ) iArg = -1; } if( (nArg!=1 && nArg!=2) || iArg<0 ){ eputf("%s","Usage: .intck STEPS_PER_UNLOCK\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); rc = intckDatabaseCmd(p, iArg); }else #ifdef SQLITE_ENABLE_IOTRACE if( c=='i' && cli_strncmp(azArg[0], "iotrace", n)==0 ){ SQLITE_API extern void (SQLITE_CDECL *sqlite3IoTrace)(const char*, ...); if( iotrace && iotrace!=stdout ) fclose(iotrace); iotrace = 0; if( nArg<2 ){ sqlite3IoTrace = 0; }else if( cli_strcmp(azArg[1], "-")==0 ){ sqlite3IoTrace = iotracePrintf; iotrace = stdout; }else{ iotrace = fopen(azArg[1], "w"); if( iotrace==0 ){ eputf("Error: cannot open \"%s\"\n", azArg[1]); sqlite3IoTrace = 0; rc = 1; }else{ sqlite3IoTrace = iotracePrintf; } } }else |
︙
25235 25236 25237 25238 25239 25240 25241 | { "trigger_depth", SQLITE_LIMIT_TRIGGER_DEPTH }, { "worker_threads", SQLITE_LIMIT_WORKER_THREADS }, }; int i, n2; open_db(p, 0); if( nArg==1 ){ for(i=0; i<ArraySize(aLimit); i++){ | | | | | | | | | | | | | | | | 27231 27232 27233 27234 27235 27236 27237 27238 27239 27240 27241 27242 27243 27244 27245 27246 27247 27248 27249 27250 27251 27252 27253 27254 27255 27256 27257 27258 27259 27260 27261 27262 27263 27264 27265 27266 27267 27268 27269 27270 27271 27272 27273 27274 27275 27276 27277 27278 27279 27280 27281 27282 27283 27284 27285 27286 27287 27288 27289 27290 27291 27292 27293 27294 27295 27296 27297 27298 27299 27300 27301 27302 27303 27304 27305 27306 27307 27308 27309 27310 27311 27312 27313 27314 27315 27316 27317 27318 27319 27320 27321 | { "trigger_depth", SQLITE_LIMIT_TRIGGER_DEPTH }, { "worker_threads", SQLITE_LIMIT_WORKER_THREADS }, }; int i, n2; open_db(p, 0); if( nArg==1 ){ for(i=0; i<ArraySize(aLimit); i++){ sputf(stdout, "%20s %d\n", aLimit[i].zLimitName, sqlite3_limit(p->db, aLimit[i].limitCode, -1)); } }else if( nArg>3 ){ eputz("Usage: .limit NAME ?NEW-VALUE?\n"); rc = 1; goto meta_command_exit; }else{ int iLimit = -1; n2 = strlen30(azArg[1]); for(i=0; i<ArraySize(aLimit); i++){ if( sqlite3_strnicmp(aLimit[i].zLimitName, azArg[1], n2)==0 ){ if( iLimit<0 ){ iLimit = i; }else{ eputf("ambiguous limit: \"%s\"\n", azArg[1]); rc = 1; goto meta_command_exit; } } } if( iLimit<0 ){ eputf("unknown limit: \"%s\"\n" "enter \".limits\" with no arguments for a list.\n", azArg[1]); rc = 1; goto meta_command_exit; } if( nArg==3 ){ sqlite3_limit(p->db, aLimit[iLimit].limitCode, (int)integerValue(azArg[2])); } sputf(stdout, "%20s %d\n", aLimit[iLimit].zLimitName, sqlite3_limit(p->db, aLimit[iLimit].limitCode, -1)); } }else if( c=='l' && n>2 && cli_strncmp(azArg[0], "lint", n)==0 ){ open_db(p, 0); lintDotCommand(p, azArg, nArg); }else #if !defined(SQLITE_OMIT_LOAD_EXTENSION) && !defined(SQLITE_SHELL_FIDDLE) if( c=='l' 
&& cli_strncmp(azArg[0], "load", n)==0 ){ const char *zFile, *zProc; char *zErrMsg = 0; failIfSafeMode(p, "cannot run .load in safe mode"); if( nArg<2 || azArg[1][0]==0 ){ /* Must have a non-empty FILE. (Will not load self.) */ eputz("Usage: .load FILE ?ENTRYPOINT?\n"); rc = 1; goto meta_command_exit; } zFile = azArg[1]; zProc = nArg>=3 ? azArg[2] : 0; open_db(p, 0); rc = sqlite3_load_extension(p->db, zFile, zProc, &zErrMsg); if( rc!=SQLITE_OK ){ eputf("Error: %s\n", zErrMsg); sqlite3_free(zErrMsg); rc = 1; } }else #endif if( c=='l' && cli_strncmp(azArg[0], "log", n)==0 ){ if( nArg!=2 ){ eputz("Usage: .log FILENAME\n"); rc = 1; }else{ const char *zFile = azArg[1]; if( p->bSafeMode && cli_strcmp(zFile,"on")!=0 && cli_strcmp(zFile,"off")!=0 ){ sputz(stdout, "cannot set .log to anything other" " than \"on\" or \"off\"\n"); zFile = "off"; } output_file_close(p->pLog); if( cli_strcmp(zFile,"on")==0 ) zFile = "stdout"; p->pLog = output_file_open(zFile, 0); } }else |
︙
25350 25351 25352 25353 25354 25355 25356 | ColModeOpts cmo = ColModeOpts_default_qbox; zMode = "box"; cmOpts = cmo; } }else if( zTabname==0 ){ zTabname = z; }else if( z[0]=='-' ){ | | | | | | | | | < < | | | | | | 27346 27347 27348 27349 27350 27351 27352 27353 27354 27355 27356 27357 27358 27359 27360 27361 27362 27363 27364 27365 27366 27367 27368 27369 27370 27371 27372 27373 27374 27375 27376 27377 27378 27379 27380 27381 27382 27383 27384 | ColModeOpts cmo = ColModeOpts_default_qbox; zMode = "box"; cmOpts = cmo; } }else if( zTabname==0 ){ zTabname = z; }else if( z[0]=='-' ){ eputf("unknown option: %s\n", z); eputz("options:\n" " --noquote\n" " --quote\n" " --wordwrap on/off\n" " --wrap N\n" " --ww\n"); rc = 1; goto meta_command_exit; }else{ eputf("extra argument: \"%s\"\n", z); rc = 1; goto meta_command_exit; } } if( zMode==0 ){ if( p->mode==MODE_Column || (p->mode>=MODE_Markdown && p->mode<=MODE_Box) ){ oputf("current output mode: %s --wrap %d --wordwrap %s --%squote\n", modeDescr[p->mode], p->cmOpts.iWrap, p->cmOpts.bWordWrap ? "on" : "off", p->cmOpts.bQuote ? "" : "no"); }else{ oputf("current output mode: %s\n", modeDescr[p->mode]); } zMode = modeDescr[p->mode]; } n2 = strlen30(zMode); if( cli_strncmp(zMode,"lines",n2)==0 ){ p->mode = MODE_Line; sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Row); |
︙
      }else if( cli_strncmp(zMode,"count",n2)==0 ){
        p->mode = MODE_Count;
      }else if( cli_strncmp(zMode,"off",n2)==0 ){
        p->mode = MODE_Off;
      }else if( cli_strncmp(zMode,"json",n2)==0 ){
        p->mode = MODE_Json;
      }else{
        eputz("Error: mode should be one of: "
              "ascii box column csv html insert json line list markdown "
              "qbox quote table tabs tcl\n");
        rc = 1;
      }
      p->cMode = p->mode;
    }else

#ifndef SQLITE_SHELL_FIDDLE
    if( c=='n' && cli_strcmp(azArg[0], "nonce")==0 ){
      if( nArg!=2 ){
        eputz("Usage: .nonce NONCE\n");
        rc = 1;
      }else if( p->zNonce==0 || cli_strcmp(azArg[1],p->zNonce)!=0 ){
        eputf("line %d: incorrect nonce: \"%s\"\n", p->lineno, azArg[1]);
        exit(1);
      }else{
        p->bSafeMode = 0;
        return 0;  /* Return immediately to bypass the safe mode reset
                   ** at the end of this procedure */
      }
    }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

    if( c=='n' && cli_strncmp(azArg[0], "nullvalue", n)==0 ){
      if( nArg==2 ){
        sqlite3_snprintf(sizeof(p->nullValue), p->nullValue,
                         "%.*s", (int)ArraySize(p->nullValue)-1, azArg[1]);
      }else{
        eputz("Usage: .nullvalue STRING\n");
        rc = 1;
      }
    }else

    if( c=='o' && cli_strncmp(azArg[0], "open", n)==0 && n>=2 ){
      const char *zFN = 0;     /* Pointer to constant filename */
      char *zNewFilename = 0;  /* Name of the database file to open */
︙
        openMode = SHELL_OPEN_HEXDB;
      }else if( optionMatch(z, "maxsize") && iName+1<nArg ){
        p->szMax = integerValue(azArg[++iName]);
#endif /* SQLITE_OMIT_DESERIALIZE */
      }else
#endif /* !SQLITE_SHELL_FIDDLE */
      if( z[0]=='-' ){
        eputf("unknown option: %s\n", z);
        rc = 1;
        goto meta_command_exit;
      }else if( zFN ){
        eputf("extra argument: \"%s\"\n", z);
        rc = 1;
        goto meta_command_exit;
      }else{
        zFN = z;
      }
    }
︙
      shell_check_oom(zNewFilename);
    }else{
      zNewFilename = 0;
    }
    p->pAuxDb->zDbFilename = zNewFilename;
    open_db(p, OPEN_DB_KEEPALIVE);
    if( p->db==0 ){
      eputf("Error: cannot open '%s'\n", zNewFilename);
      sqlite3_free(zNewFilename);
    }else{
      p->pAuxDb->zFreeOnClose = zNewFilename;
    }
  }
  if( p->db==0 ){
    /* As a fall-back open a TEMP database */
︙
25574 25575 25576 25577 25578 25579 25580 | || (c=='e' && n==5 && cli_strcmp(azArg[0],"excel")==0) ){ char *zFile = 0; int bTxtMode = 0; int i; int eMode = 0; int bOnce = 0; /* 0: .output, 1: .once, 2: .excel */ | > | < < < < | | < | < | 27568 27569 27570 27571 27572 27573 27574 27575 27576 27577 27578 27579 27580 27581 27582 27583 27584 27585 27586 27587 27588 27589 27590 27591 27592 27593 27594 27595 27596 27597 27598 27599 27600 27601 27602 27603 27604 27605 27606 27607 27608 27609 27610 27611 27612 27613 27614 27615 | || (c=='e' && n==5 && cli_strcmp(azArg[0],"excel")==0) ){ char *zFile = 0; int bTxtMode = 0; int i; int eMode = 0; int bOnce = 0; /* 0: .output, 1: .once, 2: .excel */ static const char *zBomUtf8 = "\xef\xbb\xbf"; const char *zBom = 0; failIfSafeMode(p, "cannot run .%s in safe mode", azArg[0]); if( c=='e' ){ eMode = 'x'; bOnce = 2; }else if( cli_strncmp(azArg[0],"once",n)==0 ){ bOnce = 1; } for(i=1; i<nArg; i++){ char *z = azArg[i]; if( z[0]=='-' ){ if( z[1]=='-' ) z++; if( cli_strcmp(z,"-bom")==0 ){ zBom = zBomUtf8; }else if( c!='e' && cli_strcmp(z,"-x")==0 ){ eMode = 'x'; /* spreadsheet */ }else if( c!='e' && cli_strcmp(z,"-e")==0 ){ eMode = 'e'; /* text editor */ }else{ oputf("ERROR: unknown option: \"%s\". Usage:\n", azArg[i]); showHelp(p->out, azArg[0]); rc = 1; goto meta_command_exit; } }else if( zFile==0 && eMode!='e' && eMode!='x' ){ zFile = sqlite3_mprintf("%s", z); if( zFile && zFile[0]=='|' ){ while( i+1<nArg ) zFile = sqlite3_mprintf("%z %s", zFile, azArg[++i]); break; } }else{ oputf("ERROR: extra parameter: \"%s\". Usage:\n", azArg[i]); showHelp(p->out, azArg[0]); rc = 1; sqlite3_free(zFile); goto meta_command_exit; } } if( zFile==0 ){ |
︙
25651 25652 25653 25654 25655 25656 25657 | sqlite3_free(zFile); zFile = sqlite3_mprintf("%s", p->zTempFile); } #endif /* SQLITE_NOHAVE_SYSTEM */ shell_check_oom(zFile); if( zFile[0]=='|' ){ #ifdef SQLITE_OMIT_POPEN | | | | | | < > | | | | < > | | 27640 27641 27642 27643 27644 27645 27646 27647 27648 27649 27650 27651 27652 27653 27654 27655 27656 27657 27658 27659 27660 27661 27662 27663 27664 27665 27666 27667 27668 27669 27670 27671 27672 27673 27674 27675 27676 27677 | sqlite3_free(zFile); zFile = sqlite3_mprintf("%s", p->zTempFile); } #endif /* SQLITE_NOHAVE_SYSTEM */ shell_check_oom(zFile); if( zFile[0]=='|' ){ #ifdef SQLITE_OMIT_POPEN eputz("Error: pipes are not supported in this OS\n"); rc = 1; output_redir(p, stdout); #else FILE *pfPipe = popen(zFile + 1, "w"); if( pfPipe==0 ){ eputf("Error: cannot open pipe \"%s\"\n", zFile + 1); rc = 1; }else{ output_redir(p, pfPipe); if( zBom ) oputz(zBom); sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", zFile); } #endif }else{ FILE *pfFile = output_file_open(zFile, bTxtMode); if( pfFile==0 ){ if( cli_strcmp(zFile,"off")!=0 ){ eputf("Error: cannot write to \"%s\"\n", zFile); } rc = 1; } else { output_redir(p, pfFile); if( zBom ) oputz(zBom); sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", zFile); } } sqlite3_free(zFile); }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ |
︙
        sqlite3_finalize(pStmt);
        pStmt = 0;
        if( len ){
          rx = sqlite3_prepare_v2(p->db, "SELECT key, quote(value) "
               "FROM temp.sqlite_parameters;", -1, &pStmt, 0);
          while( rx==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){
            oputf("%-*s %s\n", len, sqlite3_column_text(pStmt,0),
                  sqlite3_column_text(pStmt,1));
          }
          sqlite3_finalize(pStmt);
        }
      }else

      /* .parameter init
      ** Make sure the TEMP table used to hold bind parameters exists.
︙
        zSql = sqlite3_mprintf(
                "REPLACE INTO temp.sqlite_parameters(key,value)"
                "VALUES(%Q,%Q);", zKey, zValue);
        shell_check_oom(zSql);
        rx = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);
        sqlite3_free(zSql);
        if( rx!=SQLITE_OK ){
          oputf("Error: %s\n", sqlite3_errmsg(p->db));
          sqlite3_finalize(pStmt);
          pStmt = 0;
          rc = 1;
        }
      }
      sqlite3_step(pStmt);
      sqlite3_finalize(pStmt);
︙
  parameter_syntax_error:
    showHelp(p->out, "parameter");
  }else

  if( c=='p' && n>=3 && cli_strncmp(azArg[0], "print", n)==0 ){
    int i;
    for(i=1; i<nArg; i++){
      if( i>1 ) oputz(" ");
      oputz(azArg[i]);
    }
    oputz("\n");
  }else

#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
  if( c=='p' && n>=3 && cli_strncmp(azArg[0], "progress", n)==0 ){
    int i;
    int nn = 0;
    p->flgProgress = 0;
︙
      }
      if( cli_strcmp(z,"once")==0 ){
        p->flgProgress |= SHELL_PROGRESS_ONCE;
        continue;
      }
      if( cli_strcmp(z,"limit")==0 ){
        if( i+1>=nArg ){
          eputz("Error: missing argument on --limit\n");
          rc = 1;
          goto meta_command_exit;
        }else{
          p->mxProgress = (int)integerValue(azArg[++i]);
        }
        continue;
      }
      eputf("Error: unknown option: \"%s\"\n", azArg[i]);
      rc = 1;
      goto meta_command_exit;
    }else{
      nn = (int)integerValue(z);
    }
  }
  open_db(p, 0);
︙
#ifndef SQLITE_SHELL_FIDDLE
  if( c=='r' && n>=3 && cli_strncmp(azArg[0], "read", n)==0 ){
    FILE *inSaved = p->in;
    int savedLineno = p->lineno;
    failIfSafeMode(p, "cannot run .read in safe mode");
    if( nArg!=2 ){
      eputz("Usage: .read FILE\n");
      rc = 1;
      goto meta_command_exit;
    }
    if( azArg[1][0]=='|' ){
#ifdef SQLITE_OMIT_POPEN
      eputz("Error: pipes are not supported in this OS\n");
      rc = 1;
      p->out = stdout;
#else
      p->in = popen(azArg[1]+1, "r");
      if( p->in==0 ){
        eputf("Error: cannot open \"%s\"\n", azArg[1]);
        rc = 1;
      }else{
        rc = process_input(p);
        pclose(p->in);
      }
#endif
    }else if( (p->in = openChrSource(azArg[1]))==0 ){
      eputf("Error: cannot open \"%s\"\n", azArg[1]);
      rc = 1;
    }else{
      rc = process_input(p);
      fclose(p->in);
    }
    p->in = inSaved;
    p->lineno = savedLineno;
︙
    if( nArg==2 ){
      zSrcFile = azArg[1];
      zDb = "main";
    }else if( nArg==3 ){
      zSrcFile = azArg[2];
      zDb = azArg[1];
    }else{
      eputz("Usage: .restore ?DB? FILE\n");
      rc = 1;
      goto meta_command_exit;
    }
    rc = sqlite3_open(zSrcFile, &pSrc);
    if( rc!=SQLITE_OK ){
      eputf("Error: cannot open \"%s\"\n", zSrcFile);
      close_db(pSrc);
      return 1;
    }
    open_db(p, 0);
    pBackup = sqlite3_backup_init(p->db, zDb, pSrc, "main");
    if( pBackup==0 ){
      eputf("Error: %s\n", sqlite3_errmsg(p->db));
      close_db(pSrc);
      return 1;
    }
    while( (rc = sqlite3_backup_step(pBackup,100))==SQLITE_OK
          || rc==SQLITE_BUSY ){
      if( rc==SQLITE_BUSY ){
        if( nTimeout++ >= 3 ) break;
        sqlite3_sleep(100);
      }
    }
    sqlite3_backup_finish(pBackup);
    if( rc==SQLITE_DONE ){
      rc = 0;
    }else if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ){
      eputz("Error: source database is busy\n");
      rc = 1;
    }else{
      eputf("Error: %s\n", sqlite3_errmsg(p->db));
      rc = 1;
    }
    close_db(pSrc);
  }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

  if( c=='s' && cli_strncmp(azArg[0], "scanstats", n)==0 ){
    if( nArg==2 ){
      if( cli_strcmp(azArg[1], "vm")==0 ){
        p->scanstatsOn = 3;
      }else if( cli_strcmp(azArg[1], "est")==0 ){
        p->scanstatsOn = 2;
      }else{
        p->scanstatsOn = (u8)booleanValue(azArg[1]);
      }
      open_db(p, 0);
      sqlite3_db_config(
          p->db, SQLITE_DBCONFIG_STMT_SCANSTATUS, p->scanstatsOn, (int*)0
      );
#if !defined(SQLITE_ENABLE_STMT_SCANSTATUS)
      eputz("Warning: .scanstats not available in this build.\n");
#elif !defined(SQLITE_ENABLE_BYTECODE_VTAB)
      if( p->scanstatsOn==3 ){
        eputz("Warning: \".scanstats vm\" not available in this build.\n");
      }
#endif
    }else{
      eputz("Usage: .scanstats on|off|est\n");
      rc = 1;
    }
  }else

  if( c=='s' && cli_strncmp(azArg[0], "schema", n)==0 ){
    ShellText sSelect;
    ShellState data;
︙
      if( optionMatch(azArg[ii],"indent") ){
        data.cMode = data.mode = MODE_Pretty;
      }else if( optionMatch(azArg[ii],"debug") ){
        bDebug = 1;
      }else if( optionMatch(azArg[ii],"nosys") ){
        bNoSystemTabs = 1;
      }else if( azArg[ii][0]=='-' ){
        eputf("Unknown option: \"%s\"\n", azArg[ii]);
        rc = 1;
        goto meta_command_exit;
      }else if( zName==0 ){
        zName = azArg[ii];
      }else{
        eputz("Usage: .schema ?--indent? ?--nosys? ?LIKE-PATTERN?\n");
        rc = 1;
        goto meta_command_exit;
      }
    }
    if( zName!=0 ){
      int isSchema = sqlite3_strlike(zName, "sqlite_master", '\\')==0
                  || sqlite3_strlike(zName, "sqlite_schema", '\\')==0
︙
        }
      }
      if( zDiv ){
        sqlite3_stmt *pStmt = 0;
        rc = sqlite3_prepare_v2(p->db, "SELECT name FROM pragma_database_list",
                                -1, &pStmt, 0);
        if( rc ){
          eputf("Error: %s\n", sqlite3_errmsg(p->db));
          sqlite3_finalize(pStmt);
          rc = 1;
          goto meta_command_exit;
        }
        appendText(&sSelect, "SELECT sql FROM", 0);
        iSchema = 0;
        while( sqlite3_step(pStmt)==SQLITE_ROW ){
︙
      }
      if( bNoSystemTabs ){
        appendText(&sSelect, "name NOT LIKE 'sqlite_%%' AND ", 0);
      }
      appendText(&sSelect, "sql IS NOT NULL"
                           " ORDER BY snum, rowid", 0);
      if( bDebug ){
        oputf("SQL: %s;\n", sSelect.z);
      }else{
        rc = sqlite3_exec(p->db, sSelect.z, callback, &data, &zErrMsg);
      }
      freeText(&sSelect);
    }
    if( zErrMsg ){
      eputf("Error: %s\n", zErrMsg);
      sqlite3_free(zErrMsg);
      rc = 1;
    }else if( rc != SQLITE_OK ){
      eputz("Error: querying schema information\n");
      rc = 1;
    }else{
      rc = 0;
    }
  }else

  if( (c=='s' && n==11 && cli_strncmp(azArg[0], "selecttrace", n)==0)
︙
26153 26154 26155 26156 26157 26158 26159 | ** Invoke the sqlite3session_attach() interface to attach a particular ** table so that it is never filtered. */ if( cli_strcmp(azCmd[0],"attach")==0 ){ if( nCmd!=2 ) goto session_syntax_error; if( pSession->p==0 ){ session_not_open: | | | | | | | < | 28145 28146 28147 28148 28149 28150 28151 28152 28153 28154 28155 28156 28157 28158 28159 28160 28161 28162 28163 28164 28165 28166 28167 28168 28169 28170 28171 28172 28173 28174 28175 28176 28177 28178 28179 28180 28181 28182 28183 28184 28185 28186 28187 28188 28189 28190 28191 28192 28193 28194 28195 28196 28197 28198 | ** Invoke the sqlite3session_attach() interface to attach a particular ** table so that it is never filtered. */ if( cli_strcmp(azCmd[0],"attach")==0 ){ if( nCmd!=2 ) goto session_syntax_error; if( pSession->p==0 ){ session_not_open: eputz("ERROR: No sessions are open\n"); }else{ rc = sqlite3session_attach(pSession->p, azCmd[1]); if( rc ){ eputf("ERROR: sqlite3session_attach() returns %d\n",rc); rc = 0; } } }else /* .session changeset FILE ** .session patchset FILE ** Write a changeset or patchset into a file. The file is overwritten. */ if( cli_strcmp(azCmd[0],"changeset")==0 || cli_strcmp(azCmd[0],"patchset")==0 ){ FILE *out = 0; failIfSafeMode(p, "cannot run \".session %s\" in safe mode", azCmd[0]); if( nCmd!=2 ) goto session_syntax_error; if( pSession->p==0 ) goto session_not_open; out = fopen(azCmd[1], "wb"); if( out==0 ){ eputf("ERROR: cannot open \"%s\" for writing\n", azCmd[1]); }else{ int szChng; void *pChng; if( azCmd[0][0]=='c' ){ rc = sqlite3session_changeset(pSession->p, &szChng, &pChng); }else{ rc = sqlite3session_patchset(pSession->p, &szChng, &pChng); } if( rc ){ sputf(stdout, "Error: error code %d\n", rc); rc = 0; } if( pChng && fwrite(pChng, szChng, 1, out)!=1 ){ eputf("ERROR: Failed to write entire %d-byte output\n", szChng); } sqlite3_free(pChng); fclose(out); } }else /* .session close |
︙
26220 26221 26222 26223 26224 26225 26226 | */ if( cli_strcmp(azCmd[0], "enable")==0 ){ int ii; if( nCmd>2 ) goto session_syntax_error; ii = nCmd==1 ? -1 : booleanValue(azCmd[1]); if( pAuxDb->nSession ){ ii = sqlite3session_enable(pSession->p, ii); | | < | < < < | < | < | | < | | | 28211 28212 28213 28214 28215 28216 28217 28218 28219 28220 28221 28222 28223 28224 28225 28226 28227 28228 28229 28230 28231 28232 28233 28234 28235 28236 28237 28238 28239 28240 28241 28242 28243 28244 28245 28246 28247 28248 28249 28250 28251 28252 28253 28254 28255 28256 28257 28258 28259 28260 28261 28262 28263 28264 28265 28266 28267 28268 28269 28270 28271 28272 28273 28274 28275 28276 28277 28278 28279 28280 28281 28282 28283 28284 28285 28286 28287 28288 28289 28290 28291 28292 28293 28294 28295 28296 28297 28298 28299 28300 28301 28302 28303 28304 28305 28306 28307 | */ if( cli_strcmp(azCmd[0], "enable")==0 ){ int ii; if( nCmd>2 ) goto session_syntax_error; ii = nCmd==1 ? -1 : booleanValue(azCmd[1]); if( pAuxDb->nSession ){ ii = sqlite3session_enable(pSession->p, ii); oputf("session %s enable flag = %d\n", pSession->zName, ii); } }else /* .session filter GLOB .... ** Set a list of GLOB patterns of table names to be excluded. */ if( cli_strcmp(azCmd[0], "filter")==0 ){ int ii, nByte; if( nCmd<2 ) goto session_syntax_error; if( pAuxDb->nSession ){ for(ii=0; ii<pSession->nFilter; ii++){ sqlite3_free(pSession->azFilter[ii]); } sqlite3_free(pSession->azFilter); nByte = sizeof(pSession->azFilter[0])*(nCmd-1); pSession->azFilter = sqlite3_malloc( nByte ); shell_check_oom( pSession->azFilter ); for(ii=1; ii<nCmd; ii++){ char *x = pSession->azFilter[ii-1] = sqlite3_mprintf("%s", azCmd[ii]); shell_check_oom(x); } pSession->nFilter = ii-1; } }else /* .session indirect ?BOOLEAN? ** Query or set the indirect flag */ if( cli_strcmp(azCmd[0], "indirect")==0 ){ int ii; if( nCmd>2 ) goto session_syntax_error; ii = nCmd==1 ? 
-1 : booleanValue(azCmd[1]); if( pAuxDb->nSession ){ ii = sqlite3session_indirect(pSession->p, ii); oputf("session %s indirect flag = %d\n", pSession->zName, ii); } }else /* .session isempty ** Determine if the session is empty */ if( cli_strcmp(azCmd[0], "isempty")==0 ){ int ii; if( nCmd!=1 ) goto session_syntax_error; if( pAuxDb->nSession ){ ii = sqlite3session_isempty(pSession->p); oputf("session %s isempty flag = %d\n", pSession->zName, ii); } }else /* .session list ** List all currently open sessions */ if( cli_strcmp(azCmd[0],"list")==0 ){ for(i=0; i<pAuxDb->nSession; i++){ oputf("%d %s\n", i, pAuxDb->aSession[i].zName); } }else /* .session open DB NAME ** Open a new session called NAME on the attached database DB. ** DB is normally "main". */ if( cli_strcmp(azCmd[0],"open")==0 ){ char *zName; if( nCmd!=3 ) goto session_syntax_error; zName = azCmd[2]; if( zName[0]==0 ) goto session_syntax_error; for(i=0; i<pAuxDb->nSession; i++){ if( cli_strcmp(pAuxDb->aSession[i].zName,zName)==0 ){ eputf("Session \"%s\" already exists\n", zName); goto meta_command_exit; } } if( pAuxDb->nSession>=ArraySize(pAuxDb->aSession) ){ eputf("Maximum of %d sessions\n", ArraySize(pAuxDb->aSession)); goto meta_command_exit; } pSession = &pAuxDb->aSession[pAuxDb->nSession]; rc = sqlite3session_create(p->db, azCmd[1], &pSession->p); if( rc ){ eputf("Cannot open session: error code=%d\n", rc); rc = 0; goto meta_command_exit; } pSession->nFilter = 0; sqlite3session_table_filter(pSession->p, session_filter, pSession); pAuxDb->nSession++; pSession->zName = sqlite3_mprintf("%s", zName); |
︙
  /* Undocumented commands for internal testing.  Subject to change
  ** without notice. */
  if( c=='s' && n>=10 && cli_strncmp(azArg[0], "selftest-", 9)==0 ){
    if( cli_strncmp(azArg[0]+9, "boolean", n-9)==0 ){
      int i, v;
      for(i=1; i<nArg; i++){
        v = booleanValue(azArg[i]);
        oputf("%s: %d 0x%x\n", azArg[i], v, v);
      }
    }
    if( cli_strncmp(azArg[0]+9, "integer", n-9)==0 ){
      int i; sqlite3_int64 v;
      for(i=1; i<nArg; i++){
        char zBuf[200];
        v = integerValue(azArg[i]);
        sqlite3_snprintf(sizeof(zBuf),zBuf,"%s: %lld 0x%llx\n", azArg[i],v,v);
        oputz(zBuf);
      }
    }
  }else
#endif

  if( c=='s' && n>=4 && cli_strncmp(azArg[0],"selftest",n)==0 ){
    int bIsInit = 0;         /* True to initialize the SELFTEST table */
︙
      if( cli_strcmp(z,"-init")==0 ){
        bIsInit = 1;
      }else
      if( cli_strcmp(z,"-v")==0 ){
        bVerbose++;
      }else
      {
        eputf("Unknown option \"%s\" on \"%s\"\n", azArg[i], azArg[0]);
        eputz("Should be one of: --init -v\n");
        rc = 1;
        goto meta_command_exit;
      }
    }
    if( sqlite3_table_column_metadata(p->db,"main","selftest",0,0,0,0,0,0)
           != SQLITE_OK ){
      bSelftestExists = 0;
︙
26400 26401 26402 26403 26404 26405 26406 | }else{ rc = sqlite3_prepare_v2(p->db, "VALUES(0,'memo','Missing SELFTEST table - default checks only','')," " (1,'run','PRAGMA integrity_check','ok')", -1, &pStmt, 0); } if( rc ){ | | | | | | | | > | < < | | | | 28383 28384 28385 28386 28387 28388 28389 28390 28391 28392 28393 28394 28395 28396 28397 28398 28399 28400 28401 28402 28403 28404 28405 28406 28407 28408 28409 28410 28411 28412 28413 28414 28415 28416 28417 28418 28419 28420 28421 28422 28423 28424 28425 28426 28427 28428 28429 28430 28431 28432 28433 28434 28435 28436 28437 28438 28439 28440 28441 28442 28443 28444 28445 28446 28447 28448 28449 28450 28451 28452 28453 | }else{ rc = sqlite3_prepare_v2(p->db, "VALUES(0,'memo','Missing SELFTEST table - default checks only','')," " (1,'run','PRAGMA integrity_check','ok')", -1, &pStmt, 0); } if( rc ){ eputz("Error querying the selftest table\n"); rc = 1; sqlite3_finalize(pStmt); goto meta_command_exit; } for(i=1; sqlite3_step(pStmt)==SQLITE_ROW; i++){ int tno = sqlite3_column_int(pStmt, 0); const char *zOp = (const char*)sqlite3_column_text(pStmt, 1); const char *zSql = (const char*)sqlite3_column_text(pStmt, 2); const char *zAns = (const char*)sqlite3_column_text(pStmt, 3); if( zOp==0 ) continue; if( zSql==0 ) continue; if( zAns==0 ) continue; k = 0; if( bVerbose>0 ){ sputf(stdout, "%d: %s %s\n", tno, zOp, zSql); } if( cli_strcmp(zOp,"memo")==0 ){ oputf("%s\n", zSql); }else if( cli_strcmp(zOp,"run")==0 ){ char *zErrMsg = 0; str.n = 0; str.z[0] = 0; rc = sqlite3_exec(p->db, zSql, captureOutputCallback, &str, &zErrMsg); nTest++; if( bVerbose ){ oputf("Result: %s\n", str.z); } if( rc || zErrMsg ){ nErr++; rc = 1; oputf("%d: error-code-%d: %s\n", tno, rc, zErrMsg); sqlite3_free(zErrMsg); }else if( cli_strcmp(zAns,str.z)!=0 ){ nErr++; rc = 1; oputf("%d: Expected: [%s]\n", tno, zAns); oputf("%d: Got: [%s]\n", tno, str.z); } } else{ eputf("Unknown operation \"%s\" on selftest line %d\n", zOp, tno); rc = 1; break; } } /* 
End loop over rows of content from SELFTEST */ sqlite3_finalize(pStmt); } /* End loop over k */ freeText(&str); oputf("%d errors out of %d tests\n", nErr, nTest); }else if( c=='s' && cli_strncmp(azArg[0], "separator", n)==0 ){ if( nArg<2 || nArg>3 ){ eputz("Usage: .separator COL ?ROW?\n"); rc = 1; } if( nArg>=2 ){ sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, "%.*s", (int)ArraySize(p->colSeparator)-1, azArg[1]); } if( nArg>=3 ){ |
︙
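The hunk above is part of a wholesale migration of the shell's output calls to a small family of emit helpers: oputf/oputz write to the shell's current output, eputf/eputz to the error channel, and sputf to an explicitly named stream. The real shell routes these through its console-I/O layer; the sketch below is a minimal stand-in, assuming a single redirectable FILE* (the global g_out and the outStream() helper are illustrative, not part of the source).

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Stand-in for the ShellState output stream; NULL means stdout. */
static FILE *g_out;
static FILE *outStream(void){ return g_out ? g_out : stdout; }

/* Write a plain string to the current output. */
static void oputz(const char *z){ fputs(z, outStream()); }

/* printf-style write to the current output. */
static void oputf(const char *zFmt, ...){
  va_list ap;
  va_start(ap, zFmt);
  vfprintf(outStream(), zFmt, ap);
  va_end(ap);
}

/* Plain string to the error channel. */
static void eputz(const char *z){ fputs(z, stderr); }

/* printf-style write to an explicitly named stream. */
static void sputf(FILE *pOut, const char *zFmt, ...){
  va_list ap;
  va_start(ap, zFmt);
  vfprintf(pOut, zFmt, ap);
  va_end(ap);
}
```

With helpers like these, a call such as `oputf("%d errors out of %d tests\n", nErr, nTest)` in the hunk above is a drop-in replacement for a direct fprintf on the output stream.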
26500 26501 26502 26503 26504 26505 26506 | ){ iSize = atoi(&z[5]); }else if( cli_strcmp(z,"debug")==0 ){ bDebug = 1; }else { | | < | | 28482 28483 28484 28485 28486 28487 28488 28489 28490 28491 28492 28493 28494 28495 28496 28497 28498 28499 28500 28501 28502 | ){ iSize = atoi(&z[5]); }else if( cli_strcmp(z,"debug")==0 ){ bDebug = 1; }else { eputf("Unknown option \"%s\" on \"%s\"\n", azArg[i], azArg[0]); showHelp(p->out, azArg[0]); rc = 1; goto meta_command_exit; } }else if( zLike ){ eputz("Usage: .sha3sum ?OPTIONS? ?LIKE-PATTERN?\n"); rc = 1; goto meta_command_exit; }else{ zLike = z; bSeparate = 1; if( sqlite3_strlike("sqlite\\_%", zLike, '\\')==0 ) bSchema = 1; } |
︙
26579 26580 26581 26582 26583 26584 26585 | " FROM [sha3sum$query]", sSql.z, iSize); } shell_check_oom(zSql); freeText(&sQuery); freeText(&sSql); if( bDebug ){ | | | 28560 28561 28562 28563 28564 28565 28566 28567 28568 28569 28570 28571 28572 28573 28574 | " FROM [sha3sum$query]", sSql.z, iSize); } shell_check_oom(zSql); freeText(&sQuery); freeText(&sSql); if( bDebug ){ oputf("%s\n", zSql); }else{ shell_exec(p, zSql, 0); } #if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) && !defined(SQLITE_OMIT_VIRTUALTABLE) { int lrc; char *zRevText = /* Query for reversible to-blob-to-text check */ |
︙
26609 26610 26611 26612 26613 26614 26615 | " from (select 'SELECT COUNT(*) AS bad_text_count\n" "FROM '||tname||' WHERE '\n" "||group_concat('CAST(CAST('||cname||' AS BLOB) AS TEXT)<>'||cname\n" "|| ' AND typeof('||cname||')=''text'' ',\n" "' OR ') as query, tname from tabcols group by tname)" , zRevText); shell_check_oom(zRevText); | | | < | | | | > > | | | | | | | | < | | | | | | | | | | | | | | | | | | | | | | | | 28590 28591 28592 28593 28594 28595 28596 28597 28598 28599 28600 28601 28602 28603 28604 28605 28606 28607 28608 28609 28610 28611 28612 28613 28614 28615 28616 28617 28618 28619 28620 28621 28622 28623 28624 28625 28626 28627 28628 28629 28630 28631 28632 28633 28634 28635 28636 28637 28638 28639 28640 28641 28642 28643 28644 28645 28646 28647 28648 28649 28650 28651 28652 28653 28654 28655 28656 28657 28658 28659 28660 28661 28662 28663 28664 28665 28666 28667 28668 28669 28670 28671 28672 28673 28674 28675 28676 28677 28678 28679 28680 28681 28682 28683 28684 28685 28686 28687 28688 28689 28690 28691 28692 28693 28694 28695 28696 28697 28698 28699 28700 28701 28702 28703 28704 28705 28706 28707 28708 28709 28710 28711 28712 28713 28714 28715 28716 28717 28718 28719 28720 28721 28722 28723 28724 28725 28726 28727 28728 28729 28730 28731 | " from (select 'SELECT COUNT(*) AS bad_text_count\n" "FROM '||tname||' WHERE '\n" "||group_concat('CAST(CAST('||cname||' AS BLOB) AS TEXT)<>'||cname\n" "|| ' AND typeof('||cname||')=''text'' ',\n" "' OR ') as query, tname from tabcols group by tname)" , zRevText); shell_check_oom(zRevText); if( bDebug ) oputf("%s\n", zRevText); lrc = sqlite3_prepare_v2(p->db, zRevText, -1, &pStmt, 0); if( lrc!=SQLITE_OK ){ /* assert(lrc==SQLITE_NOMEM); // might also be SQLITE_ERROR if the ** user does cruel and unnatural things like ".limit expr_depth 0". 
*/ rc = 1; }else{ if( zLike ) sqlite3_bind_text(pStmt,1,zLike,-1,SQLITE_STATIC); lrc = SQLITE_ROW==sqlite3_step(pStmt); if( lrc ){ const char *zGenQuery = (char*)sqlite3_column_text(pStmt,0); sqlite3_stmt *pCheckStmt; lrc = sqlite3_prepare_v2(p->db, zGenQuery, -1, &pCheckStmt, 0); if( bDebug ) oputf("%s\n", zGenQuery); if( lrc!=SQLITE_OK ){ rc = 1; }else{ if( SQLITE_ROW==sqlite3_step(pCheckStmt) ){ double countIrreversible = sqlite3_column_double(pCheckStmt, 0); if( countIrreversible>0 ){ int sz = (int)(countIrreversible + 0.5); eputf("Digest includes %d invalidly encoded text field%s.\n", sz, (sz>1)? "s": ""); } } sqlite3_finalize(pCheckStmt); } sqlite3_finalize(pStmt); } } if( rc ) eputz(".sha3sum failed.\n"); sqlite3_free(zRevText); } #endif /* !defined(*_OMIT_SCHEMA_PRAGMAS) && !defined(*_OMIT_VIRTUALTABLE) */ sqlite3_free(zSql); }else #if !defined(SQLITE_NOHAVE_SYSTEM) && !defined(SQLITE_SHELL_FIDDLE) if( c=='s' && (cli_strncmp(azArg[0], "shell", n)==0 || cli_strncmp(azArg[0],"system",n)==0) ){ char *zCmd; int i, x; failIfSafeMode(p, "cannot run .%s in safe mode", azArg[0]); if( nArg<2 ){ eputz("Usage: .system COMMAND\n"); rc = 1; goto meta_command_exit; } zCmd = sqlite3_mprintf(strchr(azArg[1],' ')==0?"%s":"\"%s\"", azArg[1]); for(i=2; i<nArg && zCmd!=0; i++){ zCmd = sqlite3_mprintf(strchr(azArg[i],' ')==0?"%z %s":"%z \"%s\"", zCmd, azArg[i]); } consoleRestore(); x = zCmd!=0 ? system(zCmd) : 1; consoleRenewSetup(); sqlite3_free(zCmd); if( x ) eputf("System command returns %d\n", x); }else #endif /* !defined(SQLITE_NOHAVE_SYSTEM) && !defined(SQLITE_SHELL_FIDDLE) */ if( c=='s' && cli_strncmp(azArg[0], "show", n)==0 ){ static const char *azBool[] = { "off", "on", "trigger", "full"}; const char *zOut; int i; if( nArg!=1 ){ eputz("Usage: .show\n"); rc = 1; goto meta_command_exit; } oputf("%12.12s: %s\n","echo", azBool[ShellHasFlag(p, SHFLG_Echo)]); oputf("%12.12s: %s\n","eqp", azBool[p->autoEQP&3]); oputf("%12.12s: %s\n","explain", p->mode==MODE_Explain ? 
"on" : p->autoExplain ? "auto" : "off"); oputf("%12.12s: %s\n","headers", azBool[p->showHeader!=0]); if( p->mode==MODE_Column || (p->mode>=MODE_Markdown && p->mode<=MODE_Box) ){ oputf("%12.12s: %s --wrap %d --wordwrap %s --%squote\n", "mode", modeDescr[p->mode], p->cmOpts.iWrap, p->cmOpts.bWordWrap ? "on" : "off", p->cmOpts.bQuote ? "" : "no"); }else{ oputf("%12.12s: %s\n","mode", modeDescr[p->mode]); } oputf("%12.12s: ", "nullvalue"); output_c_string(p->nullValue); oputz("\n"); oputf("%12.12s: %s\n","output", strlen30(p->outfile) ? p->outfile : "stdout"); oputf("%12.12s: ", "colseparator"); output_c_string(p->colSeparator); oputz("\n"); oputf("%12.12s: ", "rowseparator"); output_c_string(p->rowSeparator); oputz("\n"); switch( p->statsOn ){ case 0: zOut = "off"; break; default: zOut = "on"; break; case 2: zOut = "stmt"; break; case 3: zOut = "vmstep"; break; } oputf("%12.12s: %s\n","stats", zOut); oputf("%12.12s: ", "width"); for (i=0;i<p->nWidth;i++) { oputf("%d ", p->colWidth[i]); } oputz("\n"); oputf("%12.12s: %s\n", "filename", p->pAuxDb->zDbFilename ? p->pAuxDb->zDbFilename : ""); }else if( c=='s' && cli_strncmp(azArg[0], "stats", n)==0 ){ if( nArg==2 ){ if( cli_strcmp(azArg[1],"stmt")==0 ){ p->statsOn = 2; }else if( cli_strcmp(azArg[1],"vmstep")==0 ){ p->statsOn = 3; }else{ p->statsOn = (u8)booleanValue(azArg[1]); } }else if( nArg==1 ){ display_stats(p->db, p, 0); }else{ eputz("Usage: .stats ?on|off|stmt|vmstep?\n"); rc = 1; } }else if( (c=='t' && n>1 && cli_strncmp(azArg[0], "tables", n)==0) || (c=='i' && (cli_strncmp(azArg[0], "indices", n)==0 || cli_strncmp(azArg[0], "indexes", n)==0) ) |
︙
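The `.sha3sum` change earlier in this diff generates, for each table, a query that counts text values which do not survive a TEXT-to-BLOB-to-TEXT round trip (such values would make the digest irreproducible). A sketch of the per-table probe that query builder produces, assuming illustrative table and column names:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Build the per-table "bad text" probe generated by the .sha3sum code:
** count rows where a column of type 'text' changes under a
** TEXT -> BLOB -> TEXT cast round trip.  zTab/zCol are illustrative;
** the real code derives them from pragma_table_info via group_concat. */
static int buildBadTextQuery(char *zBuf, size_t nBuf,
                             const char *zTab, const char *zCol){
  return snprintf(zBuf, nBuf,
    "SELECT COUNT(*) AS bad_text_count FROM %s "
    "WHERE CAST(CAST(%s AS BLOB) AS TEXT)<>%s AND typeof(%s)='text'",
    zTab, zCol, zCol, zCol);
}
```

A nonzero count from this probe is what triggers the "Digest includes N invalidly encoded text fields" warning in the hunk above.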
.indexes usage check (release 26762-26768 → trunk 28743-28757; the Usage message now goes through eputz()):

      return shellDatabaseError(p->db);
    }
    if( nArg>2 && c=='i' ){
      /* It is an historical accident that the .indexes command shows an error
      ** when called with the wrong number of arguments whereas the .tables
      ** command does not. */
      eputz("Usage: .indexes ?LIKE-PATTERN?\n");
      rc = 1;
      sqlite3_finalize(pStmt);
      goto meta_command_exit;
    }
    for(ii=0; sqlite3_step(pStmt)==SQLITE_ROW; ii++){
      const char *zDbName = (const char*)sqlite3_column_text(pStmt, 1);
      if( zDbName==0 ) continue;
︙
26838 26839 26840 26841 26842 26843 26844 | } nPrintCol = 80/(maxlen+2); if( nPrintCol<1 ) nPrintCol = 1; nPrintRow = (nRow + nPrintCol - 1)/nPrintCol; for(i=0; i<nPrintRow; i++){ for(j=i; j<nRow; j+=nPrintRow){ char *zSp = j<nPrintRow ? "" : " "; | < | | | | 28819 28820 28821 28822 28823 28824 28825 28826 28827 28828 28829 28830 28831 28832 28833 28834 28835 28836 28837 28838 28839 28840 28841 28842 28843 28844 28845 28846 28847 28848 28849 | } nPrintCol = 80/(maxlen+2); if( nPrintCol<1 ) nPrintCol = 1; nPrintRow = (nRow + nPrintCol - 1)/nPrintCol; for(i=0; i<nPrintRow; i++){ for(j=i; j<nRow; j+=nPrintRow){ char *zSp = j<nPrintRow ? "" : " "; oputf("%s%-*s", zSp, maxlen, azResult[j] ? azResult[j]:""); } oputz("\n"); } } for(ii=0; ii<nRow; ii++) sqlite3_free(azResult[ii]); sqlite3_free(azResult); }else #ifndef SQLITE_SHELL_FIDDLE /* Begin redirecting output to the file "testcase-out.txt" */ if( c=='t' && cli_strcmp(azArg[0],"testcase")==0 ){ output_reset(p); p->out = output_file_open("testcase-out.txt", 0); if( p->out==0 ){ eputz("Error: cannot open 'testcase-out.txt'\n"); } if( nArg>=2 ){ sqlite3_snprintf(sizeof(p->zTestcase), p->zTestcase, "%s", azArg[1]); }else{ sqlite3_snprintf(sizeof(p->zTestcase), p->zTestcase, "?"); } }else |
︙
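The `.tables` hunk above lays names out down columns: as many columns as fit in 80 characters, each padded to the longest name, walking the result array with stride nPrintRow. A self-contained sketch of that layout logic (the helper name and the nPrintRow return value are illustrative additions for testability):

```c
#include <stdio.h>
#include <assert.h>

/* Column layout used by ".tables": write azName[0..nRow-1] down columns,
** 80 chars wide, each entry padded to maxlen.  Returns the number of
** rows emitted. */
static int printColumns(FILE *pOut, const char **azName, int nRow, int maxlen){
  int nPrintCol, nPrintRow, i, j;
  nPrintCol = 80/(maxlen+2);
  if( nPrintCol<1 ) nPrintCol = 1;
  nPrintRow = (nRow + nPrintCol - 1)/nPrintCol;
  for(i=0; i<nPrintRow; i++){
    for(j=i; j<nRow; j+=nPrintRow){
      const char *zSp = j<nPrintRow ? "" : "  ";
      fprintf(pOut, "%s%-*s", zSp, maxlen, azName[j] ? azName[j] : "");
    }
    fprintf(pOut, "\n");
  }
  return nPrintRow;
}
```

Iterating j with stride nPrintRow is what makes the listing read top-to-bottom within each column rather than left-to-right across rows.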
26879 26880 26881 26882 26883 26884 26885 | } aCtrl[] = { {"always", SQLITE_TESTCTRL_ALWAYS, 1, "BOOLEAN" }, {"assert", SQLITE_TESTCTRL_ASSERT, 1, "BOOLEAN" }, /*{"benign_malloc_hooks",SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS,1, "" },*/ /*{"bitvec_test", SQLITE_TESTCTRL_BITVEC_TEST, 1, "" },*/ {"byteorder", SQLITE_TESTCTRL_BYTEORDER, 0, "" }, {"extra_schema_checks",SQLITE_TESTCTRL_EXTRA_SCHEMA_CHECKS,0,"BOOLEAN" }, | | > | | | | | | | | | 28859 28860 28861 28862 28863 28864 28865 28866 28867 28868 28869 28870 28871 28872 28873 28874 28875 28876 28877 28878 28879 28880 28881 28882 28883 28884 28885 28886 28887 28888 28889 28890 28891 28892 28893 28894 28895 28896 28897 28898 28899 28900 28901 28902 28903 28904 28905 28906 28907 28908 28909 28910 28911 28912 28913 28914 28915 28916 28917 28918 28919 28920 28921 28922 28923 28924 28925 28926 28927 28928 28929 28930 28931 28932 28933 28934 28935 28936 28937 28938 28939 28940 | } aCtrl[] = { {"always", SQLITE_TESTCTRL_ALWAYS, 1, "BOOLEAN" }, {"assert", SQLITE_TESTCTRL_ASSERT, 1, "BOOLEAN" }, /*{"benign_malloc_hooks",SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS,1, "" },*/ /*{"bitvec_test", SQLITE_TESTCTRL_BITVEC_TEST, 1, "" },*/ {"byteorder", SQLITE_TESTCTRL_BYTEORDER, 0, "" }, {"extra_schema_checks",SQLITE_TESTCTRL_EXTRA_SCHEMA_CHECKS,0,"BOOLEAN" }, {"fault_install", SQLITE_TESTCTRL_FAULT_INSTALL, 1,"args..." 
}, {"fk_no_action", SQLITE_TESTCTRL_FK_NO_ACTION, 0, "BOOLEAN" }, {"imposter", SQLITE_TESTCTRL_IMPOSTER,1,"SCHEMA ON/OFF ROOTPAGE"}, {"internal_functions", SQLITE_TESTCTRL_INTERNAL_FUNCTIONS,0,"" }, {"json_selfcheck", SQLITE_TESTCTRL_JSON_SELFCHECK ,0,"BOOLEAN" }, {"localtime_fault", SQLITE_TESTCTRL_LOCALTIME_FAULT,0,"BOOLEAN" }, {"never_corrupt", SQLITE_TESTCTRL_NEVER_CORRUPT,1, "BOOLEAN" }, {"optimizations", SQLITE_TESTCTRL_OPTIMIZATIONS,0,"DISABLE-MASK" }, #ifdef YYCOVERAGE {"parser_coverage", SQLITE_TESTCTRL_PARSER_COVERAGE,0,"" }, #endif {"pending_byte", SQLITE_TESTCTRL_PENDING_BYTE,0, "OFFSET " }, {"prng_restore", SQLITE_TESTCTRL_PRNG_RESTORE,0, "" }, {"prng_save", SQLITE_TESTCTRL_PRNG_SAVE, 0, "" }, {"prng_seed", SQLITE_TESTCTRL_PRNG_SEED, 0, "SEED ?db?" }, {"seek_count", SQLITE_TESTCTRL_SEEK_COUNT, 0, "" }, {"sorter_mmap", SQLITE_TESTCTRL_SORTER_MMAP, 0, "NMAX" }, {"tune", SQLITE_TESTCTRL_TUNE, 1, "ID VALUE" }, {"uselongdouble", SQLITE_TESTCTRL_USELONGDOUBLE,0,"?BOOLEAN|\"default\"?"}, }; int testctrl = -1; int iCtrl = -1; int rc2 = 0; /* 0: usage. 1: %d 2: %x 3: no-output */ int isOk = 0; int i, n2; const char *zCmd = 0; open_db(p, 0); zCmd = nArg>=2 ? azArg[1] : "help"; /* The argument can optionally begin with "-" or "--" */ if( zCmd[0]=='-' && zCmd[1] ){ zCmd++; if( zCmd[0]=='-' && zCmd[1] ) zCmd++; } /* --help lists all test-controls */ if( cli_strcmp(zCmd,"help")==0 ){ oputz("Available test-controls:\n"); for(i=0; i<ArraySize(aCtrl); i++){ if( aCtrl[i].unSafe && !ShellHasFlag(p,SHFLG_TestingMode) ) continue; oputf(" .testctrl %s %s\n", aCtrl[i].zCtrlName, aCtrl[i].zUsage); } rc = 1; goto meta_command_exit; } /* convert testctrl text option to value. allow any unique prefix ** of the option name, or a numerical value. 
*/ n2 = strlen30(zCmd); for(i=0; i<ArraySize(aCtrl); i++){ if( aCtrl[i].unSafe && !ShellHasFlag(p,SHFLG_TestingMode) ) continue; if( cli_strncmp(zCmd, aCtrl[i].zCtrlName, n2)==0 ){ if( testctrl<0 ){ testctrl = aCtrl[i].ctrlCode; iCtrl = i; }else{ eputf("Error: ambiguous test-control: \"%s\"\n" "Use \".testctrl --help\" for help\n", zCmd); rc = 1; goto meta_command_exit; } } } if( testctrl<0 ){ eputf("Error: unknown test-control: %s\n" "Use \".testctrl --help\" for help\n", zCmd); }else{ switch(testctrl){ /* sqlite3_test_control(int, db, int) */ case SQLITE_TESTCTRL_OPTIMIZATIONS: case SQLITE_TESTCTRL_FK_NO_ACTION: if( nArg==3 ){ |
︙
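The `.testctrl` lookup above accepts any unique prefix of a control name and reports an error when the prefix is ambiguous. A minimal sketch of that pattern over a short illustrative subset of the aCtrl[] names (return values are illustrative: the index on a unique match, -1 for no match, -2 for ambiguity):

```c
#include <string.h>
#include <assert.h>

/* Names drawn from the aCtrl[] table above; not the full list. */
static const char *azCtrl[] = {
  "assert", "byteorder", "prng_save", "prng_seed", "seek_count"
};

/* Unique-prefix lookup: index of the single match, -1 if none,
** -2 if the prefix matches more than one name. */
static int findCtrl(const char *zCmd){
  int i, found = -1;
  size_t n = strlen(zCmd);
  for(i=0; i<(int)(sizeof(azCtrl)/sizeof(azCtrl[0])); i++){
    if( strncmp(zCmd, azCtrl[i], n)==0 ){
      if( found>=0 ) return -2;  /* ambiguous: matched earlier too */
      found = i;
    }
  }
  return found;
}
```

This is the same shape as the loop above: on a second match the shell prints "ambiguous test-control" and bails out rather than guessing.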
26985 26986 26987 26988 26989 26990 26991 | /* sqlite3_test_control(int, int, sqlite3*) */ case SQLITE_TESTCTRL_PRNG_SEED: if( nArg==3 || nArg==4 ){ int ii = (int)integerValue(azArg[2]); sqlite3 *db; if( ii==0 && cli_strcmp(azArg[2],"random")==0 ){ sqlite3_randomness(sizeof(ii),&ii); | | | 28966 28967 28968 28969 28970 28971 28972 28973 28974 28975 28976 28977 28978 28979 28980 | /* sqlite3_test_control(int, int, sqlite3*) */ case SQLITE_TESTCTRL_PRNG_SEED: if( nArg==3 || nArg==4 ){ int ii = (int)integerValue(azArg[2]); sqlite3 *db; if( ii==0 && cli_strcmp(azArg[2],"random")==0 ){ sqlite3_randomness(sizeof(ii),&ii); sputf(stdout, "-- random seed: %d\n", ii); } if( nArg==3 ){ db = 0; }else{ db = p->db; /* Make sure the schema has been loaded */ sqlite3_table_column_metadata(db, 0, "x", 0, 0, 0, 0, 0, 0); |
︙
.testctrl seek_count (release 27053-27059 → trunk 29034-29048; the count is now printed with oputf()):

          isOk = 3;
        }
        break;
      case SQLITE_TESTCTRL_SEEK_COUNT: {
        u64 x = 0;
        rc2 = sqlite3_test_control(testctrl, p->db, &x);
        oputf("%llu\n", x);
        isOk = 3;
        break;
      }
#ifdef YYCOVERAGE
      case SQLITE_TESTCTRL_PARSER_COVERAGE: {
        if( nArg==2 ){
︙
27084 27085 27086 27087 27088 27089 27090 | isOk = 1; }else if( nArg==2 ){ int id = 1; while(1){ int val = 0; rc2 = sqlite3_test_control(testctrl, -id, &val); if( rc2!=SQLITE_OK ) break; | | | | > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > | | | | | | 29065 29066 29067 29068 29069 29070 29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085 29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100 29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115 29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130 29131 29132 29133 29134 29135 29136 29137 29138 29139 29140 29141 29142 29143 29144 29145 29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160 29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175 29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190 | isOk = 1; }else if( nArg==2 ){ int id = 1; while(1){ int val = 0; rc2 = sqlite3_test_control(testctrl, -id, &val); if( rc2!=SQLITE_OK ) break; if( id>1 ) oputz(" "); oputf("%d: %d", id, val); id++; } if( id>1 ) oputz("\n"); isOk = 3; } break; } #endif case SQLITE_TESTCTRL_SORTER_MMAP: if( nArg==3 ){ int opt = (unsigned int)integerValue(azArg[2]); rc2 = sqlite3_test_control(testctrl, p->db, opt); isOk = 3; } break; case SQLITE_TESTCTRL_JSON_SELFCHECK: if( nArg==2 ){ rc2 = -1; isOk = 1; }else{ rc2 = booleanValue(azArg[2]); isOk = 3; } sqlite3_test_control(testctrl, &rc2); break; case SQLITE_TESTCTRL_FAULT_INSTALL: { int kk; int bShowHelp = nArg<=2; isOk = 3; for(kk=2; kk<nArg; kk++){ const char *z = azArg[kk]; if( z[0]=='-' && z[1]=='-' ) z++; if( cli_strcmp(z,"off")==0 ){ sqlite3_test_control(testctrl, 0); }else if( cli_strcmp(z,"on")==0 ){ faultsim_state.iCnt = 
faultsim_state.iInterval; if( faultsim_state.iErr==0 ) faultsim_state.iErr = 1; sqlite3_test_control(testctrl, faultsim_callback); }else if( cli_strcmp(z,"reset")==0 ){ faultsim_state.iCnt = faultsim_state.iInterval; }else if( cli_strcmp(z,"status")==0 ){ oputf("faultsim.iId: %d\n", faultsim_state.iId); oputf("faultsim.iErr: %d\n", faultsim_state.iErr); oputf("faultsim.iCnt: %d\n", faultsim_state.iCnt); oputf("faultsim.iInterval: %d\n", faultsim_state.iInterval); oputf("faultsim.eVerbose: %d\n", faultsim_state.eVerbose); }else if( cli_strcmp(z,"-v")==0 ){ if( faultsim_state.eVerbose<2 ) faultsim_state.eVerbose++; }else if( cli_strcmp(z,"-q")==0 ){ if( faultsim_state.eVerbose>0 ) faultsim_state.eVerbose--; }else if( cli_strcmp(z,"-id")==0 && kk+1<nArg ){ faultsim_state.iId = atoi(azArg[++kk]); }else if( cli_strcmp(z,"-errcode")==0 && kk+1<nArg ){ faultsim_state.iErr = atoi(azArg[++kk]); }else if( cli_strcmp(z,"-interval")==0 && kk+1<nArg ){ faultsim_state.iInterval = atoi(azArg[++kk]); }else if( cli_strcmp(z,"-?")==0 || sqlite3_strglob("*help*",z)==0){ bShowHelp = 1; }else{ eputf("Unrecognized fault_install argument: \"%s\"\n", azArg[kk]); rc = 1; bShowHelp = 1; break; } } if( bShowHelp ){ oputz( "Usage: .testctrl fault_install ARGS\n" "Possible arguments:\n" " off Disable faultsim\n" " on Activate faultsim\n" " reset Reset the trigger counter\n" " status Show current status\n" " -v Increase verbosity\n" " -q Decrease verbosity\n" " --errcode N When triggered, return N as error code\n" " --id ID Trigger only for the ID specified\n" " --interval N Trigger only after every N-th call\n" ); } break; } } } if( isOk==0 && iCtrl>=0 ){ oputf("Usage: .testctrl %s %s\n", zCmd,aCtrl[iCtrl].zUsage); rc = 1; }else if( isOk==1 ){ oputf("%d\n", rc2); }else if( isOk==2 ){ oputf("0x%08x\n", rc2); } }else #endif /* !defined(SQLITE_UNTESTABLE) */ if( c=='t' && n>4 && cli_strncmp(azArg[0], "timeout", n)==0 ){ open_db(p, 0); sqlite3_busy_timeout(p->db, nArg>=2 ? 
(int)integerValue(azArg[1]) : 0); }else if( c=='t' && n>=5 && cli_strncmp(azArg[0], "timer", n)==0 ){ if( nArg==2 ){ enableTimer = booleanValue(azArg[1]); if( enableTimer && !HAS_TIMER ){ eputz("Error: timer not available on this system.\n"); enableTimer = 0; } }else{ eputz("Usage: .timer on|off\n"); rc = 1; } }else #ifndef SQLITE_OMIT_TRACE if( c=='t' && cli_strncmp(azArg[0], "trace", n)==0 ){ int mType = 0; |
︙
.trace option handling (release 27164-27170 → trunk 29213-29227; the unknown-option error now goes through eputf()):

      else if( optionMatch(z, "stmt") ){
        mType |= SQLITE_TRACE_STMT;
      }
      else if( optionMatch(z, "close") ){
        mType |= SQLITE_TRACE_CLOSE;
      }
      else {
        eputf("Unknown option \"%s\" on \".trace\"\n", z);
        rc = 1;
        goto meta_command_exit;
      }
    }else{
      output_file_close(p->traceOut);
      p->traceOut = output_file_open(z, 0);
    }
︙
27188 27189 27190 27191 27192 27193 27194 | #if defined(SQLITE_DEBUG) && !defined(SQLITE_OMIT_VIRTUALTABLE) if( c=='u' && cli_strncmp(azArg[0], "unmodule", n)==0 ){ int ii; int lenOpt; char *zOpt; if( nArg<2 ){ | | | 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250 29251 | #if defined(SQLITE_DEBUG) && !defined(SQLITE_OMIT_VIRTUALTABLE) if( c=='u' && cli_strncmp(azArg[0], "unmodule", n)==0 ){ int ii; int lenOpt; char *zOpt; if( nArg<2 ){ eputz("Usage: .unmodule [--allexcept] NAME ...\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); zOpt = azArg[1]; if( zOpt[0]=='-' && zOpt[1]=='-' && zOpt[2]!=0 ) zOpt++; lenOpt = (int)strlen(zOpt); |
︙
27210 27211 27212 27213 27214 27215 27216 | } }else #endif #if SQLITE_USER_AUTHENTICATION if( c=='u' && cli_strncmp(azArg[0], "user", n)==0 ){ if( nArg<2 ){ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 29259 29260 29261 29262 29263 29264 29265 29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280 29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295 29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310 29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325 29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340 29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355 29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370 29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385 29386 29387 29388 29389 29390 29391 | } }else #endif #if SQLITE_USER_AUTHENTICATION if( c=='u' && cli_strncmp(azArg[0], "user", n)==0 ){ if( nArg<2 ){ eputz("Usage: .user SUBCOMMAND ...\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); if( cli_strcmp(azArg[1],"login")==0 ){ if( nArg!=4 ){ eputz("Usage: .user login USER PASSWORD\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_authenticate(p->db, azArg[2], azArg[3], strlen30(azArg[3])); if( rc ){ eputf("Authentication failed for user %s\n", azArg[2]); rc = 1; } }else if( cli_strcmp(azArg[1],"add")==0 ){ if( nArg!=5 ){ eputz("Usage: .user add USER PASSWORD ISADMIN\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_add(p->db, azArg[2], azArg[3], strlen30(azArg[3]), booleanValue(azArg[4])); if( rc ){ eputf("User-Add failed: %d\n", rc); rc = 1; } }else if( cli_strcmp(azArg[1],"edit")==0 ){ if( nArg!=5 ){ eputz("Usage: .user edit USER PASSWORD ISADMIN\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_change(p->db, azArg[2], azArg[3], 
strlen30(azArg[3]), booleanValue(azArg[4])); if( rc ){ eputf("User-Edit failed: %d\n", rc); rc = 1; } }else if( cli_strcmp(azArg[1],"delete")==0 ){ if( nArg!=3 ){ eputz("Usage: .user delete USER\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_delete(p->db, azArg[2]); if( rc ){ eputf("User-Delete failed: %d\n", rc); rc = 1; } }else{ eputz("Usage: .user login|add|edit|delete ...\n"); rc = 1; goto meta_command_exit; } }else #endif /* SQLITE_USER_AUTHENTICATION */ if( c=='v' && cli_strncmp(azArg[0], "version", n)==0 ){ char *zPtrSz = sizeof(void*)==8 ? "64-bit" : "32-bit"; oputf("SQLite %s %s\n" /*extra-version-info*/, sqlite3_libversion(), sqlite3_sourceid()); #if SQLITE_HAVE_ZLIB oputf("zlib version %s\n", zlibVersion()); #endif #define CTIMEOPT_VAL_(opt) #opt #define CTIMEOPT_VAL(opt) CTIMEOPT_VAL_(opt) #if defined(__clang__) && defined(__clang_major__) oputf("clang-" CTIMEOPT_VAL(__clang_major__) "." CTIMEOPT_VAL(__clang_minor__) "." CTIMEOPT_VAL(__clang_patchlevel__) " (%s)\n", zPtrSz); #elif defined(_MSC_VER) oputf("msvc-" CTIMEOPT_VAL(_MSC_VER) " (%s)\n", zPtrSz); #elif defined(__GNUC__) && defined(__VERSION__) oputf("gcc-" __VERSION__ " (%s)\n", zPtrSz); #endif }else if( c=='v' && cli_strncmp(azArg[0], "vfsinfo", n)==0 ){ const char *zDbName = nArg==2 ? azArg[1] : "main"; sqlite3_vfs *pVfs = 0; if( p->db ){ sqlite3_file_control(p->db, zDbName, SQLITE_FCNTL_VFS_POINTER, &pVfs); if( pVfs ){ oputf("vfs.zName = \"%s\"\n", pVfs->zName); oputf("vfs.iVersion = %d\n", pVfs->iVersion); oputf("vfs.szOsFile = %d\n", pVfs->szOsFile); oputf("vfs.mxPathname = %d\n", pVfs->mxPathname); } } }else if( c=='v' && cli_strncmp(azArg[0], "vfslist", n)==0 ){ sqlite3_vfs *pVfs; sqlite3_vfs *pCurrent = 0; if( p->db ){ sqlite3_file_control(p->db, "main", SQLITE_FCNTL_VFS_POINTER, &pCurrent); } for(pVfs=sqlite3_vfs_find(0); pVfs; pVfs=pVfs->pNext){ oputf("vfs.zName = \"%s\"%s\n", pVfs->zName, pVfs==pCurrent ? 
" <--- CURRENT" : ""); oputf("vfs.iVersion = %d\n", pVfs->iVersion); oputf("vfs.szOsFile = %d\n", pVfs->szOsFile); oputf("vfs.mxPathname = %d\n", pVfs->mxPathname); if( pVfs->pNext ){ oputz("-----------------------------------\n"); } } }else if( c=='v' && cli_strncmp(azArg[0], "vfsname", n)==0 ){ const char *zDbName = nArg==2 ? azArg[1] : "main"; char *zVfsName = 0; if( p->db ){ sqlite3_file_control(p->db, zDbName, SQLITE_FCNTL_VFSNAME, &zVfsName); if( zVfsName ){ oputf("%s\n", zVfsName); sqlite3_free(zVfsName); } } }else if( c=='w' && cli_strncmp(azArg[0], "wheretrace", n)==0 ){ unsigned int x = nArg>=2? (unsigned int)integerValue(azArg[1]) : 0xffffffff; |
︙
27352 27353 27354 27355 27356 27357 27358 | if( p->nWidth ) p->actualWidth = &p->colWidth[p->nWidth]; for(j=1; j<nArg; j++){ p->colWidth[j-1] = (int)integerValue(azArg[j]); } }else { | | | | 29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415 29416 | if( p->nWidth ) p->actualWidth = &p->colWidth[p->nWidth]; for(j=1; j<nArg; j++){ p->colWidth[j-1] = (int)integerValue(azArg[j]); } }else { eputf("Error: unknown command or invalid arguments: " " \"%s\". Enter \".help\" for help\n", azArg[0]); rc = 1; } meta_command_exit: if( p->outCount ){ p->outCount--; if( p->outCount==0 ) output_reset(p); |
︙
27506 27507 27508 27509 27510 27511 27512 27513 27514 27515 27516 27517 27518 27519 | if( zSql==0 ) return 1; zSql[nSql] = ';'; zSql[nSql+1] = 0; rc = sqlite3_complete(zSql); zSql[nSql] = 0; return rc; } /* ** Run a single line of SQL. Return the number of errors. */ static int runOneSqlLine(ShellState *p, char *zSql, FILE *in, int startline){ int rc; char *zErrMsg = 0; | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 29555 29556 29557 29558 29559 29560 29561 29562 29563 29564 29565 29566 29567 29568 29569 29570 29571 29572 29573 29574 29575 29576 29577 29578 29579 29580 29581 29582 29583 29584 29585 29586 29587 29588 29589 29590 29591 29592 29593 29594 29595 29596 29597 29598 29599 29600 29601 29602 29603 29604 29605 29606 29607 29608 29609 29610 29611 29612 29613 29614 29615 29616 29617 29618 29619 29620 29621 29622 29623 29624 29625 29626 29627 29628 29629 29630 29631 29632 29633 29634 29635 29636 29637 29638 29639 29640 29641 29642 29643 29644 29645 29646 29647 29648 29649 29650 | if( zSql==0 ) return 1; zSql[nSql] = ';'; zSql[nSql+1] = 0; rc = sqlite3_complete(zSql); zSql[nSql] = 0; return rc; } /* ** This function is called after processing each line of SQL in the ** runOneSqlLine() function. Its purpose is to detect scenarios where ** defensive mode should be automatically turned off. Specifically, when ** ** 1. The first line of input is "PRAGMA foreign_keys=OFF;", ** 2. The second line of input is "BEGIN TRANSACTION;", ** 3. The database is empty, and ** 4. The shell is not running in --safe mode. ** ** The implementation uses the ShellState.eRestoreState to maintain state: ** ** 0: Have not seen any SQL. ** 1: Have seen "PRAGMA foreign_keys=OFF;". ** 2-6: Currently running .dump transaction. If the "2" bit is set, ** disable DEFENSIVE when done. If "4" is set, disable DQS_DDL. ** 7: Nothing left to do. This function becomes a no-op. 
*/ static int doAutoDetectRestore(ShellState *p, const char *zSql){ int rc = SQLITE_OK; if( p->eRestoreState<7 ){ switch( p->eRestoreState ){ case 0: { const char *zExpect = "PRAGMA foreign_keys=OFF;"; assert( strlen(zExpect)==24 ); if( p->bSafeMode==0 && memcmp(zSql, zExpect, 25)==0 ){ p->eRestoreState = 1; }else{ p->eRestoreState = 7; } break; }; case 1: { int bIsDump = 0; const char *zExpect = "BEGIN TRANSACTION;"; assert( strlen(zExpect)==18 ); if( memcmp(zSql, zExpect, 19)==0 ){ /* Now check if the database is empty. */ const char *zQuery = "SELECT 1 FROM sqlite_schema LIMIT 1"; sqlite3_stmt *pStmt = 0; bIsDump = 1; shellPrepare(p->db, &rc, zQuery, &pStmt); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ bIsDump = 0; } shellFinalize(&rc, pStmt); } if( bIsDump && rc==SQLITE_OK ){ int bDefense = 0; int bDqsDdl = 0; sqlite3_db_config(p->db, SQLITE_DBCONFIG_DEFENSIVE, -1, &bDefense); sqlite3_db_config(p->db, SQLITE_DBCONFIG_DQS_DDL, -1, &bDqsDdl); sqlite3_db_config(p->db, SQLITE_DBCONFIG_DEFENSIVE, 0, 0); sqlite3_db_config(p->db, SQLITE_DBCONFIG_DQS_DDL, 1, 0); p->eRestoreState = (bDefense ? 2 : 0) + (bDqsDdl ? 4 : 0); }else{ p->eRestoreState = 7; } break; } default: { if( sqlite3_get_autocommit(p->db) ){ if( (p->eRestoreState & 2) ){ sqlite3_db_config(p->db, SQLITE_DBCONFIG_DEFENSIVE, 1, 0); } if( (p->eRestoreState & 4) ){ sqlite3_db_config(p->db, SQLITE_DBCONFIG_DQS_DDL, 0, 0); } p->eRestoreState = 7; } break; } } } return rc; } /* ** Run a single line of SQL. Return the number of errors. */ static int runOneSqlLine(ShellState *p, char *zSql, FILE *in, int startline){ int rc; char *zErrMsg = 0; |
︙
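The comment above doAutoDetectRestore() describes a small state machine over eRestoreState: 0 before any SQL, 1 after the .dump preamble "PRAGMA foreign_keys=OFF;", 2-6 while the restore transaction runs (bits recording which settings to re-enable), and 7 as the terminal no-op state. A reduced sketch of just the transition function, assuming, purely for illustration, that both the DEFENSIVE and DQS_DDL bits need restoring on detection (the real code reads the current settings via sqlite3_db_config() and waits for autocommit before restoring them):

```c
#include <string.h>
#include <assert.h>

/* Transition the .dump auto-detect state.  bDbEmpty says whether the
** database currently has no schema (condition 3 in the comment above). */
static int nextRestoreState(int eState, const char *zSql, int bDbEmpty){
  if( eState>=7 ) return 7;                      /* terminal: no-op */
  if( eState==0 ){
    /* Condition 1: the exact .dump preamble, nothing else */
    return strcmp(zSql, "PRAGMA foreign_keys=OFF;")==0 ? 1 : 7;
  }
  if( eState==1 ){
    /* Conditions 2 and 3: BEGIN TRANSACTION against an empty database */
    if( strcmp(zSql, "BEGIN TRANSACTION;")==0 && bDbEmpty ){
      return 2|4;  /* assume both DEFENSIVE and DQS_DDL were enabled */
    }
    return 7;
  }
  /* States 2..6: the real code stays here until autocommit returns,
  ** then restores the flagged settings and moves to 7. */
  return eState;
}
```

The exact-string comparisons mirror the memcmp(zSql, zExpect, strlen(zExpect)+1) calls above, which deliberately include the terminating NUL so that only the whole line matches.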
27543 27544 27545 27546 27547 27548 27549 | } if( in!=0 || !stdin_is_interactive ){ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s near line %d:", zErrorType, startline); }else{ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s:", zErrorType); } | | | > > | | 29674 29675 29676 29677 29678 29679 29680 29681 29682 29683 29684 29685 29686 29687 29688 29689 29690 29691 29692 29693 29694 29695 29696 29697 29698 29699 29700 29701 29702 29703 29704 29705 | } if( in!=0 || !stdin_is_interactive ){ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s near line %d:", zErrorType, startline); }else{ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s:", zErrorType); } eputf("%s %s\n", zPrefix, zErrorTail); sqlite3_free(zErrMsg); zErrMsg = 0; return 1; }else if( ShellHasFlag(p, SHFLG_CountChanges) ){ char zLineBuf[2000]; sqlite3_snprintf(sizeof(zLineBuf), zLineBuf, "changes: %lld total_changes: %lld", sqlite3_changes64(p->db), sqlite3_total_changes64(p->db)); oputf("%s\n", zLineBuf); } if( doAutoDetectRestore(p, zSql) ) return 1; return 0; } static void echo_group_input(ShellState *p, const char *zDo){ if( ShellHasFlag(p, SHFLG_Echo) ) oputf("%s\n", zDo); } #ifdef SQLITE_SHELL_FIDDLE /* ** Alternate one_input_line() impl for wasm mode. This is not in the primary ** impl because we need the global shellState and cannot access it from that ** function without moving lots of code around (creating a larger/messier diff). |
︙
27616 27617 27618 27619 27620 27621 27622 | int rc; /* Error code */ int errCnt = 0; /* Number of errors seen */ i64 startline = 0; /* Line number for start of current input */ QuickScanState qss = QSS_Start; /* Accumulated line status (so far) */ if( p->inputNesting==MAX_INPUT_NESTING ){ /* This will be more informative in a later version. */ | | | | | 29749 29750 29751 29752 29753 29754 29755 29756 29757 29758 29759 29760 29761 29762 29763 29764 29765 29766 29767 29768 29769 29770 29771 29772 29773 29774 29775 | int rc; /* Error code */ int errCnt = 0; /* Number of errors seen */ i64 startline = 0; /* Line number for start of current input */ QuickScanState qss = QSS_Start; /* Accumulated line status (so far) */ if( p->inputNesting==MAX_INPUT_NESTING ){ /* This will be more informative in a later version. */ eputf("Input nesting limit (%d) reached at line %d." " Check recursion.\n", MAX_INPUT_NESTING, p->lineno); return 1; } ++p->inputNesting; p->lineno = 0; CONTINUE_PROMPT_RESET; while( errCnt==0 || !bail_on_error || (p->in==0 && stdin_is_interactive) ){ fflush(p->out); zLine = one_input_line(p->in, zLine, nSql>0); if( zLine==0 ){ /* End of input */ if( p->in==0 && stdin_is_interactive ) oputz("\n"); break; } if( seenInterrupt ){ if( p->in!=0 ) break; seenInterrupt = 0; } p->lineno++; |
︙
27838 27839 27840 27841 27842 27843 27844 | if( sqliterc == NULL ){ sqliterc = find_xdg_config(); } if( sqliterc == NULL ){ home_dir = find_home_dir(0); if( home_dir==0 ){ | | | | | | 29971 29972 29973 29974 29975 29976 29977 29978 29979 29980 29981 29982 29983 29984 29985 29986 29987 29988 29989 29990 29991 29992 29993 29994 29995 29996 29997 29998 29999 30000 30001 | if( sqliterc == NULL ){ sqliterc = find_xdg_config(); } if( sqliterc == NULL ){ home_dir = find_home_dir(0); if( home_dir==0 ){ eputz("-- warning: cannot find home directory;" " cannot read ~/.sqliterc\n"); return; } zBuf = sqlite3_mprintf("%s/.sqliterc",home_dir); shell_check_oom(zBuf); sqliterc = zBuf; } p->in = fopen(sqliterc,"rb"); if( p->in ){ if( stdin_is_interactive ){ eputf("-- Loading resources from %s\n", sqliterc); } if( process_input(p) && bail_on_error ) exit(1); fclose(p->in); }else if( sqliterc_override!=0 ){ eputf("cannot open: \"%s\"\n", sqliterc); if( bail_on_error ) exit(1); } p->in = inSaved; p->lineno = savedLineno; sqlite3_free(zBuf); } |
︙ | ︙ | |||
27904 27905 27906 27907 27908 27909 27910 | #endif " -memtrace trace all memory allocations and deallocations\n" " -mmap N default mmap size set to N\n" #ifdef SQLITE_ENABLE_MULTIPLEX " -multiplex enable the multiplexor VFS\n" #endif " -newline SEP set output row separator. Default: '\\n'\n" | < < < > < < < < | | | | | | | | | 30037 30038 30039 30040 30041 30042 30043 30044 30045 30046 30047 30048 30049 30050 30051 30052 30053 30054 30055 30056 30057 30058 30059 30060 30061 30062 30063 30064 30065 30066 30067 30068 30069 30070 30071 30072 30073 30074 30075 30076 30077 30078 30079 30080 30081 30082 30083 30084 30085 30086 30087 30088 30089 30090 30091 30092 30093 30094 30095 30096 | #endif " -memtrace trace all memory allocations and deallocations\n" " -mmap N default mmap size set to N\n" #ifdef SQLITE_ENABLE_MULTIPLEX " -multiplex enable the multiplexor VFS\n" #endif " -newline SEP set output row separator. Default: '\\n'\n" " -nofollow refuse to open symbolic links to database files\n" " -nonce STRING set the safe-mode escape nonce\n" " -no-rowid-in-view Disable rowid-in-view using sqlite3_config()\n" " -nullvalue TEXT set text string for NULL values. Default ''\n" " -pagecache SIZE N use N slots of SZ bytes each for page cache memory\n" " -pcachetrace trace all page cache operations\n" " -quote set output mode to 'quote'\n" " -readonly open the database read-only\n" " -safe enable safe-mode\n" " -separator SEP set output column separator. Default: '|'\n" #ifdef SQLITE_ENABLE_SORTER_REFERENCES " -sorterref SIZE sorter references threshold size\n" #endif " -stats print memory stats before each finalize\n" " -table set output mode to 'table'\n" " -tabs set output mode to 'tabs'\n" " -unsafe-testing allow unsafe commands and modes for testing\n" " -version show SQLite version\n" " -vfs NAME use NAME as the default VFS\n" #ifdef SQLITE_ENABLE_VFSTRACE " -vfstrace enable tracing of all VFS calls\n" #endif #ifdef SQLITE_HAVE_ZLIB " -zip open the file as a ZIP Archive\n" #endif ; static void usage(int showDetail){ eputf("Usage: %s [OPTIONS] [FILENAME [SQL]]\n" "FILENAME is the name of an SQLite database. A new database is created\n" "if the file does not previously exist. Defaults to :memory:.\n", Argv0); if( showDetail ){ eputf("OPTIONS include:\n%s", zOptions); }else{ eputz("Use the -help option for additional information\n"); } exit(0); } /* ** Internal check: Verify that the SQLite is uninitialized. Print an ** error message if it is initialized. */ static void verify_uninitialized(void){ if( sqlite3_config(-1)==SQLITE_MISUSE ){ sputz(stdout, "WARNING: attempt to configure SQLite after" " initialization.\n"); } } /* ** Initialize the state information in data */ static void main_init(ShellState *data) {
︙ | ︙ | |||
27984 27985 27986 27987 27988 27989 27990 | sqlite3_snprintf(sizeof(mainPrompt), mainPrompt,"sqlite> "); sqlite3_snprintf(sizeof(continuePrompt), continuePrompt," ...> "); } /* ** Output text to the console in a font that attracts extra attention. */ | | | | | < | | 30111 30112 30113 30114 30115 30116 30117 30118 30119 30120 30121 30122 30123 30124 30125 30126 30127 30128 30129 30130 30131 30132 30133 30134 30135 30136 30137 30138 30139 30140 30141 30142 30143 30144 30145 30146 30147 30148 30149 30150 30151 30152 30153 30154 30155 30156 30157 30158 30159 | sqlite3_snprintf(sizeof(mainPrompt), mainPrompt,"sqlite> "); sqlite3_snprintf(sizeof(continuePrompt), continuePrompt," ...> "); } /* ** Output text to the console in a font that attracts extra attention. */ #if defined(_WIN32) || defined(WIN32) static void printBold(const char *zText){ #if !SQLITE_OS_WINRT HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE); CONSOLE_SCREEN_BUFFER_INFO defaultScreenInfo; GetConsoleScreenBufferInfo(out, &defaultScreenInfo); SetConsoleTextAttribute(out, FOREGROUND_RED|FOREGROUND_INTENSITY ); #endif sputz(stdout, zText); #if !SQLITE_OS_WINRT SetConsoleTextAttribute(out, defaultScreenInfo.wAttributes); #endif } #else static void printBold(const char *zText){ sputf(stdout, "\033[1m%s\033[0m", zText); } #endif /* ** Get the argument to an --option. Throw an error and die if no argument ** is available. */ static char *cmdline_option_value(int argc, char **argv, int i){ if( i==argc ){ eputf("%s: Error: missing argument to %s\n", argv[0], argv[argc-1]); exit(1); } return argv[i]; } static void sayAbnormalExit(void){ if( seenInterrupt ) eputz("Program interrupted.\n"); } #ifndef SQLITE_SHELL_IS_UTF8 # if (defined(_WIN32) || defined(WIN32)) \ && (defined(_MSC_VER) || (defined(UNICODE) && defined(__GNUC__))) # define SQLITE_SHELL_IS_UTF8 (0) # else |
︙ | ︙ | |||
28049 28050 28051 28052 28053 28054 28055 28056 28057 28058 28059 28060 28061 28062 | sqlite3_int64 mem_main_enter = 0; #endif char *zErrMsg = 0; #ifdef SQLITE_SHELL_FIDDLE # define data shellState #else ShellState data; #endif const char *zInitFile = 0; int i; int rc = 0; int warnInmemoryDb = 0; int readStdin = 1; int nCmd = 0; | > | 30175 30176 30177 30178 30179 30180 30181 30182 30183 30184 30185 30186 30187 30188 30189 | sqlite3_int64 mem_main_enter = 0; #endif char *zErrMsg = 0; #ifdef SQLITE_SHELL_FIDDLE # define data shellState #else ShellState data; StreamsAreConsole consStreams = SAC_NoConsole; #endif const char *zInitFile = 0; int i; int rc = 0; int warnInmemoryDb = 0; int readStdin = 1; int nCmd = 0; |
︙ | ︙ | |||
28070 28071 28072 28073 28074 28075 28076 | setvbuf(stderr, 0, _IONBF, 0); /* Make sure stderr is unbuffered */ #ifdef SQLITE_SHELL_FIDDLE stdin_is_interactive = 0; stdout_is_console = 1; data.wasm.zDefaultDbName = "/fiddle.sqlite3"; #else | > | | < < < | < | | | | | | 30197 30198 30199 30200 30201 30202 30203 30204 30205 30206 30207 30208 30209 30210 30211 30212 30213 30214 30215 30216 30217 30218 30219 30220 30221 30222 30223 30224 30225 30226 30227 30228 30229 30230 30231 30232 30233 30234 30235 30236 30237 30238 30239 30240 30241 30242 30243 30244 30245 30246 30247 30248 30249 30250 30251 | setvbuf(stderr, 0, _IONBF, 0); /* Make sure stderr is unbuffered */ #ifdef SQLITE_SHELL_FIDDLE stdin_is_interactive = 0; stdout_is_console = 1; data.wasm.zDefaultDbName = "/fiddle.sqlite3"; #else consStreams = consoleClassifySetup(stdin, stdout, stderr); stdin_is_interactive = (consStreams & SAC_InConsole)!=0; stdout_is_console = (consStreams & SAC_OutConsole)!=0; atexit(consoleRestore); #endif atexit(sayAbnormalExit); #ifdef SQLITE_DEBUG mem_main_enter = sqlite3_memory_used(); #endif #if !defined(_WIN32_WCE) if( getenv("SQLITE_DEBUG_BREAK") ){ if( isatty(0) && isatty(2) ){ eputf("attach debugger to process %d and press any key to continue.\n", GETPID()); fgetc(stdin); }else{ #if defined(_WIN32) || defined(WIN32) #if SQLITE_OS_WINRT __debugbreak(); #else DebugBreak(); #endif #elif defined(SIGTRAP) raise(SIGTRAP); #endif } } #endif /* Register a valid signal handler early, before much else is done. */ #ifdef SIGINT signal(SIGINT, interrupt_handler); #elif (defined(_WIN32) || defined(WIN32)) && !defined(_WIN32_WCE) if( !SetConsoleCtrlHandler(ConsoleCtrlHandler, TRUE) ){ eputz("No ^C handler.\n"); } #endif #if USE_SYSTEM_SQLITE+0!=1 if( cli_strncmp(sqlite3_sourceid(),SQLITE_SOURCE_ID,60)!=0 ){ eputf("SQLite header and source version mismatch\n%s\n%s\n", sqlite3_sourceid(), SQLITE_SOURCE_ID); exit(1); } #endif main_init(&data); /* On Windows, we must translate command-line arguments into UTF-8. ** The SQLite memory allocator subsystem has to be enabled in order to
︙ | ︙ | |||
28198 28199 28200 28201 28202 28203 28204 | || cli_strcmp(z,"-newline")==0 || cli_strcmp(z,"-cmd")==0 ){ (void)cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-init")==0 ){ zInitFile = cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-interactive")==0 ){ | < < < < < < < < < > | < > > | 30322 30323 30324 30325 30326 30327 30328 30329 30330 30331 30332 30333 30334 30335 30336 30337 30338 30339 30340 30341 30342 30343 30344 30345 30346 30347 | || cli_strcmp(z,"-newline")==0 || cli_strcmp(z,"-cmd")==0 ){ (void)cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-init")==0 ){ zInitFile = cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-interactive")==0 ){ }else if( cli_strcmp(z,"-batch")==0 ){ /* Need to check for batch mode here so that we can avoid printing ** informational messages (like from process_sqliterc) before ** we do the actual processing of arguments later in a second pass. */ stdin_is_interactive = 0; }else if( cli_strcmp(z,"-utf8")==0 ){ }else if( cli_strcmp(z,"-no-utf8")==0 ){ }else if( cli_strcmp(z,"-no-rowid-in-view")==0 ){ int val = 0; sqlite3_config(SQLITE_CONFIG_ROWID_IN_VIEW, &val); assert( val==0 ); }else if( cli_strcmp(z,"-heap")==0 ){ #if defined(SQLITE_ENABLE_MEMSYS3) || defined(SQLITE_ENABLE_MEMSYS5) const char *zSize; sqlite3_int64 szHeap; zSize = cmdline_option_value(argc, argv, ++i); szHeap = integerValue(zSize);
︙ | ︙ | |||
28351 28352 28353 28354 28355 28356 28357 | #endif if( zVfs ){ sqlite3_vfs *pVfs = sqlite3_vfs_find(zVfs); if( pVfs ){ sqlite3_vfs_register(pVfs, 1); }else{ | | < < < < < < < < < | | 30468 30469 30470 30471 30472 30473 30474 30475 30476 30477 30478 30479 30480 30481 30482 30483 30484 30485 30486 30487 30488 30489 30490 30491 30492 | #endif if( zVfs ){ sqlite3_vfs *pVfs = sqlite3_vfs_find(zVfs); if( pVfs ){ sqlite3_vfs_register(pVfs, 1); }else{ eputf("no such VFS: \"%s\"\n", zVfs); exit(1); } } if( data.pAuxDb->zDbFilename==0 ){ #ifndef SQLITE_OMIT_MEMORYDB data.pAuxDb->zDbFilename = ":memory:"; warnInmemoryDb = argc==1; #else eputf("%s: Error: no database filename specified\n", Argv0); return 1; #endif } data.out = stdout; #ifndef SQLITE_SHELL_FIDDLE sqlite3_appendvfs_init(0,0,0); #endif |
︙ | ︙ | |||
28487 28488 28489 28490 28491 28492 28493 | ** prior to sending the SQL into SQLite. Useful for injecting ** crazy bytes in the middle of SQL statements for testing and debugging. */ ShellSetFlag(&data, SHFLG_Backslash); }else if( cli_strcmp(z,"-bail")==0 ){ /* No-op. The bail_on_error flag should already be set. */ }else if( cli_strcmp(z,"-version")==0 ){ | | | > > > > > | 30595 30596 30597 30598 30599 30600 30601 30602 30603 30604 30605 30606 30607 30608 30609 30610 30611 30612 30613 30614 30615 30616 30617 30618 30619 30620 30621 30622 30623 | ** prior to sending the SQL into SQLite. Useful for injecting ** crazy bytes in the middle of SQL statements for testing and debugging. */ ShellSetFlag(&data, SHFLG_Backslash); }else if( cli_strcmp(z,"-bail")==0 ){ /* No-op. The bail_on_error flag should already be set. */ }else if( cli_strcmp(z,"-version")==0 ){ sputf(stdout, "%s %s (%d-bit)\n", sqlite3_libversion(), sqlite3_sourceid(), 8*(int)sizeof(char*)); return 0; }else if( cli_strcmp(z,"-interactive")==0 ){ /* Need to check for interactive override here so that it can ** affect console setup (for Windows only) and testing thereof. */ stdin_is_interactive = 1; }else if( cli_strcmp(z,"-batch")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-utf8")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-no-utf8")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-no-rowid-in-view")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-heap")==0 ){ i++; }else if( cli_strcmp(z,"-pagecache")==0 ){ i+=2; }else if( cli_strcmp(z,"-lookaside")==0 ){ i+=2;
︙ | ︙ | |||
28544 28545 28546 28547 28548 28549 28550 | if( z[0]=='.' ){ rc = do_meta_command(z, &data); if( rc && bail_on_error ) return rc==2 ? 0 : rc; }else{ open_db(&data, 0); rc = shell_exec(&data, z, &zErrMsg); if( zErrMsg!=0 ){ | | | | | | | | 30657 30658 30659 30660 30661 30662 30663 30664 30665 30666 30667 30668 30669 30670 30671 30672 30673 30674 30675 30676 30677 30678 30679 30680 30681 30682 30683 30684 30685 30686 30687 30688 30689 30690 30691 30692 30693 30694 30695 30696 30697 30698 30699 30700 30701 | if( z[0]=='.' ){ rc = do_meta_command(z, &data); if( rc && bail_on_error ) return rc==2 ? 0 : rc; }else{ open_db(&data, 0); rc = shell_exec(&data, z, &zErrMsg); if( zErrMsg!=0 ){ eputf("Error: %s\n", zErrMsg); if( bail_on_error ) return rc!=0 ? rc : 1; }else if( rc!=0 ){ eputf("Error: unable to process SQL \"%s\"\n", z); if( bail_on_error ) return rc; } } #if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_HAVE_ZLIB) }else if( cli_strncmp(z, "-A", 2)==0 ){ if( nCmd>0 ){ eputf("Error: cannot mix regular SQL or dot-commands" " with \"%s\"\n", z); return 1; } open_db(&data, OPEN_DB_ZIPFILE); if( z[2] ){ argv[i] = &z[2]; arDotCommand(&data, 1, argv+(i-1), argc-(i-1)); }else{ arDotCommand(&data, 1, argv+i, argc-i); } readStdin = 0; break; #endif }else if( cli_strcmp(z,"-safe")==0 ){ data.bSafeMode = data.bSafeModePersist = 1; }else if( cli_strcmp(z,"-unsafe-testing")==0 ){ /* Acted upon in first pass. */ }else{ eputf("%s: Error: unknown option: %s\n", Argv0, z); eputz("Use -help for a list of options.\n"); return 1; } data.cMode = data.mode; } if( !readStdin ){ /* Run all arguments that do not begin with '-' as if they were separate |
︙ | ︙ | |||
28598 28599 28600 28601 28602 28603 28604 | } }else{ open_db(&data, 0); echo_group_input(&data, azCmd[i]); rc = shell_exec(&data, azCmd[i], &zErrMsg); if( zErrMsg || rc ){ if( zErrMsg!=0 ){ | | | < | | > | < < < < < | | | < | | | | 30711 30712 30713 30714 30715 30716 30717 30718 30719 30720 30721 30722 30723 30724 30725 30726 30727 30728 30729 30730 30731 30732 30733 30734 30735 30736 30737 30738 30739 30740 30741 30742 30743 30744 30745 30746 30747 30748 30749 30750 30751 30752 30753 30754 | } }else{ open_db(&data, 0); echo_group_input(&data, azCmd[i]); rc = shell_exec(&data, azCmd[i], &zErrMsg); if( zErrMsg || rc ){ if( zErrMsg!=0 ){ eputf("Error: %s\n", zErrMsg); }else{ eputf("Error: unable to process SQL: %s\n", azCmd[i]); } sqlite3_free(zErrMsg); free(azCmd); return rc!=0 ? rc : 1; } } } }else{ /* Run commands received from standard input */ if( stdin_is_interactive ){ char *zHome; char *zHistory; int nHistory; #if CIO_WIN_WC_XLATE # define SHELL_CIO_CHAR_SET (stdout_is_console? " (UTF-16 console I/O)" : "") #else # define SHELL_CIO_CHAR_SET "" #endif sputf(stdout, "SQLite version %s %.19s%s\n" /*extra-version-info*/ "Enter \".help\" for usage hints.\n", sqlite3_libversion(), sqlite3_sourceid(), SHELL_CIO_CHAR_SET); if( warnInmemoryDb ){ sputz(stdout, "Connected to a "); printBold("transient in-memory database"); sputz(stdout, ".\nUse \".open FILENAME\" to reopen on a" " persistent database.\n"); } zHistory = getenv("SQLITE_HISTORY"); if( zHistory ){ zHistory = strdup(zHistory); }else if( (zHome = find_home_dir(0))!=0 ){ nHistory = strlen30(zHome) + 20; if( (zHistory = malloc(nHistory))!=0 ){ |
︙ | ︙ | |||
28665 28666 28667 28668 28669 28670 28671 28672 28673 28674 28675 28676 28677 28678 | data.in = stdin; rc = process_input(&data); } } #ifndef SQLITE_SHELL_FIDDLE /* In WASM mode we have to leave the db state in place so that ** client code can "push" SQL into it after this call returns. */ free(azCmd); set_table_name(&data, 0); if( data.db ){ session_close_all(&data, -1); close_db(data.db); } for(i=0; i<ArraySize(data.aAuxDb); i++){ | > > > > > | 30772 30773 30774 30775 30776 30777 30778 30779 30780 30781 30782 30783 30784 30785 30786 30787 30788 30789 30790 | data.in = stdin; rc = process_input(&data); } } #ifndef SQLITE_SHELL_FIDDLE /* In WASM mode we have to leave the db state in place so that ** client code can "push" SQL into it after this call returns. */ #ifndef SQLITE_OMIT_VIRTUALTABLE if( data.expert.pExpert ){ expertFinish(&data, 1, 0); } #endif free(azCmd); set_table_name(&data, 0); if( data.db ){ session_close_all(&data, -1); close_db(data.db); } for(i=0; i<ArraySize(data.aAuxDb); i++){ |
︙ | ︙ | |||
28693 28694 28695 28696 28697 28698 28699 | free(data.colWidth); free(data.zNonce); /* Clear the global data structure so that valgrind will detect memory ** leaks */ memset(&data, 0, sizeof(data)); #ifdef SQLITE_DEBUG if( sqlite3_memory_used()>mem_main_enter ){ | | | | 30805 30806 30807 30808 30809 30810 30811 30812 30813 30814 30815 30816 30817 30818 30819 30820 | free(data.colWidth); free(data.zNonce); /* Clear the global data structure so that valgrind will detect memory ** leaks */ memset(&data, 0, sizeof(data)); #ifdef SQLITE_DEBUG if( sqlite3_memory_used()>mem_main_enter ){ eputf("Memory leaked: %u bytes\n", (unsigned int)(sqlite3_memory_used()-mem_main_enter)); } #endif #endif /* !SQLITE_SHELL_FIDDLE */ return rc; } |
︙ | ︙ | |||
28731 28732 28733 28734 28735 28736 28737 | SQLITE_FCNTL_VFS_POINTER, &pVfs); } return pVfs; } /* Only for emcc experimentation purposes. */ sqlite3 * fiddle_db_arg(sqlite3 *arg){ | | | 30843 30844 30845 30846 30847 30848 30849 30850 30851 30852 30853 30854 30855 30856 30857 | SQLITE_FCNTL_VFS_POINTER, &pVfs); } return pVfs; } /* Only for emcc experimentation purposes. */ sqlite3 * fiddle_db_arg(sqlite3 *arg){ oputf("fiddle_db_arg(%p)\n", (const void*)arg); return arg; } /* ** Intended to be called via a SharedWorker() while a separate ** SharedWorker() (which manages the wasm module) is performing work ** which should be interrupted. Unfortunately, SharedWorker is not |
︙ | ︙ | |||
28757 28758 28759 28760 28761 28762 28763 | return globalDb ? sqlite3_db_filename(globalDb, zDbName ? zDbName : "main") : NULL; } /* ** Completely wipes out the contents of the currently-opened database | | > > > > > > > > > > | | | 30869 30870 30871 30872 30873 30874 30875 30876 30877 30878 30879 30880 30881 30882 30883 30884 30885 30886 30887 30888 30889 30890 30891 30892 30893 30894 30895 30896 30897 30898 | return globalDb ? sqlite3_db_filename(globalDb, zDbName ? zDbName : "main") : NULL; } /* ** Completely wipes out the contents of the currently-opened database ** but leaves its storage intact for reuse. If any transactions are ** active, they are forcibly rolled back. */ void fiddle_reset_db(void){ if( globalDb ){ int rc; while( sqlite3_txn_state(globalDb,0)>0 ){ /* ** Resolve problem reported in ** https://sqlite.org/forum/forumpost/0b41a25d65 */ oputz("Rolling back in-progress transaction.\n"); sqlite3_exec(globalDb,"ROLLBACK", 0, 0, 0); } rc = sqlite3_db_config(globalDb, SQLITE_DBCONFIG_RESET_DATABASE, 1, 0); if( 0==rc ) sqlite3_exec(globalDb, "VACUUM", 0, 0, 0); sqlite3_db_config(globalDb, SQLITE_DBCONFIG_RESET_DATABASE, 0, 0); } } /* ** Uses the current database's VFS xRead to stream the db file's ** contents out to the given callback. The callback gets a single |
︙ | ︙ |
Changes to extsrc/sqlite3.c.
more than 10,000 changes
Changes to extsrc/sqlite3.h.
︙ | ︙ | |||
142 143 144 145 146 147 148 | ** been edited in any way since it was last checked in, then the last ** four hexadecimal digits of the hash may be modified. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ | | | | | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | ** been edited in any way since it was last checked in, then the last ** four hexadecimal digits of the hash may be modified. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.46.0" #define SQLITE_VERSION_NUMBER 3046000 #define SQLITE_SOURCE_ID "2024-03-26 11:14:52 a49296de0061931badaf3db6b965131a78b1c6c21b1eeb62815ea7adf767d0b3" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
︙ | ︙ | |||
416 417 418 419 420 421 422 423 424 425 426 427 428 429 | ** <ul> ** <li> The application must ensure that the 1st parameter to sqlite3_exec() ** is a valid and open [database connection]. ** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. ** </ul> */ SQLITE_API int sqlite3_exec( sqlite3*, /* An open database */ const char *sql, /* SQL to be evaluated */ int (*callback)(void*,int,char**,char**), /* Callback function */ void *, /* 1st argument to callback */ | > > | 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 | ** <ul> ** <li> The application must ensure that the 1st parameter to sqlite3_exec() ** is a valid and open [database connection]. ** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not dereference the arrays or string pointers ** passed as the 3rd and 4th callback parameters after it returns. ** </ul> */ SQLITE_API int sqlite3_exec( sqlite3*, /* An open database */ const char *sql, /* SQL to be evaluated */ int (*callback)(void*,int,char**,char**), /* Callback function */ void *, /* 1st argument to callback */ |
︙ | ︙ | |||
758 759 760 761 762 763 764 | ** <li> [SQLITE_LOCK_SHARED], ** <li> [SQLITE_LOCK_RESERVED], ** <li> [SQLITE_LOCK_PENDING], or ** <li> [SQLITE_LOCK_EXCLUSIVE]. ** </ul> ** xLock() upgrades the database file lock. In other words, xLock() moves the ** database file lock in the direction NONE toward EXCLUSIVE. The argument to | | | | 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 | ** <li> [SQLITE_LOCK_SHARED], ** <li> [SQLITE_LOCK_RESERVED], ** <li> [SQLITE_LOCK_PENDING], or ** <li> [SQLITE_LOCK_EXCLUSIVE]. ** </ul> ** xLock() upgrades the database file lock. In other words, xLock() moves the ** database file lock in the direction NONE toward EXCLUSIVE. The argument to ** xLock() is always one of SHARED, RESERVED, PENDING, or EXCLUSIVE, never ** SQLITE_LOCK_NONE. If the database file lock is already at or above the ** requested lock, then the call to xLock() is a no-op. ** xUnlock() downgrades the database file lock to either SHARED or NONE. ** If the lock is already at or below the requested lock state, then the call ** to xUnlock() is a no-op. ** The xCheckReservedLock() method checks whether any database connection, ** either in this process or in some other process, is holding a RESERVED, ** PENDING, or EXCLUSIVE lock on the file. It returns true ** if such a lock exists and false otherwise. ** ** The xFileControl() method is a generic interface that allows custom |
︙ | ︙ | |||
2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 | ** [sqlite3_int64] parameter which is the default maximum size for an in-memory ** database created using [sqlite3_deserialize()]. This default maximum ** size can be adjusted up or down for individual databases using the ** [SQLITE_FCNTL_SIZE_LIMIT] [sqlite3_file_control|file-control]. If this ** configuration setting is never used, then the default maximum is determined ** by the [SQLITE_MEMDB_DEFAULT_MAXSIZE] compile-time option. If that ** compile-time option is not set, then the default maximum is 1073741824. ** </dl> */ #define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ #define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ #define SQLITE_CONFIG_SERIALIZED 3 /* nil */ #define SQLITE_CONFIG_MALLOC 4 /* sqlite3_mem_methods* */ #define SQLITE_CONFIG_GETMALLOC 5 /* sqlite3_mem_methods* */ | > > > > > > > > > > > > > > > > | 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 | ** [sqlite3_int64] parameter which is the default maximum size for an in-memory ** database created using [sqlite3_deserialize()]. This default maximum ** size can be adjusted up or down for individual databases using the ** [SQLITE_FCNTL_SIZE_LIMIT] [sqlite3_file_control|file-control]. If this ** configuration setting is never used, then the default maximum is determined ** by the [SQLITE_MEMDB_DEFAULT_MAXSIZE] compile-time option. If that ** compile-time option is not set, then the default maximum is 1073741824. ** ** [[SQLITE_CONFIG_ROWID_IN_VIEW]] ** <dt>SQLITE_CONFIG_ROWID_IN_VIEW ** <dd>The SQLITE_CONFIG_ROWID_IN_VIEW option enables or disables the ability ** for VIEWs to have a ROWID. The capability can only be enabled if SQLite is ** compiled with -DSQLITE_ALLOW_ROWID_IN_VIEW, in which case the capability ** defaults to on. This configuration option queries the current setting or ** changes the setting to off or on. The argument is a pointer to an integer. ** If that integer initially holds a value of 1, then the ability for VIEWs to ** have ROWIDs is activated. If the integer initially holds zero, then the ** ability is deactivated. Any other initial value for the integer leaves the ** setting unchanged. After changes, if any, the integer is written with ** a 1 or 0, if the ability for VIEWs to have ROWIDs is on or off. If SQLite ** is compiled without -DSQLITE_ALLOW_ROWID_IN_VIEW (which is the usual and ** recommended case) then the integer is always filled with zero, regardless ** of its initial value. ** </dl> */ #define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ #define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ #define SQLITE_CONFIG_SERIALIZED 3 /* nil */ #define SQLITE_CONFIG_MALLOC 4 /* sqlite3_mem_methods* */ #define SQLITE_CONFIG_GETMALLOC 5 /* sqlite3_mem_methods* */
︙ | ︙ | |||
2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 | #define SQLITE_CONFIG_WIN32_HEAPSIZE 23 /* int nByte */ #define SQLITE_CONFIG_PCACHE_HDRSZ 24 /* int *psz */ #define SQLITE_CONFIG_PMASZ 25 /* unsigned int szPma */ #define SQLITE_CONFIG_STMTJRNL_SPILL 26 /* int nByte */ #define SQLITE_CONFIG_SMALL_MALLOC 27 /* boolean */ #define SQLITE_CONFIG_SORTERREF_SIZE 28 /* int nByte */ #define SQLITE_CONFIG_MEMDB_MAXSIZE 29 /* sqlite3_int64 */ /* ** CAPI3REF: Database Connection Configuration Options ** ** These constants are the available integer configuration options that ** can be passed as the second argument to the [sqlite3_db_config()] interface. ** | > | 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 | #define SQLITE_CONFIG_WIN32_HEAPSIZE 23 /* int nByte */ #define SQLITE_CONFIG_PCACHE_HDRSZ 24 /* int *psz */ #define SQLITE_CONFIG_PMASZ 25 /* unsigned int szPma */ #define SQLITE_CONFIG_STMTJRNL_SPILL 26 /* int nByte */ #define SQLITE_CONFIG_SMALL_MALLOC 27 /* boolean */ #define SQLITE_CONFIG_SORTERREF_SIZE 28 /* int nByte */ #define SQLITE_CONFIG_MEMDB_MAXSIZE 29 /* sqlite3_int64 */ #define SQLITE_CONFIG_ROWID_IN_VIEW 30 /* int* */ /* ** CAPI3REF: Database Connection Configuration Options ** ** These constants are the available integer configuration options that ** can be passed as the second argument to the [sqlite3_db_config()] interface. ** |
︙ | ︙ | |||
3282 3283 3284 3285 3286 3287 3288 | #define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ #define SQLITE_FUNCTION 31 /* NULL Function Name */ #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ #define SQLITE_RECURSIVE 33 /* NULL NULL */ /* | | | | 3301 3302 3303 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 | #define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ #define SQLITE_FUNCTION 31 /* NULL Function Name */ #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ #define SQLITE_RECURSIVE 33 /* NULL NULL */ /* ** CAPI3REF: Deprecated Tracing And Profiling Functions ** DEPRECATED ** ** These routines are deprecated. Use the [sqlite3_trace_v2()] interface ** instead of the routines described here. ** ** These routines register callback functions that can be used for ** tracing and profiling the execution of SQL statements. ** |
︙ | ︙ | |||
3950 3951 3952 3953 3954 3955 3956 | ** <li> sqlite3_extended_errcode() ** <li> sqlite3_errmsg() ** <li> sqlite3_errmsg16() ** <li> sqlite3_error_offset() ** </ul> ** ** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language | | > | | > | 3969 3970 3971 3972 3973 3974 3975 3976 3977 3978 3979 3980 3981 3982 3983 3984 3985 3986 3987 3988 3989 3990 3991 3992 3993 | ** <li> sqlite3_extended_errcode() ** <li> sqlite3_errmsg() ** <li> sqlite3_errmsg16() ** <li> sqlite3_error_offset() ** </ul> ** ** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language ** text that describes the error, as either UTF-8 or UTF-16 respectively, ** or NULL if no error message is available. ** (See how SQLite handles [invalid UTF] for exceptions to this rule.) ** ^(Memory to hold the error message string is managed internally. ** The application does not need to worry about freeing the result. ** However, the error string might be overwritten or deallocated by ** subsequent calls to other SQLite interface functions.)^ ** ** ^The sqlite3_errstr(E) interface returns the English-language text ** that describes the [result code] E, as UTF-8, or NULL if E is not a ** result code for which a text error message is available. ** ^(Memory to hold the error message string is managed internally ** and must not be freed by the application)^. ** ** ^If the most recent error references a specific token in the input ** SQL, the sqlite3_error_offset() interface returns the byte offset ** of the start of that token. ^The byte offset returned by ** sqlite3_error_offset() assumes that the input SQL is UTF8.
︙ | ︙ | |||
5569 5570 5571 5572 5573 5574 5575 | ** are innocuous. Developers are advised to avoid using the ** SQLITE_INNOCUOUS flag for application-defined functions unless the ** function has been carefully audited and found to be free of potentially ** security-adverse side-effects and information-leaks. ** </dd> ** ** [[SQLITE_SUBTYPE]] <dt>SQLITE_SUBTYPE</dt><dd> | | | | | > | > > > > > > > > > > > > > > | 5590 5591 5592 5593 5594 5595 5596 5597 5598 5599 5600 5601 5602 5603 5604 5605 5606 5607 5608 5609 5610 5611 5612 5613 5614 5615 5616 5617 5618 5619 5620 5621 5622 5623 5624 5625 5626 5627 5628 5629 5630 5631 5632 | ** are innocuous. Developers are advised to avoid using the ** SQLITE_INNOCUOUS flag for application-defined functions unless the ** function has been carefully audited and found to be free of potentially ** security-adverse side-effects and information-leaks. ** </dd> ** ** [[SQLITE_SUBTYPE]] <dt>SQLITE_SUBTYPE</dt><dd> ** The SQLITE_SUBTYPE flag indicates to SQLite that a function might call ** [sqlite3_value_subtype()] to inspect the sub-types of its arguments. ** This flag instructs SQLite to omit some corner-case optimizations that ** might disrupt the operation of the [sqlite3_value_subtype()] function, ** causing it to return zero rather than the correct subtype. ** SQL functions that invoke [sqlite3_value_subtype()] should have this ** property. If the SQLITE_SUBTYPE property is omitted, then the return ** value from [sqlite3_value_subtype()] might sometimes be zero even though ** a non-zero subtype was specified by the function argument expression. ** ** [[SQLITE_RESULT_SUBTYPE]] <dt>SQLITE_RESULT_SUBTYPE</dt><dd> ** The SQLITE_RESULT_SUBTYPE flag indicates to SQLite that a function might call ** [sqlite3_result_subtype()] to cause a sub-type to be associated with its ** result. ** Every function that invokes [sqlite3_result_subtype()] should have this ** property. If it does not, then the call to [sqlite3_result_subtype()] ** might become a no-op if the function is used as a term in an ** [expression index]. On the other hand, SQL functions that never invoke ** [sqlite3_result_subtype()] should avoid setting this property, as the ** purpose of this property is to disable certain optimizations that are ** incompatible with subtypes. ** </dd> ** </dl> */ #define SQLITE_DETERMINISTIC 0x000000800 #define SQLITE_DIRECTONLY 0x000080000 #define SQLITE_SUBTYPE 0x000100000 #define SQLITE_INNOCUOUS 0x000200000 #define SQLITE_RESULT_SUBTYPE 0x001000000 /* ** CAPI3REF: Deprecated Functions ** DEPRECATED ** ** These functions are [deprecated]. In order to maintain ** backwards compatibility with older code, these functions continue
︙ | ︙ | |||
5779 5780 5781 5782 5783 5784 5785 5786 5787 5788 5789 5790 5791 5792 | ** METHOD: sqlite3_value ** ** The sqlite3_value_subtype(V) function returns the subtype for ** an [application-defined SQL function] argument V. The subtype ** information can be used to pass a limited amount of context from ** one SQL function to another. Use the [sqlite3_result_subtype()] ** routine to set the subtype for the return value of an SQL function. */ SQLITE_API unsigned int sqlite3_value_subtype(sqlite3_value*); /* ** CAPI3REF: Copy And Free SQL Values ** METHOD: sqlite3_value ** | > > > > > > | 5815 5816 5817 5818 5819 5820 5821 5822 5823 5824 5825 5826 5827 5828 5829 5830 5831 5832 5833 5834 | ** METHOD: sqlite3_value ** ** The sqlite3_value_subtype(V) function returns the subtype for ** an [application-defined SQL function] argument V. The subtype ** information can be used to pass a limited amount of context from ** one SQL function to another. Use the [sqlite3_result_subtype()] ** routine to set the subtype for the return value of an SQL function. ** ** Every [application-defined SQL function] that invokes this interface ** should include the [SQLITE_SUBTYPE] property in the text ** encoding argument when the function is [sqlite3_create_function|registered]. ** If the [SQLITE_SUBTYPE] property is omitted, then sqlite3_value_subtype() ** might return zero instead of the upstream subtype in some corner cases. */ SQLITE_API unsigned int sqlite3_value_subtype(sqlite3_value*); /* ** CAPI3REF: Copy And Free SQL Values ** METHOD: sqlite3_value ** |
︙ | ︙ | |||
5909 5910 5911 5912 5913 5914 5915 | ** SQLite is free to discard the auxiliary data at any time, including: <ul> ** <li> ^(when the corresponding function parameter changes)^, or ** <li> ^(when [sqlite3_reset()] or [sqlite3_finalize()] is called for the ** SQL statement)^, or ** <li> ^(when sqlite3_set_auxdata() is invoked again on the same ** parameter)^, or ** <li> ^(during the original sqlite3_set_auxdata() call when a memory | | > > > | | > > > > > | 5951 5952 5953 5954 5955 5956 5957 5958 5959 5960 5961 5962 5963 5964 5965 5966 5967 5968 5969 5970 5971 5972 5973 5974 5975 5976 5977 5978 5979 5980 | ** SQLite is free to discard the auxiliary data at any time, including: <ul> ** <li> ^(when the corresponding function parameter changes)^, or ** <li> ^(when [sqlite3_reset()] or [sqlite3_finalize()] is called for the ** SQL statement)^, or ** <li> ^(when sqlite3_set_auxdata() is invoked again on the same ** parameter)^, or ** <li> ^(during the original sqlite3_set_auxdata() call when a memory ** allocation error occurs.)^ ** <li> ^(during the original sqlite3_set_auxdata() call if the function ** is evaluated during query planning instead of during query execution, ** as sometimes happens with [SQLITE_ENABLE_STAT4].)^ </ul> ** ** Note the last two bullets in particular. The destructor X in ** sqlite3_set_auxdata(C,N,P,X) might be called immediately, before the ** sqlite3_set_auxdata() interface even returns. Hence sqlite3_set_auxdata() ** should be called near the end of the function implementation and the ** function implementation should not make any use of P after ** sqlite3_set_auxdata() has been called. Furthermore, a call to ** sqlite3_get_auxdata() that occurs immediately after a corresponding call ** to sqlite3_set_auxdata() might still return NULL if an out-of-memory ** condition occurred during the sqlite3_set_auxdata() call or if the ** function is being evaluated during query planning rather than during ** query execution. 
** ** ^(In practice, auxiliary data is preserved between function calls for ** function parameters that are compile-time constants, including literal ** values and [parameters] and expressions composed from the same.)^ ** ** The value of the N parameter to these interfaces should be non-negative. ** Future enhancements may make use of negative N values to define new |
︙ | ︙ | |||
6190 6191 6192 6193 6194 6195 6196 6197 6198 6199 6200 6201 6202 6203 | ** The sqlite3_result_subtype(C,T) function causes the subtype of ** the result from the [application-defined SQL function] with ** [sqlite3_context] C to be the value T. Only the lower 8 bits ** of the subtype T are preserved in current versions of SQLite; ** higher order bits are discarded. ** The number of subtype bytes preserved by SQLite might increase ** in future releases of SQLite. */ SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int); /* ** CAPI3REF: Define New Collating Sequences ** METHOD: sqlite3 ** | > > > > > > > > > > > > > > | 6240 6241 6242 6243 6244 6245 6246 6247 6248 6249 6250 6251 6252 6253 6254 6255 6256 6257 6258 6259 6260 6261 6262 6263 6264 6265 6266 6267 | ** The sqlite3_result_subtype(C,T) function causes the subtype of ** the result from the [application-defined SQL function] with ** [sqlite3_context] C to be the value T. Only the lower 8 bits ** of the subtype T are preserved in current versions of SQLite; ** higher order bits are discarded. ** The number of subtype bytes preserved by SQLite might increase ** in future releases of SQLite. ** ** Every [application-defined SQL function] that invokes this interface ** should include the [SQLITE_RESULT_SUBTYPE] property in its ** text encoding argument when the SQL function is ** [sqlite3_create_function|registered]. If the [SQLITE_RESULT_SUBTYPE] ** property is omitted from the function that invokes sqlite3_result_subtype(), ** then in some cases the sqlite3_result_subtype() might fail to set ** the result subtype. ** ** If SQLite is compiled with -DSQLITE_STRICT_SUBTYPE=1, then any ** SQL function that invokes the sqlite3_result_subtype() interface ** and that does not have the SQLITE_RESULT_SUBTYPE property will raise ** an error. Future versions of SQLite might enable -DSQLITE_STRICT_SUBTYPE=1 ** by default. 
*/ SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int); /* ** CAPI3REF: Define New Collating Sequences ** METHOD: sqlite3 ** |
︙ | ︙ | |||
7990 7991 7992 7993 7994 7995 7996 | ** In such cases, the ** mutex must be exited an equal number of times before another thread ** can enter.)^ If the same thread tries to enter any mutex other ** than an SQLITE_MUTEX_RECURSIVE more than once, the behavior is undefined. ** ** ^(Some systems (for example, Windows 95) do not support the operation ** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() | | | | > > | 8054 8055 8056 8057 8058 8059 8060 8061 8062 8063 8064 8065 8066 8067 8068 8069 8070 8071 8072 | ** In such cases, the ** mutex must be exited an equal number of times before another thread ** can enter.)^ If the same thread tries to enter any mutex other ** than an SQLITE_MUTEX_RECURSIVE more than once, the behavior is undefined. ** ** ^(Some systems (for example, Windows 95) do not support the operation ** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() ** will always return SQLITE_BUSY. In most cases the SQLite core only uses ** sqlite3_mutex_try() as an optimization, so this is acceptable ** behavior. The exceptions are unix builds that set the ** SQLITE_ENABLE_SETLK_TIMEOUT build option. In that case a working ** sqlite3_mutex_try() is required.)^ ** ** ^The sqlite3_mutex_leave() routine exits a mutex that was ** previously entered by the same thread. The behavior ** is undefined if the mutex is not currently entered by the ** calling thread or is not currently allocated. ** ** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), |
︙ | ︙ | |||
8251 8252 8253 8254 8255 8256 8257 8258 8259 8260 8261 8262 8263 8264 | #define SQLITE_TESTCTRL_BITVEC_TEST 8 #define SQLITE_TESTCTRL_FAULT_INSTALL 9 #define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10 #define SQLITE_TESTCTRL_PENDING_BYTE 11 #define SQLITE_TESTCTRL_ASSERT 12 #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 /* NOT USED */ #define SQLITE_TESTCTRL_OPTIMIZATIONS 15 #define SQLITE_TESTCTRL_ISKEYWORD 16 /* NOT USED */ #define SQLITE_TESTCTRL_SCRATCHMALLOC 17 /* NOT USED */ #define SQLITE_TESTCTRL_INTERNAL_FUNCTIONS 17 #define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 #define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ #define SQLITE_TESTCTRL_ONCE_RESET_THRESHOLD 19 | > | 8317 8318 8319 8320 8321 8322 8323 8324 8325 8326 8327 8328 8329 8330 8331 | #define SQLITE_TESTCTRL_BITVEC_TEST 8 #define SQLITE_TESTCTRL_FAULT_INSTALL 9 #define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10 #define SQLITE_TESTCTRL_PENDING_BYTE 11 #define SQLITE_TESTCTRL_ASSERT 12 #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 /* NOT USED */ #define SQLITE_TESTCTRL_JSON_SELFCHECK 14 #define SQLITE_TESTCTRL_OPTIMIZATIONS 15 #define SQLITE_TESTCTRL_ISKEYWORD 16 /* NOT USED */ #define SQLITE_TESTCTRL_SCRATCHMALLOC 17 /* NOT USED */ #define SQLITE_TESTCTRL_INTERNAL_FUNCTIONS 17 #define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 #define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ #define SQLITE_TESTCTRL_ONCE_RESET_THRESHOLD 19 |
︙ | ︙ | |||
12764 12765 12766 12767 12768 12769 12770 | ** an OOM condition or IO error), an appropriate SQLite error code is ** returned. ** ** This function may be quite inefficient if used with an FTS5 table ** created with the "columnsize=0" option. ** ** xColumnText: | > > > | | > > | | | > | | | | 12831 12832 12833 12834 12835 12836 12837 12838 12839 12840 12841 12842 12843 12844 12845 12846 12847 12848 12849 12850 12851 12852 12853 12854 12855 12856 12857 12858 12859 12860 12861 12862 12863 12864 12865 12866 12867 12868 12869 12870 12871 12872 12873 12874 12875 12876 12877 12878 12879 12880 12881 12882 12883 12884 | ** an OOM condition or IO error), an appropriate SQLite error code is ** returned. ** ** This function may be quite inefficient if used with an FTS5 table ** created with the "columnsize=0" option. ** ** xColumnText: ** If parameter iCol is less than zero, or greater than or equal to the ** number of columns in the table, SQLITE_RANGE is returned. ** ** Otherwise, this function attempts to retrieve the text of column iCol of ** the current document. If successful, (*pz) is set to point to a buffer ** containing the text in utf-8 encoding, (*pn) is set to the size in bytes ** (not characters) of the buffer and SQLITE_OK is returned. Otherwise, ** if an error occurs, an SQLite error code is returned and the final values ** of (*pz) and (*pn) are undefined. ** ** xPhraseCount: ** Returns the number of phrases in the current query expression. ** ** xPhraseSize: ** If parameter iCol is less than zero, or greater than or equal to the ** number of phrases in the current query, as returned by xPhraseCount, ** 0 is returned. Otherwise, this function returns the number of tokens in ** phrase iPhrase of the query. Phrases are numbered starting from zero. ** ** xInstCount: ** Set *pnInst to the total number of occurrences of all phrases within ** the query within the current row. Return SQLITE_OK if successful, or ** an error code (i.e. SQLITE_NOMEM) if an error occurs. 
** ** This API can be quite slow if used with an FTS5 table created with the ** "detail=none" or "detail=column" option. If the FTS5 table is created ** with either "detail=none" or "detail=column" and "content=" option ** (i.e. if it is a contentless table), then this API always returns 0. ** ** xInst: ** Query for the details of phrase match iIdx within the current row. ** Phrase matches are numbered starting from zero, so the iIdx argument ** should be greater than or equal to zero and smaller than the value ** output by xInstCount(). If iIdx is less than zero or greater than ** or equal to the value returned by xInstCount(), SQLITE_RANGE is returned. ** ** Otherwise, output parameter *piPhrase is set to the phrase number, *piCol ** to the column in which it occurs and *piOff the token offset of the ** first token of the phrase. SQLITE_OK is returned if successful, or an ** error code (i.e. SQLITE_NOMEM) if an error occurs. ** ** This API can be quite slow if used with an FTS5 table created with the ** "detail=none" or "detail=column" option. ** ** xRowid: ** Returns the rowid of the current row. ** |
︙ | ︙ | |||
12822 12823 12824 12825 12826 12827 12828 12829 12830 12831 12832 12833 12834 12835 | ** current query is executed. Any column filter that applies to ** phrase iPhrase of the current query is included in $p. For each ** row visited, the callback function passed as the fourth argument ** is invoked. The context and API objects passed to the callback ** function may be used to access the properties of each matched row. ** Invoking Api.xUserData() returns a copy of the pointer passed as ** the third argument to pUserData. ** ** If the callback function returns any value other than SQLITE_OK, the ** query is abandoned and the xQueryPhrase function returns immediately. ** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK. ** Otherwise, the error code is propagated upwards. ** ** If the query runs to completion without incident, SQLITE_OK is returned. | > > > > | 12895 12896 12897 12898 12899 12900 12901 12902 12903 12904 12905 12906 12907 12908 12909 12910 12911 12912 | ** current query is executed. Any column filter that applies to ** phrase iPhrase of the current query is included in $p. For each ** row visited, the callback function passed as the fourth argument ** is invoked. The context and API objects passed to the callback ** function may be used to access the properties of each matched row. ** Invoking Api.xUserData() returns a copy of the pointer passed as ** the third argument to pUserData. ** ** If parameter iPhrase is less than zero, or greater than or equal to ** the number of phrases in the query, as returned by xPhraseCount(), ** this function returns SQLITE_RANGE. ** ** If the callback function returns any value other than SQLITE_OK, the ** query is abandoned and the xQueryPhrase function returns immediately. ** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK. ** Otherwise, the error code is propagated upwards. ** ** If the query runs to completion without incident, SQLITE_OK is returned. |
︙ | ︙ | |||
12937 12938 12939 12940 12941 12942 12943 12944 12945 | ** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext ** (or xInst/xInstCount). The chief advantage of this API is that it is ** significantly more efficient than those alternatives when used with ** "detail=column" tables. ** ** xPhraseNextColumn() ** See xPhraseFirstColumn above. */ struct Fts5ExtensionApi { | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 13014 13015 13016 13017 13018 13019 13020 13021 13022 13023 13024 13025 13026 13027 13028 13029 13030 13031 13032 13033 13034 13035 13036 13037 13038 13039 13040 13041 13042 13043 13044 13045 13046 13047 13048 13049 13050 13051 13052 13053 13054 13055 13056 13057 13058 13059 13060 13061 13062 13063 | ** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext ** (or xInst/xInstCount). The chief advantage of this API is that it is ** significantly more efficient than those alternatives when used with ** "detail=column" tables. ** ** xPhraseNextColumn() ** See xPhraseFirstColumn above. ** ** xQueryToken(pFts5, iPhrase, iToken, ppToken, pnToken) ** This is used to access token iToken of phrase iPhrase of the current ** query. Before returning, output parameter *ppToken is set to point ** to a buffer containing the requested token, and *pnToken to the ** size of this buffer in bytes. ** ** If iPhrase or iToken are less than zero, or if iPhrase is greater than ** or equal to the number of phrases in the query as reported by ** xPhraseCount(), or if iToken is equal to or greater than the number of ** tokens in the phrase, SQLITE_RANGE is returned and *ppToken and *pnToken are both zeroed. ** ** The output text is not a copy of the query text that specified the ** token. It is the output of the tokenizer module. For tokendata=1 ** tables, this includes any embedded 0x00 and trailing data. 
** ** xInstToken(pFts5, iIdx, iToken, ppToken, pnToken) ** This is used to access token iToken of phrase hit iIdx within the ** current row. If iIdx is less than zero or greater than or equal to the ** value returned by xInstCount(), SQLITE_RANGE is returned. Otherwise, ** output variable (*ppToken) is set to point to a buffer containing the ** matching document token, and (*pnToken) to the size of that buffer in ** bytes. This API is not available if the specified token matches a ** prefix query term. In that case both output variables are always set ** to 0. ** ** The output text is not a copy of the document text that was tokenized. ** It is the output of the tokenizer module. For tokendata=1 tables, this ** includes any embedded 0x00 and trailing data. ** ** This API can be quite slow if used with an FTS5 table created with the ** "detail=none" or "detail=column" option. */ struct Fts5ExtensionApi { int iVersion; /* Currently always set to 3 */ void *(*xUserData)(Fts5Context*); int (*xColumnCount)(Fts5Context*); int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow); int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken); |
︙ | ︙ | |||
12974 12975 12976 12977 12978 12979 12980 12981 12982 12983 12984 12985 12986 12987 | void *(*xGetAuxdata)(Fts5Context*, int bClear); int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); }; /* ** CUSTOM AUXILIARY FUNCTIONS *************************************************************************/ /************************************************************************* | > > > > > > > | 13084 13085 13086 13087 13088 13089 13090 13091 13092 13093 13094 13095 13096 13097 13098 13099 13100 13101 13102 13103 13104 | void *(*xGetAuxdata)(Fts5Context*, int bClear); int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); /* Below this point are iVersion>=3 only */ int (*xQueryToken)(Fts5Context*, int iPhrase, int iToken, const char **ppToken, int *pnToken ); int (*xInstToken)(Fts5Context*, int iIdx, int iToken, const char**, int*); }; /* ** CUSTOM AUXILIARY FUNCTIONS *************************************************************************/ /************************************************************************* |
︙ | ︙ |
Changes to skins/ardoise/css.txt.
︙ | ︙ | |||
614 615 616 617 618 619 620 | ol, p, pre, table, ul { margin-bottom: 1.5rem } | | | | | | | | | | | | 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 | ol, p, pre, table, ul { margin-bottom: 1.5rem } header { color: #888; font-weight: 400; padding-top: 10px; border-width: 0 } .filetree li > ul:before, .filetree li li:before { border-left: 2px solid #888; content: ''; position: absolute } .filetree>ul, header .logo, header .logo h1 { display: inline-block } header .login { padding-top: 2px; text-align: right } header .login .button { margin: 0 } header h1 { margin: 0; color: #888; display: inline-block } header .title h1 { padding-bottom: 10px } header .login, header h1 small, header h2 small { color: #777 } .middle { background-color: #1d2021; padding-bottom: 20px; max-width: 100%; box-sizing: border-box |
︙ | ︙ | |||
682 683 684 685 686 687 688 | } .artifact_content blockquote:first-of-type { padding: 1px 20px; margin: 0 0 20px; background: #000; border-radius: 5px } | | | | | 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 | } .artifact_content blockquote:first-of-type { padding: 1px 20px; margin: 0 0 20px; background: #000; border-radius: 5px } footer { padding: 10px 0 60px; border-top: 0; color: #888 } footer a { color: #527b8f; background-repeat: no-repeat; background-position: center top 10px } footer a:hover { color: #eef8ff } .mainmenu { background-color: #161819; border-top-right-radius: 15px; border-top-left-radius: 15px; clear: both |
︙ | ︙ | |||
731 732 733 734 735 736 737 | .mainmenu li:hover { background-color: #ff8000; border-radius: 5px } .mainmenu li:hover a { color: #000 } | | | 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 | .mainmenu li:hover { background-color: #ff8000; border-radius: 5px } .mainmenu li:hover a { color: #000 } nav#hbdrop { background-color: #161819; border-radius: 15px; display: none; width: 100%; position: absolute; z-index: 20; } |
︙ | ︙ |
Changes to skins/ardoise/footer.txt.
|
| | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | <th1> if {[string first artifact $current_page] == 0 || [string first hexdump $current_page] == 0} { html "</div>" } </th1> </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <footer> <div class="container"> <div class="pull-right"> <a href="https://fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </footer> |
Changes to skins/ardoise/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <header> <div class="container"> <div class="login pull-right"> <th1> if {[info exists login]} { html "<b>$login</b> — <a class='button' href='$home/login'>Logout</a>\n" } else { html "<a class='button' href='$home/login'>Login</a>\n" |
︙ | ︙ | |||
16 17 18 19 20 21 22 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> | | | | | | | | | | | | | | | | | > | | | | | | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> <nav class="mainmenu" title="Main Menu"> <ul> <th1> html "<li><a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a></li>\n" builtin_request_js hbmenu.js set once 1 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {$once && [string match $url\[/?#\]* /$current_page/]} { set class "$class active" set once 0 } html "<li class='$class'>" if {[string match /* $url]} {set url $home$url} html "<a href='$url'>$name</a></li>\n" } </th1> </ul> </nav> <nav id="hbdrop" class='hbdrop' title="sitemap"></nav> </div> <!-- end div container --> </header> <div class="middle max-full-width"> <div class="container"> <th1> if {[string first artifact $current_page] == 0 || [string first hexdump $current_page] == 0} { html "<div class=\"artifact_content\">" } </th1> |
Changes to skins/black_and_white/css.txt.
︙ | ︙ | |||
47 48 49 50 51 52 53 | color: #333; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | color: #333; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ header { margin:10px 0px 10px 0px; padding:1px 0px 0px 20px; border-style:solid; border-color:black; border-width:1px 0px; background-color:#eee; } /* The main menu bar that appears at the top left of the page beneath ** the header. Width must be co-ordinated with the container below */ nav.mainmenu { float: left; margin-left: 10px; margin-right: 20px; font-size: 0.9em; font-weight: bold; padding:5px; background-color:#eee; border:1px solid #999; width:6em; } /* Main menu is now a list */ nav.mainmenu ul { padding: 0; list-style:none; } nav.mainmenu a, nav.mainmenu a:visited{ padding: 1px 10px 1px 10px; color: #333; text-decoration: none; } nav.mainmenu a:hover { color: #eee; background-color: #333; } /* Container for the sub-menu and content so they don't spread ** out underneath the main menu */ #container { |
︙ | ︙ | |||
147 148 149 150 151 152 153 | float: left; clear: left; color: #333; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 | float: left; clear: left; color: #333; white-space: nowrap; } /* The footer at the very bottom of the page */ footer { font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #eee; color: #555; } |
︙ | ︙ |
Changes to skins/black_and_white/footer.txt.
|
| | | | 1 2 3 | <footer> Fossil $release_version $manifest_version $manifest_date </footer> |
Changes to skins/black_and_white/header.txt.
|
| | | | | | | | | | | | | | | | | | | | | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 | <header> <div class="logo"> <img src="$logo_image_url" alt="logo"> <br />$<project_name> </div> <div class="title">$<title></div> <div class="status"><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></div> </header> <nav class="mainmenu" title="Main Menu"> <th1> set sitemap 0 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url'>$name</a><br/>\n" if {[string match /sitemap $url]} {set sitemap 1} } if {!$sitemap} { html "<a href='$home/sitemap'>Sitemap</a>\n" } </th1> </nav> |
Changes to skins/blitz/css.txt.
︙ | ︙ | |||
753 754 755 756 757 758 759 | box-sizing: border-box; } /* Header * Div displayed at the top of every page. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ | | | | | | | | | | | 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 | box-sizing: border-box; } /* Header * Div displayed at the top of every page. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ header { color: #666; font-weight: 400; padding-top: 10px; border-width: 0px; border-top: 4px solid #446979; border-bottom: 1px solid #ccc; } header .logo { display: inline-block; } header .login { padding-top: 2px; text-align: right; } header .login .button { margin: 0; } header h1 { margin: 0px; color: #666; display: inline-block; } header .logo h1 { display: inline-block; } header .title h1 { padding-bottom: 10px; } header h1 small, header h2 small { color: #888; } header a.rss { display: inline-block; padding: 10px 15px; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAABmJLR0QA/wD/AP+gvaeTAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH3wMNDhwn05VjawAAABl0RVh0Q29tbWVudABDcmVhdGVkIHdpdGggR0lNUFeBDhcAAAGlSURBVDjLrdPfb8xREAXwT7tIl+paVNaPJghCKC8kXv0XXvyNXsRfwYPQJqVKiqykWFVZXd12vcxNJtduUtJJvrm7984998ycMxxwNGI9jPs4j7nY+/U/gIdiPYO71dk21rCE7r8ybOHGmMfmcRNnsbEf1gXwNzqYSXs5WljEMXzAaBLg1Ji9Js7hOi6OeeAznqC/X8AcMyHWYpX7E4/Rm1QyHMdefCWGeI/VcMDR2D8S7Fci5y/AeTzCPVyLi1sYJAut4BTaiX0n9kc14MmkcjPY3I5LXezGtxqKtyJ3Lir6VAM2AmCq6m8Hl6PsQTB5hyvxmMhZxk4G3MZLfAwLtdNZM9rwOs528TVVNB3ga7UoQ2wGmyWciFaU0VwIJiP8iL6Xfp7GK+w0JthliDep8UKonTSGvbBTaU8f3QzYxgPcCsBvWK9E6OBFCNGPVjTTqC430p+H6fLVGLGtmIw7SbwevqT+XkgVPJ9Otpmtyl6I9XswLXEp/d6oPN0ugJu14xMLob4kgPRYjtkCOMDTUG+AZ3ibEtfDLorfEmAB3UuTdXDxBzUUZV+B82aLAAAAAElFTkSuQmCC); background-position: center center; background-repeat: no-repeat; 
} |
︙ | ︙ | |||
828 829 830 831 832 833 834 | color: #002060; } /* Footer * Displayed after the middle div and forms the page bottom. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ | | | | 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 | color: #002060; } /* Footer * Displayed after the middle div and forms the page bottom. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ footer { padding: 10px 0 60px; border-top: 1px solid #ccc; background-color: #f8f8f8; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFAAAABGCAQAAADxl9ugAAAAAmJLR0QA/4ePzL8AAAAJcEhZcwAACxMAAAsTAQCanBgAAAAHdElNRQffAwkQBRPw+yfrAAAAPHRFWHRDb21tZW50ACBJbWFnZSBnZW5lcmF0ZWQgYnkgRVNQIEdob3N0c2NyaXB0IChkZXZpY2U9cG5tcmF3KQqV01S1AAANg0lEQVRo3s2aa3RVRZbH/3tXnftIbhJCeJMXyEsNCsgrSBzAYXyt0YVj2zSOonbLqNNO+25FB2kdoBVdrp6l09P2NNpOL/ExSrtUdGRU5JEQgggCggIhQIBAIIGQ5N57TlXt+ZAICY+AGCT76zlV91dVe+/a+38uDQ7dYmrxw41QocJuni28vWTedEa7Gd9iatplos7Y2mfDRED6A2G0I2At6AdPIiA84HgbLhx1o9QX9Zoh7QjYHpOkYJYrfNyOKH0O/f2FJpfkSRmMCNqDk9rDXwTAuxK7h3uFf7vXxC6QVO8w19taNOT4Y0wu1sCdW0AA6IZ73bCUaI/GHuqnsJqcEqVrzSo+wNtLdgAPsz6jeVURtQ9gA8bQT8xztW7X2u3I4G6UdA0cohxcSAN7nt939/zDQzl0LnfwaLpxqFT/i7w7XLYXhoUQ+YFW3pJlC0/nt6jZZb4zjXY2AaGXvQOz/gMY09NM4C6ixRL7OU3PTm4ORdiIjyLsJvq1Z28Hj5rBM82xMTqPuhHznmU72vq12W5M3+LykQWUW9pirzXOmmk8wjE87kb8HPmSKlY+wY62R9jhI69xf/AmYiEfiXs+e4CCAZiDUc9wP6cRIJn8tq23PQClbwap9Ivk7usiGj/CDs5xl3Qe/EiQCEWsgXLlq8sfamM73osMna43ef/pfm731Hsxv+Z00ozFzaJpF+gMrsNB2BwL3cGpSsNRWHbmvTQNiTbev9q4xXszMLzqpdgDNjxv/YjmvHlSQIc5brl8nrqT5pllMpHM9wRc4gU3oC8JyDLt8P78ZLyM2lrmAeTg5YrKdf1S1N9QTs6At1YM49DJo1gwWy7uE5kKBQHbler9K8R+L8API2qmFoDY7ejxpxGJxtMctyZcfYvJDYX4i5LXp/MJAS2edkM6edNUDuLNOdMDmz913bIw8RidKO/1w+3NQfePbHQjdbd/NKOjbq7UqQOmYtXb/+BdbGc4AJjPX53CXe6UGweYWxUR8N8rNkzn4wAFc9yYPvGbI+mttoycpo14fWbD0hZv+njOAcBl6Yl8jiEMbSKhqGPypZ5N0B2kDxrjOjF0QuI2oRqj2z6rAebx5pNiRrEo0jhV9wFkP//xpkPHAU6UGdnBLyR8jMOQs6xwcMAznVys+Uqa5YCCnilFkuVSkeKlGKfDED8J
aI+VifMWR9LPC/u+UkoZA6edOezVu83pny4yD6qQnCxhDzk/MhUED+8PWHJMkAgWROvvQoxOcEdayynVY9/5tIgAwsfIvaD3XaFx6M6ddMx3WlurCGICiAIhMLxB1VM+wQRMDOeYjHCUMum85OW909/acOlJgqaI3qrulU9dYGlgzVI1tsVrETzlOt3MeUGCudVoCgJmAhFU78F+6Usu0V39iylSHCRZMROM0do5pUDWQamEPZTy5oqyXVt77bG94HGEyTkCE8BsjLOcl11Uvf7V+qvIP86jgWH8zhc9LpKYU7qRhH5NTRknwFw3+jpMQCIItG69PCLfZ1YKAMlS/7PQDCc2qbRzSjET4slINBAvhsPxjarKKy/9FhjZG8Ol4tINi/uFzkcfl8sEnwExATGRY73Ze2VpAHxHoPFki7r2iljN3a6LqqOxnZ+vXUBNwTF6uL0e0Nr3tSYWpqNhQtaIaA0A1oZftuf5Y1xmCMRO2IM1VXqf3aYr5IDXODTx7+43PAjPFcit1MANeHfFxuFpSFW9TbbORpaNEqt6/5tIoxuBOO049NHXNTN5lFwlE3o09JUMlWKdSmJ/UC8TvRwqjNzoVwNIw3upcluyq4hSAq3Idyt5HOyRIDGuGdA42un+6wv/Cq7sFA1F/B4N/5MEgL/yOsSPlFSMOzD5JhkMEQ8l8u41Job7hVoExsiRNMU1Biac4oqD92Np8Xt0BgUiEOeIHLyobbQb9U/8fSAItsFdJv1wECASB5L92xb2yefcE3Q+jgsof//m3+PO2i7YhgV0HisAK48UnE03EaT67axMyjMNXpFkf/za8r3VPP2I46Ri+aYDW0z3SMT6NJb62wPiJEFhEoFCYIhNoyS4VF1CBKArPu6MKWysBZhFVFrD7yri2VkuR4QZAIkTMAOAiDIY9KvFQ3gVPqXVFD9JSbSS3vP7b/QzKY8SlO6G5e2fv/dozvDxciKvCqOYrIURT/XU2+0SKrdVqtpVojz4Sq1Vi1duV0UE+PSEy33AhRRZAzDDc4vXrB1xKU0gRUR0LCCRi/be8XZ12/0M41Ka5K/5JjA80PlQKMjfeVVN9ZG0NZbmH8xf5Y8nJxIKmYC7SmLlexeVp2wKb+y++f6KZ3c/03g+qSISPONGDqHRZOk7QGde6XsrjTP+kXTTCpCVGErZteZUDRdhD3Ls65tz64J85ZGyQ0tXDAzckadFlOtv+ZQLoYMgHHaGcrJ7fb56inR3WVJJRVTdVM3kQSh5r1hjlbIGIKVX6ammE1vnmImh4Y4BZHGcvmvx5XQ6BcRIfnPHxV82ZnFUYsh+bWXLzMuwquGwCnvdBUTkqHcPmb+l5cJZcKervlwc8N04sbYwSJC1tsnneemJqh32LksZflr1SUh+pT6q+/ov/krztitDq1s4hP1p7mbfuFfkEKJgSejLh/Wa7Y5GJsdQmEYDoVocDCvnLCvnwPDMzOK/uiVoPS8IxoQi409D2xAU4Hd2xE2DZ3gjvcpVZY+0iqgAbFh7F9grp84xf5D6QFOE/rYo1vto4z6ZVg/COHKAiGJrACaljNHaiaqTp0NeXlr37Ydz/ahSLY9YzLIPMzlxSrxMzM/ochfnO8UHvJJpgTnm+UE5GKMeOu3LrLJPd32+Z1FamfczKli36jzTtHplvPg46o2WgNwEaOqoTk8Jxsu4+vGUruDkCCCJkg171g2ntqMkjrmuYUBwG7LglIq/U7Zr1HEjJrtiay9imIzsqt37gb4/RSZlSsP88iZf5T3kX2xOuBEck2yp13F32NbDuZbHqSjwFrzJbeP5eN6NHOJuoQyXRIZ7Y+3Gh08wohjpW/VOkFMyqagHILnOSAOuflh1bqKggew524Zs1ex+R6cmz1W7Z6clvzzF8T7rLsvnSfBgJVO9cN+ax4/IRwazZLZrqmOSWJjAR0hCdFc/F2AtToTos34POQHAkVw2Sp2m7EKwOEyLVj59w6EtpyjdFYSCCYhaC0n9fcmm
r5pb8QBrQl0wqn/hNc82J8Rfc8lWVwVGwLeNDVPTdW30+KZN0Sab5JTBqOBBowIHzeZVJcBjXHdKXfQp93+DgwuIgy9QtqJ8evPhOtRRzZi30iKfBZPxQVNhofDP6sUXh05zA5MH+WKdpAgAJ/lAChqhyQmd5HgFChFoMvar0AaKq8qSGmA6o03lVFCEqx1Q0Ce0075hkgvW93EPNeMZ3Cl93eRl5Q8GVjoBs1tUN1l/rvkn1VOuNXUIAyATv6zz3bXPEo3t7N/L6bBirfV0MimktOclgnDUJHiTWxo9oJOZQT/TDUk0nIYfFODFaMMN4WFqbnEl8BzXtlBX++ATr2KW3eC9kbg/9d2l6/qmdxrq9bdRkBwMtrgd+lpkpxwyWSwAkvRKSfl0phf4l+6acGVeak6QQbCJYH/Gwci+zPi/+oMFEJpLdTDfQ8IpTTl8r/lkQuniX3oR98mjX65sJQZ0wYNu1NU0wZ8RjSUf0Qo22aCU1oAoCazTFe5bdZ1pVIqcv2B12XSm6awxCFNaichCr1I1auDOQFvKxzQHFD4mexGXNfds/PoYB+iMhxzwIC8vcJe7HKVIrBEoLRQkuVL+Ii70lDlMUOR/uHrJdNaAwXo82orksR8gfVUAGFkgy5JfhKdEKncelwhq8SinwMeidX+303VHuvOgCGSRRF2o6vb6FyKmQTGRNTraQt1qJ6EaAPA4V22MYH20wXx+aCyfKFvFAcymiYfCh/YhSQZACFHpBB/l6On2l0s/AgTBWdIHHbpIulzfWDbvAW4r3gPUIwWZ0lW6SoaE0AgDYG6Six0R4OjgWRMwGUAVHuUz+SS2iNQ+XSMAvKqzrLCemdtUo3ifWhpoTi7b/RSfVRH9zBc2y43umXVgtImf3R08c7zCIeYpVoPOtoh+ZjYEhd3kWtV4IHSldEjAV7U/ATHxig89Tx0QUFCtvUJisxnNknuHArSY4+xtNuk4ZQWai4wOBThJRqZJvhYXtptnNTVAHQnQYbT4k7QI6/Vd6w6f/U9h39cUgEg+BCG3rk/AHQ9wlhsz1IUCI3tRPrDVtdlhzO9rxPNke+nuXR0TUIdAMOrDlkJshwEM4QId98C8vKTmUUJHAwww03VNT8ny9hd/cJ9qCaU7QnrpjXvcsKuT6W7VdcvHc+sq95wDCuZILxp9t+TZJWVL/56PPdJzDhjDhG75UyUrsXXtBw9z/PjAObd3729leA++mTOldO07dyl9ghbmnALmYpjybndhO3/VV/dx9IQdlj53YQEAo2din3p1cv1VfDIF8EfvSdJQSZvU3ljGJJOvNuH94kOPs22jwWq3P5edTryG8G/OdtsxOPiZd6O1ppTeK0n43Hb/9yPuYBhPuFFXumEq3231S+ibL/e+yhtP2Zz+iD74hCu83vblr+W1yJ5LgguxmzedRu/8//AUSaqMTR1xAAAAAElFTkSuQmCC); background-repeat: no-repeat; background-position: center top 10px; } footer a { color: #3b5c6b; } /* Main Menu * Displayed in header, contains repository links. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ |
︙ | ︙ | |||
869 870 871 872 873 874 875 | .mainmenu li.active { background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABEAAAAJCAYAAADU6McMAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEQAACxEBf2RfkQAAAAd0SU1FB90FDxEXAZ2XRzAAAABJSURBVCjPY2CgBzhz5sx/QmoYiTXAxMSEkWRDsLkAl0GMpHoBm0EoAlu3bmUQFxcnGAboBjEhc4gxAJtLGUmJBVwuYiTXAGSDAIx5IBObnuVxAAAAAElFTkSuQmCC); background-repeat: no-repeat; background-position: center bottom; } .mainmenu li a, | | | | | 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 | .mainmenu li.active { background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABEAAAAJCAYAAADU6McMAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEQAACxEBf2RfkQAAAAd0SU1FB90FDxEXAZ2XRzAAAABJSURBVCjPY2CgBzhz5sx/QmoYiTXAxMSEkWRDsLkAl0GMpHoBm0EoAlu3bmUQFxcnGAboBjEhc4gxAJtLGUmJBVwuYiTXAGSDAIx5IBObnuVxAAAAAElFTkSuQmCC); background-repeat: no-repeat; background-position: center bottom; } .mainmenu li a, nav#hbdrop a { color: #3b5c6b; padding: 10px 15px; } .mainmenu li.active a { font-weight: bold; } .mainmenu li:hover nav#hbdrop a:hover { background-color: #eee; } nav#hbdrop { background-color: white; border: 2px solid #ccc; display: none; width: 100%; position: absolute; z-index: 20; } |
︙ | ︙ |
Changes to skins/blitz/footer.txt.
1 2 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> | | | | 1 2 3 4 5 6 7 8 9 10 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <footer> <div class="container"> <div class="pull-right"> <a href="https://www.fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </footer> |
Changes to skins/blitz/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <header> <div class="container"> <!-- Header --> <div class="login pull-right"> <th1> if {[info exists login]} { html "<b>$login</b> — <a class='button' href='$home/login'>Logout</a>\n" |
︙ | ︙ | |||
18 19 20 21 22 23 24 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> | | > | | | | | | | | | | | | | | | | > | | | | | | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> <nav class="mainmenu" title="Main Menu"> <ul> <th1> html "<li><a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a></li>\n" builtin_request_js hbmenu.js set once 1 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {$once && [string match $url\[/?#\]* /$current_page/]} { set class "active $class" set once 0 } html "<li class='$class'>" if {[string match /* $url]} {set url $home$url} html "<a href='$url'>$name</a></li>\n" } </th1> </ul> </nav> <nav id="hbdrop" class='hbdrop' title="sitemap"></nav> </div> <!-- end div container --> </header> <div class="middle max-full-width"> <div class="container"> |
Changes to skins/darkmode/css.txt.
︙ | ︙ | |||
32 33 34 35 36 37 38 | ** the area to show as blank. The purpose is to cause the ** title to be exactly centered. */ div.leftoftitle { visibility: hidden; } /* The header across the top of the page */ | | | | | | | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | ** the area to show as blank. The purpose is to cause the ** title to be exactly centered. */ div.leftoftitle { visibility: hidden; } /* The header across the top of the page */ header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ nav.mainmenu { padding: 0.25em 0.5em; font-size: 0.9em; font-weight: bold; text-align: center; border-top-left-radius: 0.5em; border-top-right-radius: 0.5em; border-bottom: 1px dotted rgba(200,200,200,0.3); z-index: 21; /* just above hbdrop */ } nav#hbdrop { background-color: #1f1f1f; border: 2px solid #303536; border-radius: 0 0 0.5em 0.5em; display: none; left: 2em; width: calc(100% - 4em); position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } nav.mainmenu, div.submenu, div.sectionmenu { color: #ffffffcc; background-color: #303536/*#0000ff60*/; } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 0.15em 0.5em 0.15em 0; font-size: 0.9em; text-align: center; border-bottom-left-radius: 0.5em; border-bottom-right-radius: 0.5em; } a, a:visited { color: rgba(127, 201, 255, 0.9); display: inline; text-decoration: none; } a:visited {opacity: 0.8} nav.mainmenu a, div.submenu a, div.sectionmenu>a.button, div.submenu label, footer a { padding: 0.15em 0.5em; } nav.mainmenu a.active { border-bottom: 1px solid #FF4500f0; } a:hover, a:visited:hover { background-color: #FF4500f0; color: rgba(24,24,24,0.8); border-radius: 0.1em; |
︙ | ︙ | |||
170 171 172 173 174 175 176 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ footer { clear: both; font-size: 0.8em; padding: 0.15em 0.5em; text-align: right; background-color: #303536/*#0000ff60*/; border-top: 1px dotted rgba(200,200,200,0.3); border-bottom-left-radius: 0.5em; |
︙ | ︙ |
Changes to skins/darkmode/footer.txt.
|
| | | | 1 2 3 4 5 6 7 8 | <footer> <div class="container"> <div class="pull-right"> <a href="https://www.fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </footer> |
Changes to skins/darkmode/header.txt.
|
| | | | | | | | | | | | | | | | | | | > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 | <header> <div class="status leftoftitle"><th1> if {[info exists login]} { set logintext "<a href='$home/login'>$login</a>\n" } else { set logintext "<a href='$home/login'>Login</a>\n" } html $logintext </th1></div> <div class="title">$<title></div> <div class="status"><nobr><th1> html $logintext </th1></nobr></div> </header> <nav class="mainmenu" title="Main Menu"> <th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} { if {[string match $url\[/?#\]* /$current_page/]} { set class "active $class" } set url $home$url } html "<a href='$url' class='$class'>$name</a>\n" } </th1> </nav> <nav id="hbdrop" class='hbdrop' title="sitemap"></nav> |
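The active-entry test in the TH1 menu loop above hinges on the glob pattern `$url\[/?#\]*` matched against `/$current_page/`: the character class requires a path, query, or fragment delimiter right after the menu URL, so `/timeline` does not claim pages like `/timeline.rss`. A minimal sketch of that matching logic, assuming nothing beyond Python's standard glob matcher (the function name is illustrative, not part of Fossil):

```python
from fnmatch import fnmatchcase

def is_active(menu_url: str, current_page: str) -> bool:
    """Mirror TH1's [string match $url\\[/?#\\]* /$current_page/]:
    the menu URL must be followed immediately by '/', '?', or '#'
    in the slash-wrapped current page name, so that /timeline
    matches 'timeline' but not 'timeline.rss'."""
    return fnmatchcase("/" + current_page + "/", menu_url + "[/?#]*")

print(is_active("/timeline", "timeline"))       # True
print(is_active("/timeline", "timeline.rss"))   # False
print(is_active("/timeline", "timeline?y=ci"))  # True
```

The trailing slash appended to the page name is what lets a bare page name satisfy the delimiter class when the URL matches it exactly.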
Changes to skins/default/README.md.
|
| | > | > | > > > | | 1 2 3 4 5 6 7 8 9 10 | This skin was originally contributed by Étienne Deparis on 2015-02-22, promoted to the default on 2015-03-14, and subsequently changed by many: https://fossil-scm.org/home/finfo/skins/default/css.txt https://fossil-scm.org/home/blame?filename=skins/default/css.txt&checkin=trunk In February 2024, a sufficiently large set of changes were made to the skin that we forked the old version for the benefit of those who needed to reference the old one — as when migrating custom skin changes to work atop the new default — or who simply preferred it. See ../etienne. |
Changes to skins/default/css.txt.
|
| | > < < < < > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | /* Overall page style; vi: filetype=css */ body { margin: 0 auto; background-color: white; font-family: sans-serif; } a { /* Unvisited links are a lightness-adjusted version of this skin's * header blue, balancing contrast between the body text and the * background in order to meet the goals specified by the WCAG 2 * accessbility standard, earning us an "AA" grade according to * the calculator result here: * * https://webaim.org/resources/linkcontrastchecker/?fcolor=2E2E2E&bcolor=FFFFFF&lcolor=3779BF * * It is for this same reason that our not-quite-black body text * color is the shade of dark gray that it is. It can't be any * lighter and still allow us to meet both targets. 
*/ color: #3779BF; text-decoration: none; } a:hover { color: #4183C4; text-decoration: underline; } /* Page title, above menu bars */ .title { color: #4183C4; float: left; } h1.page-title { font-size: 1.60em; /* match content > h1 */ margin-bottom: 0; /* div.content top margin suffices */ display: none; /* don't use body-area h1 except… */ } .artifact h1.page-title, .dir h1.page-title, .doc h1.page-title, .wiki h1.page-title { display: block; /* …for potentially long doc titles… */ } .artifact .title > .page-title, .dir .title > .page-title, .doc .title > .page-title, .wiki .title > .page-title { display: none; /* …where we suppress the title area h1 instead */ } .title h1 { display: inline; font-size: 2.20em; } .title h1:after { content: " / "; color: #777; font-weight: normal; } .artifact .title h1:after, .dir .title h1:after, .doc .title h1:after, .wiki .title h1:after { content: ""; /* hide solidus for docs along with title h1 */ } .status { float: right; font-size: 0.8em; } div.logo { float: left; padding-right: 10px; } div.logo img { max-height: 2em; /* smaller than title to keep it above the baseline */ } /* Main menu and optional sub-menu */ .mainmenu { clear: both; background: #eaeaea linear-gradient(#fafafa, #eaeaea) repeat-x; border: 1px solid #eaeaea; border-radius: 5px; overflow-x: auto; overflow-y: hidden; white-space: nowrap; z-index: 21; /* just above hbdrop */ } .mainmenu a { text-decoration: none; color: #777; border-right: 1px solid #eaeaea; } .mainmenu a.active, .mainmenu a:hover { color: #000; border-bottom: 2px solid #D26911; } nav#hbdrop { background-color: white; border: 1px solid black; border-top: white; border-radius: 0 0 0.5em 0.5em; display: none; font-size: 80%; left: 2em; width: 90%; padding-right: 1em; position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } .submenu { font-size: 0.8em; padding: 10px; border-bottom: 1px solid #ccc; } .submenu a, .submenu label { padding: 10px 11px; text-decoration: 
none; color: #777; |
︙ | ︙ | |||
103 104 105 106 107 108 109 | white-space: nowrap; } /* Main document area; elements common to most pages. */ .content { | | < | < < < | < < | | < | | < | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 | white-space: nowrap; } /* Main document area; elements common to most pages. */ .content { padding: 1ex; color: #2e2e2e; /* justified above in "WCAG 2" comment */ } .content h1 { font-size: 1.60em; color: #444; } .content h2 { font-size: 1.45em; color: #444; } .content h3 { font-size: 1.15em; color: #444; } .content h4 { font-size: 1.05em; color: #444; } .content h5 { font-size: 1.00em; color: #444; } .section { font-size: 1em; font-weight: bold; background-color: #f5f5f5; border: 1px solid #d8d8d8; border-radius: 3px 3px 0 0; |
︙ | ︙ | |||
149 150 151 152 153 154 155 | hr { color: #eee; } /* Page footer */ | | | | > > > > > > > > > > > > > > > > > > > > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > < > | 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 | hr { color: #eee; } /* Page footer */ footer { border-top: 1px solid #ccc; padding: 10px; font-size: 0.8em; margin-top: 10px; color: #ccc; } /* Forum */ .forum a:visited { color: #6A7F94; } div.forumSel { animation: 1s linear 0s sel-fade; background-color: white; /* animation end state */ border-left: 4px solid black; /* after-animation selection indicator */ } @keyframes sel-fade { from { background-color: #cef; } to { background-color: white; } } .forum form input { margin: 0.5em 0; } /* Markdown and 
Wiki-formatted pages: /wiki, /doc, /file... */ .markdown blockquote, p.blockquote, .sidebar { /* Override default.css version with our accent colors. Background is * the solid version of rgba(65, 131, 196, 0.1) on white, needed to * avoid tinting pre block backgrounds going "under" them. */ background-color: #ebf2f9; border-left-color: #4183c4; } div.sidebar { /* Add extra whitespace between sidebar and content, both for spacing * and to put a gap between it and any <pre> blocks that happen to run * up against it. */ outline: 1em solid white; } /* Mark inline code fragments in the near-universal manner pioneered by * Stack Overflow, then picked up by approximately everyone, including * us, now. * * This combinatorial selector explosion results from a need to apply * these stylings inside multiple page container types, multiplied by * the surprisingly large number of tags HTML defines for semantically * differentiated monospaced inline markup. If we do not target the * elements we want to affect carefully, we'll end up overreaching, * styling Fossil UI elements that use these tags for local purposes. * * HTML generated and emitted by Fossil UI does not always fall under * the skin's generic rules; we must avoid intruding on its domain. * Our limited intent here is to style user content only, where it is * unreasonable to expect its author to take the time to hand-craft * per-document styling. Contrast Fossil UI, which often does exactly * that in order to get particular results. 
* * Its rough equivalent in Sass syntax is far more compact, thus clearer: * * .artifact, .dir, .doc, .forum, .wiki // the page types we target * > .content // hands off header & footer * &, > .fossil-doc, > .markdown // wiki, HTML & MD emb docs * > p // in top-level paras only * > code, > kbd, > samp, > tt, > var // monospaced tag types * background-color: #f4f4f4 // pale gray box which… * padding: 0 4px // …extends around the sides * * We then need something similar for the block-level pre elements. * * The CSS below is based on feeding that Sass code through this: * * $ sassc code.sass | sed -e 's/, /,\n/g' * * …then hand-cleansing it to make it _somewhat_ more understandable. * That largely amounts to whitespace tweaks, but we've also done things * like trim back the forum-specific styling to apply to the default MD * markup only; direct HTML formatting isn't even an option there, and * while wiki markup _is_ supported, MD was the default from day 1. * Another quirk of the forum post handling is that the .markdown class * gets applied per-post, not up at the top level as with the wiki, * embedded docs, etc. 
*/ .artifact > .content > p > code, .artifact > .content > p > kbd, .artifact > .content > p > samp, .artifact > .content > p > tt, .artifact > .content > p > var, .artifact > .content > .fossil-doc > p > code, .artifact > .content > .fossil-doc > p > kbd, .artifact > .content > .fossil-doc > p > samp, .artifact > .content > .fossil-doc > p > tt, .artifact > .content > .fossil-doc > p > var, .artifact > .content > .markdown > p > code, .artifact > .content > .markdown > p > kbd, .artifact > .content > .markdown > p > samp, .artifact > .content > .markdown > p > tt, .artifact > .content > .markdown > p > var, .dir > .content > p > code, .dir > .content > p > kbd, .dir > .content > p > samp, .dir > .content > p > tt, .dir > .content > p > var, .dir > .content > .fossil-doc > p > code, .dir > .content > .fossil-doc > p > kbd, .dir > .content > .fossil-doc > p > samp, .dir > .content > .fossil-doc > p > tt, .dir > .content > .fossil-doc > p > var, .dir > .content > .markdown > p > code, .dir > .content > .markdown > p > kbd, .dir > .content > .markdown > p > samp, .dir > .content > .markdown > p > tt, .dir > .content > .markdown > p > var, .doc > .content > p > code, .doc > .content > p > kbd, .doc > .content > p > samp, .doc > .content > p > tt, .doc > .content > p > var, .doc > .content > .fossil-doc > p > code, .doc > .content > .fossil-doc > p > kbd, .doc > .content > .fossil-doc > p > samp, .doc > .content > .fossil-doc > p > tt, .doc > .content > .fossil-doc > p > var, .doc > .content > .markdown > p > code, .doc > .content > .markdown > p > kbd, .doc > .content > .markdown > p > samp, .doc > .content > .markdown > p > tt, .doc > .content > .markdown > p > var, .forum > .content .markdown > p > code, .forum > .content .markdown > p > kbd, .forum > .content .markdown > p > samp, .forum > .content .markdown > p > tt, .forum > .content .markdown > p > var, .wiki > .content > p > code, .wiki > .content > p > kbd, .wiki > .content > p > samp, .wiki > .content > p > 
tt, .wiki > .content > p > var, .wiki > .content > .fossil-doc > p > code, .wiki > .content > .fossil-doc > p > kbd, .wiki > .content > .fossil-doc > p > samp, .wiki > .content > .fossil-doc > p > tt, .wiki > .content > .fossil-doc > p > var, .wiki > .content > .markdown > p > code, .wiki > .content > .markdown > p > kbd, .wiki > .content > .markdown > p > samp, .wiki > .content > .markdown > p > tt, .wiki > .content > .markdown > p > var, .artifact > .content > pre, .artifact > .content > .fossil-doc > pre, .artifact > .content > .markdown > pre, .dir > .content > pre, .dir > .content > .fossil-doc > pre, .dir > .content > .markdown > pre, .doc > .content > pre, .doc > .content > .fossil-doc > pre, .doc > .content > .markdown > pre, .forum > .content .markdown > pre, .wiki > .content > pre, .wiki > .content > .fossil-doc > pre, .wiki > .content > .markdown > pre { background-color: #f4f4f4; padding: 0 4px; } .content pre, table.numbered-lines > tbody > tr { hyphens: none; line-height: 1.25; } .content ul li { list-style-type: disc; } .artifact > .content table, .dir > .content table, .doc > .content table { background-color: #f0f5f9; border: 1px solid #a7c2dc; border-radius: 0.5em; border-spacing: 0; padding: 6px; } .artifact > .content th, .dir > .content th, .doc > .content th { border-bottom: 1px solid #dee8f2; padding-bottom: 4px; padding-right: 6px; text-align: left; } .artifact > .content tr > th, .dir > .content tr > th, .doc > .content tr > th { background-color: #dee8f0; } .artifact > .content tr:nth-child(odd), .dir > .content tr:nth-child(odd), .doc > .content tr:nth-child(odd) { background-color: #e0e8ee; } .artifact > .content td, .dir > .content td, .doc > .content td { padding-bottom: 4px; padding-right: 6px; text-align: left; } /* Wiki adjustments */ pre.verbatim { /* keep code examples from crashing into sidebars, etc. */ white-space: pre-wrap; } textarea.wikiedit { /* Monospace fonts tend to have smaller x-heights; compensate. 
* Can't do this generally because not all fonts have this problem. * A textarea stands alone, whereas inline <code> has to work with * the browser's choice of sans-serif proportional font. */ font-size: 1.1em; } /* Tickets */ table.report { cursor: auto; border: 1px solid #ccc; border-radius: 0.5em; margin: 1em 0; } .report td, .report th { border: 0; font-size: .8em; padding: 10px; } |
︙ | ︙ | |||
225 226 227 228 229 230 231 | white-space: pre-wrap; } /* Timeline */ span.timelineDetail { | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 | white-space: pre-wrap; } /* Timeline */ span.timelineDetail { font-size: 90%; } div.timelineDate { font-weight: bold; white-space: nowrap; } 
/* Extend default.css comment cell rounding to the whole row for the * various types of "selected" rows, making them "hang" into the left * margin, distinguishing them from the coloring used for branch cells. * Care must be taken to avoid having the box-shadow rounded but the * background squared-off. */ table.timelineTable { padding: 0 3px; /* leave space to sides for box shadow; can clip otherwise */ } table.timelineTable tr { border-radius: 1em; } tr.timelineSelected, tr.timelineSecondary { background-color: unset; } tr.timelineSelected td, span.timelineSelected { background-color: #fbe8d5; } tr.timelineSecondary td, span.timelineSecondary { background-color: #d5e8fb; } tr.timelineCurrent td:first-child, tr.timelineSecondary td:first-child, tr.timelineSelected td:first-child { border-top-left-radius: 1em; border-bottom-left-radius: 1em; } tr.timelineCurrent td:last-child, tr.timelineSecondary td:last-child, tr.timelineSelected td:last-child { border-top-right-radius: 1em; border-bottom-right-radius: 1em; } tr.timelineCurrent td { border-top: 1px dashed #446979; border-bottom: 1px dashed #446979; } tr.timelineCurrent td:first-child { border-left: 1px dashed #446979; } tr.timelineCurrent td:last-child { border-right: 1px dashed #446979; } /* Miscellaneous UI elements */ .fossil-tooltip.help-buttonlet-content { background-color: lightyellow; } /* Exceptions for specific screen sizes */ @media screen and (max-width: 600px) { /* Spacing for mobile */ body { padding-left: 4px; padding-right: 4px; } .content { font-size: 0.9em; } .title { padding-top: 0px; padding-bottom: 0px; } .title > .page-title { display: inline; /* show page titles above menu bar… */ } .artifact .title > .page-title, .dir .title > .page-title, .doc .title > .page-title, .wiki .title > .page-title { display: none; /* …except for docs, where it may force wrapping */ } .status {padding-top: 0px;} .mainmenu a { padding: 8px 10px; } .mainmenu { padding: 10px; } } @media screen and (min-width: 600px) { 
/* Spacing for desktop */ body { padding-left: 20px; padding-right: 20px; } .title { padding-top: 10px; padding-bottom: 10px; } span.page-title { font-size: 18px; } div.logo { padding-top: 10px; } .status {padding-top: 30px;} .mainmenu a { padding: 8px 20px; } .mainmenu { padding: 10px; } /* Wide screens mean long lines. Add extra leading to give the eye a * "gutter" to follow from the end of one to the start of the next. */ .content dd, .content dt, .content div, .content li, .content p, .content table { line-height: 1.4em; } /* This horror show has the same cause that informed our handling of * <code> and friends above; see "combinatorial selector explosion." * Without this careful targeting, we'd not only overreach into areas * of Fossil UI where our meddling is not wanted, we would mistakenly * apply double indents to nested formatting in MD forum posts, p * within td tags, and more. * * Rather than give the equivalent Sass code here, see the SCSS file * that the [Inskinerator](https://tangentsoft.com/inskinerator/) * project ships as override/modern/media.scss. Rendering that * through sassc gives substantially identical output, modulo the * hand-polishing we've done here. 
*/ .artifact > .content > p, .artifact > .content > .markdown > p, .artifact > .content > .fossil-doc > p, .artifact > .content > ol, .artifact > .content > ul, .artifact > .content > .markdown > ol, .artifact > .content > .markdown > ul, .artifact > .content > .fossil-doc > ol, .artifact > .content > .fossil-doc > ul, .artifact > .content > table, .artifact > .content > .markdown > table, .artifact > .content > .fossil-doc > table, .dir > .content > p, .dir > .content > .markdown > p, .dir > .content > .fossil-doc > p, .dir > .content > ol, .dir > .content > ul, .dir > .content > .markdown > ol, .dir > .content > .markdown > ul, .dir > .content > .fossil-doc > ol, .dir > .content > .fossil-doc > ul, .dir > .content > table, .dir > .content > .markdown > table, .dir > .content > .fossil-doc > table, .doc > .content > p, .doc > .content > .markdown > p, .doc > .content > .fossil-doc > p, .doc > .content > ol, .doc > .content > ul, .doc > .content > .markdown > ol, .doc > .content > .markdown > ul, .doc > .content > .fossil-doc > ol, .doc > .content > .fossil-doc > ul, .doc > .content > table, .doc > .content > .markdown > table, .doc > .content > .fossil-doc > table, .wiki > .content > p, .wiki > .content > .markdown > p, .wiki > .content > .fossil-doc > p, .wiki > .content > ol, .wiki > .content > ul, .wiki > .content > .markdown > ol, .wiki > .content > .markdown > ul, .wiki > .content > .fossil-doc > ol, .wiki > .content > .fossil-doc > ul, .wiki > .content > table, .wiki > .content > .markdown > table, .wiki > .content > .fossil-doc > table, #fileedit-tab-preview-wrapper > p, #fileedit-tab-preview-wrapper > ol, #fileedit-tab-preview-wrapper > ul, #fileedit-tab-preview-wrapper > table, #fileedit-tab-preview-wrapper > .markdown > p, #fileedit-tab-preview-wrapper > .markdown > ol, #fileedit-tab-preview-wrapper > .markdown > ul, #fileedit-tab-preview-wrapper > .markdown > table, #wikiedit-tab-preview-wrapper > p, #wikiedit-tab-preview-wrapper > ol, 
#wikiedit-tab-preview-wrapper > ul, #wikiedit-tab-preview-wrapper > table, #wikiedit-tab-preview-wrapper > .markdown > p, #wikiedit-tab-preview-wrapper > .markdown > ol, #wikiedit-tab-preview-wrapper > .markdown > ul, #wikiedit-tab-preview-wrapper > .markdown > table { margin-left: 50pt; margin-right: 50pt; } /* Code blocks get extra indenting. We need a selector explosion * equally powerful to the one above for inline <code> fragments and * similar elements, for essentially the same reason: Fossil UI also * uses <pre>, and we want to affect user content only. * * The equivalent Sass code is: * * .artifact, .dir, .doc, .wiki // doc types we target * > .content // hands off header & footer * @import 'pre-doc-margins.sass' * * #fileedit-tab-preview-wrapper, // include /fileedit previews * #wikiedit-tab-preview-wrapper // ditto /wikiedit * @import 'pre-doc-margins.sass' * * …where pre-doc-margins.sass contains the elements common to both: * * &, > .fossil-doc, > .markdown // wiki, HTML & MD doc types * > pre // direct pre descendants only * margin-left: 70pt; * margin-right: 50pt; * * This is a technical overreach since /wiki & /wikiedit lack support * for Fossil's HTML embedded doc markup capability, but we prefer to * draw the /fileedit parallel in our Sass example over the dubious * pleasure of being nit-picky on this point. Instead, we've chosen * to back that overreach out by hand below. 
*/ .artifact > .content > pre, .artifact > .content > .fossil-doc > pre, .artifact > .content > .markdown > pre, .dir > .content > pre, .dir > .content > .fossil-doc > pre, .dir > .content > .markdown > pre, .doc > .content > pre, .doc > .content > .fossil-doc > pre, .doc > .content > .markdown > pre, .wiki > .content > pre, .wiki > .content > .markdown > pre { margin-left: 70pt; margin-right: 50pt; } #fileedit-tab-preview-wrapper > pre, #wikiedit-tab-preview-wrapper > pre, #fileedit-tab-preview-wrapper > .fossil-doc > pre, #fileedit-tab-preview-wrapper > .markdown > pre, #wikiedit-tab-preview-wrapper > .markdown > pre { margin-left: 70pt; margin-right: 50pt; } .forum > .content .markdown > pre { margin-left: 20pt; /* special case for MD in forum; need less indent */ } /* Fossil UI uses these, but in sufficiently constrained ways that we * don't have to be nearly as careful to avoid an overreach. */ .doc > .content h1, .artifact h1, .dir h1, .fileedit h1, .wiki h1 { margin-left: 10pt; } .doc > .content h2, .artifact h2, .dir h2, .fileedit h2, .wiki h2 { margin-left: 20pt; } .doc > .content h3, .artifact h3, .dir h3, .fileedit h3, .wiki h3 { margin-left: 30pt; } .doc > .content h4, .artifact h4, .dir h4, .fileedit h4, .wiki h4 { margin-left: 40pt; } .doc > .content h5, .artifact h5, .dir h5, .fileedit h5, .wiki h5 { margin-left: 50pt; } .doc > .content hr, .artifact hr, .dir hr, .fileedit hr, .wiki hr { margin-left: 10pt; } /* Don't need to be nearly as careful with tags Fossil UI doesn't use. */ .doc dd, .artifact dd, .dir dd, .fileedit dd, .wikiedit dd { margin-left: 30pt; margin-bottom: 1em; } .doc dl, .artifact dl, .dir dl, .fileedit dl, .wikiedit dl { margin-left: 60pt; } .doc dt, .artifact dt, .dir dt, .fileedit dt, .wikiedit dt { margin-left: 10pt; } /* Fossil UI doesn't use Pikchr at all (yet?) so we can be quite loose * with these selectors. 
*/ .content .pikchr-wrapper { margin-left: 70pt; } div.pikchr-wrapper.indent:not(.source) { /* Selector naming scheme mismatch is intentional: it must match the * way it's given in default.css exactly if it is to override it. */ margin-left: 70pt; margin-right: 50pt; } div.pikchr-wrapper.center:not(.source), div.pikchr-wrapper.float-right:not(.source) { margin-left: 0; } /* Special treatment for backward compatibility. */ .indent, /* clean alternative to misusing <blockquote> */ .artifact > .content > blockquote:not(.file-content), .dir > .content > blockquote, .doc > .content > blockquote, .fileedit > .content > blockquote, .wiki > .content > blockquote { /* We must apply extra indent relative to "p" since Fossil's wiki * generator misuses the blockquote tag against HTML and MD norms * to mean "indented paragraph." Skip it for file content retrieved * by /dir URLs. */ margin-left: 80pt; } .artifact > .content > .markdown > blockquote, .dir > .content > .markdown > blockquote, .doc > .content > .markdown > blockquote, .fileedit > .content > .markdown > blockquote, .wiki > .content > .markdown > blockquote { /* Fossil MD didn't inherit that bug; its HTML generator emits * blockquote tags only for _block quotes_! A moderate indent * suffices due to the visual styling applied above. */ margin-left: 60pt; } /* Alternative to BLOCK.indent when wrapped in something that is * itself indented. The value is the delta between p and blockquote * above, expressed as padding instead of margin so it adds to the * outer margin instead of forcing the browser into picking one. */ .local-indent { padding-left: 30pt; } } |
Changes to skins/default/details.txt.
@@ -1,4 +1,6 @@
+pikchr-fontscale: "0.9"
+pikchr-scale: "1.1"
 timeline-arrowheads: 1
 timeline-circle-nodes: 1
 timeline-color-graph-lines: 1
 white-foreground: 0
Changes to skins/default/footer.txt.
|
(trunk version, lines 1-5)
<footer>
This page was generated in about
<th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by
Fossil $release_version $manifest_version $manifest_date
</footer>
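The elapsed-time figure these footers display comes from the TH1 expression `([utime]+[stime]+1000)/1000*0.001`, where `[utime]` and `[stime]` report user and system CPU time in microseconds. Assuming `/` performs integer division on integers (as in Tcl), adding 1000 before dividing coarsens the total up to a whole millisecond, which `*0.001` then converts to seconds. A small Python sketch of the same arithmetic (the helper name is ours, not Fossil's):

```python
def elapsed_seconds(utime_us: int, stime_us: int) -> float:
    """Mirror the TH1 expression ([utime]+[stime]+1000)/1000*0.001.

    utime_us/stime_us are user/system CPU time in microseconds; the
    result is seconds, rounded up to whole milliseconds (never zero).
    """
    ms = (utime_us + stime_us + 1000) // 1000  # integer division, as in TH1
    return ms * 0.001
```

Note that even a zero-cost page reports 0.001s, which is why the footer says "in about".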
Changes to skins/default/header.txt.
|
| > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > | > | | | | | | | > | | | | | | | | | | | | | | | > | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 | <header> <div class="logo"> <th1> ## See skins/original/header.txt for commentary; not repeated here. proc getLogoUrl { baseurl } { set idx(first) [string first // $baseurl] if {$idx(first) != -1} { set idx(first+1) [expr {$idx(first) + 2}] set idx(nextRange) [string range $baseurl $idx(first+1) end] set idx(next) [string first / $idx(nextRange)] if {$idx(next) != -1} { set idx(next) [expr {$idx(next) + $idx(first+1)}] set idx(next-1) [expr {$idx(next) - 1}] set scheme [string range $baseurl 0 $idx(first)] set host [string range $baseurl $idx(first+1) $idx(next-1)] if {[string compare $scheme http:/] == 0} { set scheme http:// } else { set scheme https:// } set logourl $scheme$host/ } else { set logourl $baseurl } } else { set logourl $baseurl } return $logourl } set logourl [getLogoUrl $baseurl] </th1> <a href="$logourl"> <img src="$logo_image_url" border="0" alt="$project_name"> </a> </div> <div class="title"> <h1>$<project_name></h1> <span class="page-title">$<title></span> </div> <div class="status"> <th1> if {[info exists login]} { html "<a href='$home/login'>$login</a>\n" } else { html "<a href='$home/login'>Login</a>\n" } </th1> </div> </header> <nav class="mainmenu" title="Main Menu"> <th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} { if {[string match $url\[/?#\]* /$current_page/]} { set class "active $class" } set url $home$url } html "<a href='$url' class='$class'>$name</a>\n" } </th1> </nav> <nav id="hbdrop" class='hbdrop' title="sitemap"></nav> <h1 
class="page-title">$<title></h1> |
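The `getLogoUrl` proc in this header reduces the repository's base URL to the site root while keeping the original scheme and host, so the logo links to the site rather than the repository. A rough Python equivalent (a hypothetical helper using the standard library instead of the TH1 string scanning; edge cases such as a URL with no path may differ slightly):

```python
from urllib.parse import urlsplit

def logo_url(baseurl: str) -> str:
    """Return the site root (scheme://host/) for an absolute URL.

    Mirrors the intent of the TH1 getLogoUrl proc: a base URL without
    a "//" authority part is returned unchanged.
    """
    parts = urlsplit(baseurl)
    if parts.scheme and parts.netloc:
        return f"{parts.scheme}://{parts.netloc}/"
    return baseurl
```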
Changes to skins/eagle/css.txt.
︙ | ︙ | |||
45 46 47 48 49 50 51 | color: white; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | color: white; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ nav.mainmenu { padding: 5px 10px 5px 10px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #76869D; border-top-left-radius: 8px; border-top-right-radius: 8px; color: white; } nav#hbdrop { background-color: #485D7B; border-radius: 0 0 15px 15px; border-left: 0.5em solid #76869d; border-bottom: 1.2em solid #76869d; display: none; width: 98%; position: absolute; z-index: 20; } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; font-weight: bold; text-align: center; background-color: #485D7B; color: white; } nav.mainmenu a, nav.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } nav.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { text-decoration: underline; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ | |||
129 130 131 132 133 134 135 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ footer { clear: both; font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #485D7B; border-bottom-left-radius: 8px; |
︙ | ︙ |
Changes to skins/eagle/footer.txt.
|
| | | 1 2 3 4 5 6 7 8 | <footer> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { return "<a href=\"https://www.tcl.tk/\">Tcl</a> version $tclVersion" } return "" } |
︙ | ︙ | |||
17 18 19 20 21 22 23 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> | | | 17 18 19 20 21 22 23 24 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> </footer> |
Changes to skins/eagle/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <header> <div class="logo"> <th1> ## ## NOTE: The purpose of this procedure is to take the base URL of the ## Fossil project and return the root of the entire web site using ## the same URI scheme as the base URL (e.g. http or https). ## |
︙ | ︙ | |||
74 75 76 77 78 79 80 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> | | | | | | | | | | | | | | | | | | | > | | | | | | | | > | | 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> </header> <th1>html "<script nonce='$nonce'>"</th1> (function updateClock(){ var e = document.getElementById("clock"); if(!e) return; if(!updateClock.fmt){ updateClock.fmt = function(n){ return n < 10 ? '0' + n : n; }; } var d = new Date(); e.innerHTML = d.getUTCFullYear()+ '-' + updateClock.fmt(d.getUTCMonth() + 1) + '-' + updateClock.fmt(d.getUTCDate()) + ' ' + updateClock.fmt(d.getUTCHours()) + ':' + updateClock.fmt(d.getUTCMinutes()); setTimeout(updateClock,(60-d.getUTCSeconds())*1000); })(); </script> <nav class="mainmenu" title="Main Menu"> <th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>\n" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url' class='$class'>$name</a>\n" } </th1> </nav> <nav id="hbdrop" class='hbdrop' title="sitemap"></nav> |
Added skins/etienne/README.md.
This skin was contributed by Étienne Deparis. It was promoted to the
default from 2015-03-14 until February 2024, when it was forked into
this location for use by those who do not want the large number of
changes merged into trunk at that time.

Even if you agree that those changes improve readability, you may
still prefer this skin because it packs more information onto the
screen, at some cost in readability. Other reasons to choose this fork
are to migrate custom skin changes onto the new base, or to make a
comparative design evaluation.

A bare minimum of changes has been made to this fork, primarily to
allow the skin to render the Fossil documentation legibly. The intent
is that you be able to toggle between these two skins at will.
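For comparison purposes, a skin can be tried without changing the repository's saved setting; Fossil's `--skin` option to `fossil ui`/`fossil server` accepts a built-in skin name, which (if our reading of the skin loader is right) matches these directory names under `skins/`:

```sh
# Serve the repository with the forked skin for this session only
fossil ui --skin etienne

# And again with the reworked default, for a side-by-side look
fossil ui --skin default
```

The repository's saved skin can also be switched at any time from Admin → Skins in the web UI.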
Added skins/etienne/css.txt.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 | /* Overall page style; vi: filetype=css */ body { margin: 0 auto; background-color: white; font-family: sans-serif; font-size: 14pt; } a { color: #4183C4; text-decoration: none; } a:hover { color: #4183C4; text-decoration: underline; } /* Page 
title, above menu bars */ .title { color: #4183C4; float: left; } .title h1 { display: inline; } .title h1:after { content: " / "; color: #777; font-weight: normal; } .status { float: right; font-size: 0.7em; } /* Main menu and optional sub-menu */ .mainmenu { font-size: 0.8em; clear: both; background: #eaeaea linear-gradient(#fafafa, #eaeaea) repeat-x; border: 1px solid #eaeaea; border-radius: 5px; overflow-x: auto; overflow-y: hidden; white-space: nowrap; z-index: 21; /* just above hbdrop */ } .mainmenu a { text-decoration: none; color: #777; border-right: 1px solid #eaeaea; } .mainmenu a.active, .mainmenu a:hover { color: #000; border-bottom: 2px solid #D26911; } nav#hbdrop { background-color: white; border: 1px solid black; border-top: white; border-radius: 0 0 0.5em 0.5em; display: none; font-size: 80%; left: 2em; width: 90%; padding-right: 1em; position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } .submenu { font-size: .7em; padding: 10px; border-bottom: 1px solid #ccc; } .submenu a, .submenu label { padding: 10px 11px; text-decoration: none; color: #777; } .submenu label { white-space: nowrap; } .submenu a:hover, .submenu label:hover { padding: 6px 10px; border: 1px solid #ccc; border-radius: 5px; color: #000; } span.submenuctrl, span.submenuctrl input, select.submenuctrl { color: #777; } span.submenuctrl { white-space: nowrap; } /* Main document area; elements common to most pages. 
*/ .content { padding-top: 10px; font-size: 0.8em; color: #444; } .content blockquote { padding: 0 15px; } .content h1 { font-size: 1.25em; } .content h2 { font-size: 1.15em; } .content h3 { font-size: 1.05em; } .section { font-size: 1em; font-weight: bold; background-color: #f5f5f5; border: 1px solid #d8d8d8; border-radius: 3px 3px 0 0; padding: 9px 10px 10px; margin: 10px 0; } .sectionmenu { border: 1px solid #d8d8d8; border-radius: 0 0 3px 3px; border-top: 0; margin-top: -10px; margin-bottom: 10px; padding: 10px; } .sectionmenu a { display: inline-block; margin-right: 1em; } hr { color: #eee; } /* Page footer */ footer { border-top: 1px solid #ccc; padding: 10px; font-size: 0.7em; margin-top: 10px; color: #ccc; } /* Forum */ .forum a:visited { color: #6A7F94; } .forum blockquote { background-color: rgba(65, 131, 196, 0.1); border-left: 3px solid #254769; padding: .1em 1em; } /* Markdown and Wiki-formatted pages: /wiki, /doc, /file... */ .doc > .content table { background-color: rgba(0, 0, 0, 0.05); border: 1px solid #aaa; border-radius: 0.5em; border-spacing: 0; padding: 6px; } .doc > .content th { border-bottom: 1px solid #ddd; padding-bottom: 4px; padding-right: 6px; text-align: left; } .doc > .content tr > th { background-color: #eee; } .doc > .content tr:nth-child(odd) { background-color: #e8e8e8; } .doc > .content td { padding-bottom: 4px; padding-right: 6px; text-align: left; } /* Tickets */ table.report { cursor: auto; border-radius: 5px; border: 1px solid #ccc; margin: 1em 0; } .report td, .report th { border: 0; font-size: .8em; padding: 10px; } .report td:first-child { border-top-left-radius: 5px; } .report tbody tr:last-child td:first-child { border-bottom-left-radius: 5px; } .report td:last-child { border-top-right-radius: 5px; } .report tbody tr:last-child { border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; } .report tbody tr:last-child td:last-child { border-bottom-right-radius: 5px; } .report th { cursor: pointer; } .report 
thead+tbody tr:hover { background-color: #f5f9fc !important; } td.tktDspLabel { width: 70px; text-align: right; overflow: hidden; } td.tktDspValue { text-align: left; vertical-align: top; background-color: #f8f8f8; border: 1px solid #ccc; } td.tktDspValue pre { white-space: pre-wrap; } /* Timeline */ span.timelineDetail { font-size: 90%; } div.timelineDate { font-weight: bold; white-space: nowrap; } /* Miscellaneous UI elements */ .fossil-tooltip.help-buttonlet-content { background-color: lightyellow; } /* Exceptions for specific screen sizes */ @media screen and (max-width: 600px) { /* Spacing for mobile */ body { padding-left: 4px; padding-right: 4px; } .title { padding-top: 0px; padding-bottom: 0px; } .status {padding-top: 0px;} .mainmenu a { padding: 8px 10px; } .mainmenu { padding: 10px; } } @media screen and (min-width: 600px) { /* Spacing for desktop */ body { padding-left: 20px; padding-right: 20px; } .title { padding-top: 10px; padding-bottom: 10px; } .status {padding-top: 30px;} .mainmenu a { padding: 8px 20px; } .mainmenu { padding: 10px; } } |
Added skins/etienne/details.txt.
(new file, lines 1-4)
timeline-arrowheads: 1
timeline-circle-nodes: 1
timeline-color-graph-lines: 1
white-foreground: 0
Added skins/etienne/footer.txt.
(new file, lines 1-5)
<footer>
This page was generated in about
<th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by
Fossil $release_version $manifest_version $manifest_date
</footer>
Added skins/etienne/header.txt.
(new file, lines 1-29)
<header>
<div class="title"><h1>$<project_name></h1>$<title></div>
<div class="status">
<th1>
  if {[info exists login]} {
    html "<a href='$home/login'>$login</a>\n"
  } else {
    html "<a href='$home/login'>Login</a>\n"
  }
</th1>
</div>
</header>
<nav class="mainmenu" title="Main Menu">
<th1>
  html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>"
  builtin_request_js hbmenu.js
  foreach {name url expr class} $mainmenu {
    if {![capexpr $expr]} continue
    if {[string match /* $url]} {
      if {[string match $url\[/?#\]* /$current_page/]} {
        set class "active $class"
      }
      set url $home$url
    }
    html "<a href='$url' class='$class'>$name</a>\n"
  }
</th1>
</nav>
<nav id="hbdrop" class='hbdrop' title="sitemap"></nav>
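This header highlights a main-menu entry as "active" when the current page sits under the entry's URL, via the TH1 glob test `[string match $url\[/?#\]* /$current_page/]`. The logic can be sketched in Python with the standard-library `fnmatch` module, whose glob syntax matches Tcl's `string match` for the patterns used here (the function name is ours; like the TH1 original, it assumes the menu URL itself contains no glob metacharacters):

```python
from fnmatch import fnmatchcase

def is_active(menu_url: str, current_page: str) -> bool:
    # Mirror of the TH1 test [string match $url\[/?#\]* /$current_page/]:
    # the entry is active when "/<current_page>/" begins with the menu URL
    # followed by '/', '?' or '#'.
    return fnmatchcase("/" + current_page + "/", menu_url + "[/?#]*")
```

So `/timeline` is marked active on the timeline page, while `/wiki` is not marked active on `/wikiedit`, even though one is a prefix of the other.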
Changes to skins/khaki/css.txt.
︙ | ︙ | |||
39 40 41 42 43 44 45 | padding: 5px 5px 0 0; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | | | | | | | 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | padding: 5px 5px 0 0; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ nav.mainmenu { padding: 5px 10px 5px 10px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #a09048; color: black; z-index: 21; /* just above hbdrop */ } nav#hbdrop { background-color: #fef3bc; border: 2px solid #a09048; border-radius: 0 0 0.5em 0.5em; display: none; left: 2em; width: 90%; padding-right: 1em; position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #c0af58; color: white; } nav.mainmenu a, nav.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } nav.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover, nav#hbdrop a:hover { color: #a09048; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { padding: 1ex 5px; } div.content a, nav#hbdrop a { color: #706532; } div.content a:link, nav#hbdrop a:link { color: #706532; } div.content a:visited, nav#hbdrop a:visited { color: #704032; } 
div.content a:hover, nav#hbdrop a:hover { background-color: white; color: #706532; } a, a:visited { text-decoration: none; } |
︙ | ︙ | |||
131 132 133 134 135 136 137 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | | | | | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ footer { font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #a09048; color: white; } /* Hyperlink colors */ footer a { color: white; } footer a:link { color: white; } footer a:visited { color: white; } footer a:hover { background-color: white; color: #558195; } /* <verbatim> blocks */ pre.verbatim { background-color: #f5f5f5; padding: 0.5em; white-space: pre-wrap; } |
︙ | ︙ |
Changes to skins/khaki/footer.txt.
|
| | | | 1 2 3 | <footer> Fossil $release_version $manifest_version $manifest_date </footer> |
Changes to skins/khaki/header.txt.
|
(trunk version, lines 1-25)
<header>
<div class="title">$<title></div>
<div class="status">
<div class="logo">$<project_name></div><br/>
<th1>
  if {[info exists login]} {
    puts "Logged in as $login"
  } else {
    puts "Not logged in"
  }
</th1>
</div>
</header>
<nav class="mainmenu" title="Main Menu">
<th1>
  html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>"
  builtin_request_js hbmenu.js
  foreach {name url expr class} $mainmenu {
    if {![capexpr $expr]} continue
    if {[string match /* $url]} {set url $home$url}
    html "<a href='$url' class='$class'>$name</a>\n"
  }
</th1>
</nav>
<nav id="hbdrop" class='hbdrop' title="sitemap"></nav>
Changes to skins/original/css.txt.
︙ | ︙ | |||
40 41 42 43 44 45 46 | color: #558195; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | color: #558195; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ nav.mainmenu { padding: 5px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #558195; border-top-left-radius: 8px; border-top-right-radius: 8px; color: white; } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } nav.mainmenu a, nav.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } nav.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #558195; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ |
︙ | ︙ | |||
113 114 115 116 117 118 119 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | | | | | 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ footer { clear: both; font-size: 0.8em; padding: 5px 10px 5px 10px; text-align: right; background-color: #558195; border-bottom-left-radius: 8px; border-bottom-right-radius: 8px; color: white; } /* Hyperlink colors in the footer */ footer a { color: white; } footer a:link { color: white; } footer a:visited { color: white; } footer a:hover { background-color: white; color: #558195; } /* verbatim blocks */ pre.verbatim { background-color: #f5f5f5; padding: 0.5em; white-space: pre-wrap; } |
︙ | ︙ |
Changes to skins/original/footer.txt.
|
| | | 1 2 3 4 5 6 7 8 | <footer> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { return "<a href=\"https://www.tcl.tk/\">Tcl</a> version $tclVersion" } return "" } |
︙ | ︙ | |||
17 18 19 20 21 22 23 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> | | | 17 18 19 20 21 22 23 24 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> </footer> |
Changes to skins/original/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <header> <div class="logo"> <th1> ## ## NOTE: The purpose of this procedure is to take the base URL of the ## Fossil project and return the root of the entire web site using ## the same URI scheme as the base URL (e.g. http or https). ## |
︙ | ︙ | |||
68 69 70 71 72 73 74 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> | | | | | | | | | | | | | | | | | | | > | | | | | | | | | | | > | 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> </header> <th1>html "<script nonce='$nonce'>"</th1> function updateClock(){ var e = document.getElementById("clock"); if(e){ var d = new Date(); function f(n) { return n < 10 ? '0' + n : n; } e.innerHTML = d.getUTCFullYear()+ '-' + f(d.getUTCMonth() + 1) + '-' + f(d.getUTCDate()) + ' ' + f(d.getUTCHours()) + ':' + f(d.getUTCMinutes()); setTimeout(updateClock,(60-d.getUTCSeconds())*1000); } } updateClock(); </script> <nav class="mainmenu" title="Main Menu"> <th1> set sitemap 0 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url' class='$class'>$name</a>\n" if {[string match */sitemap $url]} {set sitemap 1} } if {!$sitemap} { html "<a href='$home/sitemap'>...</a>" } </th1> </nav> |
Changes to skins/plain_gray/css.txt.
︙ | ︙ | |||
26 27 28 29 30 31 32 | vertical-align: bottom; color: #404040; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | vertical-align: bottom; color: #404040; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ nav.mainmenu { padding: 5px 10px 5px 10px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #404040; color: white; |
︙ | ︙ | |||
65 66 67 68 69 70 71 | div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } | | | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } nav.mainmenu a, nav.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } nav.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #404040; background-color: white; } a, a:visited { |
︙ | ︙ | |||
129 130 131 132 133 134 135 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ footer { font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #404040; color: white; } |
︙ | ︙ |
Changes to skins/plain_gray/footer.txt.
|
(trunk version, lines 1-3)
<footer>
Fossil $release_version $manifest_version $manifest_date
</footer>
Changes to skins/plain_gray/header.txt.
|
(trunk version, lines 1-15)
<header>
<div class="title">$<project_name>: $<title></div>
</header>
<nav class="mainmenu" title="Main Menu">
<th1>
  html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>"
  builtin_request_js hbmenu.js
  foreach {name url expr class} $mainmenu {
    if {![capexpr $expr]} continue
    if {[string match /* $url]} {set url $home$url}
    html "<a href='$url' class='$class'>$name</a>\n"
  }
</th1>
</nav>
<nav id="hbdrop" class='hbdrop' title="sitemap"></nav>
Changes to skins/xekri/css.txt.
︙ | ︙ | |||
59 60 61 62 63 64 65 | h2 { font-size: 1.5rem; } h3 { font-size: 1.25rem; } | < < < < < < < < < | | | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | h2 { font-size: 1.5rem; } h3 { font-size: 1.25rem; } /************************************** * Main Area */ header, nav.mainmenu, div.submenu, div.content, footer { clear: both; margin: 0 auto; max-width: 90%; padding: 0.25rem 1rem; } /************************************** * Main Area: Header */ header { margin: 0.5rem auto 0 auto; display: flex; flex-direction: row; align-items: center; flex-wrap: wrap; } div.logo { |
︙ | ︙ | |||
144 145 146 147 148 149 150 | } /************************************** * Main Area: Global Menu */ | | | | | | 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | } /************************************** * Main Area: Global Menu */ nav.mainmenu, div.submenu { background-color: #080; border-radius: 1rem 1rem 0 0; box-shadow: 3px 4px 1px #000; color: #000; font-weight: bold; font-size: 1.1rem; text-align: center; } nav.mainmenu { padding-top: 0.33rem; padding-bottom: 0.25rem; } div.submenu { border-top: 1px solid #0a0; border-radius: 0; display: block; } nav.mainmenu a, div.submenu a, div.submenu label { color: #000; padding: 0 0.75rem; text-decoration: none; } nav.mainmenu a:hover, div.submenu a:hover, div.submenu label:hover { color: #fff; text-shadow: 0px 0px 6px #0f0; } div.submenu * { margin: 0 0.5rem; vertical-align: middle; |
︙ | ︙ | |||
221 222 223 224 225 226 227 | stroke: white; } /************************************** * Main Area: Footer */ | | | | | | | | 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 | stroke: white; } /************************************** * Main Area: Footer */ footer { color: #ee0; font-size: 0.75rem; padding: 0; text-align: right; width: 75%; } footer div { background-color: #222; box-shadow: 3px 3px 1px #000; border-radius: 0 0 1rem 1rem; margin: 0 0 10px 0; padding: 0.25rem 0.75rem; } footer div.page-time { float: left; } footer div.fossil-info { float: right; } footer a, footer a:link, footer a:visited { color: #ee0; } footer a:hover { color: #fff; text-shadow: 0px 0px 6px #ee0; } /************************************** * Check-in |
︙ | ︙ | |||
571 572 573 574 575 576 577 | margin: 1.2rem auto 0.75rem auto; padding: 0.2rem; text-align: center; } div.sectionmenu { border-radius: 0 0 3rem 3rem; | | | 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 | margin: 1.2rem auto 0.75rem auto; padding: 0.2rem; text-align: center; } div.sectionmenu { border-radius: 0 0 3rem 3rem; margin-top: auto; width: 75%; } div.sectionmenu > a:link, div.sectionmenu > a:visited { color: #000; text-decoration: none; } |
︙ | ︙ | |||
1082 1083 1084 1085 1086 1087 1088 | /* format for report configuration errors */ blockquote.reportError { color: #f00; font-weight: bold; } /* format for artifact lines, no longer shunned */ p.noMoreShun { | | | | 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 | /* format for report configuration errors */ blockquote.reportError { color: #f00; font-weight: bold; } /* format for artifact lines, no longer shunned */ p.noMoreShun { color: yellow; } /* format for artifact lines being shunned */ p.shunned { color: yellow; } /* a broken hyperlink */ span.brokenlink { color: #f00; } /* List of files in a timeline */ ul.filelist { |
︙ | ︙ | |||
1162 1163 1164 1165 1166 1167 1168 | } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { background-color: #444; } | | | | 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 | } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { background-color: #444; } body.chat header, body.chat footer, body.chat nav.mainmenu, body.chat div.submenu, body.chat div.content { margin-left: 0.5em; margin-right: 0.5em; margin-top: auto/*eliminates unnecessary scrollbars*/; } body.chat.chat-only-mode div.content { max-width: revert; } body.chat #chat-user-list .chat-user{ color: white; } |
Changes to skins/xekri/footer.txt.
1 | </div> | | | | | | | | | | 1 2 3 4 5 6 7 8 9 | </div> <footer> <div class="page-time"> Generated in <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> <div class="fossil-info"> Fossil v$release_version $manifest_version </div> </footer> |
Changes to skins/xekri/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <header> <div class="logo"> <th1> ## ## NOTE: The purpose of this procedure is to take the base URL of the ## Fossil project and return the root of the entire web site using ## the same URI scheme as the base URL (e.g. http or https). ## |
︙ | ︙ | |||
67 68 69 70 71 72 73 | } </th1> <a href="$logourl"> <img src="$logo_image_url" border="0" alt="$project_name"> </a> </div> <div class="title">$<title></div> | | > | | | | | > | | | | | | | | | | | | | | | | | | | > | | | | | | | | | | | | | | | | > | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | } </th1> <a href="$logourl"> <img src="$logo_image_url" border="0" alt="$project_name"> </a> </div> <div class="title">$<title></div> <div class="status"><nobr> <th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1> </nobr><small><div id="clock"></div></small></div> </header> <th1>html "<script nonce='$nonce'>"</th1> function updateClock(){ var e = document.getElementById("clock"); if(e){ var d = new Date(); function f(n) { return n < 10 ? '0' + n : n; } e.innerHTML = d.getUTCFullYear()+ '-' + f(d.getUTCMonth() + 1) + '-' + f(d.getUTCDate()) + ' ' + f(d.getUTCHours()) + ':' + f(d.getUTCMinutes()); setTimeout(updateClock,(60-d.getUTCSeconds())*1000); } } updateClock(); </script> <nav class="mainmenu" title="Main Menu"> <th1> set sitemap 0 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} { if {[string match $url\[/?#\]* /$current_page/]} { set class "active $class" } set url $home$url } html "<a href='$url' class='$class'>$name</a>\n" if {[string match */sitemap $url]} {set sitemap 1} } if {!$sitemap} { html "<a href='$home/sitemap'>...</a>\n" } </th1> </nav> |
Changes to src/ajax.c.
︙ | ︙ | |||
390 391 392 393 394 395 396 | */ void ajax_route_dispatcher(void){ const char * zName = P("name"); AjaxRoute routeName = {0,0,0,0}; const AjaxRoute * pRoute = 0; const AjaxRoute routes[] = { /* Keep these sorted by zName (for bsearch()) */ | | > > > > > > > > > | | 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | */ void ajax_route_dispatcher(void){ const char * zName = P("name"); AjaxRoute routeName = {0,0,0,0}; const AjaxRoute * pRoute = 0; const AjaxRoute routes[] = { /* Keep these sorted by zName (for bsearch()) */ {"preview-text", ajax_route_preview_text, 0, 1 /* Note that this does not require write permissions in the repo. ** It should arguably require write permissions but doing so means ** that /chat does not work without checkin permissions: ** ** https://fossil-scm.org/forum/forumpost/ed4a762b3a557898 ** ** This particular route is used by /fileedit and /chat, whereas ** /wikiedit uses a simpler wiki-specific route. */ } }; if(zName==0 || zName[0]==0){ ajax_route_error(400,"Missing required [route] 'name' parameter."); return; } routeName.zName = zName; pRoute = (const AjaxRoute *)bsearch(&routeName, routes, count(routes), sizeof routes[0], cmp_ajax_route_name); if(pRoute==0){ ajax_route_error(404,"Ajax route not found."); return; }else if(0==ajax_route_bootstrap(pRoute->bWriteMode, pRoute->bPost)){ return; } pRoute->xCallback(); }
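The dispatcher hunk above depends on the routes[] table staying sorted by zName so bsearch() can locate an entry in O(log n). A minimal self-contained sketch of that lookup pattern (the route names and callbacks here are hypothetical, not Fossil's actual table):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of name-keyed route dispatch via bsearch().
** The table must remain sorted by zName for bsearch() to work. */
typedef struct { const char *zName; int (*xCallback)(void); } Route;

static int route_preview(void){ return 1; }   /* hypothetical handler */
static int route_wikisave(void){ return 2; }  /* hypothetical handler */

static int cmp_route(const void *a, const void *b){
  return strcmp(((const Route*)a)->zName, ((const Route*)b)->zName);
}

/* Sorted by zName. */
static const Route aRoute[] = {
  { "preview-text", route_preview },
  { "wiki-save",    route_wikisave },
};

static const Route *find_route(const char *zName){
  Route key;
  key.zName = zName;
  return (const Route*)bsearch(&key, aRoute,
            sizeof(aRoute)/sizeof(aRoute[0]), sizeof(aRoute[0]), cmp_route);
}
```

An unknown name yields a NULL pointer, which maps onto the 404 branch of the dispatcher above.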
Changes to src/alerts.c.
︙ | ︙ | |||
45 46 47 48 49 50 51 52 53 54 55 56 57 58 | @ -- to the USER entry. @ -- @ -- The ssub field is a string where each character indicates a particular @ -- type of event to subscribe to. Choices: @ -- a - Announcements @ -- c - Check-ins @ -- f - Forum posts @ -- n - New forum threads @ -- r - Replies to my own forum posts @ -- t - Ticket changes @ -- w - Wiki changes @ -- x - Edits to forum posts @ -- Probably different codes will be added in the future. In the future @ -- we might also add a separate table that allows subscribing to email | > | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | @ -- to the USER entry. @ -- @ -- The ssub field is a string where each character indicates a particular @ -- type of event to subscribe to. Choices: @ -- a - Announcements @ -- c - Check-ins @ -- f - Forum posts @ -- k - ** Special: Unsubscribed using /oneclickunsub @ -- n - New forum threads @ -- r - Replies to my own forum posts @ -- t - Ticket changes @ -- w - Wiki changes @ -- x - Edits to forum posts @ -- Probably different codes will be added in the future. In the future @ -- we might also add a separate table that allows subscribing to email |
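The ssub field described above is a flat string of one-character event codes, so checking whether a subscriber wants a given event type reduces to a character-membership test. A hedged sketch (wants_event is an illustrative helper, not a Fossil function):

```c
#include <string.h>

/* Illustrative: ssub holds codes such as 'a','c','f','k','n','r','t',
** 'w','x'. Membership is a strchr() test. The special 'k' code marks
** a subscriber removed via /oneclickunsub. */
static int wants_event(const char *zSsub, char cType){
  return zSsub!=0 && strchr(zSsub, cType)!=0;
}
```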
︙ | ︙ | |||
84 85 86 87 88 89 90 | @ -- @ CREATE TABLE repository.pending_alert( @ eventid TEXT PRIMARY KEY, -- Object that changed @ sentSep BOOLEAN DEFAULT false, -- individual alert sent @ sentDigest BOOLEAN DEFAULT false, -- digest alert sent @ sentMod BOOLEAN DEFAULT false -- pending moderation alert sent @ ) WITHOUT ROWID; | | | 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | @ -- @ CREATE TABLE repository.pending_alert( @ eventid TEXT PRIMARY KEY, -- Object that changed @ sentSep BOOLEAN DEFAULT false, -- individual alert sent @ sentDigest BOOLEAN DEFAULT false, -- digest alert sent @ sentMod BOOLEAN DEFAULT false -- pending moderation alert sent @ ) WITHOUT ROWID; @ @ -- Obsolete table. No longer used. @ DROP TABLE IF EXISTS repository.alert_bounce; ; /* ** Return true if the email notification tables exist. */ |
︙ | ︙ | |||
873 874 875 876 877 878 879 | */ void email_header_to(Blob *pMsg, int *pnTo, char ***pazTo){ int nTo = 0; char **azTo = 0; Blob v; char *z, *zAddr; int i; | | | | 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 | */ void email_header_to(Blob *pMsg, int *pnTo, char ***pazTo){ int nTo = 0; char **azTo = 0; Blob v; char *z, *zAddr; int i; email_header_value(pMsg, "to", &v); z = blob_str(&v); for(i=0; z[i]; i++){ if( z[i]=='<' && (zAddr = email_copy_addr(&z[i+1],'>'))!=0 ){ azTo = fossil_realloc(azTo, sizeof(azTo[0])*(nTo+1) ); azTo[nTo++] = zAddr; } } *pnTo = nTo; *pazTo = azTo; } /* ** Free a list of To addresses obtained from a prior call to ** email_header_to() */ void email_header_to_free(int nTo, char **azTo){ int i; for(i=0; i<nTo; i++) fossil_free(azTo[i]); fossil_free(azTo); } |
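The email_header_to() hunk above scans the To: header for '<' and copies each address up to the closing '>'. The helper below is an illustrative stand-in for that copy step (not Fossil's email_copy_addr()), returning a freshly allocated address or NULL if the terminator never appears:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch: copy characters up to (not including) cTerm
** into a new buffer; return NULL if cTerm is missing. */
static char *copy_addr(const char *z, char cTerm){
  size_t i;
  char *zOut;
  for(i=0; z[i] && z[i]!=cTerm; i++){}
  if( z[i]!=cTerm ) return NULL;   /* unterminated: reject */
  zOut = malloc(i+1);
  if( zOut ){
    memcpy(zOut, z, i);
    zOut[i] = 0;
  }
  return zOut;
}
```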
︙ | ︙ | |||
913 914 915 916 917 918 919 | ** From: ** Date: ** Message-Id: ** Content-Type: ** Content-Transfer-Encoding: ** MIME-Version: ** Sender: | | | | 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 | ** From: ** Date: ** Message-Id: ** Content-Type: ** Content-Transfer-Encoding: ** MIME-Version: ** Sender: ** ** The caller maintains ownership of the input Blobs. This routine will ** read the Blobs and send them onward to the email system, but it will ** not free them. ** ** The Message-Id: field is added if there is not already a Message-Id ** in the pHdr parameter. ** ** If the zFromName argument is not NULL, then it should be a human-readable ** name or handle for the sender. In that case, "From:" becomes a made-up ** email address based on a hash of zFromName and the domain of email-self, ** and an additional "Sender:" field is inserted with the email-self ** address. Downstream software might use the Sender header to set ** the envelope-from address of the email. If zFromName is a NULL pointer, ** then the "From:" is set to the email-self value and Sender is ** omitted. */ void alert_send( AlertSender *p, /* Emailer context */ Blob *pHdr, /* Email header (incomplete) */ Blob *pBody, /* Email body */ |
︙ | ︙ | |||
1043 1044 1045 1046 1047 1048 1049 | ** the basename for hyperlinks included in email alert text. ** Omit the trailing "/". If the repository is not intended to be ** a long-running server and will not be sending email notifications, ** then leave this setting blank. */ /* ** SETTING: email-admin width=40 | | | 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 | ** the basename for hyperlinks included in email alert text. ** Omit the trailing "/". If the repository is not intended to be ** a long-running server and will not be sending email notifications, ** then leave this setting blank. */ /* ** SETTING: email-admin width=40 ** This is the email address for the human administrator for the system. ** Abuse and trouble reports and password reset requests are sent here. */ /* ** SETTING: email-subname width=16 ** This is a short name used to identify the repository in the Subject: ** line of email alerts. Traditionally this name is included in square ** brackets. Examples: "[fossil-src]", "[sqlite-src]".
︙ | ︙ | |||
1078 1079 1080 1081 1082 1083 1084 | ** a subscription is less than email-renew-cutoff, then no new emails ** are sent to the subscriber. ** ** email-renew-warning is the time (in days since 1970-01-01) when the ** last batch of "your subscription is about to expire" emails were ** sent out. ** | | | 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 | ** a subscription is less than email-renew-cutoff, then no new emails ** are sent to the subscriber. ** ** email-renew-warning is the time (in days since 1970-01-01) when the ** last batch of "your subscription is about to expire" emails were ** sent out. ** ** email-renew-cutoff is normally 7 days behind email-renew-warning. */ /* ** SETTING: email-send-method width=5 default=off sensitive ** Determine the method used to send email. Allowed values are ** "off", "relay", "pipe", "dir", "db", and "stdout". The "off" value ** means no email is ever sent. The "relay" value means emails are sent ** to a Mail Sending Agent using SMTP located at email-send-relayhost. ** The "pipe" value means email messages are piped into a command ** determined by the email-send-command setting. The "dir" value means ** emails are written to individual files in a directory determined ** by the email-send-dir setting. The "db" value means that emails ** are added to an SQLite database named by the email-send-db setting. ** The "stdout" value writes email text to standard output, for debugging. */ /*
︙ | ︙ | |||
1131 1132 1133 1134 1135 1136 1137 | ** SMTP server configured as a Mail Submission Agent listening on the ** designated host and port at all times. */ /* ** COMMAND: alerts* | | | 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 | ** SMTP server configured as a Mail Submission Agent listening on the ** designated host and port at all times. */ /* ** COMMAND: alerts* ** ** Usage: %fossil alerts SUBCOMMAND ARGS... ** ** Subcommands: ** ** pending Show all pending alerts. Useful for debugging. ** ** reset Hard reset of all email notification tables
︙ | ︙ | |||
1457 1458 1459 1460 1461 1462 1463 | /* If we reach this point, all is well */ return 1; } /* ** Text of email message sent in order to confirm a subscription. */ | | | 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 | /* If we reach this point, all is well */ return 1; } /* ** Text of email message sent in order to confirm a subscription. */ static const char zConfirmMsg[] = @ Someone has signed you up for email alerts on the Fossil repository @ at %s. @ @ To confirm your subscription and begin receiving alerts, click on @ the following hyperlink: @ @ %s/alerts/%s |
︙ | ︙ | |||
1740 1741 1742 1743 1744 1745 1746 | } /* ** Either shutdown or completely delete a subscription entry given ** by the hex value zName. Then paint a webpage that explains that ** the entry has been removed. */ | | | > > | | | > > > > > > > > | 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 | } /* ** Either shutdown or completely delete a subscription entry given ** by the hex value zName. Then paint a webpage that explains that ** the entry has been removed. */ static void alert_unsubscribe(int sid, int bTotal){ const char *zEmail = 0; const char *zLogin = 0; int uid = 0; Stmt q; db_prepare(&q, "SELECT semail, suname FROM subscriber" " WHERE subscriberId=%d", sid); if( db_step(&q)==SQLITE_ROW ){ zEmail = db_column_text(&q, 0); zLogin = db_column_text(&q, 1); uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zLogin); } style_set_current_feature("alerts"); if( zEmail==0 ){ style_header("Unsubscribe Fail"); @ <p>Unable to locate a subscriber with the requested key</p> }else{ db_unprotect(PROTECT_READONLY); if( bTotal ){ /* Completely delete the subscriber */ db_multi_exec( "DELETE FROM subscriber WHERE subscriberId=%d", sid ); }else{ /* Keep the subscriber, but turn off all notifications */ db_multi_exec( "UPDATE subscriber SET ssub='k', mtime=now() WHERE subscriberId=%d", sid ); } db_protect_pop(); style_header("Unsubscribed"); @ <p>The "%h(zEmail)" email address has been unsubscribed from all @ notifications. All subscription records for "%h(zEmail)" have @ been purged. No further emails will be sent to "%h(zEmail)".</p> if( uid && g.perm.Admin ){ @ <p>You may also want to @ <a href="%R/setup_uedit?id=%d(uid)">edit or delete |
︙ | ︙ | |||
1792 1793 1794 1795 1796 1797 1798 | ** email and clicks on the link in the email. When a ** complete subscriberCode is seen on the name= query parameter, ** that constitutes verification of the email address. ** ** * The sid= query parameter contains an integer subscriberId. ** This only works for the administrator. It allows the ** administrator to edit any subscription. | | | 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 | ** email and clicks on the link in the email. When a ** complete subscriberCode is seen on the name= query parameter, ** that constitutes verification of the email address. ** ** * The sid= query parameter contains an integer subscriberId. ** This only works for the administrator. It allows the ** administrator to edit any subscription. ** ** * The user is logged into an account other than "nobody" or ** "anonymous". In that case the notification settings ** associated with that account can be edited without needing ** to know the subscriber code. ** ** * The name= query parameter contains a 32-digit prefix of ** subscriber code. (Subscriber codes are normally 64 hex digits
︙ | ︙ | |||
1922 1923 1924 1925 1926 1927 1928 | } if( P("delete")!=0 && cgi_csrf_safe(2) ){ if( !PB("dodelete") ){ eErr = 9; zErr = mprintf("Select this checkbox and press \"Unsubscribe\" again to" " unsubscribe"); }else{ | | | | 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 | } if( P("delete")!=0 && cgi_csrf_safe(2) ){ if( !PB("dodelete") ){ eErr = 9; zErr = mprintf("Select this checkbox and press \"Unsubscribe\" again to" " unsubscribe"); }else{ alert_unsubscribe(sid, 1); db_commit_transaction(); return; } } style_set_current_feature("alerts"); style_header("Update Subscription"); db_prepare(&q, "SELECT" " semail," /* 0 */ |
︙ | ︙ | |||
2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 | @ Ticket changes</label><br> } if( g.perm.RdWiki ){ @ <label><input type="checkbox" name="sw" %s(sw?"checked":"")>\ @ Wiki</label> } @ </td></tr> @ <tr> @ <td class="form_label">Delivery:</td> @ <td><select size="1" name="sdigest"> @ <option value="0" %s(sdigest?"":"selected")>Individual Emails</option> @ <option value="1" %s(sdigest?"selected":"")>Daily Digest</option> @ </select></td> @ </tr> | > > > > | 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 | @ Ticket changes</label><br> } if( g.perm.RdWiki ){ @ <label><input type="checkbox" name="sw" %s(sw?"checked":"")>\ @ Wiki</label> } @ </td></tr> if( strchr(ssub,'k')!=0 ){ @ <tr><td></td><td> ↑ @ Note: User did a one-click unsubscribe</td></tr> } @ <tr> @ <td class="form_label">Delivery:</td> @ <td><select size="1" name="sdigest"> @ <option value="0" %s(sdigest?"":"selected")>Individual Emails</option> @ <option value="1" %s(sdigest?"selected":"")>Daily Digest</option> @ </select></td> @ </tr> |
︙ | ︙ | |||
2181 2182 2183 2184 2185 2186 2187 | style_finish_page(); } /* This is the message that gets sent to describe how to change ** or modify a subscription */ | | > > > > | | | > > | 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 | style_finish_page(); } /* This is the message that gets sent to describe how to change ** or modify a subscription */ static const char zUnsubMsg[] = @ To change your subscription settings at %s visit this link: @ @ %s/alerts/%s @ @ To completely unsubscribe from %s, visit the following link: @ @ %s/unsubscribe/%s ; /* ** WEBPAGE: unsubscribe ** WEBPAGE: oneclickunsub ** ** Users visit this page to be delisted from email alerts. ** ** If a valid subscriber code is supplied in the name= query parameter, ** then that subscriber is delisted. ** ** Otherwise, if the user is logged in, then they are redirected ** to the /alerts page where they have an unsubscribe button. ** ** Non-logged-in users with no name= query parameter are invited to enter ** an email address to which will be sent the unsubscribe link that ** contains the correct subscriber code. ** ** The /unsubscribe page requires confirmation. The /oneclickunsub ** page unsubscribes immediately without any need to confirm. */ void unsubscribe_page(void){ const char *zName = P("name"); char *zErr = 0; int eErr = 0; unsigned int uSeed = 0; const char *zDecoded; char *zCaptcha = 0; int dx; int bSubmit; const char *zEAddr; char *zCode = 0; int sid = 0; if( zName==0 ) zName = P("scode"); /* If a valid subscriber code is supplied, then either present the user ** with a confirmation, or if already confirmed, unsubscribe immediately.
*/ if( zName && (sid = db_int(0, "SELECT subscriberId FROM subscriber" " WHERE subscriberCode=hextoblob(%Q)", zName))!=0 ){ char *zUnsubName = mprintf("confirm%04x", sid); if( P(zUnsubName)!=0 ){ alert_unsubscribe(sid, 1); }else if( sqlite3_strglob("*oneclick*",g.zPath)==0 ){ alert_unsubscribe(sid, 0); }else if( P("manage")!=0 ){ cgi_redirectf("%R/alerts/%s", zName); }else{ style_header("Unsubscribe"); form_begin(0, "%R/unsubscribe"); @ <input type="hidden" name="scode" value="%h(zName)"> @ <table border="0" cellpadding="10" width="100%%"> |
︙ | ︙ | |||
2311 2312 2313 2314 2315 2316 2317 | }else{ @ <p>An email has been sent to "%h(zEAddr)" that explains how to @ unsubscribe and/or modify your subscription settings</p> } alert_sender_free(pSender); style_finish_page(); return; | | | 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 2342 2343 2344 2345 2346 | }else{ @ <p>An email has been sent to "%h(zEAddr)" that explains how to @ unsubscribe and/or modify your subscription settings</p> } alert_sender_free(pSender); style_finish_page(); return; } /* Non-logged-in users have to enter an email address to which is ** sent a message containing the unsubscribe link. */ style_header("Unsubscribe Request"); @ <p>Fill out the form below to request an email message that will @ explain how to unsubscribe and/or change your subscription settings.</p> |
︙ | ︙ | |||
2540 2541 2542 2543 2544 2545 2546 | } /* ** Compute a string that is appropriate for the EmailEvent.zPriors field ** for a particular forum post. ** ** This string is an encoded list of sender names and rids for all ancestors | | | 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 | } /* ** Compute a string that is appropriate for the EmailEvent.zPriors field ** for a particular forum post. ** ** This string is an encoded list of sender names and rids for all ancestors ** of the fpid post - the post that fpid answers, the post that that parent ** post answers, and so forth back up to the root post. Duplicate sender ** names are omitted. ** ** The EmailEvent.zPriors field is used to screen events for people who ** only want to see replies to their own posts or to specific posts. */ static char *alert_compute_priors(int fpid){
︙ | ︙ | |||
2720 2721 2722 2723 2724 2725 2726 | zUuid = db_column_text(&q, 1); zTitle = db_column_text(&q, 3); if( p->needMod ){ blob_appendf(&p->hdr, "Subject: %s Pending Moderation: %s\r\n", zSub, zTitle); }else{ blob_appendf(&p->hdr, "Subject: %s %s\r\n", zSub, zTitle); | | | 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 2754 2755 | zUuid = db_column_text(&q, 1); zTitle = db_column_text(&q, 3); if( p->needMod ){ blob_appendf(&p->hdr, "Subject: %s Pending Moderation: %s\r\n", zSub, zTitle); }else{ blob_appendf(&p->hdr, "Subject: %s %s\r\n", zSub, zTitle); blob_appendf(&p->hdr, "Message-Id: <%.32s@%s>\r\n", zUuid, alert_hostname(zFrom)); zIrt = db_column_text(&q, 4); if( zIrt && zIrt[0] ){ blob_appendf(&p->hdr, "In-Reply-To: <%.32s@%s>\r\n", zIrt, alert_hostname(zFrom)); } } |
︙ | ︙ | |||
3087 3088 3089 3090 3091 3092 3093 | " ssub," /* 2 */ " fullcap(user.cap)," /* 3 */ " suname" /* 4 */ " FROM subscriber LEFT JOIN user ON (login=suname)" " WHERE sverified" " AND NOT sdonotcall" " AND sdigest IS %s" | | > | | | 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 | " ssub," /* 2 */ " fullcap(user.cap)," /* 3 */ " suname" /* 4 */ " FROM subscriber LEFT JOIN user ON (login=suname)" " WHERE sverified" " AND NOT sdonotcall" " AND sdigest IS %s" " AND coalesce(subscriber.lastContact*86400,subscriber.mtime)>=%d", zDigest/*safe-for-%s*/, db_get_int("email-renew-cutoff",0) ); while( db_step(&q)==SQLITE_ROW ){ const char *zCode = db_column_text(&q, 0); const char *zSub = db_column_text(&q, 2); const char *zEmail = db_column_text(&q, 1); const char *zCap = db_column_text(&q, 3); const char *zUser = db_column_text(&q, 4); int nHit = 0; for(p=pEvents; p; p=p->pNext){ if( strchr(zSub,p->type)==0 ){ if( p->type!='f' ) continue; if( strchr(zSub,'n')!=0 && (p->zPriors==0 || p->zPriors[0]==0) ){ /* New post: accepted */ }else if( strchr(zSub,'r')!=0 && zUser!=0 && alert_in_priors(zUser, p->zPriors) ){ /* A follow-up to a post written by the user: accept */ }else{ continue; } } if( p->needMod ){ /* For events that require moderator approval, only send an alert |
︙ | ︙ | |||
3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 | if( blob_size(&p->hdr)>0 ){ /* This alert should be sent as a separate email */ Blob fhdr, fbody; blob_init(&fhdr, 0, 0); blob_appendf(&fhdr, "To: <%s>\r\n", zEmail); blob_append(&fhdr, blob_buffer(&p->hdr), blob_size(&p->hdr)); blob_init(&fbody, blob_buffer(&p->txt), blob_size(&p->txt)); blob_appendf(&fbody, "\n-- \nUnsubscribe: %s/unsubscribe/%s\n", zUrl, zCode); /* blob_appendf(&fbody, "Subscription settings: %s/alerts/%s\n", ** zUrl, zCode); */ alert_send(pSender,&fhdr,&fbody,p->zFromName); nSent++; blob_reset(&fhdr); | > > > > | 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 | if( blob_size(&p->hdr)>0 ){ /* This alert should be sent as a separate email */ Blob fhdr, fbody; blob_init(&fhdr, 0, 0); blob_appendf(&fhdr, "To: <%s>\r\n", zEmail); blob_append(&fhdr, blob_buffer(&p->hdr), blob_size(&p->hdr)); blob_init(&fbody, blob_buffer(&p->txt), blob_size(&p->txt)); blob_appendf(&fhdr, "List-Unsubscribe: <%s/oneclickunsub/%s>\r\n", zUrl, zCode); blob_appendf(&fhdr, "List-Unsubscribe-Post: List-Unsubscribe=One-Click\r\n"); blob_appendf(&fbody, "\n-- \nUnsubscribe: %s/unsubscribe/%s\n", zUrl, zCode); /* blob_appendf(&fbody, "Subscription settings: %s/alerts/%s\n", ** zUrl, zCode); */ alert_send(pSender,&fhdr,&fbody,p->zFromName); nSent++; blob_reset(&fhdr); |
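The two header lines added in the hunk above implement RFC 8058 one-click unsubscribe: a List-Unsubscribe URL plus the List-Unsubscribe-Post marker. A sketch of how those header lines are formatted (unsub_headers and the URL/code values are hypothetical, not Fossil's code, which appends to a Blob instead):

```c
#include <stdio.h>

/* Illustrative: format the RFC 8058 one-click unsubscribe headers
** for a given base URL and subscriber code. */
static int unsub_headers(char *zBuf, size_t n,
                         const char *zUrl, const char *zCode){
  return snprintf(zBuf, n,
      "List-Unsubscribe: <%s/oneclickunsub/%s>\r\n"
      "List-Unsubscribe-Post: List-Unsubscribe=One-Click\r\n",
      zUrl, zCode);
}
```

Mail providers that honor RFC 8058 POST to the oneclickunsub URL, which is why that route (unlike /unsubscribe) must delist without a confirmation page.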
︙ | ︙ | |||
3172 3173 3174 3175 3176 3177 3178 | } nHit++; blob_append(&body, "\n", 1); blob_append(&body, blob_buffer(&p->txt), blob_size(&p->txt)); } } if( nHit==0 ) continue; | | | 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 | } nHit++; blob_append(&body, "\n", 1); blob_append(&body, blob_buffer(&p->txt), blob_size(&p->txt)); } } if( nHit==0 ) continue; blob_appendf(&hdr, "List-Unsubscribe: <%s/oneclickunsub/%s>\r\n", zUrl, zCode); blob_appendf(&hdr, "List-Unsubscribe-Post: List-Unsubscribe=One-Click\r\n"); blob_appendf(&body,"\n-- \nSubscription info: %s/alerts/%s\n", zUrl, zCode); alert_send(pSender,&hdr,&body,0); nSent++; blob_truncate(&hdr, 0); |
︙ | ︙ | |||
3195 3196 3197 3198 3199 3200 3201 | ** alerts that have been completely sent. */ db_multi_exec("DELETE FROM pending_alert WHERE sentDigest AND sentSep;"); /* Send renewal messages to subscribers whose subscriptions are about ** to expire. Only do this if: ** | | | 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 | ** alerts that have been completely sent. */ db_multi_exec("DELETE FROM pending_alert WHERE sentDigest AND sentSep;"); /* Send renewal messages to subscribers whose subscriptions are about ** to expire. Only do this if: ** ** (1) email-renew-interval is 14 or greater (or in other words if ** subscription expiration is enabled). ** ** (2) The SENDALERT_RENEWAL flag is set */ send_alert_expiration_warnings: if( (flags & SENDALERT_RENEWAL)!=0 && (iInterval = db_get_int("email-renew-interval",0))>=14 |
︙ | ︙ | |||
3224 3225 3226 3227 3228 3229 3230 | " AND length(sdigest)>0", iNewWarn, iOldWarn ); while( db_step(&q)==SQLITE_ROW ){ Blob hdr, body; blob_init(&hdr, 0, 0); blob_init(&body, 0, 0); | | | 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 3261 3262 3263 3264 | " AND length(sdigest)>0", iNewWarn, iOldWarn ); while( db_step(&q)==SQLITE_ROW ){ Blob hdr, body; blob_init(&hdr, 0, 0); blob_init(&body, 0, 0); alert_renewal_msg(&hdr, &body, db_column_text(&q,0), db_column_int(&q,1), db_column_text(&q,2), db_column_text(&q,3), zRepoName, zUrl); alert_send(pSender,&hdr,&body,0); blob_reset(&hdr); |
︙ | ︙ | |||
3294 3295 3296 3297 3298 3299 3300 | style_set_current_feature("alerts"); if( zAdminEmail==0 || zAdminEmail[0]==0 ){ style_header("Outbound Email Disabled"); @ <p>Outbound email is disabled on this repository style_finish_page(); return; } | | | 3320 3321 3322 3323 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 | style_set_current_feature("alerts"); if( zAdminEmail==0 || zAdminEmail[0]==0 ){ style_header("Outbound Email Disabled"); @ <p>Outbound email is disabled on this repository style_finish_page(); return; } if( P("submit")!=0 && P("subject")!=0 && P("msg")!=0 && P("from")!=0 && cgi_csrf_safe(2) && captcha_is_correct(0) ){ Blob hdr, body; |
︙ | ︙ |
Changes to src/allrepo.c.
︙ | ︙ | |||
29 30 31 32 33 34 35 | */ static void collect_argument(Blob *pExtra,const char *zArg,const char *zShort){ const char *z = find_option(zArg, zShort, 0); if( z!=0 ){ blob_appendf(pExtra, " %s", z); } } | | > > | | 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | */ static void collect_argument(Blob *pExtra,const char *zArg,const char *zShort){ const char *z = find_option(zArg, zShort, 0); if( z!=0 ){ blob_appendf(pExtra, " %s", z); } } static void collect_argument_value( Blob *pExtra, const char *zArg, const char *zShort ){ const char *zValue = find_option(zArg, zShort, 1); if( zValue ){ if( zValue[0] ){ blob_appendf(pExtra, " --%s %$", zArg, zValue); }else{ blob_appendf(pExtra, " --%s \"\"", zArg); } } |
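The collect_argument_value() hunk above re-emits each --name value pair for the per-repository sub-invocation, preserving an explicitly empty value as an empty quoted string so the option still reaches the child command. A simplified sketch of that rule (plain %s instead of Fossil's %$ quoting; append_option is an illustrative helper):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative: append " --name value" to a command string, keeping
** an empty value as "" so the option is not silently dropped. */
static void append_option(char *zOut, size_t n,
                          const char *zName, const char *zValue){
  size_t used = strlen(zOut);
  if( zValue[0] ){
    snprintf(zOut+used, n-used, " --%s %s", zName, zValue);
  }else{
    snprintf(zOut+used, n-used, " --%s \"\"", zName);
  }
}
```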
︙ | ︙ | |||
104 105 106 107 108 109 110 | ** --verbose and --share-links options are supported. ** ** push Run a "push" on all repositories. Only the --verbose ** option is supported. ** ** rebuild Rebuild on all repositories. The command line options ** supported by the rebuild command itself, if any are | | | | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | ** --verbose and --share-links options are supported. ** ** push Run a "push" on all repositories. Only the --verbose ** option is supported. ** ** rebuild Rebuild on all repositories. The command line options ** supported by the rebuild command itself, if any are ** present, are passed along verbatim. The --force option ** is not supported. ** ** remote Show remote hosts for all repositories. ** ** repack Look for extra compression in all repositories. ** ** sync Run a "sync" on all repositories. Only the --verbose ** and --unversioned and --share-links options are supported. |
︙ | ︙ | |||
130 131 132 133 134 135 136 | ** ** ui Run the "ui" command on all repositories. Like "server" ** but bind to the loopback TCP address only, enable ** the --localauth option and automatically launch a ** web-browser ** ** whatis Run the "whatis" command on all repositories. Only | | | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | ** ** ui Run the "ui" command on all repositories. Like "server" ** but bind to the loopback TCP address only, enable ** the --localauth option and automatically launch a ** web-browser ** ** whatis Run the "whatis" command on all repositories. Only ** show output for repositories that have a match. ** ** ** In addition, the following maintenance operations are supported: ** ** add Add all the repositories named to the set of repositories ** tracked by Fossil. Normally Fossil is able to keep up with ** this list by itself, but sometimes it can benefit from this |
︙ | ︙ | |||
208 209 210 211 212 213 214 | if( file_isdir(zDest, ExtFILE)!=1 ){ fossil_fatal("argument to \"fossil all backup\" must be a directory"); } blob_appendf(&extra, " %$", zDest); }else if( fossil_strcmp(zCmd, "clean")==0 ){ zCmd = "clean --chdir"; collect_argument(&extra, "allckouts",0); | | | | | | 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 | if( file_isdir(zDest, ExtFILE)!=1 ){ fossil_fatal("argument to \"fossil all backup\" must be a directory"); } blob_appendf(&extra, " %$", zDest); }else if( fossil_strcmp(zCmd, "clean")==0 ){ zCmd = "clean --chdir"; collect_argument(&extra, "allckouts",0); collect_argument_value(&extra, "case-sensitive", 0); collect_argument_value(&extra, "clean", 0); collect_argument(&extra, "dirsonly",0); collect_argument(&extra, "disable-undo",0); collect_argument(&extra, "dotfiles",0); collect_argument(&extra, "emptydirs",0); collect_argument(&extra, "force","f"); collect_argument_value(&extra, "ignore", 0); collect_argument_value(&extra, "keep", 0); collect_argument(&extra, "no-prompt",0); collect_argument(&extra, "temp",0); collect_argument(&extra, "verbose","v"); collect_argument(&extra, "whatif",0); useCheckouts = 1; }else if( fossil_strcmp(zCmd, "config")==0 ){ zCmd = "config -R"; |
︙ | ︙ | |||
245 246 247 248 249 250 251 | }else if( fossil_strcmp(zCmd, "extras")==0 ){ if( showFile ){ zCmd = "extras --chdir"; }else{ zCmd = "extras --header --chdir"; } collect_argument(&extra, "abs-paths",0); | | | | 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 | }else if( fossil_strcmp(zCmd, "extras")==0 ){ if( showFile ){ zCmd = "extras --chdir"; }else{ zCmd = "extras --header --chdir"; } collect_argument(&extra, "abs-paths",0); collect_argument_value(&extra, "case-sensitive", 0); collect_argument(&extra, "dotfiles",0); collect_argument_value(&extra, "ignore", 0); collect_argument(&extra, "rel-paths",0); useCheckouts = 1; stopOnError = 0; quiet = 1; }else if( fossil_strcmp(zCmd, "git")==0 ){ if( g.argc<4 ){ usage("git (export|status)"); |
︙ | ︙ | |||
274 275 276 277 278 279 280 281 282 283 284 | collect_argument(&extra, "verbose","v"); }else if( fossil_strcmp(zCmd, "pull")==0 ){ zCmd = "pull -autourl -R"; collect_argument(&extra, "verbose","v"); collect_argument(&extra, "share-links",0); }else if( fossil_strcmp(zCmd, "rebuild")==0 ){ zCmd = "rebuild"; collect_argument(&extra, "cluster",0); collect_argument(&extra, "compress",0); collect_argument(&extra, "compress-only",0); collect_argument(&extra, "noverify",0); | > | | | 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 | collect_argument(&extra, "verbose","v"); }else if( fossil_strcmp(zCmd, "pull")==0 ){ zCmd = "pull -autourl -R"; collect_argument(&extra, "verbose","v"); collect_argument(&extra, "share-links",0); }else if( fossil_strcmp(zCmd, "rebuild")==0 ){ zCmd = "rebuild"; collect_argument(&extra, "analyze",0); collect_argument(&extra, "cluster",0); collect_argument(&extra, "compress",0); collect_argument(&extra, "compress-only",0); collect_argument(&extra, "noverify",0); collect_argument_value(&extra, "pagesize", 0); collect_argument(&extra, "vacuum",0); collect_argument(&extra, "deanalyze",0); /* Deprecated */ collect_argument(&extra, "analyze",0); collect_argument(&extra, "wal",0); collect_argument(&extra, "stats",0); collect_argument(&extra, "index",0); collect_argument(&extra, "noindex",0); collect_argument(&extra, "ifneeded", 0); }else if( fossil_strcmp(zCmd, "remote")==0 ){ |
︙ | ︙ | |||
412 413 414 415 416 417 418 419 420 421 422 423 424 425 | }else if( fossil_strcmp(zCmd, "cache")==0 ){ zCmd = "cache -R"; showLabel = 1; collect_argv(&extra, 3); }else if( fossil_strcmp(zCmd, "whatis")==0 ){ zCmd = "whatis -q -R"; quiet = 1; collect_argv(&extra, 3); }else{ fossil_fatal("\"all\" subcommand should be one of: " "add cache changes clean dbstat extras fts-config git ignore " "info list ls pull push rebuild remote " "server setting sync ui unset whatis"); } | > > | 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 | }else if( fossil_strcmp(zCmd, "cache")==0 ){ zCmd = "cache -R"; showLabel = 1; collect_argv(&extra, 3); }else if( fossil_strcmp(zCmd, "whatis")==0 ){ zCmd = "whatis -q -R"; quiet = 1; collect_argument(&extra, "file", "f"); collect_argument_value(&extra, "type", 0); collect_argv(&extra, 3); }else{ fossil_fatal("\"all\" subcommand should be one of: " "add cache changes clean dbstat extras fts-config git ignore " "info list ls pull push rebuild remote " "server setting sync ui unset whatis"); } |
︙ | ︙ |
Changes to src/attach.c.
︙ | ︙ | |||
748 749 750 751 752 753 754 | if( (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 ){ zBody = pWiki->zWiki; } if( zBody==0 ){ fossil_fatal("technote [%s] not found",zETime); } zTarget = db_text(0, | | > | 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 | if( (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 ){ zBody = pWiki->zWiki; } if( zBody==0 ){ fossil_fatal("technote [%s] not found",zETime); } zTarget = db_text(0, "SELECT substr(tagname,7) FROM tag " " WHERE tagid=(SELECT tagid FROM event WHERE objid='%d')", rid ); zFile = g.argv[3]; } blob_read_from_file(&content, zFile, ExtFILE); user_select(); attach_commit( |
︙ | ︙ |
Changes to src/backlink.c.
︙ | ︙ | |||
247 248 249 250 251 252 253 | void *opaque ){ Backlink *p = (Backlink*)opaque; char *zTarget = blob_buffer(target); int nTarget = blob_size(target); backlink_create(p, zTarget, nTarget); | | | 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | void *opaque ){ Backlink *p = (Backlink*)opaque; char *zTarget = blob_buffer(target); int nTarget = blob_size(target); backlink_create(p, zTarget, nTarget); return 1; } /* No-op routines for the rendering callbacks that we do not need */ static void mkdn_noop_prolog(Blob *b, void *v){ return; } static void (*mkdn_noop_epilog)(Blob*, void*) = mkdn_noop_prolog; static void mkdn_noop_footnotes(Blob *b1, const Blob *b2, void *v){ return; } static void mkdn_noop_blockcode(Blob *b1, Blob *b2, void *v){ return; } |
︙ | ︙ |
Changes to src/backoffice.c.
︙ | ︙ | |||
313 314 315 316 317 318 319 | ** we cannot prove that the process is dead, return true. */ static int backofficeProcessExists(sqlite3_uint64 pid){ #if defined(_WIN32) return pid>0 && backofficeWin32ProcessExists((DWORD)pid)!=0; #else return pid>0 && kill((pid_t)pid, 0)==0; | | | | 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 | ** we cannot prove that the process is dead, return true. */ static int backofficeProcessExists(sqlite3_uint64 pid){ #if defined(_WIN32) return pid>0 && backofficeWin32ProcessExists((DWORD)pid)!=0; #else return pid>0 && kill((pid_t)pid, 0)==0; #endif } /* ** Check to see if the process identified by pid has finished. If ** we cannot prove that the process is still running, return true. */ static int backofficeProcessDone(sqlite3_uint64 pid){ #if defined(_WIN32) return pid<=0 || backofficeWin32ProcessExists((DWORD)pid)==0; #else return pid<=0 || kill((pid_t)pid, 0)!=0; #endif } /* ** Return a process id number for the current process */ static sqlite3_uint64 backofficeProcessId(void){ return (sqlite3_uint64)GETPID(); |
︙ | ︙ | |||
673 674 675 676 677 678 679 | ** This might be done by a cron job or similar to make sure backoffice ** processing happens periodically. Or, the --poll option can be used ** to run this command as a daemon that will periodically invoke backoffice ** on a collection of repositories. ** ** If only a single repository is named and --poll is omitted, then the ** backoffice work is done in-process. But if there are multiple repositories | | | 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 | ** This might be done by a cron job or similar to make sure backoffice ** processing happens periodically. Or, the --poll option can be used ** to run this command as a daemon that will periodically invoke backoffice ** on a collection of repositories. ** ** If only a single repository is named and --poll is omitted, then the ** backoffice work is done in-process. But if there are multiple repositories ** or if --poll is used, a separate sub-process is started for each poll of ** each repository. ** ** Standard options: ** ** --debug Show what this command is doing ** ** --logfile FILE Append a log of backoffice actions onto FILE |
︙ | ︙ |
Changes to src/bag.c.
︙ | ︙ | |||
72 73 74 75 76 77 78 | free(p->a); bag_init(p); } /* ** The hash function */ | | | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 | free(p->a); bag_init(p); } /* ** The hash function */ #define bag_hash(i) (((u64)(i))*101) /* ** Change the size of the hash table on a bag so that ** it contains N slots ** ** Completely reconstruct the hash table from scratch. Deleted ** entries (indicated by a -1) are removed. When finished, it |
︙ | ︙ |
Changes to src/blob.c.
︙ | ︙ | |||
1549 1550 1551 1552 1553 1554 1555 | z[--j] = z[i]; } } } /* ** ASCII (for reference): | | | | | | | | | | | | 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 | z[--j] = z[i]; } } } /* ** ASCII (for reference): ** x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf ** 0x ^` ^a ^b ^c ^d ^e ^f ^g \b \t \n () \f \r ^n ^o ** 1x ^p ^q ^r ^s ^t ^u ^v ^w ^x ^y ^z ^{ ^| ^} ^~ ^ ** 2x () ! " # $ % & ' ( ) * + , - . / ** 3x 0 1 2 3 4 5 6 7 8 9 : ; < = > ? ** 4x @ A B C D E F G H I J K L M N O ** 5x P Q R S T U V W X Y Z [ \ ] ^ _ ** 6x ` a b c d e f g h i j k l m n o ** 7x p q r s t u v w x y z { | } ~ ^_ */ /* ** Meanings for bytes in a filename: ** ** 0 Ordinary character. No encoding required ** 1 Needs to be escaped ** 2 Illegal character. Do not allow in a filename ** 3 First byte of a 2-byte UTF-8 ** 4 First byte of a 3-byte UTF-8 ** 5 First byte of a 4-byte UTF-8 */ static const char aSafeChar[256] = { #ifdef _WIN32 /* Windows ** Prohibit: all control characters, including tab, \r and \n. ** Escape: (space) " # $ % & ' ( ) * ; < > ? [ ] ^ ` { | } */ /* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, /* 0x */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, /* 1x */ 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 2x */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, /* 3x */ |
︙ | ︙ | |||
1663 1664 1665 1666 1667 1668 1669 | blob_token(pBlob, &bad); fossil_fatal("the [%s] argument to the \"%s\" command contains " "an illegal UTF-8 character", zIn, blob_str(&bad)); } i += x-2; } | | | 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 | blob_token(pBlob, &bad); fossil_fatal("the [%s] argument to the \"%s\" command contains " "an illegal UTF-8 character", zIn, blob_str(&bad)); } i += x-2; } } } /* Separate from the previous argument by a space */ if( n>0 && !fossil_isspace(z[n-1]) ){ blob_append_char(pBlob, ' '); } |
︙ | ︙ | |||
1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 | blob_append_char(pBlob, '\\'); }else if( zIn[0]=='/' ){ blob_append_char(pBlob, '.'); } for(i=0; (c = (unsigned char)zIn[i])!=0; i++){ blob_append_char(pBlob, (char)c); if( c=='"' ) blob_append_char(pBlob, '"'); } blob_append_char(pBlob, '"'); #else /* Quoting strategy for unix: ** If the name does not contain ', then surround the whole thing ** with '...'. If there is one or more ' characters within the ** name, then put \ before each special character. | > > | 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 | blob_append_char(pBlob, '\\'); }else if( zIn[0]=='/' ){ blob_append_char(pBlob, '.'); } for(i=0; (c = (unsigned char)zIn[i])!=0; i++){ blob_append_char(pBlob, (char)c); if( c=='"' ) blob_append_char(pBlob, '"'); if( c=='\\' ) blob_append_char(pBlob, '\\'); if( c=='%' && isFilename ) blob_append(pBlob, "%cd:~,%", 7); } blob_append_char(pBlob, '"'); #else /* Quoting strategy for unix: ** If the name does not contain ', then surround the whole thing ** with '...'. If there is one or more ' characters within the ** name, then put \ before each special character. |
︙ | ︙ | |||
1793 1794 1795 1796 1797 1798 1799 | } #ifdef _WIN32 if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='\\' ) zArg += 2; #else if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='/' ) zArg += 2; #endif if( strcmp(zBuf, zArg)!=0 ){ | | | 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 | } #ifdef _WIN32 if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='\\' ) zArg += 2; #else if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='/' ) zArg += 2; #endif if( strcmp(zBuf, zArg)!=0 ){ fossil_fatal("argument disagree: \"%s\" (%s) versus \"%s\"", zBuf, g.argv[i-1], zArg); } continue; }else if( fossil_strcmp(zArg, "--fuzz")==0 && i+1<g.argc ){ int n = atoi(g.argv[++i]); int j; for(j=0; j<n; j++){ |
︙ | ︙ |
Changes to src/branch.c.
︙ | ︙ | |||
304 305 306 307 308 309 310 | const char *zUser ){ Blob sql; blob_init(&sql, 0, 0); brlist_create_temp_table(); /* Ignore nLimitMRU if no chronological sort requested. */ if( (brFlags & BRL_ORDERBY_MTIME)==0 ) nLimitMRU = 0; | | < | | 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 | const char *zUser ){ Blob sql; blob_init(&sql, 0, 0); brlist_create_temp_table(); /* Ignore nLimitMRU if no chronological sort requested. */ if( (brFlags & BRL_ORDERBY_MTIME)==0 ) nLimitMRU = 0; /* Negative values for nLimitMRU also mean "no limit". */ if( nLimitMRU<0 ) nLimitMRU = 0; /* OUTER QUERY */ blob_append_sql(&sql,"SELECT name, isprivate, mergeto,"); if( brFlags & BRL_LIST_USERS ){ blob_append_sql(&sql, " (SELECT group_concat(user) FROM (" " SELECT DISTINCT * FROM (" " SELECT coalesce(euser,user) AS user" |
︙ | ︙ | |||
339 340 341 342 343 344 345 | blob_append_sql(&sql, "SELECT name, isprivate, mtime, mergeto FROM tmp_brlist WHERE 1" ); break; } case BRL_OPEN_ONLY: { blob_append_sql(&sql, | | > | 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | blob_append_sql(&sql, "SELECT name, isprivate, mtime, mergeto FROM tmp_brlist WHERE 1" ); break; } case BRL_OPEN_ONLY: { blob_append_sql(&sql, "SELECT name, isprivate, mtime, mergeto FROM tmp_brlist " " WHERE NOT isclosed" ); break; } } if( brFlags & BRL_PRIVATE ) blob_append_sql(&sql, " AND isprivate"); if( brFlags & BRL_MERGED ) blob_append_sql(&sql, " AND mergeto IS NOT NULL"); if( zBrNameGlob ) blob_append_sql(&sql, " AND (name GLOB %Q)", zBrNameGlob); |
︙ | ︙ | |||
771 772 773 774 775 776 777 | int isPriv = db_column_int(&q, 1)==1; const char *zMergeTo = db_column_text(&q, 2); int isCur = zCurrent!=0 && fossil_strcmp(zCurrent,zBr)==0; const char *zUsers = db_column_text(&q, 3); if( (brFlags & BRL_MERGED) && fossil_strcmp(zCurrent,zMergeTo)!=0 ){ continue; } | | | 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 | int isPriv = db_column_int(&q, 1)==1; const char *zMergeTo = db_column_text(&q, 2); int isCur = zCurrent!=0 && fossil_strcmp(zCurrent,zBr)==0; const char *zUsers = db_column_text(&q, 3); if( (brFlags & BRL_MERGED) && fossil_strcmp(zCurrent,zMergeTo)!=0 ){ continue; } if( (brFlags & BRL_UNMERGED) && (fossil_strcmp(zCurrent,zMergeTo)==0 || isCur) ){ continue; } blob_appendf(&txt, "%s%s%s", ( (brFlags & BRL_PRIVATE) ? " " : ( isPriv ? "#" : " ") ), (isCur ? "* " : " "), zBr); if( nUsers ){ |
︙ | ︙ | |||
804 805 806 807 808 809 810 | blob_reset(&txt); } db_finalize(&q); }else if( strncmp(zCmd,"new",n)==0 ){ branch_new(); }else if( strncmp(zCmd,"close",5)==0 ){ if(g.argc<4){ | | | | | | 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 | blob_reset(&txt); } db_finalize(&q); }else if( strncmp(zCmd,"new",n)==0 ){ branch_new(); }else if( strncmp(zCmd,"close",5)==0 ){ if(g.argc<4){ usage("close branch-name(s)..."); } branch_cmd_close(3, 1); }else if( strncmp(zCmd,"reopen",6)==0 ){ if(g.argc<4){ usage("reopen branch-name(s)..."); } branch_cmd_close(3, 0); }else if( strncmp(zCmd,"hide",4)==0 ){ if(g.argc<4){ usage("hide branch-name(s)..."); } branch_cmd_hide(3,1); }else if( strncmp(zCmd,"unhide",6)==0 ){ if(g.argc<4){ usage("unhide branch-name(s)..."); } branch_cmd_hide(3,0); }else{ fossil_fatal("branch subcommand should be one of: " "close current hide info list ls lsh new reopen unhide"); } } |
︙ | ︙ | |||
885 886 887 888 889 890 891 | } } if( zBgClr && zBgClr[0] && show_colors ){ @ <tr style="background-color:%s(zBgClr)"> }else{ @ <tr> } | | | 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 | } } if( zBgClr && zBgClr[0] && show_colors ){ @ <tr style="background-color:%s(zBgClr)"> }else{ @ <tr> } @ <td>%z(href("%R/timeline?r=%T",zBranch))%h(zBranch)</a></td> @ <td data-sortkey="%016llx(iMtime)">%s(zAge)</td> @ <td>%d(nCkin)</td> fossil_free(zAge); @ <td>%s(isClosed?"closed":"")</td> if( zMergeTo ){ @ <td>merged into
︙ | ︙ |
Changes to src/browse.c.
︙ | ︙ | |||
356 357 358 359 360 361 362 | /* Generate a multi-column table listing the contents of zD[] ** directory. */ mxLen = db_int(12, "SELECT max(length(x)) FROM localfiles /*scan*/"); if( mxLen<12 ) mxLen = 12; mxLen += (mxLen+9)/10; | | | 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 | /* Generate a multi-column table listing the contents of zD[] ** directory. */ mxLen = db_int(12, "SELECT max(length(x)) FROM localfiles /*scan*/"); if( mxLen<12 ) mxLen = 12; mxLen += (mxLen+9)/10; db_prepare(&q, "SELECT x, u FROM localfiles ORDER BY x COLLATE uintnocase /*scan*/"); @ <div class="columns files" style="columns: %d(mxLen)ex auto"> @ <ul class="browser"> while( db_step(&q)==SQLITE_ROW ){ const char *zFN; zFN = db_column_text(&q, 0); if( zFN[0]=='/' ){ |
︙ | ︙ | |||
469 470 471 472 473 474 475 | FileTreeNode *pSibling; /* Next element in the same subdirectory */ FileTreeNode *pChild; /* List of child nodes */ FileTreeNode *pLastChild; /* Last child on the pChild list */ char *zName; /* Name of this entry. The "tail" */ char *zFullName; /* Full pathname of this entry */ char *zUuid; /* Artifact hash of this file. May be NULL. */ double mtime; /* Modification time for this entry */ | | > | 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 | FileTreeNode *pSibling; /* Next element in the same subdirectory */ FileTreeNode *pChild; /* List of child nodes */ FileTreeNode *pLastChild; /* Last child on the pChild list */ char *zName; /* Name of this entry. The "tail" */ char *zFullName; /* Full pathname of this entry */ char *zUuid; /* Artifact hash of this file. May be NULL. */ double mtime; /* Modification time for this entry */ double sortBy; /* Either mtime or size, depending on desired sort order */ int iSize; /* Size for this entry */ unsigned nFullName; /* Length of zFullName */ unsigned iLevel; /* Levels of parent directories */ }; /* ** A complete file hierarchy |
︙ | ︙ | |||
506 507 508 509 510 511 512 | const char *zUuid, /* Hash of the file. Might be NULL. */ double mtime, /* Modification time for this entry */ int size, /* Size for this entry */ int sortOrder /* 0: filename, 1: mtime, 2: size */ ){ int i; FileTreeNode *pParent; /* Parent (directory) of the next node to insert */ | | | 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 | const char *zUuid, /* Hash of the file. Might be NULL. */ double mtime, /* Modification time for this entry */ int size, /* Size for this entry */ int sortOrder /* 0: filename, 1: mtime, 2: size */ ){ int i; FileTreeNode *pParent; /* Parent (directory) of the next node to insert */ /* Make pParent point to the most recent ancestor of zPath, or ** NULL if there are no prior entries that are a container for zPath. */ pParent = pTree->pLast; while( pParent!=0 && ( strncmp(pParent->zFullName, zPath, pParent->nFullName)!=0 || zPath[pParent->nFullName]!='/' ) 
︙ | ︙ |
Changes to src/builtin.c.
︙ | ︙ | |||
519 520 521 522 523 524 525 | builtinVtab_cursor *pCur = (builtinVtab_cursor*)cur; return pCur->iRowid>count(aBuiltinFiles); } /* ** This method is called to "rewind" the builtinVtab_cursor object back ** to the first row of output. This method is always called at least | | | | 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 | builtinVtab_cursor *pCur = (builtinVtab_cursor*)cur; return pCur->iRowid>count(aBuiltinFiles); } /* ** This method is called to "rewind" the builtinVtab_cursor object back ** to the first row of output. This method is always called at least ** once prior to any call to builtinVtabColumn() or builtinVtabRowid() or ** builtinVtabEof(). */ static int builtinVtabFilter( sqlite3_vtab_cursor *pVtabCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ builtinVtab_cursor *pCur = (builtinVtab_cursor *)pVtabCursor; pCur->iRowid = 1; return SQLITE_OK; } |
︙ | ︙ | |||
548 549 550 551 552 553 554 | ){ pIdxInfo->estimatedCost = (double)count(aBuiltinFiles); pIdxInfo->estimatedRows = count(aBuiltinFiles); return SQLITE_OK; } /* | | | 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 | ){ pIdxInfo->estimatedCost = (double)count(aBuiltinFiles); pIdxInfo->estimatedRows = count(aBuiltinFiles); return SQLITE_OK; } /* ** This following structure defines all the methods for the ** virtual table. */ static sqlite3_module builtinVtabModule = { /* iVersion */ 0, /* xCreate */ 0, /* The builtin vtab is eponymous and read-only */ /* xConnect */ builtinVtabConnect, /* xBestIndex */ builtinVtabBestIndex, |
︙ | ︙ | |||
575 576 577 578 579 580 581 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, | | > | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0, /* xIntegrity */ 0 }; /* ** Register the builtin virtual table */ int builtin_vtab_register(sqlite3 *db){ |
︙ | ︙ | |||
813 814 815 816 817 818 819 | ** per-page basis. In this case, all arguments are ignored! ** ** This function has an internal mapping of the dependencies for each ** of the known fossil.XYZ.js modules and ensures that the ** dependencies also get queued (recursively) and that each module is ** queued only once. ** | | | 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 | ** per-page basis. In this case, all arguments are ignored! ** ** This function has an internal mapping of the dependencies for each ** of the known fossil.XYZ.js modules and ensures that the ** dependencies also get queued (recursively) and that each module is ** queued only once. ** ** If passed a name which is not a base fossil module name then it ** will fail fatally! ** ** DO NOT use this for loading fossil.page.*.js: use ** builtin_request_js() for those. ** ** If the current JS delivery mode is *not* JS_BUNDLED then this ** function queues up a request for each given module and its known |
︙ | ︙ |
Changes to src/cache.c.
︙ | ︙ | |||
311 312 313 314 315 316 317 | sqlite3_exec(db, "DELETE FROM cache; DELETE FROM blob; VACUUM;",0,0,0); sqlite3_close(db); fossil_print("cache cleared\n"); }else{ fossil_print("nothing to clear; cache does not exist\n"); } }else if( strncmp(zCmd, "list", nCmd)==0 | | | 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | sqlite3_exec(db, "DELETE FROM cache; DELETE FROM blob; VACUUM;",0,0,0); sqlite3_close(db); fossil_print("cache cleared\n"); }else{ fossil_print("nothing to clear; cache does not exist\n"); } }else if( strncmp(zCmd, "list", nCmd)==0 || strncmp(zCmd, "ls", nCmd)==0 || strncmp(zCmd, "status", nCmd)==0 ){ db = cacheOpen(0); if( db==0 ){ fossil_print("cache does not exist\n"); }else{ int nEntry = 0; |
︙ | ︙ | |||
430 431 432 433 434 435 436 | @ hit-count: %d(sqlite3_column_int(pStmt,2)) @ last-access: %s(sqlite3_column_text(pStmt,3)) \ if( zHash ){ @ %z(href("%R/timeline?c=%S",zHash))check-in</a>\ fossil_free(zHash); } @ </p></li> | | | 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 | @ hit-count: %d(sqlite3_column_int(pStmt,2)) @ last-access: %s(sqlite3_column_text(pStmt,3)) \ if( zHash ){ @ %z(href("%R/timeline?c=%S",zHash))check-in</a>\ fossil_free(zHash); } @ </p></li> } sqlite3_finalize(pStmt); @ </ol> } zDbName = cacheName(); bigSizeName(sizeof(zBuf), zBuf, file_size(zDbName, ExtFILE)); @ <p> |
︙ | ︙ |
Changes to src/capabilities.c.
︙ | ︙ | |||
399 400 401 402 403 404 405 | @ <th>Unversioned Content</th></tr> while( db_step(&q)==SQLITE_ROW ){ const char *zId = db_column_text(&q, 0); const char *zCap = db_column_text(&q, 1); int n = db_column_int(&q, 3); int eType; static const char *const azType[] = { "off", "read", "write" }; | | | 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 | @ <th>Unversioned Content</th></tr> while( db_step(&q)==SQLITE_ROW ){ const char *zId = db_column_text(&q, 0); const char *zCap = db_column_text(&q, 1); int n = db_column_int(&q, 3); int eType; static const char *const azType[] = { "off", "read", "write" }; static const char *const azClass[] = { "capsumOff", "capsumRead", "capsumWrite" }; if( n==0 ) continue; /* Code */ if( db_column_int(&q,2)<10 ){ @ <tr><th align="right"><tt>"%h(zId)"</tt></th> |
︙ | ︙ |
Changes to src/cgi.c.
︙ | ︙ | |||
35 36 37 38 39 40 41 | ** So, even though the name of this file implies that it only deals with ** CGI, in fact, the code in this file is used to interpret webpage requests ** received by a variety of means, and to generate well-formatted replies ** to those requests. ** ** The code in this file abstracts the web-request so that downstream ** modules that generate the body of the reply (based on the requested page) | | | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | ** So, even though the name of this file implies that it only deals with ** CGI, in fact, the code in this file is used to interpret webpage requests ** received by a variety of means, and to generate well-formatted replies ** to those requests. ** ** The code in this file abstracts the web-request so that downstream ** modules that generate the body of the reply (based on the requested page) ** do not need to know if the request is coming from CGI, direct HTTP, ** SCGI, or some other means. ** ** This module gathers information about web page request into a key/value ** store. Keys and values come from: ** ** * Query parameters ** * POST parameter |
︙ | ︙ | |||
479 480 481 482 483 484 485 | if( iReplyStatus<=0 ){ iReplyStatus = 200; zReplyStatus = "OK"; } if( g.fullHttpReply ){ if( rangeEnd>0 | | | 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 | if( iReplyStatus<=0 ){ iReplyStatus = 200; zReplyStatus = "OK"; } if( g.fullHttpReply ){ if( rangeEnd>0 && iReplyStatus==200 && fossil_strcmp(P("REQUEST_METHOD"),"GET")==0 ){ iReplyStatus = 206; zReplyStatus = "Partial Content"; } blob_appendf(&hdr, "HTTP/1.0 %d %s\r\n", iReplyStatus, zReplyStatus); blob_appendf(&hdr, "Date: %s\r\n", cgi_rfc822_datestamp(time(0))); |
︙ | ︙ | |||
560 561 562 563 564 565 566 | blob_appendf(&hdr, "Content-Encoding: gzip\r\n"); blob_appendf(&hdr, "Vary: Accept-Encoding\r\n"); } total_size = blob_size(&cgiContent[0]) + blob_size(&cgiContent[1]); if( iReplyStatus==206 ){ blob_appendf(&hdr, "Content-Range: bytes %d-%d/%d\r\n", rangeStart, rangeEnd-1, total_size); | | | 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 | blob_appendf(&hdr, "Content-Encoding: gzip\r\n"); blob_appendf(&hdr, "Vary: Accept-Encoding\r\n"); } total_size = blob_size(&cgiContent[0]) + blob_size(&cgiContent[1]); if( iReplyStatus==206 ){ blob_appendf(&hdr, "Content-Range: bytes %d-%d/%d\r\n", rangeStart, rangeEnd-1, total_size); total_size = rangeEnd - rangeStart; } blob_appendf(&hdr, "Content-Length: %d\r\n", total_size); }else{ total_size = 0; } blob_appendf(&hdr, "\r\n"); cgi_fwrite(blob_buffer(&hdr), blob_size(&hdr)); |
︙ | ︙ | |||
1254 1255 1256 1257 1258 1259 1260 | char * z = (char*)P("QUERY_STRING"); if( z ){ ++rc; z = fossil_strdup(z); add_param_list(z, '&'); z = (char*)P("skin"); if( z ){ | | | 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 | char * z = (char*)P("QUERY_STRING"); if( z ){ ++rc; z = fossil_strdup(z); add_param_list(z, '&'); z = (char*)P("skin"); if( z ){ char *zErr = skin_use_alternative(z, 2, SKIN_FROM_QPARAM); ++rc; if( !zErr && P("once")==0 ){ cookie_write_parameter("skin","skin",z); /* Per /chat discussion, passing ?skin=... without "once" ** implies the "udc" argument, so we force that into the ** environment here. */ cgi_set_parameter_nocopy("udc", "1", 1); |
︙ | ︙ | |||
1309 1310 1311 1312 1313 1314 1315 | ** / \ ** https://fossil-scm.org/forum/info/12736b30c072551a?t=c ** \___/ \____________/\____/\____________________/ \_/ ** | | | | | ** | HTTP_HOST | PATH_INFO QUERY_STRING ** | | ** REQUEST_SCHEMA SCRIPT_NAME | | | 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 | ** / \ ** https://fossil-scm.org/forum/info/12736b30c072551a?t=c ** \___/ \____________/\____/\____________________/ \_/ ** | | | | | ** | HTTP_HOST | PATH_INFO QUERY_STRING ** | | ** REQUEST_SCHEMA SCRIPT_NAME ** */ void cgi_init(void){ char *z; const char *zType; char *zSemi; int len; const char *zRequestUri = cgi_parameter("REQUEST_URI",0); |
︙ | ︙ | |||
1346 1347 1348 1349 1350 1351 1352 | zScriptName = fossil_strndup(zRequestUri,(int)(z-zRequestUri)); cgi_set_parameter("SCRIPT_NAME", zScriptName); } #ifdef _WIN32 /* The Microsoft IIS web server does not define REQUEST_URI, instead it uses ** PATH_INFO for virtually the same purpose. Define REQUEST_URI the same as | | | 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 | zScriptName = fossil_strndup(zRequestUri,(int)(z-zRequestUri)); cgi_set_parameter("SCRIPT_NAME", zScriptName); } #ifdef _WIN32 /* The Microsoft IIS web server does not define REQUEST_URI, instead it uses ** PATH_INFO for virtually the same purpose. Define REQUEST_URI the same as ** PATH_INFO and redefine PATH_INFO with SCRIPT_NAME removed from the ** beginning. */ if( zServerSoftware && strstr(zServerSoftware, "Microsoft-IIS") ){ int i, j; cgi_set_parameter("REQUEST_URI", zPathInfo); for(i=0; zPathInfo[i]==zScriptName[i] && zPathInfo[i]; i++){} for(j=i; zPathInfo[j] && zPathInfo[j]!='?'; j++){} zPathInfo = fossil_strndup(zPathInfo+i, j-i); |
︙ | ︙ | |||
1405 1406 1407 1408 1409 1410 1411 | #endif z = (char*)P("HTTP_COOKIE"); if( z ){ z = fossil_strdup(z); add_param_list(z, ';'); z = (char*)cookie_value("skin",0); if(z){ | | | 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 | #endif z = (char*)P("HTTP_COOKIE"); if( z ){ z = fossil_strdup(z); add_param_list(z, ';'); z = (char*)cookie_value("skin",0); if(z){ skin_use_alternative(z, 2, SKIN_FROM_COOKIE); } } cgi_setup_query_string(); z = (char*)P("REMOTE_ADDR"); if( z ){ |
︙ | ︙ |
Changes to src/chat.c.
︙ | ︙ | |||
32 33 34 35 36 37 38 | ** * Chat content lives in a single repository. It is never synced. ** Content expires and is deleted after a set interval (a week or so). ** ** Notification is accomplished using the "hanging GET" or "long poll" design ** in which a GET request is issued but the server does not send a reply until ** new content arrives. Newer Web Sockets and Server Sent Event protocols are ** more elegant, but are not compatible with CGI, and would thus complicate | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | ** * Chat content lives in a single repository. It is never synced. ** Content expires and is deleted after a set interval (a week or so). ** ** Notification is accomplished using the "hanging GET" or "long poll" design ** in which a GET request is issued but the server does not send a reply until ** new content arrives. Newer Web Sockets and Server Sent Event protocols are ** more elegant, but are not compatible with CGI, and would thus complicate ** configuration. */ #include "config.h" #include <assert.h> #include "chat.h" /* ** Outputs JS code to initialize a list of chat alert audio files for |
︙ | ︙ | |||
317 318 319 320 321 322 323 | " ORDER BY msgid LIMIT 1"); if( rAge>mxDays ){ msgid = db_int(0, "SELECT msgid FROM chat" " ORDER BY msgid DESC LIMIT 1 OFFSET %d", mxCnt); if( msgid>0 ){ Stmt s; db_multi_exec("PRAGMA secure_delete=ON;"); | | | 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | " ORDER BY msgid LIMIT 1"); if( rAge>mxDays ){ msgid = db_int(0, "SELECT msgid FROM chat" " ORDER BY msgid DESC LIMIT 1 OFFSET %d", mxCnt); if( msgid>0 ){ Stmt s; db_multi_exec("PRAGMA secure_delete=ON;"); db_prepare(&s, "DELETE FROM chat WHERE mtime<julianday('now')-:mxage" " AND msgid<%d", msgid); db_bind_double(&s, ":mxage", mxDays); db_step(&s); db_finalize(&s); } } |
︙ | ︙ | |||
691 692 693 694 695 696 697 | } sqlite3_sleep(iDelay); nDelay--; } } /* Exit by "break" */ db_finalize(&q1); blob_append(&json, "\n]}", 3); cgi_set_content(&json); | | | 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | } sqlite3_sleep(iDelay); nDelay--; } } /* Exit by "break" */ db_finalize(&q1); blob_append(&json, "\n]}", 3); cgi_set_content(&json); return; } /* ** WEBPAGE: chat-fetch-one hidden loadavg-exempt ** ** /chat-fetch-one/N ** |
︙ | ︙ | |||
724 725 726 727 728 729 730 | if( !g.perm.Chat ) { chat_emit_permissions_error(0); return; } zChatUser = db_get("chat-timeline-user",0); chat_create_tables(); cgi_set_content_type("application/json"); | | | 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 | if( !g.perm.Chat ) { chat_emit_permissions_error(0); return; } zChatUser = db_get("chat-timeline-user",0); chat_create_tables(); cgi_set_content_type("application/json"); db_prepare(&q, "SELECT datetime(mtime), xfrom, xmsg, octet_length(file)," " fname, fmime, lmtime" " FROM chat WHERE msgid=%d AND mdel IS NULL", msgid); if(SQLITE_ROW==db_step(&q)){ const char *zDate = db_column_text(&q, 0); const char *zFrom = db_column_text(&q, 1); |
︙ | ︙ | |||
767 768 769 770 771 772 773 | fossil_free(zMsg); } if( nByte==0 ){ blob_appendf(&json, "\"fsize\":0"); }else{ blob_appendf(&json, "\"fsize\":%d,\"fname\":%!j,\"fmime\":%!j", nByte, zFName, zFMime); | | | 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 | fossil_free(zMsg); } if( nByte==0 ){ blob_appendf(&json, "\"fsize\":0"); }else{ blob_appendf(&json, "\"fsize\":%d,\"fname\":%!j,\"fmime\":%!j", nByte, zFName, zFMime); } blob_append(&json,"}",1); cgi_set_content(&json); }else{ ajax_route_error(404,"Chat message #%d not found.", msgid); } db_finalize(&q); } |
︙ | ︙ | |||
955 956 957 958 959 960 961 | sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zUser = (const char*)sqlite3_value_text(argv[2]); const char *zMsg = (const char*)sqlite3_value_text(argv[3]); char *zRes = 0; | | | 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 | sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zUser = (const char*)sqlite3_value_text(argv[2]); const char *zMsg = (const char*)sqlite3_value_text(argv[3]); char *zRes = 0; if( zType==0 || zUser==0 || zMsg==0 ) return; if( zType[0]=='c' ){ /* Check-ins */ char *zBranch; char *zUuid; zBranch = db_text(0, |
︙ | ︙ | |||
1215 1216 1217 1218 1219 1220 1221 | blob_appendf(&reqUri, "/chat-backup?msgid=%d", msgid); if( g.url.user && g.url.user[0] ){ zObs = obscure(g.url.user); blob_appendf(&reqUri, "&resid=%t", zObs); fossil_free(zObs); } zPw = g.url.passwd; | | > > > > > > > > | 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 | blob_appendf(&reqUri, "/chat-backup?msgid=%d", msgid); if( g.url.user && g.url.user[0] ){ zObs = obscure(g.url.user); blob_appendf(&reqUri, "&resid=%t", zObs); fossil_free(zObs); } zPw = g.url.passwd; if( zPw==0 && isDefaultUrl ){ zPw = unobscure(db_get("last-sync-pw", 0)); if( zPw==0 ){ /* Can happen if "remember password" is not used. */ g.url.flags |= URL_PROMPT_PW; url_prompt_for_password(); zPw = g.url.passwd; } } if( zPw && zPw[0] ){ zObs = obscure(zPw); blob_appendf(&reqUri, "&token=%t", zObs); fossil_free(zObs); } g.url.path = blob_str(&reqUri); if( bDebug ){ |
︙ | ︙ |
Changes to src/checkin.c.
︙ | ︙ | |||
1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 | "#\n%.78c\n" "# The following diff is excluded from the commit message:\n#\n", '#' ); diff_options(&DCfg, 0, 1); DCfg.diffFlags |= DIFF_VERBOSE; if( g.aCommitFile ){ FileDirList *diffFiles; int i; for(i=0; g.aCommitFile[i]!=0; ++i){} diffFiles = fossil_malloc_zero((i+1) * sizeof(*diffFiles)); for(i=0; g.aCommitFile[i]!=0; ++i){ | > > > > > > > > | < > > > > > > > | 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 | "#\n%.78c\n" "# The following diff is excluded from the commit message:\n#\n", '#' ); diff_options(&DCfg, 0, 1); DCfg.diffFlags |= DIFF_VERBOSE; if( g.aCommitFile ){ Stmt q; Blob sql = BLOB_INITIALIZER; FileDirList *diffFiles; int i; for(i=0; g.aCommitFile[i]!=0; ++i){} diffFiles = fossil_malloc_zero((i+1) * sizeof(*diffFiles)); for(i=0; g.aCommitFile[i]!=0; ++i){ blob_append_sql(&sql, "SELECT pathname, deleted, rid FROM vfile WHERE id=%d", g.aCommitFile[i]); db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); assert( db_step(&q)==SQLITE_ROW ); diffFiles[i].zName = fossil_strdup(db_column_text(&q, 0)); DCfg.diffFlags &= (~DIFF_FILE_MASK); if( db_column_int(&q, 1) ){ DCfg.diffFlags |= DIFF_FILE_DELETED; }else if( db_column_int(&q, 2)==0 ){ DCfg.diffFlags |= DIFF_FILE_ADDED; } db_finalize(&q); if( fossil_strcmp(diffFiles[i].zName, "." )==0 ){ diffFiles[0].zName[0] = '.'; diffFiles[0].zName[1] = 0; break; } diffFiles[i].nName = strlen(diffFiles[i].zName); diffFiles[i].nUsed = 0;
︙ | ︙ | |||
2519 2520 2521 2522 2523 2524 2525 | "use --override-lock", g.ckinLockFail); }else{ fossil_fatal("Would fork. \"update\" first or use --branch or " "--allow-fork."); } } | | | | 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 | "use --override-lock", g.ckinLockFail); }else{ fossil_fatal("Would fork. \"update\" first or use --branch or " "--allow-fork."); } } /* ** Do not allow a commit against a closed leaf unless the commit ** ends up on a different branch. */ if( /* parent check-in has the "closed" tag... */ leaf_is_closed(vid) /* ... and the new check-in has no --branch option or the --branch ** option does not actually change the branch */ && (sCiInfo.zBranch==0 || db_exists("SELECT 1 FROM tagxref" " WHERE tagid=%d AND rid=%d AND tagtype>0" " AND value=%Q", TAG_BRANCH, vid, sCiInfo.zBranch)) ){ fossil_fatal("cannot commit against a closed leaf"); } /* Always exit the loop on the second pass */ if( bRecheck ) break; /* Get the check-in comment. This might involve prompting the ** user for the check-in comment, in which case we should resync ** to renew the check-in lock and repeat the checks for conflicts. */ if( zComment ){ blob_zero(&comment); blob_append(&comment, zComment, -1); |
︙ | ︙ |
Changes to src/clone.c.
︙ | ︙ | |||
194 195 196 197 198 199 200 | g.argv[2]); } zRepo = mprintf("./%s.fossil", zBase); if( zWorkDir==0 ){ zWorkDir = mprintf("./%s", zBase); } fossil_free(zBase); | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | g.argv[2]); } zRepo = mprintf("./%s.fossil", zBase); if( zWorkDir==0 ){ zWorkDir = mprintf("./%s", zBase); } fossil_free(zBase); } if( -1 != file_size(zRepo, ExtFILE) ){ fossil_fatal("file already exists: %s", zRepo); } /* Fail before clone if open will fail because inside an open check-out */ if( zWorkDir!=0 && zWorkDir[0]!=0 && !noOpen ){ if( db_open_local_v2(0, allowNested) ){ fossil_fatal("there is already an open tree at %s", g.zLocalRoot); |
︙ | ︙ | |||
258 259 260 261 262 263 264 | "DELETE FROM config WHERE name='project-code';" ); db_protect_pop(); url_enable_proxy(0); clone_ssh_db_set_options(); url_get_password_if_needed(); g.xlinkClusterOnly = 1; | | | 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 | "DELETE FROM config WHERE name='project-code';" ); db_protect_pop(); url_enable_proxy(0); clone_ssh_db_set_options(); url_get_password_if_needed(); g.xlinkClusterOnly = 1; nErr = client_sync(syncFlags,CONFIGSET_ALL,0,0,0); g.xlinkClusterOnly = 0; verify_cancel(); db_end_transaction(0); db_close(1); if( nErr ){ file_delete(zRepo); fossil_fatal("server returned an error - clone aborted"); |
︙ | ︙ |
Changes to src/comformat.c.
︙ | ︙ | |||
272 273 274 275 276 277 278 | if( maxChars<useChars ){ zBuf[iBuf++] = ' '; break; } }else if( wordBreak && fossil_isspace(c) ){ int distUTF8; int nextIndex = comment_next_space(zLine, index, &distUTF8); | | | 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 | if( maxChars<useChars ){ zBuf[iBuf++] = ' '; break; } }else if( wordBreak && fossil_isspace(c) ){ int distUTF8; int nextIndex = comment_next_space(zLine, index, &distUTF8); if( nextIndex<=0 || distUTF8>=maxChars ){ break; } charCnt++; }else{ charCnt++; } assert( c!='\n' || charCnt==0 ); |
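The wrapping loop above abandons word-breaking when comment_next_space() reports no further space, or when the distance to that space is at least the available width. A rough standalone sketch of that rule, using plain byte distances instead of Fossil's UTF-8-aware distUTF8 (the helper names next_space and must_hard_break are illustrative, not Fossil's):

```c
#include <assert.h>

/* Return the index of the next space at or after z[i], or -1 if the
** line ends first.  A byte-oriented stand-in for comment_next_space(),
** which additionally reports the distance in UTF-8 characters. */
static int next_space(const char *z, int i){
  for(; z[i] && z[i]!='\n'; i++){
    if( z[i]==' ' ) return i;
  }
  return -1;
}

/* Give up on word-breaking (forcing a hard break instead) when no
** further space exists or the next word alone would overflow the
** remaining width, mirroring the nextIndex/distUTF8 test above. */
static int must_hard_break(const char *z, int i, int maxChars){
  int nextIndex = next_space(z, i);
  return nextIndex<=0 || (nextIndex - i)>=maxChars;
}
```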
︙ | ︙ |
Changes to src/configure.c.
︙ | ︙ | |||
91 92 93 94 95 96 97 98 99 100 101 102 103 104 | } aConfig[] = { { "css", CONFIGSET_CSS }, { "header", CONFIGSET_SKIN }, { "mainmenu", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, { "details", CONFIGSET_SKIN }, { "js", CONFIGSET_SKIN }, { "logo-mimetype", CONFIGSET_SKIN }, { "logo-image", CONFIGSET_SKIN }, { "background-mimetype", CONFIGSET_SKIN }, { "background-image", CONFIGSET_SKIN }, { "icon-mimetype", CONFIGSET_SKIN }, { "icon-image", CONFIGSET_SKIN }, { "timeline-block-markup", CONFIGSET_SKIN }, | > | 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 | } aConfig[] = { { "css", CONFIGSET_CSS }, { "header", CONFIGSET_SKIN }, { "mainmenu", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, { "details", CONFIGSET_SKIN }, { "js", CONFIGSET_SKIN }, { "default-skin", CONFIGSET_SKIN }, { "logo-mimetype", CONFIGSET_SKIN }, { "logo-image", CONFIGSET_SKIN }, { "background-mimetype", CONFIGSET_SKIN }, { "background-image", CONFIGSET_SKIN }, { "icon-mimetype", CONFIGSET_SKIN }, { "icon-image", CONFIGSET_SKIN }, { "timeline-block-markup", CONFIGSET_SKIN }, |
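The aConfig[] table above associates each transferable configuration item with a configuration-set bitmask, so a whole group (skin, CSS, ...) can be selected with one mask. A minimal sketch of that name-to-mask lookup; the CONFIGSET_* values here are illustrative stand-ins, not Fossil's actual constants:

```c
#include <assert.h>
#include <string.h>

/* Illustrative group masks; Fossil's real CONFIGSET_* values differ. */
#define CONFIGSET_CSS   0x01
#define CONFIGSET_SKIN  0x02

static const struct {
  const char *zName;   /* Name of the configuration item */
  int groupMask;       /* Which configuration group it belongs to */
} aDemoConfig[] = {
  { "css",          CONFIGSET_CSS  },
  { "header",       CONFIGSET_SKIN },
  { "default-skin", CONFIGSET_SKIN },
};

/* Linear scan of the table, in the style of configure.c; return 0 for
** an unknown item name. */
static int config_mask_for_name(const char *zName){
  size_t i;
  for(i=0; i<sizeof(aDemoConfig)/sizeof(aDemoConfig[0]); i++){
    if( strcmp(aDemoConfig[i].zName, zName)==0 ){
      return aDemoConfig[i].groupMask;
    }
  }
  return 0;
}
```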
︙ | ︙ | |||
868 869 870 871 872 873 874 | } url_parse(zServer, URL_PROMPT_PW|URL_USE_CONFIG); if( g.url.protocol==0 ) fossil_fatal("no server URL specified"); user_select(); url_enable_proxy("via proxy: "); if( overwriteFlag ) mask |= CONFIGSET_OVERWRITE; if( strncmp(zMethod, "push", n)==0 ){ | | | | | 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 | } url_parse(zServer, URL_PROMPT_PW|URL_USE_CONFIG); if( g.url.protocol==0 ) fossil_fatal("no server URL specified"); user_select(); url_enable_proxy("via proxy: "); if( overwriteFlag ) mask |= CONFIGSET_OVERWRITE; if( strncmp(zMethod, "push", n)==0 ){ client_sync(0,0,(unsigned)mask,0,0); }else if( strncmp(zMethod, "pull", n)==0 ){ if( overwriteFlag ) db_unprotect(PROTECT_USER); client_sync(0,(unsigned)mask,0,0,0); if( overwriteFlag ) db_protect_pop(); }else{ client_sync(0,(unsigned)mask,(unsigned)mask,0,0); } }else if( strncmp(zMethod, "reset", n)==0 ){ int mask, i; char *zBackup; if( g.argc!=4 ) usage("reset AREA"); mask = configure_name_to_mask(g.argv[3], 1); |
︙ | ︙ |
Changes to src/cookies.c.
︙ | ︙ | |||
211 212 213 214 215 216 217 | assert( zPName!=0 ); cookie_parse(); for(i=0; i<cookies.nParam && strcmp(zPName,cookies.aParam[i].zPName); i++){} return i<cookies.nParam ? cookies.aParam[i].zPValue : zDefault; } /* | > > > > > > | > > > > | > > | | > > | > > > | 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | assert( zPName!=0 ); cookie_parse(); for(i=0; i<cookies.nParam && strcmp(zPName,cookies.aParam[i].zPName); i++){} return i<cookies.nParam ? cookies.aParam[i].zPValue : zDefault; } /* ** WEBPAGE: cookies ** ** Show all cookies associated with Fossil. This shows the text of the ** login cookie and is hence dangerous if an adversary is looking over ** your shoulder and is able to read and reproduce that cookie. ** ** WEBPAGE: fdscookie ** ** Show the current display settings contained in the ** "fossil_display_settings" cookie.
*/ void cookie_page(void){ int i; int nCookie = 0; const char *zName = 0; const char *zValue = 0; int isQP = 0; int bFDSonly = strstr(g.zPath, "fdscookie")!=0; cookie_parse(); if( bFDSonly ){ style_header("Display Preferences Cookie"); }else{ style_header("All Cookies"); } @ <form method="POST"> @ <ol> for(i=0; cgi_param_info(i, &zName, &zValue, &isQP); i++){ char *zDel; if( isQP ) continue; if( fossil_isupper(zName[0]) ) continue; if( bFDSonly && strcmp(zName, "fossil_display_settings")!=0 ) continue; zDel = mprintf("del%s",zName); if( P(zDel)!=0 ){ cgi_set_cookie(zName, "", 0, -1); cgi_redirect(g.zPath); } nCookie++; @ <li><p><b>%h(zName)</b>: %h(zValue) @ <input type="submit" name="%h(zDel)" value="Delete"> if( fossil_strcmp(zName, DISPLAY_SETTINGS_COOKIE)==0 && cookies.nParam>0 ){ int j; @ <ul> for(j=0; j<cookies.nParam; j++){ @ <li>%h(cookies.aParam[j].zPName): "%h(cookies.aParam[j].zPValue)" } @ </ul> } fossil_free(zDel); } @ </ol> @ </form> if( nCookie==0 ){ if( bFDSonly ){ @ <p><i>Your browser is not holding a "fossil_display_setting" cookie @ for this website</i></p> }else{ @ <p><i>Your browser is not holding any cookies for this website</i></p> } } style_finish_page(); } |
Changes to src/db.c.
︙ | ︙ | |||
170 171 172 173 174 175 176 | void *pAuthArg; /* Argument to the authorizer */ const char *zAuthName; /* Name of the authorizer */ int bProtectTriggers; /* True if protection triggers already exist */ int nProtect; /* Slots of aProtect used */ unsigned aProtect[12]; /* Saved values of protectMask */ } db = { PROTECT_USER|PROTECT_CONFIG|PROTECT_BASELINE, /* protectMask */ | | | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | void *pAuthArg; /* Argument to the authorizer */ const char *zAuthName; /* Name of the authorizer */ int bProtectTriggers; /* True if protection triggers already exist */ int nProtect; /* Slots of aProtect used */ unsigned aProtect[12]; /* Saved values of protectMask */ } db = { PROTECT_USER|PROTECT_CONFIG|PROTECT_BASELINE, /* protectMask */ 0, 0, 0, 0, 0, 0, 0, {{0}}, {0}, {0}, 0, 0, 0, 0, 0, 0, 0, 0, 0, {0}}; /* ** Arrange for the given file to be deleted on a failure. */ void db_delete_on_failure(const char *zFilename){ assert( db.nDeleteOnFail<count(db.azDeleteOnFail) ); if( zFilename==0 ) return; |
︙ | ︙ | |||
455 456 457 458 459 460 461 | ** be compromised by an attack. */ void db_protect_only(unsigned flags){ if( db.nProtect>=count(db.aProtect)-2 ){ fossil_panic("too many db_protect() calls"); } db.aProtect[db.nProtect++] = db.protectMask; | | | 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 | ** be compromised by an attack. */ void db_protect_only(unsigned flags){ if( db.nProtect>=count(db.aProtect)-2 ){ fossil_panic("too many db_protect() calls"); } db.aProtect[db.nProtect++] = db.protectMask; if( (flags & PROTECT_SENSITIVE)!=0 && db.bProtectTriggers==0 && g.repositoryOpen ){ /* Create the triggers needed to protect sensitive settings from ** being created or modified the first time that PROTECT_SENSITIVE ** is enabled. Deleting a sensitive setting is harmless, so there ** is no trigger to block deletes. After being created once, the
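db_protect_only() above saves the current protection bitmask on a small fixed-size stack before replacing it, and a matching db_protect_pop() (not shown in this hunk) restores it. A stripped-down sketch of that push/pop idiom, with illustrative PROTECT_* values and the overflow panic elided:

```c
#include <assert.h>

/* Illustrative protection bits; not Fossil's actual values. */
#define PROTECT_USER    0x01
#define PROTECT_CONFIG  0x02

static unsigned protectMask = PROTECT_USER|PROTECT_CONFIG;
static unsigned aSaved[12];   /* saved masks, as in db.aProtect[] */
static int nSaved = 0;

/* Replace the current mask after saving it, like db_protect_only().
** The real code panics when the stack would overflow; elided here. */
static void protect_only(unsigned flags){
  aSaved[nSaved++] = protectMask;
  protectMask = flags;
}

/* Restore the previously saved mask, like db_protect_pop(). */
static void protect_pop(void){
  protectMask = aSaved[--nSaved];
}
```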
︙ | ︙ | |||
1555 1556 1557 1558 1559 1560 1561 | sqlite3_create_function(db, "protected_setting", 1, SQLITE_UTF8, 0, db_protected_setting_func, 0, 0); sqlite3_create_function(db, "win_reserved", 1, SQLITE_UTF8, 0, db_win_reserved_func,0,0); sqlite3_create_function(db, "url_nouser", 1, SQLITE_UTF8, 0, url_nouser_func,0,0); sqlite3_create_function(db, "chat_msg_from_event", 4, | | | | 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 | sqlite3_create_function(db, "protected_setting", 1, SQLITE_UTF8, 0, db_protected_setting_func, 0, 0); sqlite3_create_function(db, "win_reserved", 1, SQLITE_UTF8, 0, db_win_reserved_func,0,0); sqlite3_create_function(db, "url_nouser", 1, SQLITE_UTF8, 0, url_nouser_func,0,0); sqlite3_create_function(db, "chat_msg_from_event", 4, SQLITE_UTF8 | SQLITE_INNOCUOUS, 0, chat_msg_from_event, 0, 0); } #if USE_SEE /* ** This is a pointer to the saved database encryption key string. */ static char *zSavedKey = 0; |
︙ | ︙ | |||
2487 2488 2489 2490 2491 2492 2493 | db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } | | | 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 | db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } /* The design of the check-out database changed on 2019-01-19 adding the mhash ** column to vfile and vmerge and changing the UNIQUE index on vmerge into ** a PRIMARY KEY that includes the new mhash column. However, we must have ** the repository database at hand in order to do the migration, so that ** step is deferred. */ return 1; } |
︙ | ︙ | |||
2604 2605 2606 2607 2608 2609 2610 | sqlite3_stmt *pStmt = 0; sz = file_size(zDbName, ExtFILE); if( sz<16384 ) return 0; db = db_open(zDbName); if( !db ) return 0; if( !g.zVfsName && sz%512 ) return 0; | | | 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 | sqlite3_stmt *pStmt = 0; sz = file_size(zDbName, ExtFILE); if( sz<16384 ) return 0; db = db_open(zDbName); if( !db ) return 0; if( !g.zVfsName && sz%512 ) return 0; rc = sqlite3_prepare_v2(db, "SELECT count(*) FROM sqlite_schema" " WHERE name COLLATE nocase IN" "('blob','delta','rcvfrom','user','config','mlink','plink');", -1, &pStmt, 0); if( rc ) goto is_repo_end; rc = sqlite3_step(pStmt); if( rc!=SQLITE_ROW ) goto is_repo_end;
︙ | ︙ | |||
3714 3715 3716 3717 3718 3719 3720 | z = fossil_strdup(pSetting->def); }else{ z = fossil_strdup(zDefault); } } return z; } | | > | 3714 3715 3716 3717 3718 3719 3720 3721 3722 3723 3724 3725 3726 3727 3728 3729 | z = fossil_strdup(pSetting->def); }else{ z = fossil_strdup(zDefault); } } return z; } char *db_get_mtime(const char *zName, const char *zFormat, const char *zDefault){ char *z = 0; if( g.repositoryOpen ){ z = db_text(0, "SELECT mtime FROM config WHERE name=%Q", zName); } if( z==0 ){ z = fossil_strdup(zDefault); }else if( zFormat!=0 ){ |
︙ | ︙ | |||
4019 4020 4021 4022 4023 4024 4025 | if( !g.localOpen ) return; zName = db_repository_filename(); } file_canonical_name(zName, &full, 0); (void)filename_collation(); /* Initialize before connection swap */ db_swap_connections(); zRepoSetting = mprintf("repo:%q", blob_str(&full)); | | | 4020 4021 4022 4023 4024 4025 4026 4027 4028 4029 4030 4031 4032 4033 4034 | if( !g.localOpen ) return; zName = db_repository_filename(); } file_canonical_name(zName, &full, 0); (void)filename_collation(); /* Initialize before connection swap */ db_swap_connections(); zRepoSetting = mprintf("repo:%q", blob_str(&full)); db_unprotect(PROTECT_CONFIG); db_multi_exec( "DELETE FROM global_config WHERE name %s = %Q;", filename_collation(), zRepoSetting ); db_multi_exec( "INSERT OR IGNORE INTO global_config(name,value)" |
︙ | ︙ | |||
4096 4097 4098 4099 4100 4101 4102 | ** "new-name.fossil". ** ** Options: ** --empty Initialize check-out as being empty, but still connected ** with the local repository. If you commit this check-out, ** it will become a new "initial" commit in the repository. ** -f|--force Continue with the open even if the working directory is | | < < | 4097 4098 4099 4100 4101 4102 4103 4104 4105 4106 4107 4108 4109 4110 4111 4112 4113 4114 4115 4116 4117 4118 4119 4120 4121 | ** "new-name.fossil". ** ** Options: ** --empty Initialize check-out as being empty, but still connected ** with the local repository. If you commit this check-out, ** it will become a new "initial" commit in the repository. ** -f|--force Continue with the open even if the working directory is ** not empty, or if auto-sync fails. ** --force-missing Force opening a repository with missing content ** -k|--keep Only modify the manifest file(s) ** --nested Allow opening a repository inside an opened check-out ** --nosync Do not auto-sync the repository prior to opening even ** if the autosync setting is on. ** --repodir DIR If REPOSITORY is a URI that will be cloned, store ** the clone in DIR rather than in "." ** --setmtime Set timestamps of all files to match their SCM-side ** times (the timestamp of the last check-in which modified ** them). ** --verbose If passed a URI then this flag is passed on to the clone ** operation, otherwise it has no effect ** --workdir DIR Use DIR as the working directory instead of ".". The DIR ** directory is created if it does not exist. ** ** See also: [[close]], [[clone]] */ |
︙ | ︙ | |||
4185 4186 4187 4188 4189 4190 4191 | if( keepFlag==0 && bForce==0 && (nLocal = file_directory_size(".", 0, 1))>0 && (nLocal>1 || isUri || !file_in_cwd(zRepo)) ){ fossil_fatal("directory %s is not empty\n" "use the -f (--force) option to override\n" | | | 4184 4185 4186 4187 4188 4189 4190 4191 4192 4193 4194 4195 4196 4197 4198 | if( keepFlag==0 && bForce==0 && (nLocal = file_directory_size(".", 0, 1))>0 && (nLocal>1 || isUri || !file_in_cwd(zRepo)) ){ fossil_fatal("directory %s is not empty\n" "use the -f (--force) option to override\n" "or the -k (--keep) option to keep local files unchanged", file_getcwd(0,0)); } if( db_open_local_v2(0, allowNested) ){ fossil_fatal("there is already an open tree at %s", g.zLocalRoot); } |
︙ | ︙ | |||
4391 4392 4393 4394 4395 4396 4397 | ** ** When the admin-log setting is enabled, configuration changes are recorded ** in the "admin_log" table of the repository. */ /* ** SETTING: allow-symlinks boolean default=off sensitive ** | | | 4390 4391 4392 4393 4394 4395 4396 4397 4398 4399 4400 4401 4402 4403 4404 | ** ** When the admin-log setting is enabled, configuration changes are recorded ** in the "admin_log" table of the repository. */ /* ** SETTING: allow-symlinks boolean default=off sensitive ** ** When allow-symlinks is OFF, Fossil does not see symbolic links ** (a.k.a "symlinks") on disk as a separate class of object. Instead Fossil ** sees the object that the symlink points to. Fossil will only manage files ** and directories, not symlinks. When a symlink is added to a repository, ** the object that the symlink points to is added, not the symlink itself. ** ** When allow-symlinks is ON, Fossil sees symlinks on disk as a separate ** object class that is distinct from files and directories. When a symlink |
︙ | ︙ | |||
4449 4450 4451 4452 4453 4454 4455 | ** When the auto-hyperlink setting is 1, the javascript that runs to set ** the href= attributes of hyperlinks delays by this many milliseconds ** after the page load. Suggested values: 50 to 200. */ /* ** SETTING: auto-hyperlink-mouseover boolean default=off ** | | | 4448 4449 4450 4451 4452 4453 4454 4455 4456 4457 4458 4459 4460 4461 4462 | ** When the auto-hyperlink setting is 1, the javascript that runs to set ** the href= attributes of hyperlinks delays by this many milliseconds ** after the page load. Suggested values: 50 to 200. */ /* ** SETTING: auto-hyperlink-mouseover boolean default=off ** ** When the auto-hyperlink setting is 1 and this setting is on, the ** javascript that runs to set the href= attributes of hyperlinks waits ** until either a mousedown or mousemove event is seen. This helps ** to distinguish real users from robots. For maximum robot defense, ** the recommended setting is ON. */ /* ** SETTING: auto-shun boolean default=on |
︙ | ︙ | |||
4484 4485 4486 4487 4488 4489 4490 4491 4492 4493 4494 4495 4496 4497 | ** off,commit=pullonly Do not autosync, except do a pull before each ** "commit", presumably to avoid undesirable ** forks. ** ** The syntax is a comma-separated list of VALUE and COMMAND=VALUE entries. ** A plain VALUE entry is the default that is used if no COMMAND matches. ** Otherwise, the VALUE of the matching command is used. */ /* ** SETTING: autosync-tries width=16 default=1 ** If autosync is enabled setting this to a value greater ** than zero will cause autosync to try no more than this ** number of attempts if there is a sync failure. */ | > > > | 4483 4484 4485 4486 4487 4488 4489 4490 4491 4492 4493 4494 4495 4496 4497 4498 4499 | ** off,commit=pullonly Do not autosync, except do a pull before each ** "commit", presumably to avoid undesirable ** forks. ** ** The syntax is a comma-separated list of VALUE and COMMAND=VALUE entries. ** A plain VALUE entry is the default that is used if no COMMAND matches. ** Otherwise, the VALUE of the matching command is used. ** ** The "all" value is special in that it applies to the "sync" command in ** addition to "commit", "merge", "open", and "update". */ /* ** SETTING: autosync-tries width=16 default=1 ** If autosync is enabled setting this to a value greater ** than zero will cause autosync to try no more than this ** number of attempts if there is a sync failure. */ |
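The autosync syntax documented above ("a comma-separated list of VALUE and COMMAND=VALUE entries") can be resolved with a simple scan: remember the plain default, but let an exact COMMAND match win. A standalone sketch of the documented syntax, not Fossil's actual parser; setting_for_command is a made-up helper name:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Resolve a "VALUE,COMMAND=VALUE,..." style setting such as autosync
** for one command: a plain entry supplies the default, and an exact
** COMMAND=VALUE entry overrides it for that command. */
static void setting_for_command(
  const char *zSetting,    /* e.g. "off,commit=pullonly" */
  const char *zCmd,        /* e.g. "commit" */
  char *zOut, size_t nOut  /* resolved value is written here */
){
  char zCopy[200];
  char *zTok;
  snprintf(zCopy, sizeof(zCopy), "%s", zSetting);
  zOut[0] = 0;
  for(zTok=strtok(zCopy, ","); zTok; zTok=strtok(NULL, ",")){
    char *zEq = strchr(zTok, '=');
    if( zEq==0 ){
      /* Plain VALUE entry: the default unless a command matches */
      if( zOut[0]==0 ) snprintf(zOut, nOut, "%s", zTok);
    }else if( (size_t)(zEq-zTok)==strlen(zCmd)
           && strncmp(zTok, zCmd, (size_t)(zEq-zTok))==0 ){
      /* COMMAND=VALUE entry naming this command: it wins outright */
      snprintf(zOut, nOut, "%s", zEq+1);
      return;
    }
  }
}
```

With the example value "off,commit=pullonly" from the table above, a lookup for "commit" would yield "pullonly" while every other command falls back to "off".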
︙ | ︙ | |||
4670 4671 4672 4673 4674 4675 4676 | ** Note that /fileedit cannot edit binary files, so the list should not ** contain any globs for, e.g., images or PDFs. */ /* ** SETTING: forbid-delta-manifests boolean default=off ** If enabled on a client, new delta manifests are prohibited on ** commits. If enabled on a server, whenever a client attempts | | | 4672 4673 4674 4675 4676 4677 4678 4679 4680 4681 4682 4683 4684 4685 4686 | ** Note that /fileedit cannot edit binary files, so the list should not ** contain any globs for, e.g., images or PDFs. */ /* ** SETTING: forbid-delta-manifests boolean default=off ** If enabled on a client, new delta manifests are prohibited on ** commits. If enabled on a server, whenever a client attempts ** to obtain a check-in lock during auto-sync, the server will ** send the "pragma avoid-delta-manifests" statement in its reply, ** which will cause the client to avoid generating a delta ** manifest. */ /* ** SETTING: forum-close-policy boolean default=off ** If true, forum moderators may close/re-open forum posts, and reply |
︙ | ︙ | |||
5011 5012 5013 5014 5015 5016 5017 | ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. */ /* ** SETTING: large-file-size width=10 default=200000000 ** Fossil considers any file whose size is greater than this value ** to be a "large file". Fossil might issue warnings if you try to | | | 5013 5014 5015 5016 5017 5018 5019 5020 5021 5022 5023 5024 5025 5026 5027 | ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. */ /* ** SETTING: large-file-size width=10 default=200000000 ** Fossil considers any file whose size is greater than this value ** to be a "large file". Fossil might issue warnings if you try to ** "add" or "commit" a "large file". Set this value to 0 or less ** to disable all such warnings. */ /* ** Look up a control setting by its name. Return a pointer to the Setting ** object, or NULL if there is no such setting. ** |
︙ | ︙ | |||
5236 5237 5238 5239 5240 5241 5242 | ** optimization. FILENAME can also be the configuration database file ** (~/.fossil or ~/.config/fossil.db) or a local .fslckout or _FOSSIL_ file. ** ** The purpose of this command is for testing the WITHOUT ROWID capabilities ** of SQLite. There is no big advantage to using WITHOUT ROWID in Fossil. ** ** Options: | | | 5238 5239 5240 5241 5242 5243 5244 5245 5246 5247 5248 5249 5250 5251 5252 | ** optimization. FILENAME can also be the configuration database file ** (~/.fossil or ~/.config/fossil.db) or a local .fslckout or _FOSSIL_ file. ** ** The purpose of this command is for testing the WITHOUT ROWID capabilities ** of SQLite. There is no big advantage to using WITHOUT ROWID in Fossil. ** ** Options: ** -n|--dry-run No changes. Just print what would happen. */ void test_without_rowid(void){ int i, j; Stmt q; Blob allSql; int dryRun = find_option("dry-run", "n", 0)!=0; for(i=2; i<g.argc; i++){ |
︙ | ︙ |
Changes to src/default.css.
︙ | ︙ | |||
494 495 496 497 498 499 500 | padding: 0; width: 125px; text-align: center; border-collapse: collapse; border-spacing: 0; } table.report { | < | 494 495 496 497 498 499 500 501 502 503 504 505 506 507 | padding: 0; width: 125px; text-align: center; border-collapse: collapse; border-spacing: 0; } table.report { border: 1px solid #999; margin: 1em 0 1em 0; cursor: pointer; } td.rpteditex { border-width: thin; border-color: #000000; |
︙ | ︙ | |||
580 581 582 583 584 585 586 | line-height: 1.275/*for mobile: forum post e6f4ee7de98b55c0*/; text-size-adjust: none /* ^^^ attempt to keep mobile from inflating some text */; } table.diff pre > ins, table.diff pre > del { /* Fill platform-dependent color gaps caused by | | | 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 | line-height: 1.275/*for mobile: forum post e6f4ee7de98b55c0*/; text-size-adjust: none /* ^^^ attempt to keep mobile from inflating some text */; } table.diff pre > ins, table.diff pre > del { /* Fill platform-dependent color gaps caused by inflated line-height */ padding: 0.062em 0 0.062em 0; } table.diff pre > ins > *, table.diff pre > del > *{ /* Avoid odd-looking color swatches in conjunction with (table.diff pre > ins/del) padding */ padding: inherit; |
︙ | ︙ | |||
616 617 618 619 620 621 622 | } tr.diffskip.jchunk:hover { /*background-color: rgba(127,127,127,0.5); cursor: pointer;*/ } tr.diffskip > td.chunkctrl { text-align: left; | < | 615 616 617 618 619 620 621 622 623 624 625 626 627 628 | } tr.diffskip.jchunk:hover { /*background-color: rgba(127,127,127,0.5); cursor: pointer;*/ } tr.diffskip > td.chunkctrl { text-align: left; } tr.diffskip > td.chunkctrl > div { display: flex; align-items: center; } tr.diffskip > td.chunkctrl > div > span.error { padding: 0.25em 0.5em; |
︙ | ︙ | |||
1292 1293 1294 1295 1296 1297 1298 | margin: 0; } .flex-container.child-gap-small > * { margin: 0.25em; } #fossil-status-bar { display: block; | < | 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 | margin: 0; } .flex-container.child-gap-small > * { margin: 0.25em; } #fossil-status-bar { display: block; border-width: 1px; border-style: inset; border-color: inherit; min-height: 1.5em; font-size: 1.2em; padding: 0.2em; margin: 0.25em 0; |
︙ | ︙ | |||
1383 1384 1385 1386 1387 1388 1389 | table.numbered-lines { width: 100%; table-layout: fixed /* required to keep ultra-wide code from exceeding window width, and instead force a scrollbar on them. */; } table.numbered-lines > tbody > tr { | < | 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 | table.numbered-lines { width: 100%; table-layout: fixed /* required to keep ultra-wide code from exceeding window width, and instead force a scrollbar on them. */; } table.numbered-lines > tbody > tr { line-height: 1.35; white-space: pre; } table.numbered-lines > tbody > tr > td { font-family: inherit; font-size: inherit; line-height: inherit; |
︙ | ︙ | |||
1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 | color: black; } blockquote.file-content { /* file content block in the /file page */ margin: 0 1em; } /** Circular "help" buttons intended to be placed to the right of another element and hold text for it. These typically get initialized automatically at page startup via fossil.popupwidget.js, and can be manually initialized/created | > > > > > > > > > > > > > > > > > > > > > | 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 | color: black; } blockquote.file-content { /* file content block in the /file page */ margin: 0 1em; } /* Generic sidebar styling inherited by skins that don't make their own * arrangements. */ .markdown blockquote, p.blockquote, .sidebar { background-color: rgba(0, 0, 0, 0.05); border-left: 3px solid #777; padding: 0.1em 1em; } .sidebar { /* Generic form that can be applied to any block element. */ font-size: 90%; } div.sidebar { /* Special exception for div-type sidebars, where there is no p * wrapper inside to give us the extra padding we want. */ padding: 1em; } div.sidebar:not(.no-label):before { content: "Sidebar: "; font-weight: bold; } /** Circular "help" buttons intended to be placed to the right of another element and hold text for it. These typically get initialized automatically at page startup via fossil.popupwidget.js, and can be manually initialized/created
︙ | ︙ | |||
1759 1760 1761 1762 1763 1764 1765 | body.branch .submenu > a.timeline-link { display: none; } body.branch .submenu > a.timeline-link.selected { display: inline; } | > > > > > > > | > > > > | < | 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 | body.branch .submenu > a.timeline-link { display: none; } body.branch .submenu > a.timeline-link.selected { display: inline; } /* Candidate fonts for various forms of monospaced text. Collected here * to avoid repeating this long list of fonts. */ code, kbd, pre, samp, tt, var, div.markdown ol.footnotes > li.fn-joined > sup.fn-joined, table.numbered-lines > tbody > tr, tr.diffskip > td.chunkctrl, #fossil-status-bar, .monospace { font-family: Source Code Pro, Menlo, Monaco, Consolas, Andale Mono, Ubuntu Mono, Deja Vu Sans Mono, Letter Gothic, Letter Gothic Std, Prestige Elite Std, Courier, Courier New, monospace; } div.markdown > ol.footnotes { font-size: 90%; } div.markdown > ol.footnotes > li { margin-bottom: 0.5em; } div.markdown ol.footnotes > li.fn-joined > sup.fn-joined { color: gray; } div.markdown ol.footnotes > li.fn-joined > sup.fn-joined::after { content: "(joined from multiple locations) "; } div.markdown ol.footnotes > li.fn-misreference { margin-top: 0.75em; margin-bottom: 0.75em; |
︙ | ︙ | |||
1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 | } /* Objects in the "desktoponly" class are invisible on mobile */ @media screen and (max-width: 600px) { .desktoponly { display: none; } } /* Objects in the "wideonly" class are invisible only on wide-screen desktops */ @media screen and (max-width: 1200px) { .wideonly { display: none; } } | > > > > > > > > | 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 | } /* Objects in the "desktoponly" class are invisible on mobile */ @media screen and (max-width: 600px) { .desktoponly { display: none; } } /* Float sidebars to the right of the main content only if there's room. */ @media screen and (min-width: 600px) { .sidebar { float: right; max-width: 33%; margin-left: 1em; } } /* Objects in the "wideonly" class are invisible only on wide-screen desktops */ @media screen and (max-width: 1200px) { .wideonly { display: none; } } |
Changes to src/deltafunc.c.
︙ | ︙ | |||
484 485 486 487 488 489 490 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, | | > | 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0, /* xIntegrity */ 0 }; /* ** Invoke this routine to register the various delta functions. */ int deltafunc_init(sqlite3 *db){ int rc = SQLITE_OK; |
︙ | ︙ |
Changes to src/diff.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ** ** This file contains code used to compute a "diff" between two ** text files. */ #include "config.h" #include "diff.h" #include <assert.h> #if INTERFACE /* ** Flag parameters to the text_diff() routine used to control the formatting ** of the diff output. */ | > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 | ** ** This file contains code used to compute a "diff" between two ** text files. */ #include "config.h" #include "diff.h" #include <assert.h> #include <errno.h> #if INTERFACE /* ** Flag parameters to the text_diff() routine used to control the formatting ** of the diff output. */ |
︙
@@ 46-59 → 47-68 @@
 #define DIFF_BROWSER    0x00008000 /* The --browser option */
 #define DIFF_JSON       0x00010000 /* JSON output */
 #define DIFF_DEBUG      0x00020000 /* Debugging diff output */
 #define DIFF_RAW        0x00040000 /* Raw triples - for debugging */
 #define DIFF_TCL        0x00080000 /* For the --tk option */
 #define DIFF_INCBINARY  0x00100000 /* The --diff-binary option */
 #define DIFF_SHOW_VERS  0x00200000 /* Show compared versions */
+#define DIFF_DARKMODE   0x00400000 /* Use dark mode for HTML */
+
+/*
+** Per file information that may influence output.
+*/
+#define DIFF_FILE_ADDED   0x40000000 /* Added or rename destination */
+#define DIFF_FILE_DELETED 0x80000000 /* Deleted or rename source */
+#define DIFF_FILE_MASK    0xc0000000 /* Used for clearing file flags */

 /*
 ** These error messages are shared in multiple locations.  They are defined
 ** here for consistency.
 */
 #define DIFF_CANNOT_COMPUTE_BINARY \
   "cannot compute difference between binary files\n"
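The new DIFF_FILE_* bits sit at the top of the flag word so they can all be cleared together through DIFF_FILE_MASK before each file is processed, without disturbing the long-lived option bits in the same word. A minimal standalone sketch of that set/test/clear-with-mask pattern (the names here are illustrative stand-ins, not Fossil's identifiers):

```c
#include <assert.h>

/* Hypothetical flag values mirroring the DIFF_FILE_* bits above */
#define FILE_ADDED   0x40000000u
#define FILE_DELETED 0x80000000u
#define FILE_MASK    0xc0000000u   /* FILE_ADDED|FILE_DELETED */

/* Reset only the per-file bits, leaving every other flag intact */
static unsigned int clear_file_flags(unsigned int flags){
  return flags & ~FILE_MASK;
}
```

Reserving a mask constant alongside the individual bits keeps the reset to a single AND, which matters here because the same word also carries options like the dark-mode bit that must survive from file to file.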
︙
@@ 80-86 → 89-108 @@
 ** Conceptually, this object is as an encoding of the command-line options
 ** for the "fossil diff" command.  That is not a precise description, though,
 ** because not all diff operations are started from the command-line.  But
 ** the idea is sound.
 **
 ** Information encoded by this object includes but is not limited to:
 **
+**    *  The desired output format (unified vs. side-by-side,
+**       TCL, JSON, HTML vs. plain-text).
 **
 **    *  Number of lines of context surrounding each difference block
 **
 **    *  Width of output columns for text side-by-side diffs
 */
 struct DiffConfig {
   u64 diffFlags;         /* Diff flags */
   int nContext;          /* Number of lines of context */
   int wColumn;           /* Column width in -y mode */
   u32 nFile;             /* Number of files diffed so far */
   const char *zDiffCmd;  /* External diff command to use instead of builtin */
︙
@@ 420-426 → 429-443 @@
   A = p->aFrom;
   B = p->aTo;
   R = p->aEdit;
   mxr = p->nEdit;
   while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; }

   for(r=0; r<mxr; r += 3*nr){
     /* Figure out how many triples to show in a single block */
+    for(nr=1; 3*nr<mxr && R[r+nr*3]>0 && R[r+nr*3]<(int)nContext*2; nr++){}
     /* printf("r=%d nr=%d\n", r, nr); */

     /* For the current block comprising nr triples, figure out
     ** how many lines of A and B are to be displayed
     */
     if( R[r]>nContext ){
       na = nb = nContext;
︙
@@ 905-911 → 914-928 @@
 /*
 ** This is an abstract superclass for an object that accepts difference
 ** lines and formats them for display.  Subclasses of this object format
 ** the diff output in different ways.
 **
 ** To subclass, create an instance of the DiffBuilder object and fill
 ** in appropriate method implementations.
+*/
 typedef struct DiffBuilder DiffBuilder;
 struct DiffBuilder {
   void (*xSkip)(DiffBuilder*, unsigned int, int);
   void (*xCommon)(DiffBuilder*,const DLine*);
   void (*xInsert)(DiffBuilder*,const DLine*);
   void (*xDelete)(DiffBuilder*,const DLine*);
   void (*xReplace)(DiffBuilder*,const DLine*,const DLine*);
︙
@@ 1090-1096 → 1099-1113 @@
     blob_append_char(p->pOut, ' ');
     blob_append_tcl_literal(p->pOut, pX->z + x, chng.a[i].iStart1 - x);
     x = chng.a[i].iStart1;
     blob_append_char(p->pOut, ' ');
     blob_append_tcl_literal(p->pOut, pX->z + x, chng.a[i].iLen1);
     x += chng.a[i].iLen1;
     blob_append_char(p->pOut, ' ');
+    blob_append_tcl_literal(p->pOut, pY->z + chng.a[i].iStart2,
+                            chng.a[i].iLen2);
   }
   if( x<pX->n ){
     blob_append_char(p->pOut, ' ');
     blob_append_tcl_literal(p->pOut, pX->z + x, pX->n - x);
   }
   blob_append_char(p->pOut, '\n');
︙
@@ 1176-1182 → 1185-1199 @@
     }
     blob_append_json_literal(p->pOut, pX->z + x, chng.a[i].iStart1 - x);
     x = chng.a[i].iStart1;
     blob_append_char(p->pOut, ',');
     blob_append_json_literal(p->pOut, pX->z + x, chng.a[i].iLen1);
     x += chng.a[i].iLen1;
     blob_append_char(p->pOut, ',');
+    blob_append_json_literal(p->pOut, pY->z + chng.a[i].iStart2,
+                             chng.a[i].iLen2);
   }
   blob_append_char(p->pOut, ',');
   blob_append_json_literal(p->pOut, pX->z + x, pX->n - x);
   blob_append(p->pOut, "],\n",3);
 }
 static void dfjsonEnd(DiffBuilder *p){
︙
@@ 1258-1264 → 1267-1281 @@
   /* "+" marks for the separator on inserted lines */
   for(i=0; i<p->nPending; i++) blob_append(&p->aCol[1], "+\n", 2);

   /* Text of the inserted lines */
   blob_append(&p->aCol[2], "<ins>", 5);
   blob_append_xfer(&p->aCol[2], &p->aCol[4]);
   blob_append(&p->aCol[2], "</ins>", 6);
+  p->nPending = 0;
 }
 static void dfunifiedFinishRow(DiffBuilder *p){
   dfunifiedFinishDelete(p);
   dfunifiedFinishInsert(p);
   if( blob_size(&p->aCol[0])==0 ) return;
   blob_append(p->pOut, "</pre></td><td class=\"diffln difflnr\"><pre>\n", -1);
︙
@@ 1995-2001 → 2004-2018 @@
     aBig = aRight;
     nBig = nRight;
   }
   iDivBig = nBig/2;
   iDivSmall = nSmall/2;

   if( pCfg->diffFlags & DIFF_DEBUG ){
+    fossil_print("  Divide at [%.*s]\n",
+                 aBig[iDivBig].n, aBig[iDivBig].z);
   }
   bestScore = 10000;
   for(i=0; i<nSmall; i++){
     score = match_dline(aBig+iDivBig, aSmall+i) + abs(i-nSmall/2)*2;
     if( score<bestScore ){
︙
@@ 2221-2227 → 2230-2244 @@
   B = p->aTo;
   R = p->aEdit;
   mxr = p->nEdit;
   while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; }

   for(r=0; r<mxr; r += 3*nr){
     /* Figure out how many triples to show in a single block */
+    for(nr=1; 3*nr<mxr && R[r+nr*3]>0 && R[r+nr*3]<(int)nContext*2; nr++){}

     /* If there is a regex, skip this block (generate no diff output)
     ** if the regex matches or does not match both insert and delete.
     ** Only display the block if one side matches but the other side does
     ** not.
     */
     if( pCfg->pRe ){
︙
@@ 3147-3153 → 3156-3185 @@
   }

   /* Undocumented and unsupported flags used for development
   ** debugging and analysis: */
   if( find_option("debug",0,0)!=0 ) diffFlags |= DIFF_DEBUG;
   if( find_option("raw",0,0)!=0 )   diffFlags |= DIFF_RAW;
 }
+if( (z = find_option("context","c",1))!=0 ){
+  char *zEnd;
+  f = (int)strtol(z, &zEnd, 10);
+  if( zEnd[0]==0 && errno!=ERANGE ){
+    pCfg->nContext = f;
+    diffFlags |= DIFF_CONTEXT_EX;
+  }
+}
 if( (z = find_option("width","W",1))!=0 && (f = atoi(z))>0 ){
   pCfg->wColumn = f;
 }
 if( find_option("linenum","n",0)!=0 )  diffFlags |= DIFF_LINENO;
 if( find_option("noopt",0,0)!=0 )      diffFlags |= DIFF_NOOPT;
 if( find_option("numstat",0,0)!=0 )    diffFlags |= DIFF_NUMSTAT;
 if( find_option("versions","h",0)!=0 ) diffFlags |= DIFF_SHOW_VERS;
+if( find_option("dark",0,0)!=0 )       diffFlags |= DIFF_DARKMODE;
 if( find_option("invert",0,0)!=0 )     diffFlags |= DIFF_INVERT;
 if( find_option("brief",0,0)!=0 )      diffFlags |= DIFF_BRIEF;
 if( find_option("internal","i",0)==0
  && (diffFlags & (DIFF_HTML|DIFF_TCL|DIFF_DEBUG|DIFF_JSON))==0
 ){
   pCfg->zDiffCmd = find_option("command", 0, 1);
   if( pCfg->zDiffCmd==0 ) pCfg->zDiffCmd = diff_command_external(isGDiff);
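The --context handling above accepts a value only when strtol() consumes the whole string and reports no range error, which is also what lets a negative N mean "show all content". A hedged sketch of that idiom follows; parse_int() is a hypothetical helper, not a Fossil function, and note that errno must be cleared before the call since strtol() only sets it on overflow:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Parse a whole decimal string into *pOut.  Return 1 on success, 0 if
** the string is empty, has trailing junk, or overflows a long.
** (Hypothetical helper illustrating the idiom; not Fossil code.) */
static int parse_int(const char *z, long *pOut){
  char *zEnd;
  long v;
  errno = 0;                  /* strtol sets errno only on failure */
  v = strtol(z, &zEnd, 10);
  if( zEnd==z || zEnd[0]!=0 ) return 0;  /* empty input or trailing junk */
  if( errno==ERANGE ) return 0;          /* value out of range for long */
  *pOut = v;
  return 1;
}
```

Checking the end pointer rather than using atoi() is what distinguishes "-3" (valid, negative context) from "3x" (rejected).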
︙
@@ 3482-3488 → 3496-3511 @@
     }
     p->nVers++;
     cnt++;
   }

   if( p->nVers==0 ){
     if( zRevision ){
+      fossil_fatal("file %s does not exist in check-in %s",
+                   zFilename, zRevision);
     }else{
       fossil_fatal("no history for file: %s", zFilename);
     }
   }
   db_finalize(&q);
   db_end_transaction(0);
︙
Changes to src/diff.tcl.
@@ 1-12 → 1-20 @@
 # The "diff --tk" command outputs prepends a "set fossilcmd {...}" line
 # to this file, then runs this file using "tclsh" in order to display the
 # graphical diff in a separate window.  A typical "set fossilcmd" line
 # looks like this:
 #
 #     set fossilcmd {| "./fossil" diff --html -y -i -v}
 #
 # This header comment is stripped off by the "mkbuiltin.c" program.
 #
 set prog {
 package require Tk

+array set CFG_light {
   TITLE      {Fossil Diff}
   LN_COL_BG  #dddddd
   LN_COL_FG  #444444
   TXT_COL_BG #ffffff
   TXT_COL_FG #000000
   MKR_COL_BG #444444
   MKR_COL_FG #dddddd
︙
@@ 30-43 → 30-74 @@
   ERR_FG     #ee0000
   PADX       5
   WIDTH      80
   HEIGHT     45
   LB_HEIGHT  25
 }
+
+array set CFG_dark {
+  TITLE      {Fossil Diff}
+  LN_COL_BG  #dddddd
+  LN_COL_FG  #444444
+  TXT_COL_BG #3f3f3f
+  TXT_COL_FG #dcdccc
+  MKR_COL_BG #444444
+  MKR_COL_FG #dddddd
+  CHNG_BG    #6a6afc
+  ADD_BG     #57934c
+  RM_BG      #ef6767
+  HR_FG      #444444
+  HR_PAD_TOP 4
+  HR_PAD_BTM 8
+  FN_BG      #5e5e5e
+  FN_FG      #ffffff
+  FN_PAD     5
+  ERR_FG     #ee0000
+  PADX       5
+  WIDTH      80
+  HEIGHT     45
+  LB_HEIGHT  25
+}
+
+array set CFG_arr {
+  0  CFG_light
+  1  CFG_dark
+}
+array set CFG [array get $CFG_arr($darkmode)]

 if {![namespace exists ttk]} {
   interp alias {} ::ttk::scrollbar {} ::scrollbar
   interp alias {} ::ttk::menubutton {} ::menubutton
 }

 proc dehtml {x} {
   set x [regsub -all {<[^>]*>} $x {}]
︙
Changes to src/diffcmd.c.
︙
@@ 111-117 → 111-125 @@
   }
   return 0;
 }

 /*
 ** Print details about the compared versions - possibly the working directory
 ** or the undo buffer. For check-ins, show hash and commit time.
+**
+** This is intended primarily to go into the "header garbage" that is ignored
+** by patch(1).
 **
 ** zFrom and zTo are interpreted as symbolic version names, unless they
 ** start with '(', in which case they are printed directly.
 */
 void diff_print_versions(const char *zFrom, const char *zTo, DiffConfig *pCfg){
︙
@@ 159-172 → 159-175 @@
 void diff_print_filenames(
   const char *zLeft,     /* Name of the left file */
   const char *zRight,    /* Name of the right file */
   DiffConfig *pCfg,      /* Diff configuration */
   Blob *pOut             /* Write to this blob, or stdout of this is NULL */
 ){
   u64 diffFlags = pCfg->diffFlags;
+  /* Standardize on /dev/null, regardless of platform. */
+  if( pCfg->diffFlags & DIFF_FILE_ADDED ) zLeft = "/dev/null";
+  if( pCfg->diffFlags & DIFF_FILE_DELETED ) zRight = "/dev/null";
   if( diffFlags & (DIFF_BRIEF|DIFF_RAW) ){
     /* no-op */
   }else if( diffFlags & DIFF_DEBUG ){
     blob_appendf(pOut, "FILE-LEFT %s\nFILE-RIGHT %s\n", zLeft, zRight);
   }else if( diffFlags & DIFF_WEBPAGE ){
     if( fossil_strcmp(zLeft,zRight)==0 ){
       blob_appendf(pOut,"<h1>%h</h1>\n", zLeft);
︙
@@ 211-217 → 214-230 @@
   }else{
     blob_appendf(pOut, "--- %s\n+++ %s\n", zLeft, zRight);
   }
 }

 /*
+** Default header texts for diff with --webpage
+*/
 static const char zWebpageHdr[] =
 @ <!DOCTYPE html>
 @ <html>
 @ <head>
 @ <meta charset="UTF-8">
 @ <style>
 @ body {
 @   background-color: white;
︙
@@ 308-314 → 311-465 @@
 @ td.difftxt ins > ins.edit {
 @   background-color: #c0c0ff;
 @   text-decoration: none;
 @   font-weight: bold;
 @ }
+@ @media (prefers-color-scheme: dark) {
+@   body {
+@     background-color: #353535;
+@     color: #ffffff;
+@   }
+@   td.diffln ins {
+@     background-color: #559855;
+@     color: #000000;
+@   }
+@   td.diffln del {
+@     background-color: #cc5555;
+@     color: #000000;
+@   }
+@   td.difftxt del {
+@     background-color: #f9cfcf;
+@     color: #000000;
+@   }
+@   td.difftxt del > del {
+@     background-color: #cc5555;
+@     color: #000000;
+@   }
+@   td.difftxt ins {
+@     background-color: #a2dbb2;
+@     color: #000000;
+@   }
+@   td.difftxt ins > ins {
+@     background-color: #559855;
+@   }
+@ }
+@
 @ </style>
 @ </head>
 @ <body>
 ;
+static const char zWebpageHdrDark[] =
+@ <!DOCTYPE html>
+@ <html>
+@ <head>
+@ <meta charset="UTF-8">
+@ <style>
+@ body {
+@   background-color: #353535;
+@   color: #ffffff;
+@ }
+@ h1 {
+@   font-size: 150%;
+@ }
+@
+@ table.diff {
+@   width: 100%;
+@   border-spacing: 0;
+@   border: 1px solid black;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ table.diff td {
+@   vertical-align: top;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ table.diff pre {
+@   margin: 0 0 0 0;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.diffln {
+@   width: 1px;
+@   text-align: right;
+@   padding: 0 1em 0 0;
+@ }
+@ td.difflne {
+@   padding-bottom: 0.4em;
+@ }
+@ td.diffsep {
+@   width: 1px;
+@   padding: 0 0.3em 0 1em;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.diffsep pre {
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.difftxt pre {
+@   overflow-x: auto;
+@ }
+@ td.diffln ins {
+@   background-color: #559855;
+@   color: #000000;
+@   text-decoration: none;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.diffln del {
+@   background-color: #cc5555;
+@   color: #000000;
+@   text-decoration: none;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.difftxt del {
+@   background-color: #f9cfcf;
+@   color: #000000;
+@   text-decoration: none;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.difftxt del > del {
+@   background-color: #cc5555;
+@   color: #000000;
+@   text-decoration: none;
+@   font-weight: bold;
+@ }
+@ td.difftxt del > del.edit {
+@   background-color: #c0c0ff;
+@   text-decoration: none;
+@   font-weight: bold;
+@ }
+@ td.difftxt ins {
+@   background-color: #a2dbb2;
+@   color: #000000;
+@   text-decoration: none;
+@   line-height: inherit;
+@   font-size: inherit;
+@ }
+@ td.difftxt ins > ins {
+@   background-color: #559855;
+@   text-decoration: none;
+@   font-weight: bold;
+@ }
+@ td.difftxt ins > ins.edit {
+@   background-color: #c0c0ff;
+@   text-decoration: none;
+@   font-weight: bold;
+@ }
+@
+@ </style>
+@ </head>
+@ <body>
+;
 const char zWebpageEnd[] =
 @ </body>
 @ </html>
 ;

 /*
 ** State variables used by the --browser option for diff.  These must
 ** be static variables, not elements of DiffConfig, since they are
︙
@@ 376-382 → 514-537 @@
 #ifndef _WIN32
     signal(SIGINT, diff_www_interrupt);
 #else
     SetConsoleCtrlHandler(diff_console_ctrl_handler, TRUE);
 #endif
   }
   if( (pCfg->diffFlags & DIFF_WEBPAGE)!=0 ){
+    fossil_print("%s",(pCfg->diffFlags & DIFF_DARKMODE)!=0 ?
+                 zWebpageHdrDark : zWebpageHdr);
     fflush(stdout);
   }
 }

 /* Do any final output required by a diff and complete the diff
 ** process.
 **
 ** For --browser and --webpage, output any javascript required by
 ** the diff.  (Currently JS is only needed for side-by-side diffs).
 **
 ** For --browser, close the connection to the temporary file, then
 ** launch a web browser to view the file.  After a delay
 ** of FOSSIL_BROWSER_DIFF_DELAY milliseconds, delete the temp file.
 */
 void diff_end(DiffConfig *pCfg, int nErr){
︙
@@ 436-442 → 575-589 @@
   if( pCfg->zDiffCmd==0 ){
     Blob out;            /* Diff output text */
     Blob file2;          /* Content of zFile2 */
     const char *zName2;  /* Name of zFile2 for display */

     /* Read content of zFile2 into memory */
     blob_zero(&file2);
+    if( pCfg->diffFlags & DIFF_FILE_DELETED || file_size(zFile2, ExtFILE)<0 ){
       zName2 = NULL_DEVICE;
     }else{
       blob_read_from_file(&file2, zFile2, ExtFILE);
       zName2 = zName;
     }

     /* Compute and output the differences */
︙
@@ 467-480 → 606-620 @@
     }

     /* Release memory resources */
     blob_reset(&file2);
   }else{
     Blob nameFile1;    /* Name of temporary file to old pFile1 content */
     Blob cmd;          /* Text of command to run */
+    int useTempfile = 1;

     if( (pCfg->diffFlags & DIFF_INCBINARY)==0 ){
       Blob file2;
       if( looks_like_binary(pFile1) ){
         fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY);
         return;
       }
︙
@@ 498-504 → 638-678 @@
       }
       blob_reset(&file2);
     }

     /* Construct a temporary file to hold pFile1 based on the name of
     ** zFile2 */
     file_tempname(&nameFile1, zFile2, "orig");
+#if !defined(_WIN32)
+    /* On Unix, use /dev/null for added or deleted files. */
+    if( pCfg->diffFlags & DIFF_FILE_ADDED ){
+      blob_init(&nameFile1, NULL_DEVICE, -1);
+      useTempfile = 0;
+    }else if( pCfg->diffFlags & DIFF_FILE_DELETED ){
+      zFile2 = NULL_DEVICE;
+    }
+#endif
+    if( useTempfile ) blob_write_to_file(pFile1, blob_str(&nameFile1));

     /* Construct the external diff command */
     blob_zero(&cmd);
     blob_append(&cmd, pCfg->zDiffCmd, -1);
     if( pCfg->diffFlags & DIFF_INVERT ){
       blob_append_escaped_arg(&cmd, zFile2, 1);
       blob_append_escaped_arg(&cmd, blob_str(&nameFile1), 1);
     }else{
       blob_append_escaped_arg(&cmd, blob_str(&nameFile1), 1);
       blob_append_escaped_arg(&cmd, zFile2, 1);
     }

     /* Run the external diff command */
     fossil_system(blob_str(&cmd));

     /* Delete the temporary file and clean up memory used */
+    if( useTempfile ) file_delete(blob_str(&nameFile1));
     blob_reset(&nameFile1);
     blob_reset(&cmd);
   }
 }

 /*
 ** Show the difference between two files, both in memory.
︙
@@ 559-582 → 708-768 @@
     /* Release memory resources */
     blob_reset(&out);
   }else{
     Blob cmd;
     Blob temp1;
     Blob temp2;
+    int useTempfile1 = 1;
+    int useTempfile2 = 1;

     if( (pCfg->diffFlags & DIFF_INCBINARY)==0 ){
       if( looks_like_binary(pFile1) || looks_like_binary(pFile2) ){
         fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY);
         return;
       }
       if( pCfg->zBinGlob ){
         Glob *pBinary = glob_create(pCfg->zBinGlob);
         if( glob_match(pBinary, zName) ){
           fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY);
           glob_free(pBinary);
           return;
         }
         glob_free(pBinary);
       }
     }

     /* Construct temporary file names */
     file_tempname(&temp1, zName, "before");
     file_tempname(&temp2, zName, "after");
+#if !defined(_WIN32)
+    /* On Unix, use /dev/null for added or deleted files. */
+    if( pCfg->diffFlags & DIFF_FILE_ADDED ){
+      useTempfile1 = 0;
+      blob_init(&temp1, NULL_DEVICE, -1);
+    }else if( pCfg->diffFlags & DIFF_FILE_DELETED ){
+      useTempfile2 = 0;
+      blob_init(&temp2, NULL_DEVICE, -1);
+    }
+#endif
+    if( useTempfile1 ) blob_write_to_file(pFile1, blob_str(&temp1));
+    if( useTempfile2 ) blob_write_to_file(pFile2, blob_str(&temp2));

     /* Construct the external diff command */
     blob_zero(&cmd);
     blob_append(&cmd, pCfg->zDiffCmd, -1);
     blob_append_escaped_arg(&cmd, blob_str(&temp1), 1);
     blob_append_escaped_arg(&cmd, blob_str(&temp2), 1);

     /* Run the external diff command */
     fossil_system(blob_str(&cmd));

     /* Delete the temporary file and clean up memory used */
+    if( useTempfile1 ) file_delete(blob_str(&temp1));
+    if( useTempfile2 ) file_delete(blob_str(&temp2));
     blob_reset(&temp1);
     blob_reset(&temp2);
     blob_reset(&cmd);
   }
 }
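Both external-diff paths above substitute the platform null device for the missing side of an added or deleted file instead of writing an empty temporary file. A minimal sketch of that choice; left_path() and the FILE_ADDED value are hypothetical stand-ins (Fossil itself uses its NULL_DEVICE macro and the DIFF_FILE_* flags):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for Fossil's DIFF_FILE_ADDED bit */
#define FILE_ADDED 0x40000000u

/* Pick the left-hand path for an external diff: an added file has no
** old version, so it is compared against the null device rather than
** a temp file holding empty content. */
static const char *left_path(unsigned int flags, const char *zTempFile){
#if defined(_WIN32)
  const char *zNull = "NUL";        /* Windows null device */
#else
  const char *zNull = "/dev/null";  /* Unix null device */
#endif
  return (flags & FILE_ADDED) ? zNull : zTempFile;
}
```

Skipping the temp file also means there is nothing to delete afterwards, which is why the code above guards both blob_write_to_file() and file_delete() with the same useTempfile flag.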
︙
@@ 712-750 → 873-927 @@
       blob_zero(&fname);
       file_relative_name(zPathname, &fname, 1);
     }else{
       blob_set(&fname, g.zLocalRoot);
       blob_append(&fname, zPathname, -1);
     }
     zFullName = blob_str(&fname);
+    pCfg->diffFlags &= (~DIFF_FILE_MASK);
     if( isDeleted ){
       if( !isNumStat ){
         fossil_print("DELETED  %s\n", zPathname);
       }
+      pCfg->diffFlags |= DIFF_FILE_DELETED;
       if( !asNewFile ){
         showDiff = 0;
         zFullName = NULL_DEVICE;
       }
     }else if( file_access(zFullName, F_OK) ){
       if( !isNumStat ){
         fossil_print("MISSING  %s\n", zPathname);
       }
       if( !asNewFile ){
         showDiff = 0;
       }
     }else if( isNew ){
       if( !isNumStat ){
         fossil_print("ADDED    %s\n", zPathname);
       }
+      pCfg->diffFlags |= DIFF_FILE_ADDED;
       srcid = 0;
       if( !asNewFile ){
         showDiff = 0;
       }
     }else if( isChnged==3 ){
       if( !isNumStat ){
         fossil_print("ADDED_BY_MERGE %s\n", zPathname);
       }
+      pCfg->diffFlags |= DIFF_FILE_ADDED;
       srcid = 0;
       if( !asNewFile ){
         showDiff = 0;
       }
     }else if( isChnged==5 ){
       if( !isNumStat ){
         fossil_print("ADDED_BY_INTEGRATE %s\n", zPathname);
       }
+      pCfg->diffFlags |= DIFF_FILE_ADDED;
       srcid = 0;
       if( !asNewFile ){
         showDiff = 0;
       }
     }
     if( showDiff ){
       Blob content;
       if( !isLink != !file_islink(zFullName) ){
         diff_print_index(zPathname, pCfg, 0);
         diff_print_filenames(zPathname, zPathname, pCfg, 0);
         fossil_print("%s",DIFF_CANNOT_COMPUTE_SYMLINK);
         continue;
       }
       if( srcid>0 ){
         content_get(srcid, &content);
       }else{
         blob_zero(&content);
       }
+      if( isChnged==0 || pCfg->diffFlags & DIFF_FILE_DELETED
+       || !file_same_as_blob(&content, zFullName) ){
         diff_print_index(zPathname, pCfg, pOut);
         diff_file(&content, zFullName, zPathname, pCfg, pOut);
       }
       blob_reset(&content);
     }
     blob_reset(&fname);
   }
︙
@@ 776-782 → 945-959 @@
 ){
   Stmt q;
   Blob content;
   db_prepare(&q, "SELECT pathname, content FROM undo");
   blob_init(&content, 0, 0);
   if( (pCfg->diffFlags & DIFF_SHOW_VERS)!=0 ){
     diff_print_versions("(undo)", "(workdir)", pCfg);
+  }
   while( db_step(&q)==SQLITE_ROW ){
     char *zFullName;
     const char *zFile = (const char*)db_column_text(&q, 0);
     if( !file_dir_match(pFileDir, zFile) ) continue;
     zFullName = mprintf("%s%s", g.zLocalRoot, zFile);
     db_column_blob(&q, 1, &content);
     diff_file(&content, zFullName, zFile, pCfg, 0);
︙
@@ 863-869 → 1032-1074 @@
   manifest_file_rewind(pFrom);
   pFromFile = manifest_file_next(pFrom,0);
   pTo = manifest_get_by_name(zTo, 0);
   manifest_file_rewind(pTo);
   pToFile = manifest_file_next(pTo,0);
   if( (pCfg->diffFlags & DIFF_SHOW_VERS)!=0 ){
     diff_print_versions(zFrom, zTo, pCfg);
+  }
   while( pFromFile || pToFile ){
     int cmp;
     if( pFromFile==0 ){
       cmp = +1;
     }else if( pToFile==0 ){
       cmp = -1;
     }else{
       cmp = fossil_strcmp(pFromFile->zName, pToFile->zName);
     }
+    pCfg->diffFlags &= (~DIFF_FILE_MASK);
     if( cmp<0 ){
       if( file_dir_match(pFileDir, pFromFile->zName) ){
         if( (pCfg->diffFlags & (DIFF_NUMSTAT|DIFF_HTML))==0 ){
           fossil_print("DELETED %s\n", pFromFile->zName);
         }
+        pCfg->diffFlags |= DIFF_FILE_DELETED;
         if( asNewFlag ){
           diff_manifest_entry(pFromFile, 0, pCfg);
         }
       }
       pFromFile = manifest_file_next(pFrom,0);
     }else if( cmp>0 ){
       if( file_dir_match(pFileDir, pToFile->zName) ){
         if( (pCfg->diffFlags & (DIFF_NUMSTAT|DIFF_HTML|DIFF_TCL|DIFF_JSON))==0 ){
           fossil_print("ADDED   %s\n", pToFile->zName);
         }
+        pCfg->diffFlags |= DIFF_FILE_ADDED;
         if( asNewFlag ){
           diff_manifest_entry(0, pToFile, pCfg);
         }
       }
       pToFile = manifest_file_next(pTo,0);
     }else if( fossil_strcmp(pFromFile->zUuid, pToFile->zUuid)==0 ){
       /* No changes */
︙
@@ 954-967 → 1126-1140 @@
 */
 void diff_tk(const char *zSubCmd, int firstArg){
   int i;
   Blob script;
   const char *zTempFile = 0;
   char *zCmd;
   const char *zTclsh;
+  int bDarkMode = find_option("dark",0,0)!=0;
   blob_zero(&script);
   blob_appendf(&script, "set fossilcmd {| \"%/\" %s -tcl -i -v",
                g.nameOfExe, zSubCmd);
   find_option("tcl",0,0);
   find_option("html",0,0);
   find_option("side-by-side","y",0);
   find_option("internal","i",0);
︙ | ︙ | |||
980 981 982 983 984 985 986 | blob_appendf(&script, " {%/}", z); }else{ int j; blob_append(&script, " ", 1); for(j=0; z[j]; j++) blob_appendf(&script, "\\%03o", (unsigned char)z[j]); } } | > | | 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 | blob_appendf(&script, " {%/}", z); }else{ int j; blob_append(&script, " ", 1); for(j=0; z[j]; j++) blob_appendf(&script, "\\%03o", (unsigned char)z[j]); } } blob_appendf(&script, "}\nset darkmode %d\n", bDarkMode); blob_appendf(&script, "%s", builtin_file("diff.tcl", 0)); if( zTempFile ){ blob_write_to_file(&script, zTempFile); fossil_print("To see diff, run: %s \"%s\"\n", zTclsh, zTempFile); }else{ #if defined(FOSSIL_ENABLE_TCL) Th_FossilInit(TH_INIT_DEFAULT); if( evaluateTclWithEvents(g.interp, &g.tcl, blob_str(&script), |
︙ | ︙ | |||
1033 1034 1035 1036 1037 1038 1039 | ** out. Or if the FILE arguments are omitted, show all unsaved changes ** currently in the working check-out. ** ** The default output format is a "unified patch" (the same as the ** output of "diff -u" on most unix systems). Many alternative formats ** are available. A few of the more useful alternatives: ** | | | 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 | ** out. Or if the FILE arguments are omitted, show all unsaved changes ** currently in the working check-out. ** ** The default output format is a "unified patch" (the same as the ** output of "diff -u" on most unix systems). Many alternative formats ** are available. A few of the more useful alternatives: ** ** --tk Pop up a Tcl/Tk-based GUI to show the diff ** --by Show a side-by-side diff in the default web browser ** -b Show a linear diff in the default web browser ** -y Show a text side-by-side diff ** --webpage Format output as HTML ** --webpage -y HTML output in the side-by-side format ** ** The "--from VERSION" option is used to specify the source check-in |
︙ | ︙ | |||
1074 1075 1076 1077 1078 1079 1080 | ** as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** -b|--browser Show the diff output in a web-browser ** --by Shorthand for "--browser -y" ** -ci|--checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program. Overrides "diff-command" | | | > > > | | | 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 | ** as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** -b|--browser Show the diff output in a web-browser ** --by Shorthand for "--browser -y" ** -ci|--checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program. Overrides "diff-command" ** -c|--context N Show N lines of context around each change, ** with negative N meaning show all content ** --dark Use dark mode for the Tcl/Tk-based GUI and HTML ** --diff-binary BOOL Include binary files with external commands ** --exec-abs-paths Force absolute path names on external commands ** --exec-rel-paths Force relative path names on external commands ** -r|--from VERSION Select VERSION as source for the diff ** -w|--ignore-all-space Ignore white space when comparing lines ** -i|--internal Use internal diff logic ** --invert Invert the diff ** --json Output formatted as JSON ** -n|--linenum Show line numbers ** -N|--new-file Alias for --verbose ** --numstat Show only the number of added and deleted lines ** -y|--side-by-side Side-by-side diff ** --strip-trailing-cr Strip trailing CR ** --tcl Tcl-formatted output used internally by --tk ** --tclsh PATH Tcl/Tk shell used for --tk (default: "tclsh") ** --tk Launch a Tcl/Tk GUI for display ** --to VERSION Select VERSION as target for the diff ** --undo Diff against the "undo" buffer ** --unified Unified diff ** -v|--verbose Output complete text of added or deleted files ** 
-h|--versions Show compared versions in the diff header ** --webpage Format output as a stand-alone HTML webpage |
︙ | ︙ | |||
1204 1205 1206 1207 1208 1209 1210 | } fossil_free(pFileDir[i].zName); } fossil_free(pFileDir); } diff_end(&DCfg, 0); if ( DCfg.diffFlags & DIFF_NUMSTAT ){ | | | 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 | } fossil_free(pFileDir[i].zName); } fossil_free(pFileDir); } diff_end(&DCfg, 0); if ( DCfg.diffFlags & DIFF_NUMSTAT ){ fossil_print("%10d %10d TOTAL over %d changed files\n", g.diffCnt[1], g.diffCnt[2], g.diffCnt[0]); } } /* ** WEBPAGE: vpatch ** URL: /vpatch?from=FROM&to=TO |
︙ | ︙ |
Changes to src/dispatch.c.
︙ | ︙ | |||
451 452 453 454 455 456 457 | aIndent[iLevel] = nIndent; azEnd[iLevel] = zEndUL; if( wantP ){ blob_append(pHtml,"<p>", 3); wantP = 0; } blob_append(pHtml, "<ul>\n", 5); | | | 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | aIndent[iLevel] = nIndent; azEnd[iLevel] = zEndUL; if( wantP ){ blob_append(pHtml,"<p>", 3); wantP = 0; } blob_append(pHtml, "<ul>\n", 5); }else if( isDT || zHelp[nIndent]=='-' || hasGap(zHelp+nIndent,i-nIndent) ){ iLevel++; aIndent[iLevel] = nIndent; azEnd[iLevel] = zEndDL; wantP = 0; blob_append(pHtml, "<blockquote><dl>\n", -1); |
︙ | ︙ | |||
545 546 547 548 549 550 551 | if( c=='[' && (x = help_is_link(zHelp+i, 100000))!=0 ){ if( i>0 ) blob_append(pText, zHelp, i); zHelp += i+2; blob_append(pText, zHelp, x-3); zHelp += x-1; i = -1; continue; | | | | 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 | if( c=='[' && (x = help_is_link(zHelp+i, 100000))!=0 ){ if( i>0 ) blob_append(pText, zHelp, i); zHelp += i+2; blob_append(pText, zHelp, x-3); zHelp += x-1; i = -1; continue; } } if( i>0 ){ blob_append(pText, zHelp, i); } } /* ** Display help for all commands based on provided flags. */ static void display_all_help(int mask, int useHtml, int rawOut){ int i; |
︙ | ︙ | |||
633 634 635 636 637 638 639 | ** ** Show help text for commands and pages. Useful for proof-reading. ** Defaults to just the CLI commands. Specify --www to see only the ** web pages, or --everything to see both commands and pages. ** ** Options: ** -a|--aliases Show aliases | | | 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 | ** ** Show help text for commands and pages. Useful for proof-reading. ** Defaults to just the CLI commands. Specify --www to see only the ** web pages, or --everything to see both commands and pages. ** ** Options: ** -a|--aliases Show aliases ** -e|--everything Show all commands and pages. Omit aliases to ** avoid duplicates. ** -h|--html Transform output to HTML ** -o|--options Show global options ** -r|--raw No output formatting ** -s|--settings Show settings ** -t|--test Include test- commands ** -w|--www Show WWW pages |
︙ | ︙ | |||
659 660 661 662 663 664 665 | CMDFLAG_ALIAS | CMDFLAG_SETTING | CMDFLAG_TEST; } if( find_option("settings","s",0) ){ mask = CMDFLAG_SETTING; } if( find_option("aliases","a",0) ){ mask = CMDFLAG_ALIAS; | | | 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 | CMDFLAG_ALIAS | CMDFLAG_SETTING | CMDFLAG_TEST; } if( find_option("settings","s",0) ){ mask = CMDFLAG_SETTING; } if( find_option("aliases","a",0) ){ mask = CMDFLAG_ALIAS; } if( find_option("test","t",0) ){ mask |= CMDFLAG_TEST; } display_all_help(mask, useHtml, rawOut); } /* |
︙ | ︙ | |||
766 767 768 769 770 771 772 | iLast = FOSSIL_FIRST_CMD-1; }else{ iFirst = FOSSIL_FIRST_CMD; iLast = MX_COMMAND-1; } while( n<nArray ){ | | | 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 | iLast = FOSSIL_FIRST_CMD-1; }else{ iFirst = FOSSIL_FIRST_CMD; iLast = MX_COMMAND-1; } while( n<nArray ){ bestScore = mxScore; for(i=iFirst; i<=iLast; i++){ m = edit_distance(zIn, aCommand[i].zName); if( m<mnScore ) continue; if( m==mnScore ){ azArray[n++] = aCommand[i].zName; if( n>=nArray ) return n; }else if( m<bestScore ){ |
︙ | ︙ | |||
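The suggestion loop above ranks candidate command names with an internal edit_distance() helper. As a rough illustration of the underlying idea, here is a classic single-row Levenshtein distance in C; this is a generic sketch, not Fossil's actual edit_distance(), which — judging from the bestScore/mnScore comparisons above — scores similarity rather than raw distance:

```c
#include <assert.h>
#include <string.h>

/* Classic Levenshtein edit distance using one rolling DP row.
** A generic sketch of the kind of metric edit_distance() provides;
** Fossil's real helper may weight operations differently. */
static int edit_distance_sketch(const char *a, const char *b){
  int na = (int)strlen(a), nb = (int)strlen(b);
  int row[128];                 /* sketch assumes strlen(b) < 128 */
  int i, j;
  for(j=0; j<=nb; j++) row[j] = j;
  for(i=1; i<=na; i++){
    int prevDiag = row[0];      /* D[i-1][j-1] at the start of the row */
    row[0] = i;
    for(j=1; j<=nb; j++){
      int del = row[j] + 1;                    /* delete a[i-1] */
      int ins = row[j-1] + 1;                  /* insert b[j-1] */
      int sub = prevDiag + (a[i-1]!=b[j-1]);   /* substitute/match */
      prevDiag = row[j];
      row[j] = del<ins ? del : ins;
      if( sub<row[j] ) row[j] = sub;
    }
  }
  return row[nb];
}
```

A caller building "did you mean" output would keep the few closest names, much as the loop above fills azArray[].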
895 896 897 898 899 900 901 | @ <li><a href="%R/help?cmd=%s(z)">%s(zBoldOn)%s(z)%s(zBoldOff)</a> /* Output aliases */ if( occHelp[aCommand[i].iHelp] > 1 ){ int j; int aliases[MX_HELP_DUP], nAliases=0; for(j=0; j<occHelp[aCommand[i].iHelp]; j++){ if( bktHelp[aCommand[i].iHelp][j] != i ){ | | > | 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 | @ <li><a href="%R/help?cmd=%s(z)">%s(zBoldOn)%s(z)%s(zBoldOff)</a> /* Output aliases */ if( occHelp[aCommand[i].iHelp] > 1 ){ int j; int aliases[MX_HELP_DUP], nAliases=0; for(j=0; j<occHelp[aCommand[i].iHelp]; j++){ if( bktHelp[aCommand[i].iHelp][j] != i ){ if( aCommand[bktHelp[aCommand[i].iHelp][j]].eCmdFlags & CMDFLAG_ALIAS ){ aliases[nAliases++] = bktHelp[aCommand[i].iHelp][j]; } } } if( nAliases>0 ){ int k; @(\ |
︙ | ︙ | |||
985 986 987 988 989 990 991 | style_set_current_feature("test"); style_header("All Help Text"); @ <dl> /* Fill in help string buckets */ for(i=0; i<MX_COMMAND; i++){ if(aCommand[i].eCmdFlags & CMDFLAG_HIDDEN) continue; bktHelp[aCommand[i].iHelp][occHelp[aCommand[i].iHelp]++] = i; | | | 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 | style_set_current_feature("test"); style_header("All Help Text"); @ <dl> /* Fill in help string buckets */ for(i=0; i<MX_COMMAND; i++){ if(aCommand[i].eCmdFlags & CMDFLAG_HIDDEN) continue; bktHelp[aCommand[i].iHelp][occHelp[aCommand[i].iHelp]++] = i; } for(i=0; i<MX_COMMAND; i++){ const char *zDesc; unsigned int e = aCommand[i].eCmdFlags; if( e & CMDFLAG_1ST_TIER ){ zDesc = "1st tier command"; }else if( e & CMDFLAG_2ND_TIER ){ zDesc = "2nd tier command"; |
︙ | ︙ | |||
1037 1038 1039 1040 1041 1042 1043 | }else if( e & CMDFLAG_WEBPAGE ){ if( e & CMDFLAG_RAWCONTENT ){ zDesc = "raw-content web page"; }else{ zDesc = "web page"; } } | | | 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 | }else if( e & CMDFLAG_WEBPAGE ){ if( e & CMDFLAG_RAWCONTENT ){ zDesc = "raw-content web page"; }else{ zDesc = "web page"; } } @ <dt><big><b>%s(aCommand[bktHelp[aCommand[i].iHelp][j]].zName)</b> @</big> (%s(zDesc))</dt> } @ <p><dd> help_to_html(aCommand[i].zHelp, cgi_output_blob()); @ </dd><p> occHelp[aCommand[i].iHelp] = 0; |
︙ | ︙ | |||
1116 1117 1118 1119 1120 1121 1122 | /* ** Documentation on universal command-line options. */ /* @-comment: # */ static const char zOptions[] = @ Command-line options common to all commands: | | | 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 | /* ** Documentation on universal command-line options. */ /* @-comment: # */ static const char zOptions[] = @ Command-line options common to all commands: @ @ --args FILENAME Read additional arguments and options from FILENAME @ --case-sensitive BOOL Set case sensitivity for file names @ --cgitrace Activate CGI tracing @ --chdir PATH Change to PATH before performing any operations @ --comfmtflags VALUE Set comment formatting flags to VALUE @ --comment-format VALUE Alias for --comfmtflags @ --errorlog FILENAME Log errors to FILENAME @ --help Show help on the command rather than running it @ --httptrace Trace outbound HTTP requests @ --localtime Display times using the local timezone @ --nocgi Do not act as CGI @ --no-th-hook Do not run TH1 hooks @ --quiet Reduce the amount of output @ --sqlstats Show SQL usage statistics when done
︙ | ︙ | |||
1485 1486 1487 1488 1489 1490 1491 | helptextVtab_cursor *pCur = (helptextVtab_cursor*)cur; return pCur->iRowid>=MX_COMMAND; } /* ** This method is called to "rewind" the helptextVtab_cursor object back ** to the first row of output. This method is always called at least | | | | 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 | helptextVtab_cursor *pCur = (helptextVtab_cursor*)cur; return pCur->iRowid>=MX_COMMAND; } /* ** This method is called to "rewind" the helptextVtab_cursor object back ** to the first row of output. This method is always called at least ** once prior to any call to helptextVtabColumn() or helptextVtabRowid() or ** helptextVtabEof(). */ static int helptextVtabFilter( sqlite3_vtab_cursor *pVtabCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ helptextVtab_cursor *pCur = (helptextVtab_cursor *)pVtabCursor; pCur->iRowid = 1; return SQLITE_OK; } |
︙ | ︙ | |||
1514 1515 1516 1517 1518 1519 1520 | ){ pIdxInfo->estimatedCost = (double)MX_COMMAND; pIdxInfo->estimatedRows = MX_COMMAND; return SQLITE_OK; } /* | | 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 | ){ pIdxInfo->estimatedCost = (double)MX_COMMAND; pIdxInfo->estimatedRows = MX_COMMAND; return SQLITE_OK; } /* ** The following structure defines all the methods for the ** virtual table. */ static sqlite3_module helptextVtabModule = { /* iVersion */ 0, /* xCreate */ 0, /* Helptext is eponymous and read-only */ /* xConnect */ helptextVtabConnect, /* xBestIndex */ helptextVtabBestIndex,
︙ | ︙ | |||
1541 1542 1543 1544 1545 1546 1547 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, | | > | 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0, /* xIntegrity */ 0 }; /* ** Register the helptext virtual table */ int helptext_vtab_register(sqlite3 *db){ int rc = sqlite3_create_module(db, "helptext", &helptextVtabModule, 0); return rc; } /* End of the helptext virtual table ******************************************************************************/ |
Changes to src/doc.c.
︙ | ︙ | |||
339 340 341 342 343 344 345 | static char * zList = 0; static char const * zEnd = 0; static int once = 0; char * z; int tokenizerState /* 0=expecting a key, 1=skip next token, ** 2=accept next token */; if(once==0){ | | | 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | static char * zList = 0; static char const * zEnd = 0; static int once = 0; char * z; int tokenizerState /* 0=expecting a key, 1=skip next token, ** 2=accept next token */; if(once==0){ once = 1; zList = db_get("mimetypes",0); if(zList==0){ return 0; } /* Transform zList to simplify the main loop: replace non-newline spaces with NUL bytes. */ zEnd = zList + strlen(zList); |
︙ | ︙ | |||
727 728 729 730 731 732 733 | ** Transfer content to the output. During the transfer, when text of ** the following form is seen: ** ** href="$ROOT/..." ** action="$ROOT/..." ** href=".../doc/$CURRENT/..." ** | | | | 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 | ** Transfer content to the output. During the transfer, when text of ** the following form is seen: ** ** href="$ROOT/..." ** action="$ROOT/..." ** href=".../doc/$CURRENT/..." ** ** Convert $ROOT to the root URI of the repository, and $CURRENT to the ** version number of the /doc/ document currently being displayed (if any). ** Allow ' in place of " and any case for href or action. ** ** Efforts are made to limit this translation to cases where the text is ** fully contained within an HTML markup element. */ void convert_href_and_output(Blob *pIn){ int i, base; int n = blob_size(pIn);
︙ | ︙ | |||
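convert_href_and_output() above substitutes $ROOT (and $CURRENT) inside href/action attributes. The core of that translation can be sketched as a plain string replace; the real routine additionally restricts itself to quoted attribute values, accepts either quote style, and matches "href"/"action" case-insensitively:

```c
#include <assert.h>
#include <string.h>

/* Replace every occurrence of "$ROOT" in zIn with zRoot, writing the
** result into zOut (capacity nOut, always NUL-terminated).  A
** simplified model of the substitution convert_href_and_output()
** performs inside href= and action= attributes. */
static void expand_root(const char *zIn, const char *zRoot,
                        char *zOut, size_t nOut){
  size_t used = 0;
  while( *zIn && used+1<nOut ){
    if( strncmp(zIn, "$ROOT", 5)==0 ){
      size_t n = strlen(zRoot);
      if( used+n+1>nOut ) break;      /* not enough room: stop early */
      memcpy(zOut+used, zRoot, n);
      used += n;
      zIn += 5;
    }else{
      zOut[used++] = *zIn++;
    }
  }
  zOut[used] = 0;
}
```

For example, with a repository served at /fossil, href="$ROOT/timeline" becomes href="/fossil/timeline".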
828 829 830 831 832 833 834 | convert_href_and_output(pBody); if( !isPopup ){ document_emit_js(); style_finish_page(); } }else if( fossil_strcmp(zMime, "text/x-pikchr")==0 ){ style_adunit_config(ADUNIT_RIGHT_OK); | | | | 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 | convert_href_and_output(pBody); if( !isPopup ){ document_emit_js(); style_finish_page(); } }else if( fossil_strcmp(zMime, "text/x-pikchr")==0 ){ style_adunit_config(ADUNIT_RIGHT_OK); if( !isPopup ) style_header("%s", zDefaultTitle); wiki_render_by_mimetype(pBody, zMime); if( !isPopup ) style_finish_page(); #ifdef FOSSIL_ENABLE_TH1_DOCS }else if( Th_AreDocsEnabled() && fossil_strcmp(zMime, "application/x-th1")==0 ){ int raw = P("raw")!=0; if( !raw ){ Blob tail; blob_zero(&tail); |
︙ | ︙ | |||
1209 1210 1211 1212 1213 1214 1215 | ** ** The intended use case here is to supply an icon for the "fossil ui" ** command. For a permanent website, the recommended process is for ** the admin to set up a project-specific icon and reference that icon ** in the HTML header using a line like: ** ** <link rel="icon" href="URL-FOR-YOUR-ICON" type="MIMETYPE"/> | | | 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 | ** ** The intended use case here is to supply an icon for the "fossil ui" ** command. For a permanent website, the recommended process is for ** the admin to set up a project-specific icon and reference that icon ** in the HTML header using a line like: ** ** <link rel="icon" href="URL-FOR-YOUR-ICON" type="MIMETYPE"/> ** */ void favicon_page(void){ Blob icon; char *zMime; etag_check(ETAG_CONFIG, 0); zMime = db_get("icon-mimetype", "image/gif"); |
︙ | ︙ |
Changes to src/etag.c.
︙ | ︙ | |||
98 99 100 101 102 103 104 | char zBuf[50]; assert( zETag[0]==0 ); /* Only call this routine once! */ if( etagCancelled ) return; /* By default, ETagged URLs never expire since the ETag will change * when the content changes. Approximate this policy as 10 years. */ | | | | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | char zBuf[50]; assert( zETag[0]==0 ); /* Only call this routine once! */ if( etagCancelled ) return; /* By default, ETagged URLs never expire since the ETag will change * when the content changes. Approximate this policy as 10 years. */ iMaxAge = 10 * 365 * 24 * 60 * 60; md5sum_init(); /* Always include the executable ID as part of the hash */ md5sum_step_text("exe-id: ", -1); md5sum_step_text(fossil_exe_id(), -1); md5sum_step_text("\n", 1); if( (eFlags & ETAG_HASH)!=0 && zHash ){ md5sum_step_text("hash: ", -1); md5sum_step_text(zHash, -1); md5sum_step_text("\n", 1); iMaxAge = 0; } if( eFlags & ETAG_DATA ){ |
︙ | ︙ | |||
208 209 210 211 212 213 214 | /* Check to see the If-Modified-Since constraint is satisfied */ zIfModifiedSince = P("HTTP_IF_MODIFIED_SINCE"); if( zIfModifiedSince==0 ) return; x = cgi_rfc822_parsedate(zIfModifiedSince); if( x<mtime ) return; | | | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 | /* Check to see the If-Modified-Since constraint is satisfied */ zIfModifiedSince = P("HTTP_IF_MODIFIED_SINCE"); if( zIfModifiedSince==0 ) return; x = cgi_rfc822_parsedate(zIfModifiedSince); if( x<mtime ) return; #if 0 /* If the Fossil executable is more recent than If-Modified-Since, ** go ahead and regenerate the resource. */ if( file_mtime(g.nameOfExe, ExtFILE)>x ) return; #endif /* If we reach this point, it means that the resource has not changed ** and that we should generate a 304 Not Modified reply */ |
︙ | ︙ | |||
242 243 244 245 246 247 248 | /* Return the last-modified time in seconds since 1970. Or return 0 if ** there is no last-modified time. */ sqlite3_int64 etag_mtime(void){ return iEtagMtime; } | | | 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 | /* Return the last-modified time in seconds since 1970. Or return 0 if ** there is no last-modified time. */ sqlite3_int64 etag_mtime(void){ return iEtagMtime; } /* ** COMMAND: test-etag ** ** Usage: fossil test-etag -key KEY-NUMBER -hash HASH ** ** Generate an etag given a KEY-NUMBER and/or a HASH. ** ** KEY-NUMBER is some combination of: |
︙ | ︙ |
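etag_check() in the hunks above encodes the "ETagged URLs never expire" policy as a ten-year max-age, reset to zero when a content hash is part of the tag. A quick sketch of that constant and the kind of Cache-Control value it drives (the formatting helper here is illustrative, not Fossil's actual header code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Ten years in seconds, exactly as computed in etag_check(). */
static int etag_default_max_age(void){
  return 10 * 365 * 24 * 60 * 60;   /* 315,360,000 seconds */
}

/* Illustrative: turn an iMaxAge value into a Cache-Control directive.
** A zero max-age (hash-based tags) means the client must revalidate. */
static void format_cache_control(int iMaxAge, char *zBuf, size_t n){
  if( iMaxAge>0 ){
    snprintf(zBuf, n, "max-age=%d", iMaxAge);
  }else{
    snprintf(zBuf, n, "no-cache");
  }
}
```

The ten-year figure is a stand-in for "forever": the ETag itself changes whenever the underlying content does, so a stale cached copy is never served under a matching tag.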
Changes to src/export.c.
︙ | ︙ | |||
447 448 449 450 451 452 453 | }while( (rid = bag_next(vers, rid))!=0 ); } } } /* This is the original header comment (and hence documentation) for ** the "fossil export" command: | | 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 | }while( (rid = bag_next(vers, rid))!=0 ); } } } /* This is the original header comment (and hence documentation) for ** the "fossil export" command: ** ** Usage: %fossil export --git ?OPTIONS? ?REPOSITORY? ** ** Write an export of all check-ins to standard output. The export is ** written in the git-fast-export file format assuming the --git option is ** provided. The git-fast-export format is currently the only VCS ** interchange format supported, though other formats may be added in ** the future.
︙ | ︙ | |||
1002 1003 1004 1005 1006 1007 1008 | db_bind_int(&sIns, ":isfile", isFile!=0); db_step(&sIns); db_reset(&sIns); return mprintf(":%d", db_last_insert_rowid()); } /* This is the SHA3-256 hash of an empty file */ | | | 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 | db_bind_int(&sIns, ":isfile", isFile!=0); db_step(&sIns); db_reset(&sIns); return mprintf(":%d", db_last_insert_rowid()); } /* This is the SHA3-256 hash of an empty file */ static const char zEmptySha3[] = "a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a"; /* ** Export a single file named by zUuid. ** ** Return 0 on success and non-zero on any failure. ** |
︙ | ︙ | |||
1035 1036 1037 1038 1039 1040 1041 | }else{ rc = content_get(rid, &data); if( rc==0 ){ if( bPhantomOk ){ blob_init(&data, 0, 0); gitmirror_message(VERB_EXTRA, "missing file: %s\n", zUuid); zUuid = zEmptySha3; | | | 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 | }else{ rc = content_get(rid, &data); if( rc==0 ){ if( bPhantomOk ){ blob_init(&data, 0, 0); gitmirror_message(VERB_EXTRA, "missing file: %s\n", zUuid); zUuid = zEmptySha3; }else{ return 1; } } } zMark = gitmirror_find_mark(zUuid, 1, 1); if( zMark[0]==':' ){ fprintf(xCmd, "blob\nmark %s\ndata %d\n", zMark, blob_size(&data)); |
︙ | ︙ | |||
1348 1349 1350 1351 1352 1353 1354 | int i; zCmd = "git symbolic-ref --short HEAD"; gitmirror_message(VERB_NORMAL, "%s\n", zCmd); xCmd = popen(zCmd, "r"); if( xCmd==0 ){ fossil_fatal("git command failed: %s", zCmd); } | | | | 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 | int i; zCmd = "git symbolic-ref --short HEAD"; gitmirror_message(VERB_NORMAL, "%s\n", zCmd); xCmd = popen(zCmd, "r"); if( xCmd==0 ){ fossil_fatal("git command failed: %s", zCmd); } z = fgets(zLine, sizeof(zLine), xCmd); pclose(xCmd); if( z==0 ){ fossil_fatal("no output from \"%s\"", zCmd); } for(i=0; z[i] && !fossil_isspace(z[i]); i++){} z[i] = 0; zMainBr = fossil_strdup(z); } return zMainBr; } /* ** Implementation of the "fossil git export" command. */ void gitmirror_export_command(void){ const char *zLimit; /* Text of the --limit flag */ int nLimit = 0x7fffffff; /* Numeric value of the --limit flag */ |
︙ | ︙ | |||
1434 1435 1436 1437 1438 1439 1440 | /* Make sure GIT has been initialized */ z = mprintf("%s/.git", zMirror); if( !file_isdir(z, ExtFILE) ){ zMainBr = gitmirror_init(zMirror, zMainBr); bNeedRepack = 1; } fossil_free(z); | | | 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 | /* Make sure GIT has been initialized */ z = mprintf("%s/.git", zMirror); if( !file_isdir(z, ExtFILE) ){ zMainBr = gitmirror_init(zMirror, zMainBr); bNeedRepack = 1; } fossil_free(z); /* Make sure the .mirror_state subdirectory exists */ z = mprintf("%s/.mirror_state", zMirror); rc = file_mkdir(z, ExtFILE, 0); if( rc ) fossil_fatal("cannot create directory \"%s\"", z); fossil_free(z); /* Attach the .mirror_state/db database */ |
︙ | ︙ | |||
1741 1742 1743 1744 1745 1746 1747 | char *zSql; int bQuiet = 0; int bByAll = 0; /* Undocumented option meaning this command was invoked ** from "fossil all" and should modify output accordingly */ db_find_and_open_repository(0, 0); bQuiet = find_option("quiet","q",0)!=0; | | | 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 | char *zSql; int bQuiet = 0; int bByAll = 0; /* Undocumented option meaning this command was invoked ** from "fossil all" and should modify output accordingly */ db_find_and_open_repository(0, 0); bQuiet = find_option("quiet","q",0)!=0; bByAll = find_option("by-all",0,0)!=0; verify_all_options(); zMirror = db_get("last-git-export-repo", 0); if( zMirror==0 ){ if( bQuiet ) return; if( bByAll ) return; fossil_print("Git mirror: none\n"); return; |
︙ | ︙ | |||
1854 1855 1856 1857 1858 1859 1860 | ** mapped into this name. "master" is used if ** this option is omitted. ** -q|--quiet Reduce output. Repeat for even less output. ** -v|--verbose More output ** ** > fossil git import MIRROR ** | | | 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 | ** mapped into this name. "master" is used if ** this option is omitted. ** -q|--quiet Reduce output. Repeat for even less output. ** -v|--verbose More output ** ** > fossil git import MIRROR ** ** TBD... ** ** > fossil git status ** ** Show the status of the current Git mirror, if there is one. ** ** -q|--quiet No output if there is nothing to report */ |
︙ | ︙ |
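The fprintf(xCmd, "blob\nmark %s\ndata %d\n", ...) call in the export hunks above writes blob records in git's fast-import stream format: a "blob" header, a "mark" naming the object for later reference, and a "data" line giving the byte count of the raw content that follows. A sketch of composing one such record for in-memory content (the helper name, integer mark numbering, and trailing newline are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Compose a git-fast-import "blob" record for zContent into zOut.
** Models the fprintf() call in the export code above, with the raw
** content appended after the data header and a terminating newline. */
static int format_blob_record(int markNum, const char *zContent,
                              char *zOut, size_t nOut){
  return snprintf(zOut, nOut, "blob\nmark :%d\ndata %d\n%s\n",
                  markNum, (int)strlen(zContent), zContent);
}
```

In the real exporter the mark string comes from gitmirror_find_mark(), which maps Fossil artifact hashes to stable ":N" marks so later commit records can reference the blobs they contain.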
Changes to src/file.c.
︙ | ︙ | |||
2243 2244 2245 2246 2247 2248 2249 | /* ** Return non-NULL if zFilename contains pathname elements that ** are reserved on Windows. The returned string is the disallowed ** path element. */ const char *file_is_win_reserved(const char *zPath){ | | | 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 | /* ** Return non-NULL if zFilename contains pathname elements that ** are reserved on Windows. The returned string is the disallowed ** path element. */ const char *file_is_win_reserved(const char *zPath){ static const char *const azRes[] = { "CON","PRN","AUX","NUL","COM","LPT" }; static char zReturn[5]; int i; while( zPath[0] ){ for(i=0; i<count(azRes); i++){ if( sqlite3_strnicmp(zPath, azRes[i], 3)==0 && ((i>=4 && fossil_isdigit(zPath[3]) && (zPath[4]=='/' || zPath[4]=='.' || zPath[4]==0)) |
︙ | ︙ |
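file_is_win_reserved() above screens path elements against Windows' reserved device names (CON, PRN, AUX, NUL, and COM/LPT with a digit). A simplified standalone version of the same check for a single path element — the real function walks whole paths, tolerates a following '/' or extension at each element, and returns the offending name rather than a flag:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>
#include <strings.h>   /* strncasecmp (POSIX) */

/* Return 1 if zName (one path element, no directory part) is a name
** Windows reserves: CON, PRN, AUX, NUL, or COM/LPT plus one digit,
** optionally followed by an extension.  Simplified sketch of
** file_is_win_reserved(). */
static int is_win_reserved_sketch(const char *zName){
  static const char *const azRes[] = { "CON","PRN","AUX","NUL","COM","LPT" };
  int i;
  for(i=0; i<6; i++){
    if( strncasecmp(zName, azRes[i], 3)!=0 ) continue;
    if( i>=4 ){                       /* COM and LPT need a digit */
      if( isdigit((unsigned char)zName[3])
       && (zName[4]==0 || zName[4]=='.') ){
        return 1;
      }
    }else if( zName[3]==0 || zName[3]=='.' ){
      return 1;
    }
  }
  return 0;
}
```

Note that the extension does not rescue a name on Windows: "aux.txt" is just as unusable as "aux", which is why the check accepts a trailing '.' as well as end-of-string.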
Changes to src/fileedit.c.
︙ | ︙ | |||
434 435 436 437 438 439 440 | ** pCI's ownership is not modified. ** ** This function validates pCI's state and fails if any validation ** fails. ** ** On error, returns false (0) and, if pErr is not NULL, writes a ** diagnostic message there. | | | 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 | ** pCI's ownership is not modified. ** ** This function validates pCI's state and fails if any validation ** fails. ** ** On error, returns false (0) and, if pErr is not NULL, writes a ** diagnostic message there. ** ** Returns true on success. If pRid is not NULL, the RID of the ** resulting manifest is written to *pRid. ** ** The check-in process is largely influenced by pCI->flags, and that ** must be populated before calling this. See the fossil_cimini_flags ** enum for the docs for each flag. */ |
︙ | ︙ | |||
571 572 573 574 575 576 577 | && blob_size(&pCI->fileContent)>0 ){ /* Convert to the requested EOL style. Note that this inherently ** runs a risk of breaking content, e.g. string literals which ** contain embedded newlines. Note that HTML5 specifies that ** form-submitted TEXTAREA content gets normalized to CRLF-style: ** | | | 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 | && blob_size(&pCI->fileContent)>0 ){ /* Convert to the requested EOL style. Note that this inherently ** runs a risk of breaking content, e.g. string literals which ** contain embedded newlines. Note that HTML5 specifies that ** form-submitted TEXTAREA content gets normalized to CRLF-style: ** ** https://html.spec.whatwg.org/#the-textarea-element */ const int pseudoBinary = LOOK_LONG | LOOK_NUL; const int lookFlags = LOOK_CRLF | LOOK_LONE_LF | pseudoBinary; const int lookNew = looks_like_utf8( &pCI->fileContent, lookFlags ); if(!(pseudoBinary & lookNew)){ int rehash = 0; /*fossil_print("lookNew=%08x\n",lookNew);*/ |
︙ | ︙ | |||
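The comment in the hunk above notes that HTML5 normalizes submitted TEXTAREA content to CRLF line endings, which the /fileedit check-in path may then convert to the requested EOL style. A minimal in-place CRLF-to-LF pass (illustrative; Fossil performs this through its Blob and looks_like_utf8() machinery, with binary-content guards this sketch omits):

```c
#include <assert.h>
#include <string.h>

/* Rewrite z in place, collapsing each CR-LF pair to a single LF and
** turning any stray CR into LF.  Returns the new length. */
static size_t crlf_to_lf(char *z){
  size_t i, j = 0;
  for(i=0; z[i]; i++){
    if( z[i]=='\r' && z[i+1]=='\n' ) continue;   /* drop CR of CRLF */
    if( z[i]=='\r' ){ z[j++] = '\n'; continue; } /* lone CR -> LF */
    z[j++] = z[i];
  }
  z[j] = 0;
  return j;
}
```

As the original comment warns, any such conversion risks altering file content that legitimately embeds CR bytes (string literals, test fixtures), which is why the real code only applies it when the content does not look binary.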
979 980 981 982 983 984 985 | char ** zRevUuid, int * pVid, const char * zFilename, int * frid){ char * zFileUuid = 0; /* file content UUID */ const int checkFile = zFilename!=0 || frid!=0; int vid = 0; | | | 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 | char ** zRevUuid, int * pVid, const char * zFilename, int * frid){ char * zFileUuid = 0; /* file content UUID */ const int checkFile = zFilename!=0 || frid!=0; int vid = 0; if(checkFile && !fileedit_ajax_check_filename(zFilename)){ return 0; } vid = symbolic_name_to_rid(zRev, "ci"); if(0==vid){ ajax_route_error(404,"Cannot resolve name as a check-in: %s", zRev); |
︙ | ︙ | |||
1174 1175 1176 1177 1178 1179 1180 | ** ** Intended to be used only by /filepage and /filepage_commit. */ static int fileedit_setup_cimi_from_p(CheckinMiniInfo * p, Blob * pErr, int * bIsMissingArg){ char * zFileUuid = 0; /* UUID of file content */ const char * zFlag; /* generic flag */ | | | 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 | ** ** Intended to be used only by /filepage and /filepage_commit. */ static int fileedit_setup_cimi_from_p(CheckinMiniInfo * p, Blob * pErr, int * bIsMissingArg){ char * zFileUuid = 0; /* UUID of file content */ const char * zFlag; /* generic flag */ int rc = 0, vid = 0, frid = 0; /* result code, check-in/file rids */ #define fail(EXPR) blob_appendf EXPR; goto end_fail zFlag = PD("filename",P("fn")); if(zFlag==0 || !*zFlag){ rc = 400; if(bIsMissingArg){ *bIsMissingArg = 1; |
︙ | ︙ | |||
1369 1370 1371 1372 1373 1374 1375 | if(i++){ CX(","); } CX("%!j", zFilename); } } db_finalize(&q); | | | 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 | if(i++){ CX(","); } CX("%!j", zFilename); } } db_finalize(&q); CX("]}"); } /* ** AJAX route /fileedit?ajax=filelist ** ** Fetches a JSON-format list of leaves and/or filenames for use in ** creating a file selection list in /fileedit. It has different modes |
︙ | ︙ | |||
1425 1426 1427 1428 1429 1430 1431 | } } /* ** AJAX route /fileedit?ajax=commit ** ** Required query parameters: | | | | 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 | } } /* ** AJAX route /fileedit?ajax=commit ** ** Required query parameters: ** ** filename=FILENAME ** checkin=Parent check-in UUID ** content=text ** comment=non-empty text ** ** Optional query parameters: ** ** comment_mimetype=text (NOT currently honored) ** ** dry_run=int (1 or 0) ** ** include_manifest=int (1 or 0), whether to include ** the generated manifest in the response. ** ** ** User must have Write permissions to use this page. ** ** Responds with JSON (with some state repeated ** from the input in order to avoid certain race conditions ** client-side): ** |
︙ | ︙ | |||
1575 1576 1577 1578 1579 1580 1581 | ** use of the name parameter. ** ** Which additional parameters are used by each distinct ajax route ** is an internal implementation detail and may change with any ** given build of this code. An unknown "name" value triggers an ** error, as documented for ajax_route_error(). */ | | | 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 | ** use of the name parameter. ** ** Which additional parameters are used by each distinct ajax route ** is an internal implementation detail and may change with any ** given build of this code. An unknown "name" value triggers an ** error, as documented for ajax_route_error(). */ /* Allow no access to this page without check-in privilege */ login_check_credentials(); if( !g.perm.Write ){ if(zAjax!=0){ ajax_route_error(403, "Write permissions required."); }else{ login_needed(g.anon.Write); |
︙ | ︙ | |||
1668 1669 1670 1671 1672 1673 1674 | ** have a common, page-specific container we can filter our CSS ** selectors, but we do have the BODY, which we can decorate with ** whatever CSS we wish... */ style_script_begin(__FILE__,__LINE__); CX("document.body.classList.add('fileedit');\n"); style_script_end(); | | | 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 | ** have a common, page-specific container we can filter our CSS ** selectors, but we do have the BODY, which we can decorate with ** whatever CSS we wish... */ style_script_begin(__FILE__,__LINE__); CX("document.body.classList.add('fileedit');\n"); style_script_end(); /* Status bar */ CX("<div id='fossil-status-bar' " "title='Status message area. Double-click to clear them.'>" "Status messages will go here.</div>\n" /* will be moved into the tab container via JS */); CX("<div id='fileedit-edit-status'>" |
︙ | ︙ | |||
1696 1697 1698 1699 1700 1701 1702 | "data-tab-parent='fileedit-tabs' " "data-tab-label='File Selection' " "class='hidden'" ">"); CX("<div id='fileedit-file-selector'></div>"); CX("</div>"/*#fileedit-tab-fileselect*/); } | | > | > | 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 | "data-tab-parent='fileedit-tabs' " "data-tab-label='File Selection' " "class='hidden'" ">"); CX("<div id='fileedit-file-selector'></div>"); CX("</div>"/*#fileedit-tab-fileselect*/); } /******* Content tab *******/ { CX("<div id='fileedit-tab-content' " "data-tab-parent='fileedit-tabs' " "data-tab-label='File Content' " "class='hidden'" ">"); CX("<div class='fileedit-options flex-container " "flex-row child-gap-small'>"); CX("<div class='input-with-label'>" "<button class='fileedit-content-reload confirmer' " ">Discard & Reload</button>" "<div class='help-buttonlet'>" "Reload the file from the server, discarding " "any local edits. To help avoid accidental loss of " "edits, it requires confirmation (a second click) within " "a few seconds or it will not reload." "</div>" "</div>"); style_select_list_int("select-font-size", "editor_font_size", "Editor font size", NULL/*tooltip*/, 100, "100%", 100, "125%", 125, "150%", 150, "175%", 175, "200%", 200, NULL); wikiedit_emit_toggle_preview(); CX("</div>"); CX("<div class='flex-container flex-column stretch'>"); CX("<textarea name='content' id='fileedit-content-editor' " "class='fileedit' rows='25'>"); CX("</textarea>"); CX("</div>"/*textarea wrapper*/); CX("</div>"/*#tab-file-content*/); |
︙ | ︙ | |||
1935 1936 1937 1938 1939 1940 1941 | */ style_select_list_str("comment-mimetype", "comment_mimetype", "Comment style:", "Specify how fossil will interpret the " "comment string.", NULL, "Fossil", "text/x-fossil-wiki", | | | 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 | */ style_select_list_str("comment-mimetype", "comment_mimetype", "Comment style:", "Specify how fossil will interpret the " "comment string.", NULL, "Fossil", "text/x-fossil-wiki", "Markdown", "text/x-markdown", "Plain text", "text/plain", NULL); CX("</div>\n"); } CX("<div class='fileedit-hint flex-container flex-row'>" "(Warning: switching from multi- to single-line mode will " "strip out all newlines!)</div>"); |
︙ | ︙ |
Changes to src/finfo.c.
︙ | ︙ | |||
565 566 567 568 569 570 571 | if( ridTo ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", ridTo); zLink = href("%R/info/%!S", zUuid); blob_appendf(&title, " and check-in %z%S</a>", zLink, zUuid); fossil_free(zUuid); } }else if( ridCi ){ | | | 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | if( ridTo ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", ridTo); zLink = href("%R/info/%!S", zUuid); blob_appendf(&title, " and check-in %z%S</a>", zLink, zUuid); fossil_free(zUuid); } }else if( ridCi ){ blob_appendf(&title, "History of file "); hyperlinked_path(zFilename, &title, 0, "tree", "", LINKPATH_FILE); if( fShowId ) blob_appendf(&title, " (%d)", fnid); blob_appendf(&title, " at check-in %z%h</a>", href("%R/info?name=%t",zCI), zCI); }else{ blob_appendf(&title, "History for "); hyperlinked_path(zFilename, &title, 0, "tree", "", LINKPATH_FILE); |
︙ | ︙ |
Changes to src/foci.c.
︙ | ︙ | |||
266 267 268 269 270 271 272 | 0, /* xCommit */ 0, /* xRollback */ 0, /* xFindFunction */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ | | > | 266 267 268 269 270 271 272 273 274 275 276 277 278 | 0, /* xCommit */ 0, /* xRollback */ 0, /* xFindFunction */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0 /* xIntegrity */ }; sqlite3_create_module(db, "files_of_checkin", &foci_module, 0); return SQLITE_OK; } |
Changes to src/fossil.page.chat.js.
1 2 | /** This file contains the client-side implementation of fossil's /chat | | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 | /** This file contains the client-side implementation of fossil's /chat application. */ window.fossil.onPageLoad(function(){ const F = window.fossil, D = F.dom; const E1 = function(selector){ const e = document.querySelector(selector); if(!e) throw new Error("missing required DOM element: "+selector); return e; }; /** Returns true if e is entirely within the bounds of the window's viewport. */ const isEntirelyInViewport = function(e) { const rect = e.getBoundingClientRect(); return ( rect.top >= 0 && |
︙ | ︙ | |||
62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | let dbg = document.querySelector('#debugMsg'); if(dbg){ /* This can inadvertently influence our flexbox layouts, so move it out of the way. */ D.append(document.body,dbg); } })(); const ForceResizeKludge = (function(){ /* Workaround for Safari mayhem regarding use of vh CSS units.... We tried to use vh units to set the content area size for the chat layout, but Safari chokes on that, so we calculate that height here: 85% when in "normal" mode and 95% in chat-only mode. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. While we're here, we also use this to cap the max-height of the input field so that pasting huge text does not scroll the upper area of the input widget off-screen. */ | > > > > > > > > | < < < < < | | | 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | let dbg = document.querySelector('#debugMsg'); if(dbg){ /* This can inadvertently influence our flexbox layouts, so move it out of the way. */ D.append(document.body,dbg); } })(); const GetFramingElements = function() { return document.querySelectorAll([ "body > header", "body > nav.mainmenu", "body > footer", "#debugMsg" ].join(',')); }; const ForceResizeKludge = (function(){ /* Workaround for Safari mayhem regarding use of vh CSS units.... We tried to use vh units to set the content area size for the chat layout, but Safari chokes on that, so we calculate that height here: 85% when in "normal" mode and 95% in chat-only mode. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. While we're here, we also use this to cap the max-height of the input field so that pasting huge text does not scroll the upper area of the input widget off-screen.
*/ const elemsToCount = GetFramingElements(); const contentArea = E1('div.content'); const bcl = document.body.classList; const resized = function f(){ if(f.$disabled) return; const wh = window.innerHeight, com = bcl.contains('chat-only-mode'); var ht; var extra = 0; if(com){ ht = wh; }else{ elemsToCount.forEach((e)=>e ? extra += D.effectiveHeight(e) : false); ht = wh - extra; } f.chat.e.inputX.style.maxHeight = (ht/2)+"px"; /* ^^^^ this is a middle ground between having no size cap on the input field and having a fixed arbitrary cap. */; contentArea.style.height = contentArea.style.maxHeight = [ "calc(", (ht>=100 ? ht : 100), "px", " - 0.65em"/*fudge value*/,")" /* ^^^^ hypothetically not needed, but both Chrome/FF on Linux will force scrollbars on the body if this value is too small; current value is empirically selected. */ ].join(''); if(false){ console.debug("resized.",wh, extra, ht, window.getComputedStyle(contentArea).maxHeight, contentArea); console.debug("Set input max height to: ", f.chat.e.inputX.style.maxHeight); |
︙ | ︙ | |||
321 322 323 324 325 326 327 | "chat-only" mode. That mode hides the page's header and footer, leaving only the chat application visible to the user. */ chatOnlyMode: function f(yes){ if(undefined === f.elemsToToggle){ f.elemsToToggle = []; | < < < < < < | | 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 | "chat-only" mode. That mode hides the page's header and footer, leaving only the chat application visible to the user. */ chatOnlyMode: function f(yes){ if(undefined === f.elemsToToggle){ f.elemsToToggle = []; GetFramingElements().forEach((e)=>f.elemsToToggle.push(e)); } if(!arguments.length) yes = true; if(yes === this.isChatOnlyMode()) return this; if(yes){ D.addClass(f.elemsToToggle, 'hidden'); D.addClass(document.body, 'chat-only-mode'); document.body.scroll(0,document.body.height); |
︙ | ︙ | |||
394 395 396 397 398 399 400 401 402 403 404 405 406 407 | ctrl-enter both send them. */ "edit-ctrl-send": false, /* When on, the edit field starts as a single line and expands as the user types, and the relevant buttons are laid out in a compact form. When off, the edit field and buttons are larger. */ "edit-compact-mode": true, /* When on, sets the font-family on messages and the edit field to monospace. */ "monospace-messages": false, /* When on, non-chat UI elements (page header/footer) are hidden */ "chat-only-mode": false, /* When set to a URI, it is assumed to be an audio file, | > > > > > | 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 | ctrl-enter both send them. */ "edit-ctrl-send": false, /* When on, the edit field starts as a single line and expands as the user types, and the relevant buttons are laid out in a compact form. When off, the edit field and buttons are larger. */ "edit-compact-mode": true, /* See notes for this setting in fossil.page.wikiedit.js. Both /wikiedit and /fileedit share this persistent config option under the same storage key. */ "edit-shift-enter-preview": F.storage.getBool('edit-shift-enter-preview', true), /* When on, sets the font-family on messages and the edit field to monospace. */ "monospace-messages": false, /* When on, non-chat UI elements (page header/footer) are hidden */ "chat-only-mode": false, /* When set to a URI, it is assumed to be an audio file, |
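The settings block above persists the new `edit-shift-enter-preview` option through `F.storage.getBool()`, sharing one storage key between /chat, /wikiedit, and /fileedit. A plausible minimal sketch of that pattern follows; fossil's real `fossil.storage` implementation differs in details (it is backed by `window.localStorage` and, reportedly, prefixes keys), so treat this only as an illustration of a string-backed store with boolean coercion and a default value.

```javascript
// Minimal sketch of a getBool()/set() persistent-settings store, as
// used for 'edit-shift-enter-preview' above. The backing object stands
// in for window.localStorage, which also stores only strings.
const storage = (function(backing){
  return {
    set: (k, v) => backing[k] = String(v),
    get: (k, dflt) => (k in backing) ? backing[k] : dflt,
    getBool: function(k, dflt = false){
      const v = this.get(k, dflt);
      // Stored values are strings; the default may be a real boolean.
      return (v === true || v === 'true' || v === '1');
    }
  };
})(Object.create(null));
```

The important property is the default argument: a never-set option like `edit-shift-enter-preview` reports `true` until the user explicitly turns it off, at which point the stored string `'false'` wins.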
︙ | ︙ | |||
1497 1498 1499 1500 1501 1502 1503 | /* Shift-enter will run preview mode UNLESS preview mode is active AND the input field is empty, in which case it will switch back to message view. */ if(Chat.e.currentView===Chat.e.viewPreview && !text){ Chat.setCurrentView(Chat.e.viewMessages); }else if(!text){ f.$toggleCompact(compactMode); | | | | | 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 | /* Shift-enter will run preview mode UNLESS preview mode is active AND the input field is empty, in which case it will switch back to message view. */ if(Chat.e.currentView===Chat.e.viewPreview && !text){ Chat.setCurrentView(Chat.e.viewMessages); }else if(!text){ f.$toggleCompact(compactMode); }else if(Chat.settings.getBool('edit-shift-enter-preview', true)){ Chat.e.btnPreview.click(); } return false; } if(ev.ctrlKey && !text && !BlobXferState.blob){ /* Ctrl-enter on empty input field(s) toggles Enter/Ctrl-enter mode */ ev.preventDefault(); ev.stopPropagation(); f.$toggleCtrl(ctrlMode); return false; } if(!ctrlMode && ev.ctrlKey && text){ //console.debug("!ctrlMode && ev.ctrlKey && text."); /* Ctrl-enter in Enter-sends mode SHOULD, with this logic, add a newline, but that is not happening, for unknown reasons (possibly related to this element being a contenteditable DIV instead of a textarea). Forcibly appending a newline to the input area does not work, also for unknown reasons, and would only be suitable when we're at the end of the input. Strangely, this approach DOES work for shift-enter, but we need shift-enter as a hotkey for preview mode. */ //return; // return here "should" cause newline to be added, but that doesn't work } if((!ctrlMode && !ev.ctrlKey) || (ev.ctrlKey/* && ctrlMode*/)){ /* Ship it!
*/ ev.preventDefault(); ev.stopPropagation(); Chat.submitMessage(); return false; } }; Chat.e.inputFields.forEach( (e)=>e.addEventListener('keydown', inputWidgetKeydown, false) ); Chat.e.btnSubmit.addEventListener('click',(e)=>{ e.preventDefault(); Chat.submitMessage(); return false; |
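The Enter-key branching in the hunk above (shift-enter previews, ctrl-enter on an empty field flips the Enter/Ctrl-enter mode, and plain or ctrl enter sends) can be summarized as a pure decision function. This is a simplified re-expression for illustration — the function name and the returned action strings are mine, not fossil's, and the empty-field preview-toggle and compact-mode cases of the real handler are omitted.

```javascript
// Decide what the chat input widget should do when Enter is pressed.
// ev:        object with shiftKey/ctrlKey booleans (like a KeyboardEvent)
// ctrlMode:  true when the "Ctrl-enter sends" setting is active
// hasText:   whether the input field is non-empty
// shiftEnterPreview: the 'edit-shift-enter-preview' setting
function enterKeyAction(ev, ctrlMode, hasText, shiftEnterPreview){
  if(ev.shiftKey){
    // Shift-enter opens preview mode when that option is enabled,
    // otherwise the keystroke is swallowed.
    return shiftEnterPreview ? 'preview' : 'none';
  }
  if(ev.ctrlKey && !hasText){
    return 'toggle-ctrl-mode'; // ctrl-enter on empty input flips the send mode
  }
  if((!ctrlMode && !ev.ctrlKey) || ev.ctrlKey){
    return 'send';             // plain enter in enter-sends mode, or ctrl-enter
  }
  return 'newline';            // enter in ctrl-sends mode just inserts a newline
}
```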
︙ | ︙ | |||
1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 | boolValue: 'edit-widget-x', hint: [ "When enabled, chat input uses a so-called 'contenteditable' ", "field. Though generally more comfortable and modern than ", "plain-text input fields, browser-specific quirks and bugs ", "may lead to frustration. Ideal for mobile devices." ].join('') }] },{ label: "Appearance Options...", children:[{ label: "Left-align my posts", hint: "Default alignment of your own messages is selected " | > > > > > > > | | 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 | boolValue: 'edit-widget-x', hint: [ "When enabled, chat input uses a so-called 'contenteditable' ", "field. Though generally more comfortable and modern than ", "plain-text input fields, browser-specific quirks and bugs ", "may lead to frustration. Ideal for mobile devices." ].join('') },{ label: "Shift-enter to preview", hint: ["Use shift-enter to preview being-edited messages. ", "This is normally desirable but some software-mode ", "keyboards misinteract with this, in which cases it can be ", "disabled."], boolValue: 'edit-shift-enter-preview' }] },{ label: "Appearance Options...", children:[{ label: "Left-align my posts", hint: "Default alignment of your own messages is selected " + "based on the window width/height ratio.", boolValue: ()=>!document.body.classList.contains('my-messages-right'), callback: function f(){ document.body.classList[ this.checkbox.checked ? 'remove' : 'add' ]('my-messages-right'); } },{ |
︙ | ︙ | |||
1963 1964 1965 1966 1967 1968 1969 | D.enable(elemsToEnable); } }); return false; }; btnPreview.addEventListener('click', submit, false); })()/*message preview setup*/; | | | 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 | D.enable(elemsToEnable); } }); return false; }; btnPreview.addEventListener('click', submit, false); })()/*message preview setup*/; /** Callback for poll() to inject new content into the page. jx == the response from /chat-poll. If atEnd is true, the message is appended to the end of the chat list (for loading older messages), else the beginning (the default). */ const newcontent = function f(jx,atEnd){ if(!f.processPost){ /** Processes chat message m, placing it at either the start (if atEnd
︙ | ︙ |
Changes to src/fossil.page.fileedit.js.
︙ | ︙ | |||
68 69 70 71 72 73 74 | ); */ const E = (s)=>document.querySelector(s), D = F.dom, P = F.page; P.config = { | | > > > > > > | 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | ); */ const E = (s)=>document.querySelector(s), D = F.dom, P = F.page; P.config = { defaultMaxStashSize: 7, /** See notes for this setting in fossil.page.wikiedit.js. Both /wikiedit and /fileedit share this persistent config option under the same storage key. */ shiftEnterPreview: F.storage.getBool('edit-shift-enter-preview', true) }; /** $stash is an internal-use-only object for managing "stashed" local edits, to help avoid that users accidentally lose content by switching tabs or following links or some such. The basic theory of operation is... |
︙ | ︙ | |||
568 569 570 571 572 573 574 | opt._finfo = finfo; if(0===f.compare(currentFinfo, finfo)){ D.attr(opt, 'selected', true); } }); } }/*P.stashWidget*/; | | | 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 | opt._finfo = finfo; if(0===f.compare(currentFinfo, finfo)){ D.attr(opt, 'selected', true); } }); } }/*P.stashWidget*/; /** Internal workaround to select the current preview mode and fire a change event if the value actually changes or if forceEvent is truthy. */ P.selectPreviewMode = function(modeValue, forceEvent){ const s = this.e.selectPreviewMode; |
︙ | ︙ | |||
722 723 724 725 726 727 728 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ | | | 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ if(P.config.shiftEnterPreview && ev.shiftKey && 13===ev.keyCode){ ev.preventDefault(); ev.stopPropagation(); P.e.taEditor.blur(/*force change event, if needed*/); P.tabs.switchToTab(P.e.tabs.preview); if(!P.e.cbAutoPreview.checked){/* If NOT in auto-preview mode, trigger an update. */ P.preview(); } |
︙ | ︙ | |||
843 844 845 846 847 848 849 850 851 852 853 854 855 856 | } ); P.fileSelectWidget.init(); P.stashWidget.init( P.e.tabs.content.lastElementChild ); }/*F.onPageLoad()*/); /** Getter (if called with no args) or setter (if passed an arg) for the current file content. The setter form sets the content, dispatches a | > > > > > > > | 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 | } ); P.fileSelectWidget.init(); P.stashWidget.init( P.e.tabs.content.lastElementChild ); const cbEditPreview = E('#edit-shift-enter-preview'); cbEditPreview.addEventListener('change', function(e){ F.storage.set('edit-shift-enter-preview', P.config.shiftEnterPreview = e.target.checked); }, false); cbEditPreview.checked = P.config.shiftEnterPreview; }/*F.onPageLoad()*/); /** Getter (if called with no args) or setter (if passed an arg) for the current file content. The setter form sets the content, dispatches a |
︙ | ︙ | |||
1159 1160 1161 1162 1163 1164 1165 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; | | | 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; /** Callback for use with F.connectPagePreviewers() */ P._postPreview = function(content,callback){ if(!affirmHasFile()) return this; if(!content){ callback(content); |
︙ | ︙ |
Changes to src/fossil.page.pikchrshow.js.
︙ | ︙ | |||
314 315 316 317 318 319 320 | \u00a0 to , so...*/.split(' ').join('\u00a0')); if(needsPreview) P.preview(); else{ /*If it's from the server, it's already rendered, but this gets all labels/headers in sync.*/ P.renderPreview(); } | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 | \u00a0 to , so...*/.split(' ').join('\u00a0')); if(needsPreview) P.preview(); else{ /*If it's from the server, it's already rendered, but this gets all labels/headers in sync.*/ P.renderPreview(); } } }/*F.onPageLoad()*/); /** Updates the preview view based on the current preview mode and error state. */ P.renderPreview = function f(){ |
︙ | ︙ |
Changes to src/fossil.page.pikchrshowasm.js.
︙ | ︙ | |||
390 391 392 393 394 395 396 | const val = ev.target.value; if(!val) return; setCurrentText(val); }, false); }/*Examples*/ /** | | | 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 | const val = ev.target.value; if(!val) return; setCurrentText(val); }, false); }/*Examples*/ /** TODO? Handle load/import of an external pikchr file. */ if(0) E('#load-pikchr').addEventListener('change',function(){ const f = this.files[0]; const r = new FileReader(); const status = {loaded: 0, total: 0}; this.setAttribute('disabled','disabled'); const that = this; |
︙ | ︙ | |||
477 478 479 480 481 482 483 | that height here. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. */ const appViews = EAll('.app-view'); const elemsToCount = [ /* Elements which we need to always count in the visible body size. */ | | | | | 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 | that height here. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. */ const appViews = EAll('.app-view'); const elemsToCount = [ /* Elements which we need to always count in the visible body size. */ E('body > header'), E('body > nav.mainmenu'), E('body > footer') ]; const resized = function f(){ if(f.$disabled) return; const wh = window.innerHeight; var ht; var extra = 0; elemsToCount.forEach((e)=>e ? extra += F.dom.effectiveHeight(e) : false); |
︙ | ︙ |
Changes to src/fossil.page.whistory.js.
1 2 3 4 5 6 7 8 9 10 11 12 13 | /* This script adds interactivity for wiki-history webpages. * * The main code is within the 'on-click' handler of the "diff" links. * Instead of standard redirection it fills in two hidden inputs with * the appropriate values and submits the corresponding form. * Special care should be taken if some intermediate edits are hidden. * * For the sake of compatibility with ascetic browsers the code tries * to avoid modern API and ECMAScript constructs. This makes it less * readable and may be reconsidered in the future. */ window.addEventListener( 'load', function() { | | > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 | /* This script adds interactivity for wiki-history webpages. * * The main code is within the 'on-click' handler of the "diff" links. * Instead of standard redirection it fills in two hidden inputs with * the appropriate values and submits the corresponding form. * Special care should be taken if some intermediate edits are hidden. * * For the sake of compatibility with ascetic browsers the code tries * to avoid modern API and ECMAScript constructs. This makes it less * readable and may be reconsidered in the future. */ window.addEventListener( 'load', function() { var form = document.getElementById("wh-form"); form.method = "GET"; var csrf = form.querySelector("input[name='csrf']"); if( csrf ) form.removeChild( csrf ); var wh_id = document.getElementById("wh-id" ); var wh_pid = document.getElementById("wh-pid"); var wh_cleaner = document.getElementById("wh-cleaner"); var wh_collapser = document.getElementById("wh-collapser"); var wh_radios = []; // user-visible controls for baseline selection
︙ | ︙ |
Changes to src/fossil.page.wikiedit.js.
︙ | ︙ | |||
73 74 75 76 77 78 79 | useConfirmerButtons:{ /* If true during fossil.page setup, certain buttons will use a "confirmer" step, else they will not. The confirmer topic has been the source of much contention in the forum. */ save: false, reload: true, discardStash: true | > > > > > > > | > > > > > > | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | useConfirmerButtons:{ /* If true during fossil.page setup, certain buttons will use a "confirmer" step, else they will not. The confirmer topic has been the source of much contention in the forum. */ save: false, reload: true, discardStash: true }, /** If true, a keyboard combo of shift-enter (from the editor) toggles between preview and edit modes. This is normally desired but at least one software keyboard is known to misinteract with this, treating an Enter after automatically-capitalized letters as a shift-enter: https://fossil-scm.org/forum/forumpost/dbd5b68366147ce8 Maintenance note: /fileedit also uses this same key for the same purpose. */ shiftEnterPreview: F.storage.getBool('edit-shift-enter-preview', true) }; /** $stash is an internal-use-only object for managing "stashed" local edits, to help avoid that users accidentally lose content by switching tabs or following links or some such. The basic theory of operation is... |
︙ | ︙ | |||
452 453 454 455 456 457 458 | opt.dataset.isDeleted = true; } self._refreshStashMarks(opt); }); D.enable(sel); if(P.winfo) sel.value = P.winfo.name; }, | | | 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 | opt.dataset.isDeleted = true; } self._refreshStashMarks(opt); }); D.enable(sel); if(P.winfo) sel.value = P.winfo.name; }, /** Loads the page list and populates the selection list. */ loadList: function callee(){ if(!callee.onload){ const self = this; callee.onload = function(list){ self.cache.pageList = list; self._rebuildList(); |
︙ | ︙ | |||
649 650 651 652 653 654 655 | }, false); D.append( parentElem, D.append(D.addClass(D.div(), 'fieldset-wrapper'), fsFilter, fsNewPage, fsLegend) ); | | | 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 | }, false); D.append( parentElem, D.append(D.addClass(D.div(), 'fieldset-wrapper'), fsFilter, fsNewPage, fsLegend) ); D.append(parentElem, btn); btn.addEventListener('click', ()=>this.loadList(), false); this.loadList(); const onSelect = (e)=>P.loadPage(e.target.value); sel.addEventListener('change', onSelect, false); sel.addEventListener('dblclick', onSelect, false); F.page.addEventListener('wiki-stash-updated', ()=>{ |
︙ | ︙ | |||
672 673 674 675 676 677 678 679 | if(page.isEmpty) opt.dataset.isDeleted = true; else delete opt.dataset.isDeleted; self._refreshStashMarks(opt); }else if('sandbox'!==page.type){ F.error("BUG: internal mis-handling of page object: missing OPTION for page "+page.name); } }); delete this.init; | > > > > > > > | | 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 | if(page.isEmpty) opt.dataset.isDeleted = true; else delete opt.dataset.isDeleted; self._refreshStashMarks(opt); }else if('sandbox'!==page.type){ F.error("BUG: internal mis-handling of page object: missing OPTION for page "+page.name); } }); const cbEditPreview = E('#edit-shift-enter-preview'); cbEditPreview.addEventListener('change', function(e){ F.storage.set('edit-shift-enter-preview', P.config.shiftEnterPreview = e.target.checked); }, false); cbEditPreview.checked = P.config.shiftEnterPreview; delete this.init; }/*init()*/ }; /** Widget for listing and selecting $stash entries. */ P.stashWidget = { e:{/*DOM element(s)*/}, |
︙ | ︙ | |||
912 913 914 915 916 917 918 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ | | | 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ if(P.config.shiftEnterPreview && ev.shiftKey && 13===ev.keyCode){ ev.preventDefault(); ev.stopPropagation(); P.e.taEditor.blur(/*force change event, if needed*/); P.tabs.switchToTab(P.e.tabs.preview); if(!P.e.cbAutoPreview.checked){/* If NOT in auto-preview mode, trigger an update. */ P.preview(); } |
︙ | ︙ | |||
1459 1460 1461 1462 1463 1464 1465 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; | | | 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; /** Callback for use with F.connectPagePreviewers() */ P._postPreview = function(content,callback){ if(!affirmPageLoaded()) return this; if(!content){ callback(content); |
︙ | ︙ |
Changes to src/graph.c.
︙ | ︙ | |||
309 310 311 312 313 314 315 | dist = i - iNearto; if( dist<0 ) dist = -dist; if( dist<iBestDist ){ iBestDist = dist; iBest = i; } } | | | 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 | dist = i - iNearto; if( dist<0 ) dist = -dist; if( dist<iBestDist ){ iBestDist = dist; iBest = i; } } /* If no match, consider all possible rails */ if( iBestDist>1000 ){ for(i=0; i<=p->mxRail+1; i++){ int dist; if( inUseMask & BIT(i) ) continue; if( iNearto<=0 ){ iBest = i; |
︙ | ︙ | |||
537 538 539 540 541 542 543 | ** the aParent[] array. */ if( (tmFlags & (TIMELINE_DISJOINT|TIMELINE_XMERGE))!=0 ){ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ GraphRow *pParent = hashFind(p, pRow->aParent[i]); if( pParent==0 ){ | | | | 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 | ** the aParent[] array. */ if( (tmFlags & (TIMELINE_DISJOINT|TIMELINE_XMERGE))!=0 ){ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ GraphRow *pParent = hashFind(p, pRow->aParent[i]); if( pParent==0 ){ memmove(pRow->aParent+i, pRow->aParent+i+1, sizeof(pRow->aParent[0])*(pRow->nParent-i-1)); pRow->nParent--; if( i<pRow->nNonCherrypick ){ pRow->nNonCherrypick--; }else{ pRow->nCherrypick--; } i--; } } } } /* Put the deepest (earliest) merge parent first in the list. ** An off-screen merge parent is considered deepest. */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext ){ if( pRow->nParent<=1 ) continue; for(i=1; i<pRow->nParent; i++){ GraphRow *pParent = hashFind(p, pRow->aParent[i]); |
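The compaction loop in the hunk above drops merge parents that cannot be resolved from `aParent[]` with `memmove()` while keeping the `nNonCherrypick`/`nCherrypick` counters consistent. The same logic, re-expressed as a standalone JavaScript sketch for illustration — the function name and the `resolves` callback (a stand-in for `hashFind()`) are mine, and `splice()` plays the role of `memmove()`:

```javascript
// Remove unresolvable entries from row.aParent (index 0 is the primary
// parent and is never touched, matching the C loop's start at i=1),
// decrementing whichever counter the removed entry belonged to.
function dropUnresolvedParents(row, resolves){
  for(let i = 1; i < row.aParent.length; ){
    if(!resolves(row.aParent[i])){
      row.aParent.splice(i, 1);                  // memmove()-style compaction
      if(i < row.nNonCherrypick) row.nNonCherrypick--;
      else row.nCherrypick--;
      // do not advance i: the next entry has shifted into slot i
    }else{
      i++;
    }
  }
  return row;
}
```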
︙ | ︙ | |||
938 939 940 941 942 943 944 | /* The parent branch from which this branch emerges is on the ** same rail as pRow. Do not shift as that would stack a child ** branch directly above its parent. */ continue; } /* All clear. Make the translation | | | 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 | /* The parent branch from which this branch emerges is on the ** same rail as pRow. Do not shift as that would stack a child ** branch directly above its parent. */ continue; } /* All clear. Make the translation */ for(pLoop=pRow; pLoop && pLoop->idx<=pBottom->idx; pLoop=pLoop->pNext){ if( pLoop->iRail==iFrom ){ pLoop->iRail = iTo; pLoop->aiRiser[iTo] = pLoop->aiRiser[iFrom]; pLoop->aiRiser[iFrom] = -1; } } |
︙ | ︙ |
Changes to src/graph.js.
︙ | ︙ | |||
132 133 134 135 136 137 138 | function hideGraphTooltip(){ /* Hide the tooltip */ document.removeEventListener('keydown',onKeyDown,/* useCapture == */true); stopCloseTimer(); tooltipObj.style.display = "none"; tooltipInfo.ixActive = -1; tooltipInfo.idNodeActive = 0; } | | | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | function hideGraphTooltip(){ /* Hide the tooltip */ document.removeEventListener('keydown',onKeyDown,/* useCapture == */true); stopCloseTimer(); tooltipObj.style.display = "none"; tooltipInfo.ixActive = -1; tooltipInfo.idNodeActive = 0; } window.onpagehide = hideGraphTooltip; function stopDwellTimer(){ if(tooltipInfo.idTimer!=0){ clearTimeout(tooltipInfo.idTimer); tooltipInfo.idTimer = 0; } } function resumeCloseTimer(){ |
︙ | ︙ |
Changes to src/hbmenu.js.
︙ | ︙ | |||
19 20 21 22 23 24 25 | ** ** This was originally the "js.txt" file for the default skin. It was subsequently ** moved into src/hbmenu.js so that it could be more easily reused by other skins ** using the "builtin_request_js" TH1 command. ** ** Operation: ** | | | | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 | ** ** This was originally the "js.txt" file for the default skin. It was subsequently ** moved into src/hbmenu.js so that it could be more easily reused by other skins ** using the "builtin_request_js" TH1 command. ** ** Operation: ** ** This script expects the HTML to contain two elements: ** ** <a id="hbbtn"> <--- The hamburger menu button ** <nav id="hbdrop"> <--- Container for the hamburger menu ** ** Bindings are made on hbbtn so that when it is clicked, the following ** happens: ** ** 1. An XHR is made to /sitemap?popup to fetch the HTML for the ** popup menu. **
︙ | ︙ |
Changes to src/hook.c.
︙ | ︙ | |||
230 231 232 233 234 235 236 | ** ** > fossil hook test [OPTIONS] ID ** ** Run the hook script given by ID for testing purposes. ** Options: ** ** --dry-run Print the script on stdout rather than run it | | | 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 | ** ** > fossil hook test [OPTIONS] ID ** ** Run the hook script given by ID for testing purposes. ** Options: ** ** --dry-run Print the script on stdout rather than run it ** --base-rcvid N Pretend that the hook-last-rcvid value is N ** --new-rcvid M Pretend that the last rcvid value is M ** --aux-file NAME NAME is substituted for %A in the script ** ** The --base-rcvid and --new-rcvid options are silently ignored if ** the hook type is not "after-receive". The default values for ** --base-rcvid and --new-rcvid cause the last receive to be processed. */
︙ | ︙ |
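The `--aux-file NAME` option above is a plain textual substitution of `%A` in the hook script. A minimal sketch of that kind of placeholder expansion (illustrative only; `expand_aux` and its signature are made up here, not Fossil's implementation):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper (not Fossil's code): replace each "%A" in a hook
** script template with the --aux-file name.  Returns the number of
** substitutions performed; zOut receives the expanded, NUL-terminated
** script, truncated if nOut is too small. */
static int expand_aux(const char *zScript, const char *zAux,
                      char *zOut, size_t nOut){
  int nSub = 0;
  size_t i, j = 0;
  for(i=0; zScript[i] && j+1<nOut; i++){
    if( zScript[i]=='%' && zScript[i+1]=='A' ){
      size_t nAux = strlen(zAux);
      if( j+nAux+1>nOut ) break;   /* not enough room for zAux + NUL */
      memcpy(&zOut[j], zAux, nAux);
      j += nAux;
      i++;                         /* consume the 'A' as well */
      nSub++;
    }else{
      zOut[j++] = zScript[i];
    }
  }
  zOut[j] = 0;
  return nSub;
}
```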
Changes to src/http.c.
︙ | ︙ | |||
96 97 98 99 100 101 102 | ** sha1_shared_secret()), not the original password. So convert the ** password to its SHA1 encoding if it isn't already a SHA1 hash. ** ** We assume that a hexadecimal string of exactly 40 characters is a ** SHA1 hash, not an original password. If a user has a password which ** just happens to be a 40-character hex string, then this routine won't ** be able to distinguish it from a hash, the translation will not be | | | 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 | ** sha1_shared_secret()), not the original password. So convert the ** password to its SHA1 encoding if it isn't already a SHA1 hash. ** ** We assume that a hexadecimal string of exactly 40 characters is a ** SHA1 hash, not an original password. If a user has a password which ** just happens to be a 40-character hex string, then this routine won't ** be able to distinguish it from a hash, the translation will not be ** performed, and the sync won't work. */ if( zPw && zPw[0] && (strlen(zPw)!=40 || !validate16(zPw,40)) ){ const char *zProjectCode = 0; if( g.url.flags & URL_USE_PARENT ){ zProjectCode = db_get("parent-project-code", 0); }else{ zProjectCode = db_get("project-code", 0); |
︙ | ︙ | |||
256 257 258 259 260 261 262 | blob_write_to_file(pSend, zUplink); if( g.fHttpTrace ){ fossil_print("RUN %s\n", zCmd); } rc = fossil_system(zCmd); if( rc ){ fossil_warning("Transport command failed: %s\n", zCmd); | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 | blob_write_to_file(pSend, zUplink); if( g.fHttpTrace ){ fossil_print("RUN %s\n", zCmd); } rc = fossil_system(zCmd); if( rc ){ fossil_warning("Transport command failed: %s\n", zCmd); } fossil_free(zCmd); file_delete(zUplink); if( file_size(zDownlink, ExtFILE)<0 ){ blob_zero(pReply); }else{ blob_read_from_file(pReply, zDownlink, ExtFILE); file_delete(zDownlink); } return rc; } /* If iTruth<0 then guess as to whether or not a PATH= argument is required ** when using ssh to run fossil on a remote machine name zHostname. Return ** true if a PATH= should be provided and 0 if not. ** ** If iTruth is 1 or 0 then that means that the PATH= is or is not required, ** respectively. Record this fact for future reference. ** ** If iTruth is 99 or more, then toggle the value that will be returned ** for future iTruth==(-1) queries. 
*/ int ssh_needs_path_argument(const char *zHostname, int iTruth){ int ans = 0; /* Default to "no" */ char *z = mprintf("use-path-for-ssh:%s", zHostname); if( iTruth<0 ){ if( db_get_boolean(z/*works-like:"x"*/, 0) ) ans = 1; }else{ if( iTruth>=99 ){ iTruth = !db_get_boolean(z/*works-like:"x"*/, 0); } if( iTruth ){ ans = 1; db_set(z/*works-like:"x"*/, "1", 1); }else{ db_unset(z/*works-like:"x"*/, 1); } } fossil_free(z); return ans; } /* ** COMMAND: test-ssh-needs-path ** ** Usage: fossil test-ssh-needs-path ?HOSTNAME? ?BOOLEAN? ** ** With one argument, show whether or not the PATH= argument is included ** by default for HOSTNAME. If the second argument is a boolean, then ** change the value. ** ** With no arguments, show all hosts for which ssh-needs-path is true. */ void test_ssh_needs_path(void){ db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); db_open_config(0,0); if( g.argc>=3 ){ const char *zHost = g.argv[2]; int a = -1; int rc; if( g.argc>=4 ) a = is_truth(g.argv[3]); rc = ssh_needs_path_argument(zHost, a); fossil_print("%-20s %s\n", zHost, rc ? "yes" : "no"); }else{ Stmt s; db_swap_connections(); db_prepare(&s, "SELECT substr(name,18) FROM global_config" " WHERE name GLOB 'use-path-for-ssh:*'"); while( db_step(&s)==SQLITE_ROW ){ const char *zHost = db_column_text(&s,0); fossil_print("%-20s yes\n", zHost); } db_finalize(&s); db_swap_connections(); } } /* Add an appropriate PATH= argument to the SSH command under construction ** in pCmd. ** ** About This Feature ** ================== ** ** On some ssh servers (Macs in particular are guilty of this) the PATH ** variable in the shell that runs the command that is sent to the remote ** host contains a limited number of read-only system directories: ** ** /usr/bin:/bin:/usr/sbin:/sbin ** ** The fossil executable cannot be installed into any of those directories ** because they are locked down, and so the "fossil" command cannot run.
** ** To work around this, the fossil command is prefixed with the PATH= ** argument, inserted by this function, to augment the PATH with additional ** directories in which the fossil executable is often found. ** ** But other ssh servers are confused by this initial PATH= argument. ** Some ssh servers have a list of programs that they are allowed to run ** and will fail if the first argument is not on that list, and PATH=.... ** is not on that list. ** ** So that various commands that use ssh can run seamlessly on a variety ** of systems (commands that use ssh include "fossil sync" with an ssh: ** URL and the "fossil patch pull" and "fossil patch push" commands where ** the destination directory starts with HOSTNAME: or USER@HOSTNAME:.) ** the following algorithm is used: ** ** * First try running the fossil without any PATH= argument. If that ** works (and it does on a majority of systems) then we are done. ** ** * If the first attempt fails, then try again after adding the ** PATH= prefix argument. (This function is what adds that ** argument.) If the retry works, then remember that fact using ** the use-path-for-ssh:HOSTNAME setting so that the first step ** is skipped on subsequent uses of the same command. ** ** See the forum thread at ** https://fossil-scm.org/forum/forumpost/4903cb4b691af7ce for more ** background. ** ** See also: ** ** * The ssh_needs_path_argument() function above. ** * The test-ssh-needs-path command that shows the settings ** that cache whether or not a PATH= is needed for a particular ** HOSTNAME. */ void ssh_add_path_argument(Blob *pCmd){ blob_append_escaped_arg(pCmd, "PATH=$HOME/bin:/usr/local/bin:/opt/homebrew/bin:$PATH", 1); } /* ** Sign the content in pSend, compress it, and send it to the server ** via HTTP or HTTPS. Get a reply, uncompress the reply, and store the reply ** in pRecv. pRecv is assumed to be uninitialized when ** this routine is called - this routine will initialize it. |
︙ | ︙ | |||
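The iTruth contract of ssh_needs_path_argument() above (negative = report the cached answer, 0/1 = record an answer, >=99 = toggle) can be sketched against an in-memory stand-in for the per-host "use-path-for-ssh:HOSTNAME" setting. The demo_* names are hypothetical; the real code persists the flag with db_get_boolean()/db_set()/db_unset():

```c
#include <assert.h>
#include <string.h>

/* In-memory stand-in for the "use-path-for-ssh:HOSTNAME" config entry. */
static char zCachedHost[64];
static int  iCachedAns = 0;

static int demo_get(const char *zHost){
  return strcmp(zCachedHost, zHost)==0 ? iCachedAns : 0;
}
static void demo_set(const char *zHost, int v){
  strncpy(zCachedHost, zHost, sizeof(zCachedHost)-1);
  zCachedHost[sizeof(zCachedHost)-1] = 0;
  iCachedAns = v;
}

/* Same tri-state contract as ssh_needs_path_argument():
**   iTruth<0   -> report the cached answer without changing it
**   iTruth 0|1 -> record that answer for future queries
**   iTruth>=99 -> flip the cached answer, then record the new value */
static int demo_needs_path(const char *zHost, int iTruth){
  if( iTruth<0 ) return demo_get(zHost);
  if( iTruth>=99 ) iTruth = !demo_get(zHost);
  demo_set(zHost, iTruth ? 1 : 0);
  return iTruth ? 1 : 0;
}
```

The ">=99 means toggle" arm is what lets the test-ssh-needs-path command flip a host's cached behavior without the caller first reading it.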
302 303 304 305 306 307 308 309 310 311 312 313 314 315 | int isError = 0; /* True if the reply is an error message */ int isCompressed = 1; /* True if the reply is compressed */ if( g.zHttpCmd!=0 ){ /* Handle the --transport-command option for "fossil sync" and similar */ return http_exchange_external(pSend,pReply,mHttpFlags,zAltMimetype); } if( transport_open(&g.url) ){ fossil_warning("%s", transport_errmsg(&g.url)); return 1; } /* Construct the login card and prepare the complete payload */ | > > > > > > > > > > | 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 | int isError = 0; /* True if the reply is an error message */ int isCompressed = 1; /* True if the reply is compressed */ if( g.zHttpCmd!=0 ){ /* Handle the --transport-command option for "fossil sync" and similar */ return http_exchange_external(pSend,pReply,mHttpFlags,zAltMimetype); } /* Activate the PATH= auxiliary argument to the ssh command if that ** is called for. */ if( g.url.isSsh && (g.url.flags & URL_SSH_RETRY)==0 && ssh_needs_path_argument(g.url.hostname, -1) ){ g.url.flags |= URL_SSH_PATH; } if( transport_open(&g.url) ){ fossil_warning("%s", transport_errmsg(&g.url)); return 1; } /* Construct the login card and prepare the complete payload */ |
︙ | ︙ | |||
482 483 484 485 486 487 488 | }else if( fossil_strnicmp(&zLine[14], "application/x-fossil", -1)!=0 ){ isError = 1; } } } } if( iLength<0 ){ | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | > | 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 | }else if( fossil_strnicmp(&zLine[14], "application/x-fossil", -1)!=0 ){ isError = 1; } } } } if( iLength<0 ){ /* We got nothing back from the server. If using the ssh: protocol, ** this might mean we need to add or remove the PATH=... argument ** to the SSH command being sent. If that is the case, retry the ** request after adding or removing the PATH= argument. */ if( g.url.isSsh /* This is an SSH: sync */ && (g.url.flags & URL_SSH_EXE)==0 /* Does not have ?fossil=.... */ && (g.url.flags & URL_SSH_RETRY)==0 /* Not retried already */ ){ /* Retry after flipping the SSH_PATH setting */ transport_close(&g.url); fossil_print( "First attempt to run fossil on %s using SSH failed.\n" "Retrying %s the PATH= argument.\n", g.url.hostname, (g.url.flags & URL_SSH_PATH)!=0 ? "without" : "with" ); g.url.flags ^= URL_SSH_PATH|URL_SSH_RETRY; rc = http_exchange(pSend,pReply,mHttpFlags,0,zAltMimetype); if( rc==0 ){ (void)ssh_needs_path_argument(g.url.hostname, (g.url.flags & URL_SSH_PATH)!=0); } return rc; }else{ /* The problem could not be corrected by retrying. Report the ** the error. */ if( g.url.isSsh && !g.fSshTrace ){ fossil_warning("server did not reply: " " rerun with --sshtrace for diagnostics"); }else{ fossil_warning("server did not reply"); } goto write_err; } } if( rc!=200 ){ fossil_warning("\"location:\" missing from %d redirect reply", rc); goto write_err; } /* |
︙ | ︙ |
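The retry logic above flips URL_SSH_PATH and URL_SSH_RETRY in a single XOR, so the second attempt both changes mode and marks itself final. Reduced to its shape, with fake_exchange() as a made-up stand-in for the real sync (here modeling the locked-down kind of host that only answers when the PATH= prefix is present):

```c
#include <assert.h>

#define URL_SSH_PATH  0x01   /* send the PATH=... prefix argument */
#define URL_SSH_RETRY 0x02   /* a retry has already been attempted */

/* Pretend remote: succeeds (returns 0) only when PATH= is supplied. */
static int fake_exchange(unsigned flags){
  return (flags & URL_SSH_PATH)!=0 ? 0 : 1;
}

/* Try once as configured; on failure flip PATH and set RETRY so the
** second attempt is final.  On success report which mode worked --
** the fact the real code then caches via ssh_needs_path_argument(). */
static int sync_with_retry(unsigned flags, int *pUsedPath){
  int rc = fake_exchange(flags);
  if( rc!=0 && (flags & URL_SSH_RETRY)==0 ){
    flags ^= URL_SSH_PATH | URL_SSH_RETRY;
    rc = fake_exchange(flags);
  }
  if( rc==0 ) *pUsedPath = (flags & URL_SSH_PATH)!=0;
  return rc;
}
```

Because URL_SSH_RETRY is set on the second attempt, a host that fails both with and without PATH= fails cleanly instead of retrying forever.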
Changes to src/http_ssl.c.
︙ | ︙ | |||
57 58 59 60 61 62 63 | } sException; static int sslNoCertVerify = 0; /* Do not verify SSL certs */ /* This is a self-signed cert in the PEM format that can be used when ** no other certs are available. */ | | | 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 | } sException; static int sslNoCertVerify = 0; /* Do not verify SSL certs */ /* This is a self-signed cert in the PEM format that can be used when ** no other certs are available. */ static const char sslSelfCert[] = "-----BEGIN CERTIFICATE-----\n" "MIIDMTCCAhkCFGrDmuJkkzWERP/ITBvzwwI2lv0TMA0GCSqGSIb3DQEBCwUAMFQx\n" "CzAJBgNVBAYTAlVTMQswCQYDVQQIDAJOQzESMBAGA1UEBwwJQ2hhcmxvdHRlMRMw\n" "EQYDVQQKDApGb3NzaWwtU0NNMQ8wDQYDVQQDDAZGb3NzaWwwIBcNMjExMjI3MTEz\n" "MTU2WhgPMjEyMTEyMjcxMTMxNTZaMFQxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJO\n" "QzESMBAGA1UEBwwJQ2hhcmxvdHRlMRMwEQYDVQQKDApGb3NzaWwtU0NNMQ8wDQYD\n" "VQQDDAZGb3NzaWwwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCCbTU2\n" |
︙ | ︙ | |||
81 82 83 84 85 86 87 | "G6wxc4kN9dLK+5S29q3nzl24/qzXoF8P9Re5KBCbrwaHgy+OEEceq5jkmfGFxXjw\n" "pvVCNry5uAhH5NqbXZampUWqiWtM4eTaIPo7Y2mDA1uWhuWtO6F9PsnFJlQHCnwy\n" "s/TsrXk=\n" "-----END CERTIFICATE-----\n"; /* This is the private-key corresponding to the cert above */ | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | "G6wxc4kN9dLK+5S29q3nzl24/qzXoF8P9Re5KBCbrwaHgy+OEEceq5jkmfGFxXjw\n" "pvVCNry5uAhH5NqbXZampUWqiWtM4eTaIPo7Y2mDA1uWhuWtO6F9PsnFJlQHCnwy\n" "s/TsrXk=\n" "-----END CERTIFICATE-----\n"; /* This is the private-key corresponding to the cert above */ static const char sslSelfPKey[] = "-----BEGIN PRIVATE KEY-----\n" "MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCCbTU26GRQHQqL\n" "q7vyZ0OxpAxmgfAKCxt6eIz+jBi2ZM/CB5vVXWVh2+SkSiWEA3UZiUqXxZlzmS/C\n" "glZdiwLLDJML8B4OiV72oivFH/vJ7+cbvh1dTxnYiHuww7GfQngPrLfefiIYPDk1\n" "GTUJHBQ7Ue477F7F8vKuHdVgwktF/JDM6M60aSqlo2D/oysirrb+dlurTlv0rjsY\n" "Ofq6bLAajoL3qi/vek6DNssoywbge4PfbTgS9g7Gcgncbcet5pvaS12JavhFcd4J\n" "U4Ity49Hl9S/C2MfZ1tE53xVggRwKz4FPj65M5uymTdcxtjKXtCxIE1kKxJxXQh7\n" |
︙ | ︙ | |||
204 205 206 207 208 209 210 | "or the ssl-identity setting."); return 0; /* no cert available */ } /* ** Convert an OpenSSL ASN1_TIME to an ISO8601 timestamp. ** | | | 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 | "or the ssl-identity setting."); return 0; /* no cert available */ } /* ** Convert an OpenSSL ASN1_TIME to an ISO8601 timestamp. ** ** Per RFC 5280, ASN1 timestamps in X.509 certificates must ** be in UTC (Zulu timezone) with no fractional seconds. ** ** If showUtc==1, add " UTC" at the end of the returned string. This is ** not ISO8601-compliant, but makes the displayed value more user-friendly. */ static const char *ssl_asn1time_to_iso8601(ASN1_TIME *asn1_time, int showUtc){ |
︙ | ︙ | |||
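Because RFC 5280 pins these timestamps to Zulu time with no fractional seconds, the conversion is mostly string re-slicing. A standalone sketch with no OpenSSL involved, operating directly on the raw ASN1 text ("YYMMDDHHMMSSZ" UTCTime or "YYYYMMDDHHMMSSZ" GeneralizedTime) and applying RFC 5280's 1950/2049 century pivot for two-digit years (demo_asn1time_to_iso8601 is an illustrative name, not the function above):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Reformat an RFC 5280 timestamp string as "YYYY-MM-DD HH:MM:SS".
** Returns 0 on success, 1 if z is not one of the two RFC 5280 forms. */
static int demo_asn1time_to_iso8601(const char *z, char *zOut, size_t nOut){
  char zFull[15];
  size_t n = strlen(z);
  if( n==13 && z[12]=='Z' ){
    /* UTCTime: RFC 5280 pivot -- YY in 00..49 is 20YY, 50..99 is 19YY */
    memcpy(zFull, z[0]<'5' ? "20" : "19", 2);
    memcpy(zFull+2, z, 12);
  }else if( n==15 && z[14]=='Z' ){
    memcpy(zFull, z, 14);        /* GeneralizedTime: four-digit year */
  }else{
    return 1;
  }
  zFull[14] = 0;
  snprintf(zOut, nOut, "%.4s-%.2s-%.2s %.2s:%.2s:%.2s",
           zFull, zFull+4, zFull+6, zFull+8, zFull+10, zFull+12);
  return 0;
}
```

The test dates match the notBefore/notAfter window of the built-in self-signed cert shown earlier (2021-12-27 through 2121-12-27), which also illustrates why the GeneralizedTime branch matters: notAfter falls past 2049.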
410 411 412 413 414 415 416 | ** Invoke this routine to disable SSL cert verification. After ** this call is made, any SSL cert that the server provides will ** be accepted. Communication will still be encrypted, but the ** client has no way of knowing whether it is talking to the ** real server or a man-in-the-middle imposter. */ void ssl_disable_cert_verification(void){ | | | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | ** Invoke this routine to disable SSL cert verification. After ** this call is made, any SSL cert that the server provides will ** be accepted. Communication will still be encrypted, but the ** client has no way of knowing whether it is talking to the ** real server or a man-in-the-middle imposter. */ void ssl_disable_cert_verification(void){ sslNoCertVerify = 1; } /* ** Open an SSL connection as a client that is to connect to the server ** identified by pUrlData. ** * The identify of the server is determined as follows: |
︙ | ︙ | |||
563 564 565 566 567 568 569 | X509_NAME_print_ex(mem, X509_get_issuer_name(cert), 0, XN_FLAG_ONELINE); BIO_printf(mem, "\n notBefore: %s", ssl_asn1time_to_iso8601(X509_get_notBefore(cert), 1)); BIO_printf(mem, "\n notAfter: %s", ssl_asn1time_to_iso8601(X509_get_notAfter(cert), 1)); BIO_printf(mem, "\n sha256: %s", zHash); desclen = BIO_get_mem_data(mem, &desc); | | | | 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 | X509_NAME_print_ex(mem, X509_get_issuer_name(cert), 0, XN_FLAG_ONELINE); BIO_printf(mem, "\n notBefore: %s", ssl_asn1time_to_iso8601(X509_get_notBefore(cert), 1)); BIO_printf(mem, "\n notAfter: %s", ssl_asn1time_to_iso8601(X509_get_notAfter(cert), 1)); BIO_printf(mem, "\n sha256: %s", zHash); desclen = BIO_get_mem_data(mem, &desc); prompt = mprintf("Unable to verify SSL cert from %s\n%.*s\n" "accept this cert and continue (y/N/fingerprint)? ", pUrlData->name, desclen, desc); BIO_free(mem); prompt_user(prompt, &ans); free(prompt); cReply = blob_str(&ans)[0]; if( cReply!='y' && cReply!='Y' && fossil_stricmp(blob_str(&ans),zHash)!=0 ){ X509_free(cert); |
︙ | ︙ | |||
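The "y/N/fingerprint" prompt above accepts either a literal y/Y or the certificate's SHA-256 fingerprint typed (or pasted) back, which is useful when the fingerprint was obtained out of band. The acceptance test is essentially the following sketch, where strcasecmp() stands in for Fossil's fossil_stricmp():

```c
#include <assert.h>
#include <strings.h>   /* strcasecmp() (POSIX) */

/* Accept the cert if the reply starts with y/Y, or if the whole reply
** equals the expected fingerprint, compared case-insensitively. */
static int demo_cert_reply_ok(const char *zReply, const char *zHash){
  return zReply[0]=='y' || zReply[0]=='Y'
      || strcasecmp(zReply, zHash)==0;
}
```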
1183 1184 1185 1186 1187 1188 1189 | /* ** Return the OpenSSL version number being used. Space to hold ** this name is obtained from fossil_malloc() and should be ** freed by the caller. */ char *fossil_openssl_version(void){ | | | 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 | /* ** Return the OpenSSL version number being used. Space to hold ** this name is obtained from fossil_malloc() and should be ** freed by the caller. */ char *fossil_openssl_version(void){ #if defined(FOSSIL_ENABLE_SSL) return mprintf("%s (0x%09x)\n", SSLeay_version(SSLEAY_VERSION), OPENSSL_VERSION_NUMBER); #else return mprintf("none"); #endif } |
Changes to src/http_transport.c.
︙ | ︙ | |||
129 130 131 132 133 134 135 | if( pUrlData->user && pUrlData->user[0] ){ zHost = mprintf("%s@%s", pUrlData->user, pUrlData->name); blob_append_escaped_arg(&zCmd, zHost, 0); fossil_free(zHost); }else{ blob_append_escaped_arg(&zCmd, pUrlData->name, 0); } | > | > > > > > > | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 | if( pUrlData->user && pUrlData->user[0] ){ zHost = mprintf("%s@%s", pUrlData->user, pUrlData->name); blob_append_escaped_arg(&zCmd, zHost, 0); fossil_free(zHost); }else{ blob_append_escaped_arg(&zCmd, pUrlData->name, 0); } if( (pUrlData->flags & URL_SSH_EXE)!=0 && !is_safe_fossil_command(pUrlData->fossil) ){ fossil_fatal("the ssh:// URL is asking to run an unsafe command [%s] on " "the server.", pUrlData->fossil); } if( (pUrlData->flags & URL_SSH_EXE)==0 && (pUrlData->flags & URL_SSH_PATH)!=0 ){ ssh_add_path_argument(&zCmd); } blob_append_escaped_arg(&zCmd, pUrlData->fossil, 1); blob_append(&zCmd, " test-http", 10); if( pUrlData->path && pUrlData->path[0] ){ blob_append_escaped_arg(&zCmd, pUrlData->path, 1); }else{ fossil_fatal("ssh:// URI does not specify a path to the repository"); |
︙ | ︙ | |||
310 311 312 313 314 315 316 | /* ** Read N bytes of content directly from the wire and write into ** the buffer. */ static int transport_fetch(UrlData *pUrlData, char *zBuf, int N){ int got; | | | 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | /* ** Read N bytes of content directly from the wire and write into ** the buffer. */ static int transport_fetch(UrlData *pUrlData, char *zBuf, int N){ int got; if( pUrlData->isSsh ){ int x; int wanted = N; got = 0; while( wanted>0 ){ x = read(sshIn, &zBuf[got], wanted); if( x<=0 ) break; got += x; |
︙ | ︙ |
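The ssh branch of transport_fetch() above loops because a plain read() on a pipe may deliver fewer bytes than requested. The pattern in isolation (a sketch using POSIX read(); read_fully is an illustrative name):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Keep calling read() until N bytes have arrived, or until EOF/error
** ends the stream early.  Returns the number of bytes actually read. */
static int read_fully(int fd, char *zBuf, int N){
  int got = 0;
  while( got<N ){
    int x = (int)read(fd, &zBuf[got], (size_t)(N-got));
    if( x<=0 ) break;        /* EOF (0) or error (<0): stop */
    got += x;
  }
  return got;
}
```

Without the loop, a short read mid-message would be misinterpreted as the end of the server's reply.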
Changes to src/import.c.
︙ | ︙ | |||
815 816 817 818 819 820 821 | gg.fromLoaded = 1; }else if( strncmp(zLine, "N ", 2)==0 ){ /* No-op */ }else if( strncmp(zLine, "property branch-nick ", 21)==0 ){ /* Breezy uses this property to store the branch name. | | | 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 | gg.fromLoaded = 1; }else if( strncmp(zLine, "N ", 2)==0 ){ /* No-op */ }else if( strncmp(zLine, "property branch-nick ", 21)==0 ){ /* Breezy uses this property to store the branch name. ** It has two values. Integer branch number, then the ** user-readable branch name. */ z = &zLine[21]; next_token(&z); fossil_free(gg.zBranch); gg.zBranch = fossil_strdup(next_token(&z)); }else if( strncmp(zLine, "property rebase-of ", 19)==0 ){ |
︙ | ︙ |
Changes to src/info.c.
︙ | ︙ | |||
1222 1223 1224 1225 1226 1227 1228 | } } pTo = vdiff_parse_manifest("to", &ridTo); if( pTo==0 ) return; pFrom = vdiff_parse_manifest("from", &ridFrom); if( pFrom==0 ) return; zGlob = P("glob"); | > > > > > > > | | | 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 | } } pTo = vdiff_parse_manifest("to", &ridTo); if( pTo==0 ) return; pFrom = vdiff_parse_manifest("from", &ridFrom); if( pFrom==0 ) return; zGlob = P("glob"); /* ** Maintenance reminder: we explicitly do _not_ use P_NoBot() ** for "from" and "to" because those args can contain legitimate ** strings which may trigger the looks-like SQL checks, e.g. ** from=merge-in:OR-clause-improvement ** to=OR-clause-improvement */ zFrom = P("from"); zTo = P("to"); if( bInvert ){ Manifest *pTemp = pTo; const char *zTemp = zTo; pTo = pFrom; pFrom = pTemp; zTo = zFrom; zFrom = zTemp;
︙ | ︙ | |||
2500 2501 2502 2503 2504 2505 2506 | ) ){ if( P("ci")==0 ) cgi_set_query_parameter("ci","tip"); page_tree(); return; } /* No directory found, look for an historic version of the file ** that was subsequently deleted. */ | | | 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 | ) ){ if( P("ci")==0 ) cgi_set_query_parameter("ci","tip"); page_tree(); return; } /* No directory found, look for an historic version of the file ** that was subsequently deleted. */ db_prepare(&q, "SELECT fid, uuid FROM mlink, filename, event, blob" " WHERE filename.name=%Q" " AND mlink.fnid=filename.fnid AND mlink.fid>0" " AND event.objid=mlink.mid" " AND blob.rid=mlink.mid" " ORDER BY event.mtime DESC", zName |
︙ | ︙ | |||
2798 2799 2800 2801 2802 2803 2804 | } } if( strcmp(zModAction,"approve")==0 ){ moderation_approve('t', rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) | | | 2805 2806 2807 2808 2809 2810 2811 2812 2813 2814 2815 2816 2817 2818 2819 | } } if( strcmp(zModAction,"approve")==0 ){ moderation_approve('t', rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) : 0; style_set_current_feature("tinfo"); style_header("Ticket Change Details"); style_submenu_element("Raw", "%R/artifact/%s", zUuid); style_submenu_element("History", "%R/tkthistory/%s#%S", zTktName,zUuid); style_submenu_element("Page", "%R/tktview/%t", zTktName); |
︙ | ︙ | |||
3545 3546 3547 3548 3549 3550 3551 | Blob ctrl; Blob comment; char *zNow; int nTags, nCancels; int i; Stmt q; | < | 3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 | Blob ctrl; Blob comment; char *zNow; int nTags, nCancels; int i; Stmt q; fEditComment = find_option("edit-comment","e",0)!=0; zNewComment = find_option("comment","m",1); zComFile = find_option("message-file","M",1); zNewBranch = find_option("branch",0,1); zNewColor = find_option("bgcolor",0,1); zNewBrColor = find_option("branchcolor",0,1); if( zNewBrColor ){ |
︙ | ︙ | |||
3831 3832 3833 3834 3835 3836 3837 | ** If no VERSION is provided, describe the currently checked-out version. ** ** If VERSION and the found ancestor refer to the same commit, the last two ** components are omitted, unless --long is provided. When no fitting tagged ** ancestor is found, show only the short hash of VERSION. ** ** Options: | | | 3837 3838 3839 3840 3841 3842 3843 3844 3845 3846 3847 3848 3849 3850 3851 | ** If no VERSION is provided, describe the currently checked-out version. ** ** If VERSION and the found ancestor refer to the same commit, the last two ** components are omitted, unless --long is provided. When no fitting tagged ** ancestor is found, show only the short hash of VERSION. ** ** Options: ** --digits Display so many hex digits of the hash ** (default: the larger of 6 and the 'hash-digit' setting) ** -d|--dirty Show whether there are changes to be committed ** --long Always show all three components ** --match GLOB Consider only non-propagating tags matching GLOB */ void describe_cmd(void){ const char *zName; |
︙ | ︙ |
Changes to src/interwiki.c.
︙ | ︙ | |||
43 44 45 46 47 48 49 | ** ** { ** "base": Base URL for the remote site. ** "hash": Append this to "base" for Hash targets. ** "wiki": Append this to "base" for Wiki targets. ** } ** | | | | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | ** ** { ** "base": Base URL for the remote site. ** "hash": Append this to "base" for Hash targets. ** "wiki": Append this to "base" for Wiki targets. ** } ** ** If the remote wiki is Fossil, then the correct value for "hash" ** is "/info/" and the correct value for "wiki" is "/wiki?name=". ** If (for example) Wikipedia is the remote, then "hash" should be ** omitted and the correct value for "wiki" is "/wiki/". ** ** PageName is link name of the target wiki. Several different forms ** of PageName are recognized. ** ** Path If PageName is empty or begins with a "/" character, then ** it is a pathname that is appended to "base". ** |
︙ | ︙ | |||
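Putting the three keys described above together, an interwiki map entry pointing at a remote Fossil repository could be stored as the following JSON value (the base URL is a made-up example; "/info/" and "/wiki?name=" are the suffixes the comment above prescribes for a Fossil target):

```json
{
  "base": "https://fossil.example.com",
  "hash": "/info/",
  "wiki": "/wiki?name="
}
```

A Wikipedia-style target would instead omit "hash" and set "wiki" to "/wiki/", per the same comment.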
80 81 82 83 84 85 86 | static Stmt q; for(i=0; fossil_isalnum(zTarget[i]); i++){} if( zTarget[i]!=':' ) return 0; nCode = i; if( nCode==4 && strncmp(zTarget,"wiki",4)==0 ) return 0; zPage = zTarget + nCode + 1; nPage = (int)strlen(zPage); | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | static Stmt q; for(i=0; fossil_isalnum(zTarget[i]); i++){} if( zTarget[i]!=':' ) return 0; nCode = i; if( nCode==4 && strncmp(zTarget,"wiki",4)==0 ) return 0; zPage = zTarget + nCode + 1; nPage = (int)strlen(zPage); db_static_prepare(&q, "SELECT value->>'base', value->>'hash', value->>'wiki'" " FROM config WHERE name=lower($name) AND json_valid(value)" ); zName = mprintf("interwiki:%.*s", nCode, zTarget); db_bind_text(&q, "$name", zName); while( db_step(&q)==SQLITE_ROW ){ const char *zBase = db_column_text(&q,0); |
︙ | ︙ | |||
220 221 222 223 224 225 226 | verify_all_options(); if( g.argc<4 ) usage("delete ID ..."); db_begin_write(); db_unprotect(PROTECT_CONFIG); for(i=3; i<g.argc; i++){ const char *zName = g.argv[i]; db_multi_exec( | | | 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | verify_all_options(); if( g.argc<4 ) usage("delete ID ..."); db_begin_write(); db_unprotect(PROTECT_CONFIG); for(i=3; i<g.argc; i++){ const char *zName = g.argv[i]; db_multi_exec( "DELETE FROM config WHERE name='interwiki:%q'", zName ); } setup_incr_cfgcnt(); db_protect_pop(); db_commit_transaction(); }else |
︙ | ︙ |
Changes to src/json.c.
︙ | ︙ | |||
21 22 23 24 25 26 27 | ** The JSON API's public interface is documented at: ** ** https://fossil-scm.org/fossil/doc/trunk/www/json-api/index.md ** ** Notes for hackers... ** ** Here's how command/page dispatching works: json_page_top() (in HTTP mode) or | | | > | > | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | ** The JSON API's public interface is documented at: ** ** https://fossil-scm.org/fossil/doc/trunk/www/json-api/index.md ** ** Notes for hackers... ** ** Here's how command/page dispatching works: json_page_top() (in HTTP mode) or ** json_cmd_top() (in CLI mode) catch the "json" path/command. Those functions ** then dispatch to a JSON-mode-specific command/page handler with the type ** fossil_json_f(). ** See the API docs for that typedef (below) for the semantics of the callbacks. ** ** */ #include "VERSION.h" #include "config.h" #include "json.h" #include <assert.h> #include <time.h> #if INTERFACE #include "json_detail.h" /* workaround for apparent enum limitation in makeheaders */ #endif const FossilJsonKeys_ FossilJsonKeys = { "anonymousSeed" /*anonymousSeed*/, "authToken" /*authToken*/, "COMMAND_PATH" /*commandPath*/, "mtime" /*mtime*/, |
︙ | ︙ | |||
174 175 176 177 178 179 180 | return 0; } /* ** Convenience wrapper around cson_output() which appends the output ** to pDest. pOpt may be NULL, in which case g.json.outOpt will be used. */ | | > | 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 | return 0; } /* ** Convenience wrapper around cson_output() which appends the output ** to pDest. pOpt may be NULL, in which case g.json.outOpt will be used. */ int cson_output_Blob( cson_value const * pVal, Blob * pDest, cson_output_opt const * pOpt ){ return cson_output( pVal, cson_data_dest_Blob, pDest, pOpt ? pOpt : &g.json.outOpt ); } /* ** Convenience wrapper around cson_parse() which reads its input ** from pSrc. pSrc is rewound before parsing. |
︙ | ︙ | |||
705 706 707 708 709 710 711 | login_cookie_name(), there is(?) a potential(?) login hijacking window here. We may need to change the JSON auth token to be in the form: login_cookie_name()=... Then again, the hardened cookie value helps ensure that only a proper key/value match is valid. */ | | > | 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 | login_cookie_name(), there is(?) a potential(?) login hijacking window here. We may need to change the JSON auth token to be in the form: login_cookie_name()=... Then again, the hardened cookie value helps ensure that only a proper key/value match is valid. */ cgi_replace_parameter( login_cookie_name(), cson_value_get_cstr(g.json.authToken) ); }else if( g.isHTTP ){ /* try fossil's conventional cookie. */ /* Reminder: chicken/egg scenario regarding db access in CLI mode because login_cookie_name() needs the db. CLI mode does not use any authentication, so we don't need to support it here. */ |
︙ | ︙ | |||
902 903 904 905 906 907 908 | assert( head != p ); zPart = (char*)fossil_malloc(len+1); memcpy(zPart, head, len); zPart[len] = 0; if(doDeHttp){ dehttpize(zPart); } | > | | 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 | assert( head != p ); zPart = (char*)fossil_malloc(len+1); memcpy(zPart, head, len); zPart[len] = 0; if(doDeHttp){ dehttpize(zPart); } if( *zPart ){ /* should only fail if someone manages to url-encoded a NUL byte */ part = cson_value_new_string(zPart, strlen(zPart)); if( 0 != cson_array_append( target, part ) ){ cson_value_free(part); rc = -rc; break; } }else{ |
︙ | ︙ | |||
1084 1085 1086 1087 1088 1089 1090 | break; } /* g.json.reqPayload exists only to simplify some of our access to the request payload. We currently only use this in the context of Object payloads, not Arrays, strings, etc. */ | | | 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 | break; } /* g.json.reqPayload exists only to simplify some of our access to the request payload. We currently only use this in the context of Object payloads, not Arrays, strings, etc. */ g.json.reqPayload.v = cson_object_get( g.json.post.o,FossilJsonKeys.payload ); if( g.json.reqPayload.v ){ g.json.reqPayload.o = cson_value_get_object( g.json.reqPayload.v ) /* g.json.reqPayload.o may legally be NULL, which means only that g.json.reqPayload.v is-not-a Object. */; } |
︙ | ︙ | |||
1113 1114 1115 1116 1117 1118 1119 | } if(!g.json.jsonp){ g.json.jsonp = json_find_option_cstr("jsonp",NULL,NULL); } if(!g.isHTTP){ | | | 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 | } if(!g.json.jsonp){ g.json.jsonp = json_find_option_cstr("jsonp",NULL,NULL); } if(!g.isHTTP){ g.json.errorDetailParanoia = 0;/*disable error code dumb-down for CLI mode*/ } {/* set up JSON output formatting options. */ int indent = -1; indent = json_find_option_int("indent",NULL,"I",-1); g.json.outOpt.indentation = (0>indent) ? (g.isHTTP ? 0 : 1) |
︙ | ︙ | |||
1164 1165 1166 1167 1168 1169 1170 | ** Note that CLI options are not included in the command path. Use ** find_option() to get those. ** */ char const * json_command_arg(unsigned short ndx){ cson_array * ar = g.json.cmd.a; assert((NULL!=ar) && "Internal error. Was json_bootstrap_late() called?"); | | | 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 | ** Note that CLI options are not included in the command path. Use ** find_option() to get those. ** */ char const * json_command_arg(unsigned short ndx){ cson_array * ar = g.json.cmd.a; assert((NULL!=ar) && "Internal error. Was json_bootstrap_late() called?"); assert((g.argc>1) &&"Internal error - we never should have gotten this far."); if( g.json.cmd.offset < 0 ){ /* first-time setup. */ short i = 0; #define NEXT cson_string_cstr( \ cson_value_get_string( \ cson_array_get(ar,i) \ )) |
︙ | ︙ | |||
1190 1191 1192 1193 1194 1195 1196 | } } #undef NEXT if(g.json.cmd.offset < 0){ return NULL; }else{ ndx = g.json.cmd.offset + ndx; | | > | > | 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 | } } #undef NEXT if(g.json.cmd.offset < 0){ return NULL; }else{ ndx = g.json.cmd.offset + ndx; return cson_string_cstr(cson_value_get_string( cson_array_get( ar, g.json.cmd.offset + ndx ))); } } /* Returns the C-string form of json_auth_token(), or NULL ** if json_auth_token() returns NULL. */ char const * json_auth_token_cstr(){ return cson_value_get_cstr( json_auth_token() ); } /* ** Returns the JsonPageDef with the given name, or NULL if no match is ** found. ** ** head must be a pointer to an array of JsonPageDefs in which the ** last entry has a NULL name. */ JsonPageDef const * json_handler_for_name( char const * name, JsonPageDef const * head ){ JsonPageDef const * pageDef = head; assert( head != NULL ); if(name && *name) for( ; pageDef->name; ++pageDef ){ if( 0 == strcmp(name, pageDef->name) ){ return pageDef; } } |
︙ | ︙ | |||
1290 1291 1292 1293 1294 1295 1296 | */ static cson_value * json_response_command_path(){ if(!g.json.cmd.a){ return NULL; }else{ cson_value * rc = NULL; Blob path = empty_blob; | | > | > | 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 | */ static cson_value * json_response_command_path(){ if(!g.json.cmd.a){ return NULL; }else{ cson_value * rc = NULL; Blob path = empty_blob; unsigned int aLen = g.json.dispatchDepth+1; /*cson_array_length_get(g.json.cmd.a);*/ unsigned int i = 1; for( ; i < aLen; ++i ){ char const * part = cson_string_cstr(cson_value_get_string( cson_array_get(g.json.cmd.a, i))); if(!part){ #if 1 fossil_warning("Iterating further than expected in %s.", __FILE__); #endif break; } |
︙
1327 1328 1329 1330 1331 1332 1333 | */ cson_value * json_g_to_json(){ cson_object * o = NULL; cson_object * pay = NULL; pay = o = cson_new_object(); #define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) | | > | 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 | */ cson_value * json_g_to_json(){ cson_object * o = NULL; cson_object * pay = NULL; pay = o = cson_new_object(); #define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) #define CSTR(OBJ,K) cson_object_set(o, #K, OBJ.K ? json_new_string(OBJ.K) \ : cson_value_null()) #define VAL(K,V) cson_object_set(o, #K, (V) ? (V) : cson_value_null()) VAL(capabilities, json_cap_value()); INT(g, argc); INT(g, isConst); CSTR(g, zConfigDbName); INT(g, repositoryOpen); INT(g, localOpen); |
︙
1811 1812 1813 1814 1815 1816 1817 | cson_string * kDesc; cson_array_reserve( list, 35 ); kRC = cson_new_string("resultCode",10); kSymbol = cson_new_string("cSymbol",7); kNumber = cson_new_string("number",6); kDesc = cson_new_string("description",11); #define C(K) obj = cson_new_object(); \ | | | | | > | 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 | cson_string * kDesc; cson_array_reserve( list, 35 ); kRC = cson_new_string("resultCode",10); kSymbol = cson_new_string("cSymbol",7); kNumber = cson_new_string("number",6); kDesc = cson_new_string("description",11); #define C(K) obj = cson_new_object(); \ cson_object_set_s(obj, kRC,json_new_string(json_rc_cstr(FSL_JSON_E_##K))); \ cson_object_set_s(obj, kSymbol, json_new_string("FSL_JSON_E_"#K) ); \ cson_object_set_s(obj, kNumber, cson_value_new_integer(FSL_JSON_E_##K) ); \ cson_object_set_s(obj, kDesc, \ json_new_string(json_err_cstr(FSL_JSON_E_##K))); \ cson_array_append( list, cson_object_value(obj) ); obj = NULL; C(GENERIC); C(INVALID_REQUEST); C(UNKNOWN_COMMAND); C(UNKNOWN); C(TIMEOUT); |
︙
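The `C(K)` macro in the hunk above leans on two preprocessor features, stringification (`#K`) and token pasting (`##K`), to emit both a result code's symbolic name and its numeric value from a single invocation. A minimal standalone sketch of the same pattern, using made-up `DEMO_E_` codes rather than the real `FSL_JSON_E_` enum:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical error codes, standing in for the FSL_JSON_E_ enum. */
enum { DEMO_E_GENERIC = 1000, DEMO_E_TIMEOUT = 1001 };

/* #K stringifies the macro argument; the literal "DEMO_E_" and #K are
** then joined by compile-time string-literal concatenation. */
#define DEMO_SYMBOL(K) ("DEMO_E_" #K)

/* ##-style pasting: DEMO_E_##K forms the enum constant's identifier. */
#define DEMO_VALUE(K)  (DEMO_E_##K)
```

One invocation site (`C(GENERIC);` in the original) thus stays in sync by construction: the name string and the value can never refer to different codes.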
2004 2005 2006 2007 2008 2009 2010 | if( !g.perm.Read ){ json_set_err(FSL_JSON_E_DENIED, "Requires 'o' permissions."); return NULL; } full = json_find_option_bool("full",NULL,"f", json_find_option_bool("verbose",NULL,"v",0)); | | > | | 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 | if( !g.perm.Read ){ json_set_err(FSL_JSON_E_DENIED, "Requires 'o' permissions."); return NULL; } full = json_find_option_bool("full",NULL,"f", json_find_option_bool("verbose",NULL,"v",0)); #define SETBUF(O,K) cson_object_set(O, K, \ cson_value_new_string(zBuf, strlen(zBuf))); jv = cson_value_new_object(); jo = cson_value_get_object(jv); zTmp = db_get("project-name",NULL); cson_object_set(jo, "projectName", json_new_string(zTmp)); fossil_free(zTmp); zTmp = db_get("project-description",NULL); cson_object_set(jo, "projectDescription", json_new_string(zTmp)); fossil_free(zTmp); zTmp = NULL; fsize = file_size(g.zRepositoryName, ExtFILE); cson_object_set(jo, "repositorySize", cson_value_new_integer((cson_int_t)fsize)); if(full){ n = db_int(0, "SELECT count(*) FROM blob"); m = db_int(0, "SELECT count(*) FROM delta"); cson_object_set(jo, "blobCount", cson_value_new_integer((cson_int_t)n)); cson_object_set(jo, "deltaCount", cson_value_new_integer((cson_int_t)m)); |
︙
2066 2067 2068 2069 2070 2071 2072 | }/*full*/ n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" " + 0.99"); cson_object_set(jo, "ageDays", cson_value_new_integer((cson_int_t)n)); cson_object_set(jo, "ageYears", cson_value_new_double(n/365.2425)); sqlite3_snprintf(BufLen, zBuf, db_get("project-code","")); SETBUF(jo, "projectCode"); | > | | | | > | > | > | | > | > | 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 | }/*full*/ n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" " + 0.99"); cson_object_set(jo, "ageDays", cson_value_new_integer((cson_int_t)n)); cson_object_set(jo, "ageYears", cson_value_new_double(n/365.2425)); sqlite3_snprintf(BufLen, zBuf, db_get("project-code","")); SETBUF(jo, "projectCode"); cson_object_set(jo, "compiler", cson_value_new_string(COMPILER_NAME, strlen(COMPILER_NAME))); jv2 = cson_value_new_object(); jo2 = cson_value_get_object(jv2); cson_object_set(jo, "sqlite", jv2); sqlite3_snprintf(BufLen, zBuf, "%.19s [%.10s] (%s)", sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); SETBUF(jo2, "version"); cson_object_set(jo2, "pageCount", cson_value_new_integer( (cson_int_t)db_int(0, "PRAGMA repository.page_count"))); cson_object_set(jo2, "pageSize", cson_value_new_integer( (cson_int_t)db_int(0, "PRAGMA repository.page_size"))); cson_object_set(jo2, "freeList", cson_value_new_integer( (cson_int_t)db_int(0, "PRAGMA repository.freelist_count"))); sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0,"PRAGMA repository.encoding")); SETBUF(jo2, "encoding"); sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA repository.journal_mode")); cson_object_set(jo2, "journalMode", *zBuf ? cson_value_new_string(zBuf, strlen(zBuf)) : cson_value_null()); return jv; #undef SETBUF } |
︙
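The `sqlite3_snprintf(BufLen, zBuf, "%.19s [%.10s] (%s)", ...)` call in the hunk above relies on `sqlite3_sourceid()` beginning with a 19-character timestamp, a space at index 19, and the check-in hash from index 20 on. A plain-`snprintf` sketch of that truncating format, fed a made-up source-id string of the same shape (not a real SQLite source id):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the "version" string the way the /json/stat payload does:
** first 19 chars of the source id (the date/time), then 10 chars of
** the hash that starts at index 20, then the library version. */
static void format_version(char *buf, size_t n,
                           const char *sourceid, const char *libversion){
  snprintf(buf, n, "%.19s [%.10s] (%s)", sourceid, &sourceid[20], libversion);
}
```

The `%.19s`/`%.10s` precision specifiers truncate without needing any copy or mutation of the source-id string.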
2235 2236 2237 2238 2239 2240 2241 | cson_value * json_page_status(void); /* ** Mapping of names to JSON pages/commands. Each name is a subpath of ** /json (in CGI mode) or a subcommand of the json command in CLI mode */ static const JsonPageDef JsonPageDefs[] = { | | > | 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 | cson_value * json_page_status(void); /* ** Mapping of names to JSON pages/commands. Each name is a subpath of ** /json (in CGI mode) or a subcommand of the json command in CLI mode */ static const JsonPageDef JsonPageDefs[] = { /* please keep alphabetically sorted (case-insensitive) for maintenance reasons. */ {"anonymousPassword", json_page_anon_password, 0}, {"artifact", json_page_artifact, 0}, {"branch", json_page_branch,0}, {"cap", json_page_cap, 0}, {"config", json_page_config, 0 }, {"diff", json_page_diff, 0}, {"dir", json_page_dir, 0}, |
︙
Changes to src/json_artifact.c.
︙
209 210 211 212 213 214 215 | } /* ** Sub-impl of /json/artifact for check-ins. */ static cson_value * json_artifact_ci( cson_object * zParent, int rid ){ if(!g.perm.Read){ | | > | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 | } /* ** Sub-impl of /json/artifact for check-ins. */ static cson_value * json_artifact_ci( cson_object * zParent, int rid ){ if(!g.perm.Read){ json_set_err( FSL_JSON_E_DENIED, "Viewing check-ins requires 'o' privileges." ); return NULL; }else{ cson_value * artV = json_artifact_for_ci(rid, 1); cson_object * art = cson_value_get_object(artV); if(art){ cson_object_merge( zParent, art, CSON_MERGE_REPLACE ); cson_free_object(art); |
︙
248 249 250 251 252 253 254 | ** if either the includeContent (HTTP) or -content|-c boolean flags ** (CLI) are set. */ static int json_artifact_get_content_format_flag(void){ enum { MagicValue = -9 }; int contentFormat = json_wiki_get_content_format_flag(MagicValue); if(MagicValue == contentFormat){ | | > | | 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | ** if either the includeContent (HTTP) or -content|-c boolean flags ** (CLI) are set. */ static int json_artifact_get_content_format_flag(void){ enum { MagicValue = -9 }; int contentFormat = json_wiki_get_content_format_flag(MagicValue); if(MagicValue == contentFormat){ contentFormat = json_find_option_bool("includeContent", "content","c",0) /* deprecated */ ? -1 : 0; } return contentFormat; } extern int json_wiki_get_content_format_flag(int defaultValue) /* json_wiki.c*/; cson_value * json_artifact_wiki(cson_object * zParent, int rid){ if( ! g.perm.RdWiki ){ json_set_err(FSL_JSON_E_DENIED, "Requires 'j' privileges."); return NULL; }else{ |
︙
378 379 380 381 382 383 384 | ); /* TODO: add a "state" flag for the file in each check-in, e.g. "modified", "new", "deleted". */ checkin_arr = cson_new_array(); cson_object_set(pay, "checkins", cson_array_value(checkin_arr)); while( (SQLITE_ROW==db_step(&q) ) ){ | | > | | | 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | ); /* TODO: add a "state" flag for the file in each check-in, e.g. "modified", "new", "deleted". */ checkin_arr = cson_new_array(); cson_object_set(pay, "checkins", cson_array_value(checkin_arr)); while( (SQLITE_ROW==db_step(&q) ) ){ cson_object * row = cson_value_get_object( cson_sqlite3_row_to_object(q.pStmt)); /* FIXME: move this isNew/isDel stuff into an SQL CASE statement. */ char const isNew = cson_value_get_bool(cson_object_get(row,"isNew")); char const isDel = cson_value_get_bool(cson_object_get(row,"isDel")); cson_object_set(row, "isNew", NULL); cson_object_set(row, "isDel", NULL); cson_object_set(row, "state", json_new_string( json_artifact_status_to_string(isNew, isDel))); cson_array_append( checkin_arr, cson_object_value(row) ); } db_finalize(&q); return cson_object_value(pay); } /* |
︙
Changes to src/json_branch.c.
︙
200 201 202 203 204 205 206 | Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ int bAutoColor = 0; /* Value of "--bgcolor" is "auto" */ if( fossil_strncmp(zColor, "auto", 4)==0 ) { bAutoColor = 1; zColor = 0; | | | 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 | Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ int bAutoColor = 0; /* Value of "--bgcolor" is "auto" */ if( fossil_strncmp(zColor, "auto", 4)==0 ) { bAutoColor = 1; zColor = 0; } /* fossil branch new name */ if( zBranch==0 || zBranch[0]==0 ){ zOpt->rcErrMsg = "Branch name may not be null/empty."; return FSL_JSON_E_INVALID_ARGS; } if( db_exists( "SELECT 1 FROM tagxref" |
︙
333 334 335 336 337 338 339 | } if(!opt.zName){ opt.zName = json_command_arg(g.json.dispatchDepth+1); } if(!opt.zName){ | | > | 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 | } if(!opt.zName){ opt.zName = json_command_arg(g.json.dispatchDepth+1); } if(!opt.zName){ json_set_err(FSL_JSON_E_MISSING_ARGS, "'name' parameter was not specified." ); return NULL; } opt.zColor = json_find_option_cstr("bgColor","bgcolor",NULL); opt.zBasis = json_find_option_cstr("basis",NULL,NULL); if(!opt.zBasis && !g.isHTTP){ opt.zBasis = json_command_arg(g.json.dispatchDepth+2); |
︙
Changes to src/json_config.c.
︙
255 256 257 258 259 260 261 | } for(i=0; i<nSetting; ++i){ const Setting *pSet = &aSetting[i]; cson_object * jSet; cson_value * pVal = 0, * pSrc = 0; jSet = cson_new_object(); cson_object_set(pay, pSet->name, cson_object_value(jSet)); | | | 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | } for(i=0; i<nSetting; ++i){ const Setting *pSet = &aSetting[i]; cson_object * jSet; cson_value * pVal = 0, * pSrc = 0; jSet = cson_new_object(); cson_object_set(pay, pSet->name, cson_object_value(jSet)); cson_object_set(jSet, "versionable",cson_value_new_bool(pSet->versionable)); cson_object_set(jSet, "sensitive", cson_value_new_bool(pSet->sensitive)); cson_object_set(jSet, "defaultValue", (pSet->def && pSet->def[0]) ? json_new_string(pSet->def) : cson_value_null()); if( 0==pSet->sensitive || 0!=g.perm.Setup ){ if( pSet->versionable ){ /* Check to see if this is overridden by a versionable |
︙
290 291 292 293 294 295 296 | Blob versionedPathname; blob_zero(&versionedPathname); blob_appendf(&versionedPathname, "%s.fossil-settings/%s", g.zLocalRoot, pSet->name); if( file_size(blob_str(&versionedPathname), ExtFILE)>=0 ){ Blob content; blob_zero(&content); | | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 | Blob versionedPathname; blob_zero(&versionedPathname); blob_appendf(&versionedPathname, "%s.fossil-settings/%s", g.zLocalRoot, pSet->name); if( file_size(blob_str(&versionedPathname), ExtFILE)>=0 ){ Blob content; blob_zero(&content); blob_read_from_file(&content, blob_str(&versionedPathname),ExtFILE); pSrc = json_new_string("versioned"); pVal = json_new_string(blob_str(&content)); blob_reset(&content); } blob_reset(&versionedPathname); } } |
︙
Changes to src/json_finfo.c.
︙
34 35 36 37 38 39 40 | Blob sql = empty_blob; Stmt q = empty_Stmt; char const * zAfter = NULL; char const * zBefore = NULL; int limit = -1; int currentRow = 0; char const * zCheckin = NULL; | | > | | | | | 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 | Blob sql = empty_blob; Stmt q = empty_Stmt; char const * zAfter = NULL; char const * zBefore = NULL; int limit = -1; int currentRow = 0; char const * zCheckin = NULL; signed char sort = -1; if(!g.perm.Read){ json_set_err(FSL_JSON_E_DENIED,"Requires 'o' privileges."); return NULL; } json_warn( FSL_JSON_W_UNKNOWN, "Achtung: the output of the finfo command is up for change."); /* For the "name" argument we have to jump through some hoops to make sure that we don't get the fossil-internally-assigned "name" option. */ zFilename = json_find_option_cstr2("name",NULL,NULL, g.json.dispatchDepth+1); if(!zFilename || !*zFilename){ json_set_err(FSL_JSON_E_MISSING_ARGS, "Missing 'name' parameter."); return NULL; } if(0==db_int(0,"SELECT 1 FROM filename WHERE name=%Q",zFilename)){ json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, "File entry not found."); return NULL; } zBefore = json_find_option_cstr("before",NULL,"b"); zAfter = json_find_option_cstr("after",NULL,"a"); limit = json_find_option_int("limit",NULL,"n", -1); zCheckin = json_find_option_cstr("checkin",NULL,"ci"); blob_append_sql(&sql, /*0*/ "SELECT b.uuid," /*1*/ " ci.uuid," /*2*/ " (SELECT uuid FROM blob WHERE rid=mlink.fid),"/* Current file uuid */ /*3*/ " cast(strftime('%%s',event.mtime) AS INTEGER)," /*4*/ " coalesce(event.euser, event.user)," /*5*/ " coalesce(event.ecomment, event.comment)," /*6*/ " (SELECT uuid FROM blob WHERE rid=mlink.pid)," /* Parent file uuid */ /*7*/ " event.bgcolor," /*8*/ " b.size," /*9*/ " (mlink.pid==0) AS isNew," |
︙
86 87 88 89 90 91 92 | ); if( zCheckin && *zCheckin ){ char * zU = NULL; int rc = name_to_uuid2( zCheckin, "ci", &zU ); /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/ if(rc<=0){ | | > | > | > | > | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 | ); if( zCheckin && *zCheckin ){ char * zU = NULL; int rc = name_to_uuid2( zCheckin, "ci", &zU ); /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/ if(rc<=0){ json_set_err((rc<0) ? FSL_JSON_E_AMBIGUOUS_UUID : FSL_JSON_E_RESOURCE_NOT_FOUND, "Check-in hash %s.", (rc<0) ? "is ambiguous" : "not found"); blob_reset(&sql); return NULL; } blob_append_sql(&sql, " AND ci.uuid='%q'", zU); free(zU); }else{ if( zAfter && *zAfter ){ blob_append_sql(&sql, " AND event.mtime>=julianday('%q')", zAfter); sort = 1; }else if( zBefore && *zBefore ){ blob_append_sql(&sql, " AND event.mtime<=julianday('%q')", zBefore); } } blob_append_sql(&sql," ORDER BY event.mtime %s /*sort*/", (sort>0 ?
"ASC" : "DESC")); /*printf("SQL=\n%s\n",blob_str(&sql));*/ db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); pay = cson_new_object(); cson_object_set(pay, "name", json_new_string(zFilename)); if( limit > 0 ){ cson_object_set(pay, "limit", json_new_int(limit)); } checkins = cson_new_array(); cson_object_set(pay, "checkins", cson_array_value(checkins)); while( db_step(&q)==SQLITE_ROW ){ cson_object * row = cson_new_object(); int const isNew = db_column_int(&q,9); int const isDel = db_column_int(&q,10); cson_array_append( checkins, cson_object_value(row) ); cson_object_set(row, "checkin", json_new_string( db_column_text(&q,1) )); cson_object_set(row, "uuid", json_new_string( db_column_text(&q,2) )); /*cson_object_set(row, "parentArtifact", json_new_string( db_column_text(&q,6) ));*/ cson_object_set(row, "timestamp", json_new_int( db_column_int64(&q,3) )); cson_object_set(row, "user", json_new_string( db_column_text(&q,4) )); cson_object_set(row, "comment", json_new_string( db_column_text(&q,5) )); /*cson_object_set(row, "bgColor", json_new_string( db_column_text(&q,7) ));*/ cson_object_set(row, "size", json_new_int( db_column_int64(&q,8) )); cson_object_set(row, "state", json_new_string( json_artifact_status_to_string(isNew, isDel))); if( (0 < limit) && (++currentRow >= limit) ){ break; } } db_finalize(&q); return pay ? cson_object_value(pay) : NULL; } #endif /* FOSSIL_ENABLE_JSON */ |
Changes to src/json_login.c.
︙
153 154 155 156 157 158 159 | } payload = cson_value_new_object(); po = cson_value_get_object(payload); cson_object_set(po, "authToken", json_new_string(cookie)); free(cookie); cson_object_set(po, "name", json_new_string(name)); cap = db_text(NULL, "SELECT cap FROM user WHERE login=%Q", name); | | > | > | 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 | } payload = cson_value_new_object(); po = cson_value_get_object(payload); cson_object_set(po, "authToken", json_new_string(cookie)); free(cookie); cson_object_set(po, "name", json_new_string(name)); cap = db_text(NULL, "SELECT cap FROM user WHERE login=%Q", name); cson_object_set(po, "capabilities", cap ? json_new_string(cap) : cson_value_null() ); free(cap); cson_object_set(po, "loginCookieName", json_new_string( login_cookie_name() ) ); /* TODO: add loginExpiryTime to the payload. To do this properly we "should" add an ([unsigned] int *) to login_set_user_cookie() and login_set_anon_cookie(), to which the expiry time is assigned. (Remember that JSON doesn't do unsigned int.) For non-anonymous users we could also simply query the |
︙
Changes to src/json_tag.c.
︙
115 116 117 118 119 120 121 | cson_object_set(pay, "raw", cson_value_new_bool(fRaw)); { Blob uu = empty_blob; int rc; blob_append(&uu, zName, -1); rc = name_to_uuid(&uu, 9, "*"); if(0!=rc){ | | > | 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 | cson_object_set(pay, "raw", cson_value_new_bool(fRaw)); { Blob uu = empty_blob; int rc; blob_append(&uu, zName, -1); rc = name_to_uuid(&uu, 9, "*"); if(0!=rc){ json_set_err(FSL_JSON_E_UNKNOWN, "Could not convert name back to artifact hash!"); blob_reset(&uu); goto error; } cson_object_set(pay, "appliedTo", json_new_string(blob_buffer(&uu))); blob_reset(&uu); } |
︙
Changes to src/json_timeline.c.
︙
141 142 143 144 145 146 147 | ** ** If payload is not NULL then on success its "tag" or "branch" ** property is set to the tag/branch name found in the request. ** ** Only one of "tag" or "branch" modes will work at a time, and if ** both are specified, which one takes precedence is unspecified. */ | | | | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | ** ** If payload is not NULL then on success its "tag" or "branch" ** property is set to the tag/branch name found in the request. ** ** Only one of "tag" or "branch" modes will work at a time, and if ** both are specified, which one takes precedence is unspecified. */ static signed char json_timeline_add_tag_branch_clause(Blob *pSql, cson_object * pPayload){ char const * zTag = NULL; char const * zBranch = NULL; char const * zMiOnly = NULL; char const * zUnhide = NULL; int tagid = 0; if(! g.perm.Read ){ return 0; |
︙
167 168 169 170 171 172 173 | zUnhide = json_find_option_cstr("unhide",NULL,NULL); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTag); if(tagid<=0){ return -1; } if(pPayload){ | | > | > | 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 | zUnhide = json_find_option_cstr("unhide",NULL,NULL); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTag); if(tagid<=0){ return -1; } if(pPayload){ cson_object_set( pPayload, zBranch ? "branch" : "tag", json_new_string(zTag) ); } blob_appendf(pSql, " AND (" " EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", tagid); if(!zUnhide){ blob_appendf(pSql, " AND NOT EXISTS(SELECT 1 FROM plink " " JOIN tagxref ON rid=blob.rid" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", TAG_HIDDEN); } if(zBranch){ /* from "r" flag code in page_timeline().*/ blob_appendf(pSql, " OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" |
︙
218 219 220 221 222 223 224 | ** of the "after" ("a") or "before" ("b") environment parameters. ** This function gives "after" precedence over "before", and only ** applies one of them. ** ** Returns -1 if it adds a "before" clause, 1 if it adds ** an "after" clause, and 0 if adds only an order-by clause. */ | | | 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | ** of the "after" ("a") or "before" ("b") environment parameters. ** This function gives "after" precedence over "before", and only ** applies one of them. ** ** Returns -1 if it adds a "before" clause, 1 if it adds ** an "after" clause, and 0 if adds only an order-by clause. */ static signed char json_timeline_add_time_clause(Blob *pSql){ char const * zAfter = NULL; char const * zBefore = NULL; int rc = 0; zAfter = json_find_option_cstr("after",NULL,"a"); zBefore = zAfter ? NULL : json_find_option_cstr("before",NULL,"b"); if(zAfter&&*zAfter){ |
︙
350 351 352 353 354 355 356 | cson_object_set(row, "uuid", json_new_string(db_column_text(&q,3))); if(!isNew && (flags & json_get_changed_files_ELIDE_PARENT)){ cson_object_set(row, "parent", json_new_string(db_column_text(&q,4))); } cson_object_set(row, "size", json_new_int(db_column_int(&q,5))); cson_object_set(row, "state", | | | > | 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 | cson_object_set(row, "uuid", json_new_string(db_column_text(&q,3))); if(!isNew && (flags & json_get_changed_files_ELIDE_PARENT)){ cson_object_set(row, "parent", json_new_string(db_column_text(&q,4))); } cson_object_set(row, "size", json_new_int(db_column_int(&q,5))); cson_object_set(row, "state", json_new_string(json_artifact_status_to_string(isNew,isDel))); zDownload = mprintf("/raw/%s?name=%s", /* reminder: g.zBaseURL is of course not set for CLI mode. */ db_column_text(&q,2), db_column_text(&q,3)); cson_object_set(row, "downloadPath", json_new_string(zDownload)); free(zDownload); } db_finalize(&q); return rowsV; |
︙
503 504 505 506 507 508 509 | int const rid = db_column_int(&q,0); cson_value * rowV = json_artifact_for_ci(rid, verboseFlag); cson_object * row = cson_value_get_object(rowV); if(!row){ if( !warnRowToJsonFailed ){ warnRowToJsonFailed = 1; json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, | | | 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 | int const rid = db_column_int(&q,0); cson_value * rowV = json_artifact_for_ci(rid, verboseFlag); cson_object * row = cson_value_get_object(rowV); if(!row){ if( !warnRowToJsonFailed ){ warnRowToJsonFailed = 1; json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, "Could not convert at least one timeline result row to JSON." ); } continue; } cson_array_append(list, rowV); } #undef SET goto ok; |
︙
546 547 548 549 550 551 552 | if(check){ json_set_err(check, "Query initialization failed."); goto error; } #if 0 /* only for testing! */ | | > | > > | 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 | if(check){ json_set_err(check, "Query initialization failed."); goto error; } #if 0 /* only for testing! */ cson_object_set(pay, "timelineSql", cson_value_new_string(blob_buffer(&sql), strlen(blob_buffer(&sql)))); #endif db_multi_exec("%s", blob_buffer(&sql) /*safe-for-%s*/); blob_reset(&sql); db_prepare(&q, "SELECT" /* For events, the name is generally more useful than the uuid, but the uuid is unambiguous and can be used with commands like 'artifact'. */ " substr((SELECT tagname FROM tag AS tn " " WHERE tn.tagid=json_timeline.tagId " " AND tagname LIKE 'event-%%'),7) AS name," " uuid as uuid," " mtime AS timestamp," " comment AS comment, " " user AS user," " eventType AS eventType" " FROM json_timeline" " ORDER BY rowid"); |
︙
589 590 591 592 593 594 595 | cson_value * payV = NULL; cson_object * pay = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdWiki && !g.perm.Read ){ | | > | > | 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 | cson_value * payV = NULL; cson_object * pay = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdWiki && !g.perm.Read ){ json_set_err( FSL_JSON_E_DENIED, "Wiki timeline requires 'o' or 'j' access."); return NULL; } payV = cson_value_new_object(); pay = cson_value_get_object(payV); check = json_timeline_setup_sql( "w", &sql, pay ); if(check){ json_set_err(check, "Query initialization failed."); goto error; } #if 0 /* only for testing! */ cson_object_set(pay, "timelineSql", cson_value_new_string(blob_buffer(&sql), strlen(blob_buffer(&sql)))); #endif db_multi_exec("%s", blob_buffer(&sql) /*safe-for-%s*/); blob_reset(&sql); db_prepare(&q, "SELECT" " uuid AS uuid," " mtime AS timestamp," #if 0 |
︙
652 653 654 655 656 657 658 | cson_value * tmp = NULL; cson_value * listV = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdTkt && !g.perm.Read ){ | | > | 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 | cson_value * tmp = NULL; cson_value * listV = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdTkt && !g.perm.Read ){ json_set_err(FSL_JSON_E_DENIED, "Ticket timeline requires 'o' or 'r' access."); return NULL; } payV = cson_value_new_object(); pay = cson_value_get_object(payV); check = json_timeline_setup_sql( "t", &sql, pay ); if(check){ json_set_err(check, "Query initialization failed."); |
︙
724 725 726 727 728 729 730 | } rowV = cson_sqlite3_row_to_object(q.pStmt); row = cson_value_get_object(rowV); if(!row){ manifest_destroy(pMan); json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, | | | 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 | } rowV = cson_sqlite3_row_to_object(q.pStmt); row = cson_value_get_object(rowV); if(!row){ manifest_destroy(pMan); json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, "Could not convert at least one timeline result row to JSON." ); continue; } /* FIXME: certainly there's a more efficient way for use to get the ticket UUIDs? */ cson_object_set(row,"ticketUuid",json_new_string(pMan->zTicketUuid)); manifest_destroy(pMan); |
︙
Changes to src/json_user.c.
︙
168 169 170 171 172 173 174 | ** Requires either Admin, Setup, or Password access. Non-admin/setup ** users can only change their own information. Non-setup users may ** not modify the 's' permission. Admin users without setup ** permissions may not edit any other user who has the 's' permission. ** */ int json_user_update_from_json( cson_object * pUser ){ | | > | 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 | ** Requires either Admin, Setup, or Password access. Non-admin/setup ** users can only change their own information. Non-setup users may ** not modify the 's' permission. Admin users without setup ** permissions may not edit any other user who has the 's' permission. ** */ int json_user_update_from_json( cson_object * pUser ){ #define CSTR(X) cson_string_cstr(cson_value_get_string( cson_object_get(pUser, \ X ) )) char const * zName = CSTR("name"); char const * zNameNew = zName; char * zNameFree = NULL; char const * zInfo = CSTR("info"); char const * zCap = CSTR("capabilities"); char const * zPW = CSTR("password"); cson_value const * forceLogout = cson_object_get(pUser, "forceLogout"); |
︙
Changes to src/json_wiki.c.
︙
161 162 163 164 165 166 167 | } /* ** Searches for the latest version of a wiki page with the given ** name. If found it behaves like json_get_wiki_page_by_rid(theRid, ** contentFormat), else it returns NULL. */ | | > | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | } /* ** Searches for the latest version of a wiki page with the given ** name. If found it behaves like json_get_wiki_page_by_rid(theRid, ** contentFormat), else it returns NULL. */ cson_value * json_get_wiki_page_by_name(char const * zPageName, int contentFormat){ int rid; rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x, blob b" " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q' " " AND b.rid=x.rid" " ORDER BY x.mtime DESC LIMIT 1", zPageName |
︙
257 258 259 260 261 262 263 | } zPageName = json_find_option_cstr2("name",NULL,"n",g.json.dispatchDepth+1); zSymName = json_find_option_cstr("uuid",NULL,"u"); if((!zPageName||!*zPageName) && (!zSymName || !*zSymName)){ json_set_err(FSL_JSON_E_MISSING_ARGS, | | | 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 | } zPageName = json_find_option_cstr2("name",NULL,"n",g.json.dispatchDepth+1); zSymName = json_find_option_cstr("uuid",NULL,"u"); if((!zPageName||!*zPageName) && (!zSymName || !*zSymName)){ json_set_err(FSL_JSON_E_MISSING_ARGS, "At least one of the 'name' or 'uuid' arguments must be provided."); return NULL; } /* TODO: see if we have a page named zPageName. If not, try to resolve zPageName as a UUID. */ |
︙
295 296 297 298 299 300 301 | zMime = cson_value_get_cstr(cson_object_get(g.json.reqPayload.o, "mimetype")); }else{ sContent = cson_value_get_string(g.json.reqPayload.v); } if(!sContent) { json_set_err(FSL_JSON_E_MISSING_ARGS, | | | | | | > | 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 | zMime = cson_value_get_cstr(cson_object_get(g.json.reqPayload.o, "mimetype")); }else{ sContent = cson_value_get_string(g.json.reqPayload.v); } if(!sContent) { json_set_err(FSL_JSON_E_MISSING_ARGS, "The 'payload' property must be either a string containing the " "Fossil wiki code to preview or an object with body + mimetype " "properties."); return NULL; } zContent = cson_string_cstr(sContent); blob_append( &contentOrig, zContent, (int)cson_string_length_bytes(sContent)); zMime = wiki_filter_mimetypes(zMime); if( 0==fossil_strcmp(zMime, "text/x-markdown") ){ markdown_to_html(&contentOrig, 0, &contentHtml); }else if( 0==fossil_strcmp(zMime, "text/plain") ){ blob_append(&contentHtml, "<pre class='textPlain'>", -1); blob_append(&contentHtml, blob_str(&contentOrig), blob_size(&contentOrig)); blob_append(&contentHtml, "</pre>", -1); }else{ wiki_convert( &contentOrig, &contentHtml, 0 ); } blob_reset( &contentOrig ); pay = cson_value_new_string( blob_str(&contentHtml), (unsigned int)blob_size(&contentHtml)); blob_reset( &contentHtml ); return pay; } /* ** Internal impl of /wiki/save and /wiki/create. If createMode is 0 |
︙
344 345 346 347 348 349 350 | char allowCreateIfNotExists){ Blob content = empty_blob; /* wiki page content */ cson_value * nameV; /* wiki page name */ char const * zPageName; /* cstr form of page name */ cson_value * contentV; /* passed-in content */ cson_value * emptyContent = NULL; /* placeholder for empty content. */ cson_value * payV = NULL; /* payload/return value */ | | > | 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 | char allowCreateIfNotExists){ Blob content = empty_blob; /* wiki page content */ cson_value * nameV; /* wiki page name */ char const * zPageName; /* cstr form of page name */ cson_value * contentV; /* passed-in content */ cson_value * emptyContent = NULL; /* placeholder for empty content. */ cson_value * payV = NULL; /* payload/return value */ cson_string const * jstr = NULL; /* temp for cson_value-to-cson_string conversions. */ char const * zMimeType = 0; unsigned int contentLen = 0; int rid; if( (createMode && !g.perm.NewWiki) || (!createMode && !g.perm.WrWiki)){ json_set_err(FSL_JSON_E_DENIED, "Requires '%c' permissions.", |
︙ | ︙ |
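The wiki-preview hunk above accepts the payload either as a plain string or as an object with body and mimetype properties, and dispatches on the text/x-markdown and text/plain mimetypes. A hypothetical request body for the object form (the body text here is illustrative, not taken from this diff):

```json
{
  "payload": {
    "body": "# Preview me\n\nSome *markdown* text.",
    "mimetype": "text/x-markdown"
  }
}
```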
Changes to src/login.c.
︙ | ︙ | |||
52 53 54 55 56 57 58 | #include <time.h> /* ** Compute an appropriate Anti-CSRF token into g.zCsrfToken[]. */ static void login_create_csrf_secret(const char *zSeed){ unsigned char zResult[20]; | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | #include <time.h> /* ** Compute an appropriate Anti-CSRF token into g.zCsrfToken[]. */ static void login_create_csrf_secret(const char *zSeed){ unsigned char zResult[20]; unsigned int i; sha1sum_binary(zSeed, zResult); for(i=0; i<sizeof(g.zCsrfToken)-1; i++){ g.zCsrfToken[i] = "abcdefghijklmnopqrstuvwxyz" "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "0123456789-/"[zResult[i]%64]; } |
︙ | ︙ | |||
252 253 254 255 256 257 258 | const char *zLogin = db_column_text(&q,0); if( (uid = login_search_uid(&zLogin, zPasswd) ) != 0 ){ *pzUsername = fossil_strdup(zLogin); break; } } db_finalize(&q); | | | 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 | const char *zLogin = db_column_text(&q,0); if( (uid = login_search_uid(&zLogin, zPasswd) ) != 0 ){ *pzUsername = fossil_strdup(zLogin); break; } } db_finalize(&q); } free(zSha1Pw); return uid; } /* ** Generates a login cookie value for a non-anonymous user. ** |
︙ | ︙ | |||
773 774 775 776 777 778 779 | }else{ zAnonPw = 0; } @ <table class="login_out"> if( P("HTTPS")==0 ){ @ <tr><td class="form_label">Warning:</td> @ <td><span class='securityWarning'> | | | 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 | }else{ zAnonPw = 0; } @ <table class="login_out"> if( P("HTTPS")==0 ){ @ <tr><td class="form_label">Warning:</td> @ <td><span class='securityWarning'> @ Login information, including the password, @ will be sent in the clear over an unencrypted connection. if( !g.sslNotAvailable ){ @ Consider logging in at @ <a href='%s(g.zHttpsURL)'>%h(g.zHttpsURL)</a> instead. } @ </span></td></tr> } |
︙ | ︙ | |||
822 823 824 825 826 827 828 | @ </tr> } @ </table> if( zAnonPw && !noAnon ){ const char *zDecoded = captcha_decode(uSeed); int bAutoCaptcha = db_get_boolean("auto-captcha", 0); char *zCaptcha = captcha_render(zDecoded); | | | 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 | @ </tr> } @ </table> if( zAnonPw && !noAnon ){ const char *zDecoded = captcha_decode(uSeed); int bAutoCaptcha = db_get_boolean("auto-captcha", 0); char *zCaptcha = captcha_render(zDecoded); @ <p><input type="hidden" name="cs" value="%u(uSeed)"> @ Visitors may enter <b>anonymous</b> as the user-ID with @ the 8-character hexadecimal password shown below:</p> @ <div class="captcha"><table class="captcha"><tr><td>\ @ <pre class="captcha"> @ %h(zCaptcha) @ </pre></td></tr></table> |
︙ | ︙ | |||
851 852 853 854 855 856 857 858 859 860 861 862 863 864 | @ for user <b>%h(g.zLogin)</b></p> } if( db_table_exists("repository","forumpost") ){ @ <hr><p> @ <a href="%R/timeline?ss=v&y=f&vfx&u=%t(g.zLogin)">Forum @ post timeline</a> for user <b>%h(g.zLogin)</b></p> } if( g.perm.Password ){ char *zRPW = fossil_random_password(12); @ <hr> @ <p>Change Password for user <b>%h(g.zLogin)</b>:</p> form_begin(0, "%R/login"); @ <table> @ <tr><td class="form_label" id="oldpw">Old Password:</td> | > > > | 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 | @ for user <b>%h(g.zLogin)</b></p> } if( db_table_exists("repository","forumpost") ){ @ <hr><p> @ <a href="%R/timeline?ss=v&y=f&vfx&u=%t(g.zLogin)">Forum @ post timeline</a> for user <b>%h(g.zLogin)</b></p> } @ <hr><p> @ Select your preferred <a href="%R/skins">site skin</a>. @ </p> if( g.perm.Password ){ char *zRPW = fossil_random_password(12); @ <hr> @ <p>Change Password for user <b>%h(g.zLogin)</b>:</p> form_begin(0, "%R/login"); @ <table> @ <tr><td class="form_label" id="oldpw">Old Password:</td> |
︙ | ︙ | |||
1027 1028 1029 1030 1031 1032 1033 | uid = login_resetpw_suffix_is_valid(zName); if( uid==0 ){ @ <p><span class="loginError"> @ This password-reset URL is invalid, probably because it has expired. @ Password-reset URLs have a short lifespan. @ </span></p> style_finish_page(); | | | 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 | uid = login_resetpw_suffix_is_valid(zName); if( uid==0 ){ @ <p><span class="loginError"> @ This password-reset URL is invalid, probably because it has expired. @ Password-reset URLs have a short lifespan. @ </span></p> style_finish_page(); sleep(1); /* Introduce a small delay on an invalid suffix as an ** extra defense against search attacks */ return; } fossil_redirect_to_https_if_needed(1); login_set_uid(uid, 0); if( g.perm.Setup || g.perm.Admin || !g.perm.Password || g.zLogin==0 ){ @ <p><span class="loginError"> |
︙ | ︙ | |||
1161 1162 1163 1164 1165 1166 1167 | pStmt = 0; rc = sqlite3_prepare_v2(pOther, zSQL, -1, &pStmt, 0); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ db_unprotect(PROTECT_USER); db_multi_exec( "UPDATE user SET cookie=%Q, cexpire=%.17g" " WHERE login=%Q", | | | 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 | pStmt = 0; rc = sqlite3_prepare_v2(pOther, zSQL, -1, &pStmt, 0); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ db_unprotect(PROTECT_USER); db_multi_exec( "UPDATE user SET cookie=%Q, cexpire=%.17g" " WHERE login=%Q", zHash, sqlite3_column_double(pStmt, 0), zLogin ); db_protect_pop(); nXfer++; } sqlite3_finalize(pStmt); } |
︙ | ︙ | |||
1578 1579 1580 1581 1582 1583 1584 | case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = p->RdWiki = p->WrWiki = p->NewWiki = p->ApndWiki = p->Hyperlink = p->Clone = p->NewTkt = p->Password = p->RdAddr = p->TktFmt = p->Attach = p->ApndTkt = p->ModWiki = p->ModTkt = p->RdForum = p->WrForum = p->ModForum = | | | 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 | case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = p->RdWiki = p->WrWiki = p->NewWiki = p->ApndWiki = p->Hyperlink = p->Clone = p->NewTkt = p->Password = p->RdAddr = p->TktFmt = p->Attach = p->ApndTkt = p->ModWiki = p->ModTkt = p->RdForum = p->WrForum = p->ModForum = p->WrTForum = p->AdminForum = p->Chat = p->EmailAlert = p->Announce = p->Debug = 1; /* Fall thru into Read/Write */ case 'i': p->Read = p->Write = 1; break; case 'o': p->Read = 1; break; case 'z': p->Zip = 1; break; case 'h': p->Hyperlink = 1; break; |
︙ | ︙ | |||
1825 1826 1827 1828 1829 1830 1831 | */ void login_insert_csrf_secret(void){ @ <input type="hidden" name="csrf" value="%s(g.zCsrfToken)"> } /* ** Check to see if the candidate username zUserID is already used. | | | 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 | */ void login_insert_csrf_secret(void){ @ <input type="hidden" name="csrf" value="%s(g.zCsrfToken)"> } /* ** Check to see if the candidate username zUserID is already used. ** Return 1 if it is already in use. Return 0 if the name is ** available for a self-registration. */ static int login_self_choosen_userid_already_exists(const char *zUserID){ int rc = db_exists( "SELECT 1 FROM user WHERE login=%Q " "UNION ALL " "SELECT 1 FROM event WHERE user=%Q OR euser=%Q",
︙ | ︙ | |||
1847 1848 1849 1850 1851 1852 1853 | ** searches for a user or subscriber that has that email address. If the ** email address is used nowhere in the system, return 0. If the email ** address is assigned to a particular user return the UID for that user. ** If the email address is used, but not by a particular user, return -1. */ static int email_address_in_use(const char *zEMail){ int uid; | | | 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 | ** searches for a user or subscriber that has that email address. If the ** email address is used nowhere in the system, return 0. If the email ** address is assigned to a particular user return the UID for that user. ** If the email address is used, but not by a particular user, return -1. */ static int email_address_in_use(const char *zEMail){ int uid; uid = db_int(0, "SELECT uid FROM user" " WHERE info LIKE '%%<%q>%%'", zEMail); if( uid>0 ){ if( db_exists("SELECT 1 FROM user WHERE uid=%d AND (" " cap GLOB '*[as]*' OR" " find_emailaddr(info)<>%Q COLLATE nocase)", uid, zEMail) ){
︙ | ︙ | |||
1876 1877 1878 1879 1880 1881 1882 | } return uid; } /* ** COMMAND: test-email-used ** Usage: fossil test-email-used EMAIL ... | | | 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 | } return uid; } /* ** COMMAND: test-email-used ** Usage: fossil test-email-used EMAIL ... ** ** Given a list of email addresses, show the UID and LOGIN associated ** with each one. */ void test_email_used(void){ int i; db_find_and_open_repository(0, 0); verify_all_options(); |
︙ | ︙ | |||
1901 1902 1903 1904 1905 1906 1907 | }else{ char *zLogin = db_text(0, "SELECT login FROM user WHERE uid=%d", uid); fossil_print("%s: UID %d (%s)\n", zEMail, uid, zLogin); fossil_free(zLogin); } } } | | | 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 | }else{ char *zLogin = db_text(0, "SELECT login FROM user WHERE uid=%d", uid); fossil_print("%s: UID %d (%s)\n", zEMail, uid, zLogin); fossil_free(zLogin); } } } /* ** Check an email address and confirm that it is valid for self-registration. ** The email address is known already to be well-formed. Return true ** if the email address is on the allowed list. ** ** The default behavior is that any valid email address is accepted. |
︙ | ︙ | |||
1993 1994 1995 1996 1997 1998 1999 | zErr = "Incorrect CAPTCHA"; }else if( strlen(zUserID)<6 ){ iErrLine = 1; zErr = "User ID too short. Must be at least 6 characters."; }else if( sqlite3_strglob("*[^-a-zA-Z0-9_.]*",zUserID)==0 ){ iErrLine = 1; zErr = "User ID may not contain spaces or special characters."; | | | 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 | zErr = "Incorrect CAPTCHA"; }else if( strlen(zUserID)<6 ){ iErrLine = 1; zErr = "User ID too short. Must be at least 6 characters."; }else if( sqlite3_strglob("*[^-a-zA-Z0-9_.]*",zUserID)==0 ){ iErrLine = 1; zErr = "User ID may not contain spaces or special characters."; }else if( sqlite3_strlike("anonymous%", zUserID, 0)==0 || sqlite3_strlike("nobody%", zUserID, 0)==0 || sqlite3_strlike("reader%", zUserID, 0)==0 || sqlite3_strlike("developer%", zUserID, 0)==0 ){ iErrLine = 1; zErr = "This User ID is reserved. Choose something different."; }else if( zDName[0]==0 ){ |
︙ | ︙ |
Changes to src/lookslike.c.
︙ | ︙ | |||
268 269 270 271 272 273 274 | const WCHAR_T *z = (WCHAR_T *)blob_buffer(pContent); unsigned int n = blob_size(pContent); int j, c, flags = LOOK_NONE; /* Assume UTF-16 text, prove otherwise */ if( n%sizeof(WCHAR_T) ){ flags |= LOOK_ODD; /* Odd number of bytes -> binary (UTF-8?) */ } | | | 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 | const WCHAR_T *z = (WCHAR_T *)blob_buffer(pContent); unsigned int n = blob_size(pContent); int j, c, flags = LOOK_NONE; /* Assume UTF-16 text, prove otherwise */ if( n%sizeof(WCHAR_T) ){ flags |= LOOK_ODD; /* Odd number of bytes -> binary (UTF-8?) */ } if( n<sizeof(WCHAR_T) ) return flags;/* Zero or One byte -> binary (UTF-8?) */ c = *z; if( bReverse ){ c = UTF16_SWAP(c); } if( c==0 ){ flags |= LOOK_NUL; /* NUL character in a file -> binary */ }else if( c=='\r' ){ |
︙ | ︙ |
Changes to src/main.c.
︙ | ︙ | |||
850 851 852 853 854 855 856 | zNewArgv[0] = g.argv[0]; zNewArgv[1] = "ui"; zNewArgv[2] = g.argv[1]; zNewArgv[3] = 0; g.argc = 3; g.argv = zNewArgv; #endif | | | 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 | zNewArgv[0] = g.argv[0]; zNewArgv[1] = "ui"; zNewArgv[2] = g.argv[1]; zNewArgv[3] = 0; g.argc = 3; g.argv = zNewArgv; #endif } zCmdName = g.argv[1]; } #ifndef _WIN32 /* There is a bug in stunnel4 in which it sometimes starts up client ** processes without first opening file descriptor 2 (standard error). ** If this happens, and a subsequent open() of a database returns file ** descriptor 2, and then an assert() fires and writes on fd 2, that |
︙ | ︙ | |||
1412 1413 1414 1415 1416 1417 1418 | /* Remove trailing ":443" from the HOST, if any */ if( i>4 && z[i-1]=='3' && z[i-2]=='4' && z[i-3]=='4' && z[i-4]==':' ){ i -= 4; } }else{ /* Remove trailing ":80" from the HOST */ if( i>3 && z[i-1]=='0' && z[i-2]=='8' && z[i-3]==':' ) i -= 3; | | | 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 | /* Remove trailing ":443" from the HOST, if any */ if( i>4 && z[i-1]=='3' && z[i-2]=='4' && z[i-3]=='4' && z[i-4]==':' ){ i -= 4; } }else{ /* Remove trailing ":80" from the HOST */ if( i>3 && z[i-1]=='0' && z[i-2]=='8' && z[i-3]==':' ) i -= 3; } if( i && z[i-1]=='.' ) i--; z[i] = 0; zCur = PD("SCRIPT_NAME","/"); i = strlen(zCur); while( i>0 && zCur[i-1]=='/' ) i--; if( fossil_stricmp(zMode,"on")==0 ){ g.zBaseURL = mprintf("https://%s%.*s", z, i, zCur); |
︙ | ︙ | |||
1612 1613 1614 1615 1616 1617 1618 | if( db_get_int("redirect-to-https",0)<iLevel ) return 0; if( P("HTTPS")!=0 ) return 0; return 1; } /* ** Redirect to the equivalent HTTPS request if the current connection is | | | 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 | if( db_get_int("redirect-to-https",0)<iLevel ) return 0; if( P("HTTPS")!=0 ) return 0; return 1; } /* ** Redirect to the equivalent HTTPS request if the current connection is ** insecure and if the redirect-to-https flag is greater than or equal to ** iLevel. iLevel is 1 for /login pages and 2 for every other page. */ int fossil_redirect_to_https_if_needed(int iLevel){ if( fossil_wants_https(iLevel) ){ const char *zQS = P("QUERY_STRING"); char *zURL; if( zQS==0 || zQS[0]==0 ){
︙ | ︙ | |||
1967 1968 1969 1970 1971 1972 1973 | zPathInfo += 7; g.nExtraURL += 7; cgi_replace_parameter("PATH_INFO", zPathInfo); cgi_replace_parameter("SCRIPT_NAME", zNewScript); etag_cancel(); } | | | 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 | zPathInfo += 7; g.nExtraURL += 7; cgi_replace_parameter("PATH_INFO", zPathInfo); cgi_replace_parameter("SCRIPT_NAME", zNewScript); etag_cancel(); } /* If the content type is application/x-fossil or ** application/x-fossil-debug, then a sync/push/pull/clone is ** desired, so default the PATH_INFO to /xfer */ if( g.zContentType && strncmp(g.zContentType, "application/x-fossil", 20)==0 ){ /* Special case: If the content mimetype shows that it is "fossil sync" ** payload, then pretend that the PATH_INFO is /xfer so that we always |
︙ | ︙ | |||
2523 2524 2525 2526 2527 2528 2529 | ** the elements of the built-in skin. If LABEL does not match, ** this directive is a silent no-op. It may alternately be ** an absolute path to a directory which holds skin definition ** files (header.txt, footer.txt, etc.). If LABEL is empty, ** the skin stored in the CONFIG db table is used. */ blob_token(&line, &value); | | | 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 | ** the elements of the built-in skin. If LABEL does not match, ** this directive is a silent no-op. It may alternately be ** an absolute path to a directory which holds skin definition ** files (header.txt, footer.txt, etc.). If LABEL is empty, ** the skin stored in the CONFIG db table is used. */ blob_token(&line, &value); fossil_free(skin_use_alternative(blob_str(&value), 1, SKIN_FROM_CGI)); blob_reset(&value); continue; } if( blob_eq(&key, "jsmode:") && blob_token(&line, &value) ){ /* jsmode: MODE ** ** Change how JavaScript resources are delivered with each HTML |
︙ | ︙ | |||
2783 2784 2785 2786 2787 2788 2789 | ** --nocompress Do not compress HTTP replies ** --nodelay Omit backoffice processing if it would delay ** process exit ** --nojail Drop root privilege but do not enter the chroot jail ** --nossl Do not do http: to https: redirects, regardless of ** the redirect-to-https setting. ** --notfound URL Use URL as the "HTTP 404, object not found" page | | | 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 | ** --nocompress Do not compress HTTP replies ** --nodelay Omit backoffice processing if it would delay ** process exit ** --nojail Drop root privilege but do not enter the chroot jail ** --nossl Do not do http: to https: redirects, regardless of ** the redirect-to-https setting. ** --notfound URL Use URL as the "HTTP 404, object not found" page ** --out FILE Write the HTTP reply to FILE instead of to ** standard output ** --pkey FILE Read the private key used for TLS from FILE ** --repolist If REPOSITORY is directory, URL "/" lists all repos ** --scgi Interpret input as SCGI rather than HTTP ** --skin LABEL Use override skin LABEL. Use an empty string ("") ** to force use of the current local skin config. ** --th-trace Trace TH1 execution (for debugging purposes) |
︙ | ︙ | |||
3063 3064 3065 3066 3067 3068 3069 | ** This only works for the "fossil ui" command, not the "fossil server" ** command. ** ** If REPOSITORY begins with a "HOST:" or "USER@HOST:" prefix, then ** the command is run on the remote host specified and the results are ** tunneled back to the local machine via SSH. This feature only works for ** the "fossil ui" command, not the "fossil server" command. The name of the | | | < | > > | 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 3081 | ** This only works for the "fossil ui" command, not the "fossil server" ** command. ** ** If REPOSITORY begins with a "HOST:" or "USER@HOST:" prefix, then ** the command is run on the remote host specified and the results are ** tunneled back to the local machine via SSH. This feature only works for ** the "fossil ui" command, not the "fossil server" command. The name of the ** fossil executable on the remote host is specified by the --fossilcmd ** option, or if there is no --fossilcmd, it first tries "fossil" and if it ** is not found in the default $PATH set by SSH on the remote, it then adds ** "$HOME/bin:/usr/local/bin:/opt/homebrew/bin" to the PATH and tries again to ** run "fossil". ** ** REPOSITORY may also be a directory (aka folder) that contains one or ** more repositories with names ending in ".fossil". In this case, a ** prefix of the URL pathname is used to search the directory for an ** appropriate repository. To thwart mischief, the pathname in the URL must ** contain only alphanumerics, "_", "/", "-", and ".", and no "-" may ** occur after "/", and every "." must be surrounded on both sides by |
︙ | ︙ | |||
3139 3140 3141 3142 3143 3144 3145 | ** --nojail Drop root privileges but do not enter the chroot jail ** --nossl Do not force redirects to SSL even if the repository ** setting "redirect-to-https" requests it. This is set ** by default for the "ui" command. ** --notfound URL Redirect to URL if a page is not found. ** -p|--page PAGE Start "ui" on PAGE. ex: --page "timeline?y=ci" ** --pkey FILE Read the private key used for TLS from FILE | | | 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 | ** --nojail Drop root privileges but do not enter the chroot jail ** --nossl Do not force redirects to SSL even if the repository ** setting "redirect-to-https" requests it. This is set ** by default for the "ui" command. ** --notfound URL Redirect to URL if a page is not found. ** -p|--page PAGE Start "ui" on PAGE. ex: --page "timeline?y=ci" ** --pkey FILE Read the private key used for TLS from FILE ** -P|--port [IP:]PORT Listen on the given IP (optional) and port ** --repolist If REPOSITORY is dir, URL "/" lists repos ** --scgi Accept SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --th-trace Trace TH1 execution (for debugging purposes) ** --usepidkey Use saved encryption key from parent process. This is ** only necessary when using SEE on Windows or Linux. ** |
︙ | ︙ | |||
3173 3174 3175 3176 3177 3178 3179 | int fCreate = 0; /* The --create flag */ int fNoBrowser = 0; /* Do not auto-launch web-browser */ const char *zInitPage = 0; /* Start on this page. --page option */ int findServerArg = 2; /* argv index for find_server_repository() */ char *zRemote = 0; /* Remote host on which to run "fossil ui" */ const char *zJsMode; /* The --jsmode parameter */ const char *zFossilCmd =0; /* Name of "fossil" binary on remote system */ | | | 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 | int fCreate = 0; /* The --create flag */ int fNoBrowser = 0; /* Do not auto-launch web-browser */ const char *zInitPage = 0; /* Start on this page. --page option */ int findServerArg = 2; /* argv index for find_server_repository() */ char *zRemote = 0; /* Remote host on which to run "fossil ui" */ const char *zJsMode; /* The --jsmode parameter */ const char *zFossilCmd =0; /* Name of "fossil" binary on remote system */ #if USE_SEE db_setup_for_saved_encryption_key(); #endif #if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ |
︙ | ︙ | |||
3323 3324 3325 3326 3327 3328 3329 | }else{ iPort = db_get_int("http-port", 8080); mxPort = iPort+100; } if( isUiCmd && !fNoBrowser ){ char *zBrowserArg; const char *zProtocol = g.httpUseSSL ? "https" : "http"; | | > > > > > | < | | | | | > > > | | | | | | | | | | | | | > > > > | | | | | | | | | > > > > > > | | | | | | | > | > | 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 3335 3336 3337 3338 3339 3340 3341 3342 3343 3344 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 3408 3409 3410 3411 3412 3413 | }else{ iPort = db_get_int("http-port", 8080); mxPort = iPort+100; } if( isUiCmd && !fNoBrowser ){ char *zBrowserArg; const char *zProtocol = g.httpUseSSL ? "https" : "http"; db_open_config(0,0); zBrowser = fossil_web_browser(); if( zIpAddr==0 ){ zBrowserArg = mprintf("%s://localhost:%%d/%s", zProtocol, zInitPage); }else if( strchr(zIpAddr,':') ){ zBrowserArg = mprintf("%s://[%s]:%%d/%s", zProtocol, zIpAddr, zInitPage); }else{ zBrowserArg = mprintf("%s://%s:%%d/%s", zProtocol, zIpAddr, zInitPage); } zBrowserCmd = mprintf("%s %!$ &", zBrowser, zBrowserArg); fossil_free(zBrowserArg); } if( zRemote ){ /* If a USER@HOST:REPO argument is supplied, then use SSH to run ** "fossil ui --nobrowser" on the remote system and to set up a ** tunnel from the local machine to the remote.
*/ FILE *sshIn; Blob ssh; int bRunning = 0; /* True when fossil starts up on the remote */ int isRetry; /* True if on the second attempt */ char zLine[1000]; blob_init(&ssh, 0, 0); for(isRetry=0; isRetry<2 && !bRunning; isRetry++){ blob_reset(&ssh); transport_ssh_command(&ssh); blob_appendf(&ssh, " -t -L 127.0.0.1:%d:127.0.0.1:%d %!$", iPort, iPort, zRemote ); if( zFossilCmd==0 ){ if( ssh_needs_path_argument(zRemote,-1) ^ isRetry ){ ssh_add_path_argument(&ssh); } blob_append_escaped_arg(&ssh, "fossil", 1); }else{ blob_appendf(&ssh, " %$", zFossilCmd); } blob_appendf(&ssh, " ui --nobrowser --localauth --port %d", iPort); if( zNotFound ) blob_appendf(&ssh, " --notfound %!$", zNotFound); if( zFileGlob ) blob_appendf(&ssh, " --files-urlenc %T", zFileGlob); if( g.zCkoutAlias ) blob_appendf(&ssh," --ckout-alias %!$",g.zCkoutAlias); if( g.zExtRoot ) blob_appendf(&ssh, " --extroot %$", g.zExtRoot); if( skin_in_use() ) blob_appendf(&ssh, " --skin %s", skin_in_use()); if( zJsMode ) blob_appendf(&ssh, " --jsmode %s", zJsMode); if( fCreate ) blob_appendf(&ssh, " --create"); blob_appendf(&ssh, " %$", g.argv[2]); if( isRetry ){ fossil_print("First attempt to run \"fossil\" on %s failed\n" "Retry: ", zRemote); } fossil_print("%s\n", blob_str(&ssh)); sshIn = popen(blob_str(&ssh), "r"); if( sshIn==0 ){ fossil_fatal("unable to %s", blob_str(&ssh)); } while( fgets(zLine, sizeof(zLine), sshIn) ){ fputs(zLine, stdout); fflush(stdout); if( !bRunning && sqlite3_strglob("*Listening for HTTP*",zLine)==0 ){ bRunning = 1; if( isRetry ){ ssh_needs_path_argument(zRemote,99); } db_close_config(); if( zBrowserCmd ){ char *zCmd = mprintf(zBrowserCmd/*works-like:"%d"*/,iPort); fossil_system(zCmd); fossil_free(zCmd); fossil_free(zBrowserCmd); zBrowserCmd = 0; } } } pclose(sshIn); } fossil_free(zBrowserCmd); return; } if( g.repositoryOpen ) flags |= HTTP_SERVER_HAD_REPOSITORY; if( g.localOpen ) flags |= HTTP_SERVER_HAD_CHECKOUT; db_close(1); #if !defined(_WIN32) |
︙ | ︙ |
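The zRemote branch above assembles an ssh invocation piece by piece via blob_appendf(). Reconstructed from those calls, the resulting command is roughly the following, where the user, host, port number, and repository path are placeholder values (and the optional --notfound, --skin, --jsmode, and related arguments are omitted):

```sh
ssh -t -L 127.0.0.1:8080:127.0.0.1:8080 user@host \
    fossil ui --nobrowser --localauth --port 8080 path/to/repo.fossil
```

The -L option forwards local port 8080 to the same port on the remote loopback interface, which is why the locally launched browser can reach the remote "fossil ui" instance.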
Changes to src/main.mk.
︙ | ︙ | |||
189 190 191 192 193 194 195 196 197 198 199 200 201 202 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ $(SRCDIR)/../skins/khaki/css.txt \ $(SRCDIR)/../skins/khaki/details.txt \ $(SRCDIR)/../skins/khaki/footer.txt \ $(SRCDIR)/../skins/khaki/header.txt \ $(SRCDIR)/../skins/original/css.txt \ $(SRCDIR)/../skins/original/details.txt \ $(SRCDIR)/../skins/original/footer.txt \ | > > > > | 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ $(SRCDIR)/../skins/etienne/css.txt \ $(SRCDIR)/../skins/etienne/details.txt \ $(SRCDIR)/../skins/etienne/footer.txt \ $(SRCDIR)/../skins/etienne/header.txt \ $(SRCDIR)/../skins/khaki/css.txt \ $(SRCDIR)/../skins/khaki/details.txt \ $(SRCDIR)/../skins/khaki/footer.txt \ $(SRCDIR)/../skins/khaki/header.txt \ $(SRCDIR)/../skins/original/css.txt \ $(SRCDIR)/../skins/original/details.txt \ $(SRCDIR)/../skins/original/footer.txt \ |
︙ | ︙ |
Changes to src/manifest.c.
︙ | ︙ | |||
1224 1225 1226 1227 1228 1229 1230 | ** control artifact. Make a copy, and run it through the official ** artifact parser. This is the slow path, but it is rarely taken. */ blob_init(©, 0, 0); blob_init(&errmsg, 0, 0); blob_append(©, zIn, nIn); pManifest = manifest_parse(©, 0, &errmsg); | | | 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 | ** control artifact. Make a copy, and run it through the official ** artifact parser. This is the slow path, but it is rarely taken. */ blob_init(©, 0, 0); blob_init(&errmsg, 0, 0); blob_append(©, zIn, nIn); pManifest = manifest_parse(©, 0, &errmsg); iRes = pManifest!=0; manifest_destroy(pManifest); blob_reset(&errmsg); return iRes; } /* ** COMMAND: test-parse-manifest |
︙ | ︙ | |||
1336 1337 1338 1339 1340 1341 1342 | id, blob_str(&err)); nErr++; }else if( !isWF && p!=0 ){ fossil_print("%d ERROR: manifest_is_well_formed() reported false " "but manifest_parse() found nothing wrong.\n", id); nErr++; } | | | 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 | id, blob_str(&err)); nErr++; }else if( !isWF && p!=0 ){ fossil_print("%d ERROR: manifest_is_well_formed() reported false " "but manifest_parse() found nothing wrong.\n", id); nErr++; } }else{ p = manifest_get(id, CFTYPE_ANY, &err); if( p==0 ){ fossil_print("%d ERROR: %s\n", id, blob_str(&err)); nErr++; } } blob_reset(&err); |
︙ | ︙ | |||
2111 2112 2113 2114 2115 2116 2117 | ** Activate EVENT triggers if they do not already exist. */ void manifest_create_event_triggers(void){ if( manifest_event_triggers_are_enabled ){ return; /* Triggers already exist. No-op. */ } alert_create_trigger(); | | | 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 | ** Activate EVENT triggers if they do not already exist. */ void manifest_create_event_triggers(void){ if( manifest_event_triggers_are_enabled ){ return; /* Triggers already exist. No-op. */ } alert_create_trigger(); manifest_event_triggers_are_enabled = 1; } /* ** Disable manifest event triggers. Drop them if they exist, but mark ** them as having been created so that they won't be recreated. This ** is used during "rebuild" to prevent triggers from firing then. */
︙ | ︙ |
Changes to src/markdown.c.
︙ | ︙ | |||
62 63 64 65 66 67 68 | void (*paragraph)(struct Blob *ob, struct Blob *text, void *opaque); void (*table)(struct Blob *ob, struct Blob *head_row, struct Blob *rows, void *opaque); void (*table_cell)(struct Blob *ob, struct Blob *text, int flags, void *opaque); void (*table_row)(struct Blob *ob, struct Blob *cells, int flags, void *opaque); | | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | void (*paragraph)(struct Blob *ob, struct Blob *text, void *opaque); void (*table)(struct Blob *ob, struct Blob *head_row, struct Blob *rows, void *opaque); void (*table_cell)(struct Blob *ob, struct Blob *text, int flags, void *opaque); void (*table_row)(struct Blob *ob, struct Blob *cells, int flags, void *opaque); void (*footnote_item)(struct Blob *ob, const struct Blob *text, int index, int nUsed, void *opaque); /* span level callbacks - NULL or return 0 prints the span verbatim */ int (*autolink)(struct Blob *ob, struct Blob *link, enum mkd_autolink type, void *opaque); int (*codespan)(struct Blob *ob, struct Blob *text, int nSep, void *opaque); int (*double_emphasis)(struct Blob *ob, struct Blob *text, |
︙ | ︙ | |||
380 381 382 383 384 385 386 | /* release the given working buffer back to the cache */ static void release_work_buffer(struct render *rndr, struct Blob *buf){ if( !buf ) return; rndr->iDepth--; blob_reset(buf); | > | | 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 | /* release the given working buffer back to the cache */ static void release_work_buffer(struct render *rndr, struct Blob *buf){ if( !buf ) return; rndr->iDepth--; blob_reset(buf); if( rndr->nBlobCache < (int)(sizeof(rndr->aBlobCache)/sizeof(rndr->aBlobCache[0])) ){ rndr->aBlobCache[rndr->nBlobCache++] = buf; }else{ fossil_free(buf); } } |
︙ | ︙ | |||
1615 1616 1617 1618 1619 1620 1621 | /* parse_blockquote -- handles parsing of a blockquote fragment */ static size_t parse_blockquote( struct Blob *ob, struct render *rndr, char *data, size_t size ){ | | > > > > > > > > > > > > > > > > > > > > > > > > > | | 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 | /* parse_blockquote -- handles parsing of a blockquote fragment */ static size_t parse_blockquote( struct Blob *ob, struct render *rndr, char *data, size_t size ){ size_t beg, end = 0, pre, work_size = 0, nb, endFence = 0; char *work_data = 0; struct Blob *out = new_work_buffer(rndr); /* Check to see if this is a quote of a fenced code block, because ** if it is, then blank lines do not terminate the quoted text. Ex: ** ** > ~~~~ ** First line ** ** Line after blank ** ~~~~ ** ** If this is a quoted fenced block, then set endFence to be the ** offset of the end of the fenced block. */ pre = prefix_quote(data,size); pre += is_empty(data+pre,size-pre); nb = prefix_fencedcode(data+pre,size-pre); if( nb ){ size_t i = 0; char delim = data[pre]; for(end=pre+nb; end<size && i<nb; end++){ if( data[end]==delim ) i++; else i = 0; } if( i>=nb ) endFence = end; } beg = 0; while( beg<size ){ for(end=beg+1; end<size && data[end-1]!='\n'; end++); pre = prefix_quote(data+beg, end-beg); if( pre ){ beg += pre; /* skipping prefix */ }else if( is_empty(data+beg, end-beg) && (end>=size || (end>endFence && prefix_quote(data+end, size-end)==0 && !is_empty(data+end, size-end))) ){ /* empty line followed by non-quote line */ break; } if( beg<end ){ /* copy into the in-place working buffer */ if( !work_data ){
︙ | ︙ | |||
1681 1682 1683 1684 1685 1686 1687 | ** "end" is left with a value such that data[end] is one byte ** past the first '\n' or one byte past the end of the string */ if( is_empty(data+i, size-i) || (level = is_headerline(data+i, size-i))!= 0 ){ break; } | | > > > > | 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 | ** "end" is left with a value such that data[end] is one byte ** past the first '\n' or one byte past the end of the string */ if( is_empty(data+i, size-i) || (level = is_headerline(data+i, size-i))!= 0 ){ break; } if( (i && data[i]=='#') || is_hrule(data+i, size-i) || prefix_uli(data+i, size-i) || prefix_oli(data+i, size-i) ){ end = i; break; } i = end; } work_size = i; |
︙ | ︙ | |||
2339 2340 2341 2342 2343 2344 2345 | beg += parse_blockcode(ob, rndr, txt_data, end); }else if( prefix_uli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, 0); }else if( prefix_oli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, MKD_LIST_ORDERED); }else if( has_table && is_tableline(txt_data, end) ){ beg += parse_table(ob, rndr, txt_data, end); | | | 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 | beg += parse_blockcode(ob, rndr, txt_data, end); }else if( prefix_uli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, 0); }else if( prefix_oli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, MKD_LIST_ORDERED); }else if( has_table && is_tableline(txt_data, end) ){ beg += parse_table(ob, rndr, txt_data, end); }else if( prefix_fencedcode(txt_data, end) && (i = char_codespan(ob, rndr, txt_data, 0, end))!=0 ){ beg += i; }else{ beg += parse_paragraph(ob, rndr, txt_data, end); } } |
︙ | ︙ |
Changes to src/markdown_html.c.
︙ | ︙ | |||
595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 | html_escape(ob, blob_buffer(link)+7, blob_size(link)-7); }else{ html_escape(ob, blob_buffer(link), blob_size(link)); } blob_append_literal(ob, "</a>"); return 1; } /* ** The nSrc bytes at zSrc[] are Pikchr input text (allegedly). Process that ** text and insert the result in place of the original. */ void pikchr_to_html( Blob *ob, /* Write the generated SVG here */ const char *zSrc, int nSrc, /* The Pikchr source text */ const char *zArg, int nArg /* Additional arguments */ ){ int pikFlags = PIKCHR_PROCESS_NONCE | PIKCHR_PROCESS_DIV | PIKCHR_PROCESS_SRC | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 | html_escape(ob, blob_buffer(link)+7, blob_size(link)-7); }else{ html_escape(ob, blob_buffer(link), blob_size(link)); } blob_append_literal(ob, "</a>"); return 1; } /* ** Flags for use with/via pikchr_to_html_add_flags(). */ static int pikchrToHtmlFlags = 0; /* ** Sets additional pikchr_process() flags to use for all future calls ** to pikchr_to_html(). This is intended to be used by commands such as ** test-wiki-render and test-markdown-render to set the ** PIKCHR_PROCESS_DARK_MODE flag for all embedded pikchr elements. ** ** Not all PIKCHR_PROCESS flags are legal, as pikchr_to_html() ** hard-codes a subset of flags and passing arbitrary flags here may ** interfere with that. ** ** The only tested/intended use of this function is to pass it either ** 0 or PIKCHR_PROCESS_DARK_MODE.
** ** Design note: this is not implemented as an additional argument to ** pikchr_to_html() because the commands for which dark-mode rendering ** is now supported (test-wiki-render and test-markdown-render) are ** far removed from their corresponding pikchr_to_html() calls and ** there is no direct path from those commands to those calls. A ** cleaner, but much more invasive, approach would be to add a flag to ** markdown_to_html(), extend the WIKI_... flags with ** WIKI_DARK_PIKCHR, and extend both wiki.c:Renderer and ** markdown_html.c:MarkdownToHtml to contain and pass on that flag. */ void pikchr_to_html_add_flags( int f ){ pikchrToHtmlFlags = f; } /* ** The nSrc bytes at zSrc[] are Pikchr input text (allegedly). Process that ** text and insert the result in place of the original. */ void pikchr_to_html( Blob *ob, /* Write the generated SVG here */ const char *zSrc, int nSrc, /* The Pikchr source text */ const char *zArg, int nArg /* Additional arguments */ ){ int pikFlags = PIKCHR_PROCESS_NONCE | PIKCHR_PROCESS_DIV | PIKCHR_PROCESS_SRC | PIKCHR_PROCESS_ERR_PRE | pikchrToHtmlFlags; Blob bSrc = empty_blob; const char *zPikVar; double rPikVar; while( nArg>0 ){ int i; for(i=0; i<nArg && !fossil_isspace(zArg[i]); i++){}
︙ | ︙ | |||
778 779 780 781 782 783 784 | ){ char *zLink = blob_buffer(link); char *zTitle = title!=0 && blob_size(title)>0 ? blob_str(title) : 0; char zClose[20]; if( zLink==0 || zLink[0]==0 ){ zClose[0] = 0; | | | | 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 | ){ char *zLink = blob_buffer(link); char *zTitle = title!=0 && blob_size(title)>0 ? blob_str(title) : 0; char zClose[20]; if( zLink==0 || zLink[0]==0 ){ zClose[0] = 0; }else{ static const int flags = WIKI_NOBADLINKS | WIKI_MARKDOWNLINKS ; wiki_resolve_hyperlink(ob, flags, zLink, zClose, sizeof(zClose), 0, zTitle); } if( blob_size(content)==0 ){ if( link ) blob_appendb(ob, link); |
︙ | ︙ |
Changes to src/merge.c.
︙ | ︙ | |||
134 135 136 137 138 139 140 | /* ** Add an entry to the FV table for all files renamed between ** version N and the version specified by vid. */ static void add_renames( const char *zFnCol, /* The FV column for the filename in vid */ int vid, /* The desired version's RID */ | | | 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 | /* ** Add an entry to the FV table for all files renamed between ** version N and the version specified by vid. */ static void add_renames( const char *zFnCol, /* The FV column for the filename in vid */ int vid, /* The desired version's RID */ int nid, /* The check-in rid for the name pivot */ int revOK, /* OK to move backwards (child->parent) if true */ const char *zDebug /* Generate trace output if not NULL */ ){ int nChng; /* Number of file name changes */ int *aChng; /* An array of file name changes */ int i; /* Loop counter */ find_filename_changes(nid, vid, revOK, &nChng, &aChng, zDebug);
︙ | ︙ | |||
266 267 268 269 270 271 272 | */ void test_show_vfile_cmd(void){ if( g.argc!=2 ){ fossil_fatal("unknown arguments to the %s command\n", g.argv[1]); } verify_all_options(); db_must_be_within_tree(); | | | 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 | */ void test_show_vfile_cmd(void){ if( g.argc!=2 ){ fossil_fatal("unknown arguments to the %s command\n", g.argv[1]); } verify_all_options(); db_must_be_within_tree(); debug_show_vfile(); } /* ** COMMAND: merge ** COMMAND: cherry-pick ** |
︙ | ︙ | |||
373 374 375 376 377 378 379 | /* Undocumented --debug and --show-vfile options: ** ** When included on the command-line, --debug causes lots of state ** information to be displayed. This option is undocumented as it ** might change or be eliminated in future releases. ** | | | 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 | /* Undocumented --debug and --show-vfile options: ** ** When included on the command-line, --debug causes lots of state ** information to be displayed. This option is undocumented as it ** might change or be eliminated in future releases. ** ** The --show-vfile flag does a dump of the VFILE table for reference. ** ** Hints: ** * Combine --debug and --verbose for still more output. ** * The --dry-run option is also useful in combination with --debug. */ debugFlag = find_option("debug",0,0)!=0; if( debugFlag && verboseFlag ) debugFlag = 2; |
︙ | ︙ |
Changes to src/merge3.c.
︙ | ︙ | |||
209 210 211 212 213 214 215 | int limit1, limit2; /* Sizes of aC1[] and aC2[] */ int nConflict = 0; /* Number of merge conflicts seen so far */ int useCrLf = 0; int ln1, ln2, lnPivot; /* Line numbers for all files */ DiffConfig DCfg; blob_zero(pOut); /* Merge results stored in pOut */ | | | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | int limit1, limit2; /* Sizes of aC1[] and aC2[] */ int nConflict = 0; /* Number of merge conflicts seen so far */ int useCrLf = 0; int ln1, ln2, lnPivot; /* Line numbers for all files */ DiffConfig DCfg; blob_zero(pOut); /* Merge results stored in pOut */ /* If both pV1 and pV2 start with a UTF-8 byte-order-mark (BOM), ** keep it in the output. This should be secure enough not to cause ** unintended changes to the merged file and is consistent with what ** users are using in their source files. */ if( starts_with_utf8_bom(pV1, 0) && starts_with_utf8_bom(pV2, 0) ){ blob_append(pOut, (char*)get_utf8_bom(0), -1);
︙ | ︙ |
Changes to src/name.c.
︙ | ︙ | |||
208 209 210 211 212 213 214 | ** Find the RID of the most recent object with symbolic tag zTag ** and having a type that matches zType. ** ** Return 0 if there are no matches. ** ** This is a tricky query to do efficiently. ** If the tag is very common (ex: "trunk") then | | | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 | ** Find the RID of the most recent object with symbolic tag zTag ** and having a type that matches zType. ** ** Return 0 if there are no matches. ** ** This is a tricky query to do efficiently. ** If the tag is very common (ex: "trunk") then ** we want to use the query identified below as Q1, which searches ** the most recent EVENT table entries for the most recent with the tag. ** But if the tag is relatively scarce (anything other than "trunk", basically) ** then we want to do the indexed search shown below as Q2. */ static int most_recent_event_with_tag(const char *zTag, const char *zType){ return db_int(0, "SELECT objid FROM ("
︙ | ︙ | |||
511 512 513 514 515 516 517 | return start_of_branch(rid, 0); } /* start:BR -> The first check-in on branch named BR */ if( strncmp(zTag, "start:", 6)==0 ){ rid = symbolic_name_to_rid(zTag+6, zType); return start_of_branch(rid, 1); | | | | | 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 | return start_of_branch(rid, 0); } /* start:BR -> The first check-in on branch named BR */ if( strncmp(zTag, "start:", 6)==0 ){ rid = symbolic_name_to_rid(zTag+6, zType); return start_of_branch(rid, 1); } /* merge-in:BR -> Most recent merge-in for the branch named BR */ if( strncmp(zTag, "merge-in:", 9)==0 ){ rid = symbolic_name_to_rid(zTag+9, zType); return start_of_branch(rid, 2); } /* symbolic-name ":" date-time */ nTag = strlen(zTag); for(i=0; i<nTag-8 && zTag[i]!=':'; i++){} if( zTag[i]==':' && (fossil_isdate(&zTag[i+1]) || fossil_expand_datetime(&zTag[i+1],0)!=0) ){ char *zDate = mprintf("%s", &zTag[i+1]); char *zTagBase = mprintf("%.*s", i, zTag); char *zXDate; int nDate = strlen(zDate); if( sqlite3_strnicmp(&zDate[nDate-3],"utc",3)==0 ){ |
︙ | ︙ | |||
817 818 819 820 821 822 823 824 825 826 827 828 829 830 | fossil_fatal("cannot resolve name: %s", zName); } return rid; } int name_to_rid(const char *zName){ return name_to_typed_rid(zName, "*"); } /* ** WEBPAGE: ambiguous ** URL: /ambiguous?name=NAME&src=WEBPAGE ** ** The NAME given by the name parameter is ambiguous. Display a page ** that shows all possible choices and let the user select between them. | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 | fossil_fatal("cannot resolve name: %s", zName); } return rid; } int name_to_rid(const char *zName){ return name_to_typed_rid(zName, "*"); } /* ** Try to resolve zQP1 into a check-in name. If zQP1 does not exist, ** return 0. If zQP1 exists but cannot be resolved, then also try to ** resolve zQP2 if it exists. If zQP1 cannot be resolved but zQP2 does ** not exist, then raise an error. If both zQP1 and zQP2 exist but ** neither can be resolved, also raise an error. ** ** If pzPick is not a NULL pointer, then *pzPick is set to the value of ** whichever query parameter ended up being used. */ int name_choice(const char *zQP1, const char *zQP2, const char **pzPick){ const char *zName, *zName2; int rid; zName = P(zQP1); if( zName==0 || zName[0]==0 ) return 0; rid = symbolic_name_to_rid(zName, "ci"); if( rid>0 ){ if( pzPick ) *pzPick = zName; return rid; } if( rid<0 ){ fossil_fatal("ambiguous name: %s", zName); } zName2 = P(zQP2); if( zName2==0 || zName2[0]==0 ){ fossil_fatal("cannot resolve name: %s", zName); } if( pzPick ) *pzPick = zName2; return name_to_typed_rid(zName2, "ci"); } /* ** WEBPAGE: ambiguous ** URL: /ambiguous?name=NAME&src=WEBPAGE ** ** The NAME given by the name parameter is ambiguous. Display a page ** that shows all possible choices and let the user select between them.
︙ | ︙ | |||
1085 1086 1087 1088 1089 1090 1091 | " coalesce(euser,user), coalesce(ecomment,comment)" " FROM mlink, filename, blob, event" " WHERE mlink.fid=%d" " AND filename.fnid=mlink.fnid" " AND event.objid=mlink.mid" " AND blob.rid=mlink.mid" " ORDER BY event.mtime %s /*sort*/", | | | 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 | " coalesce(euser,user), coalesce(ecomment,comment)" " FROM mlink, filename, blob, event" " WHERE mlink.fid=%d" " AND filename.fnid=mlink.fnid" " AND event.objid=mlink.mid" " AND blob.rid=mlink.mid" " ORDER BY event.mtime %s /*sort*/", rid, (flags & WHATIS_BRIEF) ? "LIMIT 1" : "DESC"); while( db_step(&q)==SQLITE_ROW ){ if( flags & WHATIS_BRIEF ){ fossil_print("mtime: %s\n", db_column_text(&q,2)); } fossil_print("file: %s\n", db_column_text(&q,0)); fossil_print(" part of [%S] by %s on %s\n", |
︙ | ︙ | |||
1161 1162 1163 1164 1165 1166 1167 | */ void whatis_artifact( const char *zName, /* Symbolic name or full hash */ const char *zFileName,/* Optional: original filename (in file mode) */ const char *zType, /* Artifact type filter */ int mFlags /* WHATIS_* flags */ ){ | < < < < < > > > | > > > > > > | | 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 | */ void whatis_artifact( const char *zName, /* Symbolic name or full hash */ const char *zFileName,/* Optional: original filename (in file mode) */ const char *zType, /* Artifact type filter */ int mFlags /* WHATIS_* flags */ ){ int rid = symbolic_name_to_rid(zName, zType); if( rid<0 ){ Stmt q; int cnt = 0; if( mFlags & WHATIS_REPO ){ fossil_print("\nrepository: %s\n", g.zRepositoryName); } if( zFileName ){ fossil_print("%-12s%s\n", "name:", zFileName); } fossil_print("%-12s%s (ambiguous)\n", "hash:", zName); db_prepare(&q, "SELECT rid FROM blob WHERE uuid>=lower(%Q) AND uuid<(lower(%Q)||'z')", zName, zName ); while( db_step(&q)==SQLITE_ROW ){ if( cnt++ ) fossil_print("%12s---- meaning #%d ----\n", " ", cnt); whatis_rid(db_column_int(&q, 0), mFlags); } db_finalize(&q); }else if( rid==0 ){ if( (mFlags & WHATIS_OMIT_UNK)==0 ){ /* 0123456789 12 */ if( zFileName ){ fossil_print("%-12s%s\n", "name:", zFileName); } fossil_print("unknown: %s\n", zName); } }else{ if( mFlags & WHATIS_REPO ){ fossil_print("\nrepository: %s\n", g.zRepositoryName); } if( zFileName ){ zName = zFileName; } fossil_print("%-12s%s\n", "name:", zName); whatis_rid(rid, mFlags); } } /* ** COMMAND: whatis* ** |
︙ | ︙ |
Changes to src/patch.c.
︙ | ︙ | |||
43 44 45 46 47 48 49 50 51 52 53 54 55 56 | /* ** Flags passed from the main patch_cmd() routine into subfunctions used ** to implement the various subcommands. */ #define PATCH_DRYRUN 0x0001 #define PATCH_VERBOSE 0x0002 #define PATCH_FORCE 0x0004 /* ** Implementation of the "readfile(X)" SQL function. The entire content ** of the check-out file named X is read and returned as a BLOB. */ static void readfileFunc( sqlite3_context *context, | > | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 | /* ** Flags passed from the main patch_cmd() routine into subfunctions used ** to implement the various subcommands. */ #define PATCH_DRYRUN 0x0001 #define PATCH_VERBOSE 0x0002 #define PATCH_FORCE 0x0004 #define PATCH_RETRY 0x0008 /* Second attempt */ /* ** Implementation of the "readfile(X)" SQL function. The entire content ** of the check-out file named X is read and returned as a BLOB. */ static void readfileFunc( sqlite3_context *context, |
︙ | ︙ | |||
69 70 71 72 73 74 75 | } /* ** mkdelta(X,Y) ** ** X is a numeric artifact id. Y is a filename. ** | | | 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 | } /* ** mkdelta(X,Y) ** ** X is a numeric artifact id. Y is a filename. ** ** Compute a compressed delta that carries X into Y. Or return ** a zero-length blob if X is equal to Y. */ static void mkdeltaFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){
︙ | ︙ | |||
130 131 132 133 134 135 136 | SQLITE_TRANSIENT); blob_reset(&x); } /* ** Generate a binary patch file and store it into the file | | > > | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 | SQLITE_TRANSIENT); blob_reset(&x); } /* ** Generate a binary patch file and store it into the file ** named zOut. Or if zOut is NULL, write it into out. ** ** Return the number of errors. */ void patch_create(unsigned mFlags, const char *zOut, FILE *out){ int vid; char *z; if( zOut && file_isdir(zOut, ExtFILE)!=0 ){ if( mFlags & PATCH_FORCE ){ |
︙ | ︙ | |||
160 161 162 163 164 165 166 | "PRAGMA patch.page_size=512;\n" "CREATE TABLE patch.chng(\n" " pathname TEXT,\n" /* Filename */ " origname TEXT,\n" /* Name before rename. NULL if not renamed */ " hash TEXT,\n" /* Baseline hash. NULL for new files. */ " isexe BOOL,\n" /* True if executable */ " islink BOOL,\n" /* True if is a symbolic link */ | | | 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 | "PRAGMA patch.page_size=512;\n" "CREATE TABLE patch.chng(\n" " pathname TEXT,\n" /* Filename */ " origname TEXT,\n" /* Name before rename. NULL if not renamed */ " hash TEXT,\n" /* Baseline hash. NULL for new files. */ " isexe BOOL,\n" /* True if executable */ " islink BOOL,\n" /* True if is a symbolic link */ " delta BLOB\n" /* compressed delta. NULL if deleted. ** length 0 if unchanged */ ");" "CREATE TABLE patch.cfg(\n" " key TEXT,\n" " value ANY\n" ");" ); |
︙ | ︙ | |||
194 195 196 197 198 199 200 | ";", vid, g.zLocalRoot, g.zRepositoryName, g.zLogin); z = fossil_hostname(); if( z ){ db_multi_exec( "INSERT INTO patch.cfg(key,value)VALUES('hostname',%Q)", z); fossil_free(z); } | | | 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 | ";", vid, g.zLocalRoot, g.zRepositoryName, g.zLogin); z = fossil_hostname(); if( z ){ db_multi_exec( "INSERT INTO patch.cfg(key,value)VALUES('hostname',%Q)", z); fossil_free(z); } /* New files */ db_multi_exec( "INSERT INTO patch.chng(pathname,hash,isexe,islink,delta)" " SELECT pathname, NULL, isexe, islink," " compress(read_co_file(%Q||pathname))" " FROM vfile WHERE rid==0;", g.zLocalRoot |
︙ | ︙ | |||
246 247 248 249 250 251 252 | if( pData==0 ){ fossil_fatal("out of memory"); } #ifdef _WIN32 fflush(out); _setmode(_fileno(out), _O_BINARY); #endif | | < > > | > > > > > | 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | if( pData==0 ){ fossil_fatal("out of memory"); } #ifdef _WIN32 fflush(out); _setmode(_fileno(out), _O_BINARY); #endif fwrite(pData, 1, sz, out); fflush(out); sqlite3_free(pData); } db_multi_exec("DETACH patch;"); } /* ** Attempt to load and validate a patchfile identified by the first ** argument. */ void patch_attach(const char *zIn, FILE *in, int bIgnoreEmptyPatch){ Stmt q; if( g.db==0 ){ sqlite3_open(":memory:", &g.db); } if( zIn==0 ){ Blob buf; int rc; int sz; unsigned char *pData; blob_init(&buf, 0, 0); #ifdef _WIN32 _setmode(_fileno(in), _O_BINARY); #endif sz = blob_read_from_channel(&buf, in, -1); pData = (unsigned char*)blob_buffer(&buf); if( sz<512 ){ blob_reset(&buf); if( bIgnoreEmptyPatch ) return; fossil_fatal("input is too small to be a patch file"); } db_multi_exec("ATTACH ':memory:' AS patch"); if( g.fSqlTrace ){ fossil_trace("-- deserialize(\"patch\", pData, %lld);\n", sz); } rc = sqlite3_deserialize(g.db, "patch", pData, sz, sz, 0); if( rc ){ fossil_fatal("cannot open patch database: %s", sqlite3_errmsg(g.db)); |
︙ | ︙ | |||
299 300 301 302 303 304 305 | } /* ** Show a summary of the content of a patch on standard output */ void patch_view(unsigned mFlags){ Stmt q; | | | | 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 | } /* ** Show a summary of the content of a patch on standard output */ void patch_view(unsigned mFlags){ Stmt q; db_prepare(&q, "WITH nmap(nkey,nm) AS (VALUES" "('baseline','BASELINE')," "('project-name','PROJECT-NAME'))" "SELECT nm, value FROM nmap, patch.cfg WHERE nkey=key;" ); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%-12s %s\n", db_column_text(&q,0), db_column_text(&q,1)); } db_finalize(&q); if( mFlags & PATCH_VERBOSE ){ db_prepare(&q, "WITH nmap(nkey,nm,isDate) AS (VALUES" "('project-code','PROJECT-CODE',0)," "('date','TIMESTAMP',1)," "('user','USER',0)," "('hostname','HOSTNAME',0)," "('ckout','CHECKOUT',0)," "('repo','REPOSITORY',0))" |
︙ | ︙ | |||
431 432 433 434 435 436 437 | blob_append_escaped_arg(&cmd, g.nameOfExe, 1); if( strcmp(zType,"merge")==0 ){ blob_appendf(&cmd, " merge %s\n", db_column_text(&q,1)); }else{ blob_appendf(&cmd, " merge --%s %s\n", zType, db_column_text(&q,1)); } if( mFlags & PATCH_VERBOSE ){ | | | 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 | blob_append_escaped_arg(&cmd, g.nameOfExe, 1); if( strcmp(zType,"merge")==0 ){ blob_appendf(&cmd, " merge %s\n", db_column_text(&q,1)); }else{ blob_appendf(&cmd, " merge --%s %s\n", zType, db_column_text(&q,1)); } if( mFlags & PATCH_VERBOSE ){ fossil_print("%-10s %s\n", db_column_text(&q,2), db_column_text(&q,0)); } } db_finalize(&q); if( mFlags & PATCH_DRYRUN ){ fossil_print("%s", blob_str(&cmd)); }else{ |
︙ | ︙ | |||
559 560 561 562 563 564 565 | }else{ blob_append_escaped_arg(&cmd, g.nameOfExe, 1); blob_appendf(&cmd, " add %$\n", zPathname); if( mFlags & PATCH_VERBOSE ){ fossil_print("%-10s %s\n", "NEW", zPathname); } } | | | 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 | }else{ blob_append_escaped_arg(&cmd, g.nameOfExe, 1); blob_appendf(&cmd, " add %$\n", zPathname); if( mFlags & PATCH_VERBOSE ){ fossil_print("%-10s %s\n", "NEW", zPathname); } } if( (mFlags & PATCH_DRYRUN)==0 ){ if( isLink ){ symlink_create(blob_str(&data), zPathname); }else{ blob_write_to_file(&data, zPathname); } file_setexe(zPathname, isExe); blob_reset(&data); |
︙ | ︙ | |||
static FILE *patch_remote_command( unsigned mFlags, /* flags */ const char *zThisCmd, /* "push" or "pull" */ const char *zRemoteCmd, /* "apply" or "create" */ const char *zFossilCmd, /* Name of "fossil" on remote system */ const char *zRW /* "w" or "r" */ ){ | | | | > > > < > | > > > | > > > > > > > > > > > > > > > > > > > > | 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 | static FILE *patch_remote_command( unsigned mFlags, /* flags */ const char *zThisCmd, /* "push" or "pull" */ const char *zRemoteCmd, /* "apply" or "create" */ const char *zFossilCmd, /* Name of "fossil" on remote system */ const char *zRW /* "w" or "r" */ ){ char *zRemote = 0; char *zDir = 0; Blob cmd; FILE *f = 0; Blob flgs; char *zForce = 0; int isRetry = (mFlags & PATCH_RETRY)!=0; blob_init(&flgs, 0, 0); blob_init(&cmd, 0, 0); if( mFlags & PATCH_FORCE ) blob_appendf(&flgs, " -f"); if( mFlags & PATCH_VERBOSE ) blob_appendf(&flgs, " -v"); if( mFlags & PATCH_DRYRUN ) blob_appendf(&flgs, " -n"); zForce = blob_size(&flgs)>0 ?
blob_str(&flgs) : ""; if( g.argc!=4 ){ usage(mprintf("%s [USER@]HOST:DIRECTORY", zThisCmd)); } zRemote = fossil_strdup(g.argv[3]); zDir = (char*)file_skip_userhost(zRemote); if( zDir==0 ){ if( isRetry ) goto remote_command_error; zDir = zRemote; blob_append_escaped_arg(&cmd, g.nameOfExe, 1); blob_appendf(&cmd, " patch %s%s %$ -", zRemoteCmd, zForce, zDir); }else{ Blob remote; *(char*)(zDir-1) = 0; transport_ssh_command(&cmd); blob_appendf(&cmd, " -T"); blob_append_escaped_arg(&cmd, zRemote, 0); blob_init(&remote, 0, 0); if( zFossilCmd==0 ){ if( ssh_needs_path_argument(zRemote,-1) ^ isRetry ){ ssh_add_path_argument(&cmd); } zFossilCmd = "fossil"; }else if( mFlags & PATCH_RETRY ){ goto remote_command_error; } blob_appendf(&remote, "%$ patch %s%s --dir64 %z -", zFossilCmd, zRemoteCmd, zForce, encode64(zDir, -1)); blob_append_escaped_arg(&cmd, blob_str(&remote), 0); blob_reset(&remote); } if( isRetry ){ fossil_print("First attempt to run \"fossil\" on %s failed\n" "Retry: ", zRemote); } fossil_print("%s\n", blob_str(&cmd)); fflush(stdout); f = popen(blob_str(&cmd), zRW); if( f==0 ){ fossil_fatal("cannot run command: %s", blob_str(&cmd)); } remote_command_error: fossil_free(zRemote); blob_reset(&cmd); blob_reset(&flgs); return f; } /* ** Toggle the use-path-for-ssh setting for the remote host defined ** by g.argv[3]. */ static void patch_toggle_ssh_needs_path(void){ char *zRemote = fossil_strdup(g.argv[3]); char *zDir = (char*)file_skip_userhost(zRemote); if( zDir ){ *(char*)(zDir - 1) = 0; ssh_needs_path_argument(zRemote, 99); } fossil_free(zRemote); } /* ** Show a diff for the patch currently loaded into database "patch". */ static void patch_diff( unsigned mFlags, /* Patch flags. only -f is allowed */ DiffConfig *pCfg /* Diff options */ |
︙ | ︙ | |||
775 776 777 778 779 780 781 | " FROM patch.chng" " ORDER BY pathname" ); while( db_step(&q)==SQLITE_ROW ){ int rid; const char *zName; Blob a, b; | | | 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 | " FROM patch.chng" " ORDER BY pathname" ); while( db_step(&q)==SQLITE_ROW ){ int rid; const char *zName; Blob a, b; if( db_column_type(&q,0)!=SQLITE_INTEGER && db_column_type(&q,4)==SQLITE_TEXT ){ char *zUuid = fossil_strdup(db_column_text(&q,4)); char *zName = fossil_strdup(db_column_text(&q,1)); if( mFlags & PATCH_FORCE ){ fossil_print("ERROR cannot find base artifact %S for file \"%s\"\n", |
︙ | ︙ | |||
797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 | fossil_fatal("base artifact %S for file \"%s\" not found", zUuid, zName); } } zName = db_column_text(&q, 1); rid = db_column_int(&q, 0); if( db_column_type(&q,3)==SQLITE_NULL ){ if( !bWebpage ) fossil_print("DELETE %s\n", zName); diff_print_index(zName, pCfg, 0); content_get(rid, &a); diff_file_mem(&a, &empty, zName, pCfg); }else if( rid==0 ){ db_ephemeral_blob(&q, 3, &a); blob_uncompress(&a, &a); if( !bWebpage ) fossil_print("ADDED %s\n", zName); diff_print_index(zName, pCfg, 0); diff_file_mem(&empty, &a, zName, pCfg); blob_reset(&a); }else if( db_column_bytes(&q, 3)>0 ){ Blob delta; db_ephemeral_blob(&q, 3, &delta); blob_uncompress(&delta, &delta); | > > > | 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 | fossil_fatal("base artifact %S for file \"%s\" not found", zUuid, zName); } } zName = db_column_text(&q, 1); rid = db_column_int(&q, 0); pCfg->diffFlags &= (~DIFF_FILE_MASK); if( db_column_type(&q,3)==SQLITE_NULL ){ if( !bWebpage ) fossil_print("DELETE %s\n", zName); pCfg->diffFlags |= DIFF_FILE_DELETED; diff_print_index(zName, pCfg, 0); content_get(rid, &a); diff_file_mem(&a, &empty, zName, pCfg); }else if( rid==0 ){ db_ephemeral_blob(&q, 3, &a); blob_uncompress(&a, &a); if( !bWebpage ) fossil_print("ADDED %s\n", zName); pCfg->diffFlags |= DIFF_FILE_ADDED; diff_print_index(zName, pCfg, 0); diff_file_mem(&empty, &a, zName, pCfg); blob_reset(&a); }else if( db_column_bytes(&q, 3)>0 ){ Blob delta; db_ephemeral_blob(&q, 3, &delta); blob_uncompress(&delta, &delta); |
︙ | ︙ | |||
894 895 896 897 898 899 900 | ** ** Command-line options: ** ** -f|--force Apply the patch even though there are unsaved ** changes in the current check-out. Unsaved ** changes will be reverted and then the patch is ** applied. | | | 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 | ** ** Command-line options: ** ** -f|--force Apply the patch even though there are unsaved ** changes in the current check-out. Unsaved ** changes will be reverted and then the patch is ** applied. ** --fossilcmd EXE Name of the "fossil" executable on the remote ** -n|--dry-run Do nothing, but print what would have happened ** -v|--verbose Extra output explaining what happens ** ** ** > fossil patch pull REMOTE-CHECKOUT ** ** Like "fossil patch push" except that the transfer is from remote |
︙ | ︙ | |||
929 930 931 932 933 934 935 | char *zIn; unsigned flags = 0; if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; zIn = patch_find_patch_filename("apply"); db_must_be_within_tree(); | | | 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 | char *zIn; unsigned flags = 0; if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; zIn = patch_find_patch_filename("apply"); db_must_be_within_tree(); patch_attach(zIn, stdin, 0); patch_apply(flags); fossil_free(zIn); }else if( strncmp(zCmd, "create", n)==0 ){ char *zOut; unsigned flags = 0; if( find_option("force","f",0) ) flags |= PATCH_FORCE; |
︙ | ︙ | |||
958 959 960 961 962 963 964 | return; } db_find_and_open_repository(0, 0); if( find_option("force","f",0) ) flags |= PATCH_FORCE; diff_options(&DCfg, zCmd[0]=='g', 0); verify_all_options(); zIn = patch_find_patch_filename("apply"); | | | | > > > > > > > > > > | > > > > > > > > > | | | 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 | return; } db_find_and_open_repository(0, 0); if( find_option("force","f",0) ) flags |= PATCH_FORCE; diff_options(&DCfg, zCmd[0]=='g', 0); verify_all_options(); zIn = patch_find_patch_filename("apply"); patch_attach(zIn, stdin, 0); patch_diff(flags, &DCfg); fossil_free(zIn); }else if( strncmp(zCmd, "pull", n)==0 ){ FILE *pIn = 0; unsigned flags = 0; const char *zFossilCmd = find_option("fossilcmd",0,1); if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; db_must_be_within_tree(); verify_all_options(); pIn = patch_remote_command(flags & (~PATCH_FORCE), "pull", "create", zFossilCmd, "r"); if( pIn ){ patch_attach(0, pIn, 1); if( pclose(pIn) ){ flags |= PATCH_RETRY; pIn = patch_remote_command(flags & (~PATCH_FORCE), "pull", "create", zFossilCmd, "r"); if( pIn ){ patch_attach(0, pIn, 0); if( pclose(pIn)==0 ){ patch_toggle_ssh_needs_path(); } } } patch_apply(flags); } }else if( strncmp(zCmd, "push", n)==0 ){ FILE *pOut = 0; unsigned flags = 0; const char *zFossilCmd = find_option("fossilcmd",0,1); if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE;
db_must_be_within_tree(); verify_all_options(); pOut = patch_remote_command(flags, "push", "apply", zFossilCmd, "w"); if( pOut ){ patch_create(0, 0, pOut); if( pclose(pOut)!=0 ){ flags |= PATCH_RETRY; pOut = patch_remote_command(flags, "push", "apply", zFossilCmd, "w"); if( pOut ){ patch_create(0, 0, pOut); if( pclose(pOut)==0 ){ patch_toggle_ssh_needs_path(); } } } } }else if( strncmp(zCmd, "view", n)==0 ){ const char *zIn; unsigned int flags = 0; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; verify_all_options(); if( g.argc!=4 ){ usage("view FILENAME"); } zIn = g.argv[3]; if( fossil_strcmp(zIn, "-")==0 ) zIn = 0; patch_attach(zIn, stdin, 0); patch_view(flags); }else { goto patch_usage; } } |
Changes to src/pikchrshow.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 27 28 29 30 31 32 33 | #include "config.h" #include <assert.h> #include <ctype.h> #include "pikchrshow.h" #if INTERFACE /* These are described in pikchr_process()'s docs. */ #define PIKCHR_PROCESS_PASSTHROUGH 0x0003 /* Pass through these flags */ #define PIKCHR_PROCESS_TH1 0x0004 #define PIKCHR_PROCESS_TH1_NOSVG 0x0008 #define PIKCHR_PROCESS_NONCE 0x0010 #define PIKCHR_PROCESS_ERR_PRE 0x0020 #define PIKCHR_PROCESS_SRC 0x0040 #define PIKCHR_PROCESS_DIV 0x0080 | > > > > | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | #include "config.h" #include <assert.h> #include <ctype.h> #include "pikchrshow.h" #if INTERFACE /* These are described in pikchr_process()'s docs. */ /* The first two must match the values from pikchr.c */ #define PIKCHR_PROCESS_PLAINTEXT_ERRORS 0x0001 #define PIKCHR_PROCESS_DARK_MODE 0x0002 /* end of flags supported directly by pikchr() */ #define PIKCHR_PROCESS_PASSTHROUGH 0x0003 /* Pass through these flags */ #define PIKCHR_PROCESS_TH1 0x0004 #define PIKCHR_PROCESS_TH1_NOSVG 0x0008 #define PIKCHR_PROCESS_NONCE 0x0010 #define PIKCHR_PROCESS_ERR_PRE 0x0020 #define PIKCHR_PROCESS_SRC 0x0040 #define PIKCHR_PROCESS_DIV 0x0080 |
︙ | ︙ | |||
133 134 135 136 137 138 139 | ) & pikFlags){ pikFlags |= PIKCHR_PROCESS_DIV; } if(!(PIKCHR_PROCESS_TH1 & pikFlags) /* If any TH1_xxx flags are set, set TH1 */ && (PIKCHR_PROCESS_TH1_NOSVG & pikFlags || thFlags!=0)){ pikFlags |= PIKCHR_PROCESS_TH1; | | | 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 | ) & pikFlags){ pikFlags |= PIKCHR_PROCESS_DIV; } if(!(PIKCHR_PROCESS_TH1 & pikFlags) /* If any TH1_xxx flags are set, set TH1 */ && (PIKCHR_PROCESS_TH1_NOSVG & pikFlags || thFlags!=0)){ pikFlags |= PIKCHR_PROCESS_TH1; } if(zNonce){ blob_appendf(pOut, "%s\n", zNonce); } if(PIKCHR_PROCESS_TH1 & pikFlags){ Blob out = empty_blob; isErr = Th_RenderToBlob(zIn, &out, thFlags) ? 1 : 0; |
︙ | ︙ | |||
544 545 546 547 548 549 550 | ** ** -div-source Set the 'source' CSS class on the div, which tells ** CSS to hide the SVG and reveal the source by default. ** ** -src Store the input pikchr's source code in the output as ** a separate element adjacent to the SVG one. Implied ** by -div-source. | | > > | 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 | ** ** -div-source Set the 'source' CSS class on the div, which tells ** CSS to hide the SVG and reveal the source by default. ** ** -src Store the input pikchr's source code in the output as ** a separate element adjacent to the SVG one. Implied ** by -div-source. ** ** ** -th Process the input using TH1 before passing it to pikchr ** ** -th-novar Disable $var and $<var> TH1 processing. Use this if the ** pikchr script uses '$' for its own purposes and that ** causes issues. This only affects parsing of '$' outside ** of TH1 script blocks. Code in such blocks is unaffected. ** ** -th-nosvg When using -th, output the post-TH1'd script ** instead of the pikchr-rendered output ** ** -th-trace Trace TH1 execution (for debugging purposes) ** ** -dark Change pikchr colors to assume a dark-mode theme. ** ** ** The -div-indent/center/left/right flags may not be combined. ** ** TH1-related Notes and Caveats: ** ** If the -th flag is used, this command must open a fossil database |
︙ | ︙ | |||
611 612 613 614 615 616 617 618 619 620 621 622 623 624 | } if(find_option("div-toggle",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_TOGGLE; } if(find_option("div-source",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_SOURCE | PIKCHR_PROCESS_SRC; } verify_all_options(); if(g.argc>4){ usage("?INFILE? ?OUTFILE?"); } if(g.argc>2){ zInfile = g.argv[2]; | > > > | 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 | } if(find_option("div-toggle",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_TOGGLE; } if(find_option("div-source",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_SOURCE | PIKCHR_PROCESS_SRC; } if(find_option("dark",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DARK_MODE; } verify_all_options(); if(g.argc>4){ usage("?INFILE? ?OUTFILE?"); } if(g.argc>2){ zInfile = g.argv[2]; |
︙ | ︙ |
Changes to src/pqueue.c.
︙ | ︙ | |||
40 41 42 43 44 45 46 | ** Integers must be positive. */ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ | < | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | ** Integers must be positive. */ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ double value; /* Value of element. Kept in ascending order */ } *a; }; #endif /* ** Initialize a PQueue structure |
︙ | ︙ | |||
72 73 74 75 76 77 78 | p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ | | < | < < | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ void pqueuex_insert(PQueue *p, int e, double v){ int i, j; if( p->cnt+1>p->sz ){ pqueuex_resize(p, p->cnt+5); } for(i=0; i<p->cnt; i++){ if( p->a[i].value>v ){ for(j=p->cnt; j>i; j--){ p->a[j] = p->a[j-1]; } break; } } p->a[i].id = e; p->a[i].value = v; p->cnt++; } /* ** Extract the first element from the queue (the element with ** the smallest value) and return its ID. Return 0 if the queue ** is empty. */ int pqueuex_extract(PQueue *p){ int e, i; if( p->cnt==0 ){ return 0; } e = p->a[0].id; for(i=0; i<p->cnt-1; i++){ p->a[i] = p->a[i+1]; } p->cnt--; return e; } |
Changes to src/rebuild.c.
︙ | ︙ | |||
654 655 656 657 658 659 660 | ** executable in a way that changes the database schema. ** ** Options: ** --analyze Run ANALYZE on the database after rebuilding ** --cluster Compute clusters for unclustered artifacts ** --compress Strive to make the database as small as possible ** --compress-only Skip the rebuilding step. Do --compress only | < | | 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 | ** executable in a way that changes the database schema. ** ** Options: ** --analyze Run ANALYZE on the database after rebuilding ** --cluster Compute clusters for unclustered artifacts ** --compress Strive to make the database as small as possible ** --compress-only Skip the rebuilding step. Do --compress only ** --force Force the rebuild to complete even if errors are seen ** --ifneeded Only do the rebuild if it would change the schema version ** --index Always add in the full-text search index ** --noverify Skip the verification of changes to the BLOB table ** --noindex Always omit the full-text search index ** --pagesize N Set the database pagesize to N (512..65536, power of 2) ** --quiet Only show output if there are errors ** --stats Show artifact statistics after rebuilding ** --vacuum Run VACUUM on the database after rebuilding ** --wal Set Write-Ahead-Log journalling mode on the database */ void rebuild_database(void){ int forceFlag; |
︙ | ︙ | |||
689 690 691 692 693 694 695 | int optIfNeeded; int compressOnlyFlag; omitVerify = find_option("noverify",0,0)!=0; forceFlag = find_option("force","f",0)!=0; doClustering = find_option("cluster", 0, 0)!=0; runVacuum = find_option("vacuum",0,0)!=0; | | | 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 | int optIfNeeded; int compressOnlyFlag; omitVerify = find_option("noverify",0,0)!=0; forceFlag = find_option("force","f",0)!=0; doClustering = find_option("cluster", 0, 0)!=0; runVacuum = find_option("vacuum",0,0)!=0; runDeanalyze = find_option("deanalyze",0,0)!=0; /* Deprecated */ runAnalyze = find_option("analyze",0,0)!=0; runCompress = find_option("compress",0,0)!=0; zPagesize = find_option("pagesize",0,1); showStats = find_option("stats",0,0)!=0; optIndex = find_option("index",0,0)!=0; optNoIndex = find_option("noindex",0,0)!=0; optIfNeeded = find_option("ifneeded",0,0)!=0; |
︙ | ︙ | |||
1394 1395 1396 1397 1398 1399 1400 | */ verify_cancel(); db_end_transaction(0); fossil_print("project-id: %s\n", db_get("project-code", 0)); fossil_print("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); | | > | 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 | */ verify_cancel(); db_end_transaction(0); fossil_print("project-id: %s\n", db_get("project-code", 0)); fossil_print("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); fossil_print("admin-user: %s (initial password is \"%s\")\n", g.zLogin, zPassword); hash_user_password(g.zLogin); } /* ** COMMAND: deconstruct* ** ** Usage %fossil deconstruct ?OPTIONS? DESTINATION |
︙ | ︙ |
Changes to src/report.c.
︙ | ︙ | |||
1124 1125 1126 1127 1128 1129 1130 | char *zClrKey; char *zDesc; char *zMimetype; int tabs; Stmt q; char *zErr1 = 0; char *zErr2 = 0; | | | 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 | char *zClrKey; char *zDesc; char *zMimetype; int tabs; Stmt q; char *zErr1 = 0; char *zErr2 = 0; login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } report_update_reportfmt_table(); rn = report_number(); tabs = P("tablist")!=0; db_prepare(&q, "SELECT title, sqlcode, owner, cols, rn, jx->>'desc', jx->>'descmt'" |
︙ | ︙ | |||
1366 1367 1368 1369 1370 1371 1372 | Stmt q; char *zSql; char *zErr1 = 0; char *zErr2 = 0; int count = 0; int rn; | | > | 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 | Stmt q; char *zSql; char *zErr1 = 0; char *zErr2 = 0; int count = 0; int rn; if( !zRep || !strcmp(zRep,zFullTicketRptRn) || !strcmp(zRep,zFullTicketRptTitle) ){ zSql = "SELECT * FROM ticket"; }else{ rn = atoi(zRep); if( rn ){ db_prepare(&q, "SELECT sqlcode FROM reportfmt WHERE rn=%d", rn); }else{ |
︙ | ︙ |
Changes to src/rss.c.
︙ | ︙ | |||
141 142 143 144 145 146 147 | blob_append_sql( &bSQL, " ORDER BY event.mtime DESC" ); cgi_set_content_type("application/rss+xml"); zProjectName = db_get("project-name", 0); if( zProjectName==0 ){ | | | | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | blob_append_sql( &bSQL, " ORDER BY event.mtime DESC" ); cgi_set_content_type("application/rss+xml"); zProjectName = db_get("project-name", 0); if( zProjectName==0 ){ zFreeProjectName = zProjectName = mprintf("Fossil source repository for: %s", g.zBaseURL); } zProjectDescr = db_get("project-description", 0); if( zProjectDescr==0 ){ zProjectDescr = zProjectName; } zPubDate = cgi_rfc822_datestamp(time(NULL)); |
︙ | ︙ | |||
256 257 258 259 260 261 262 | ** The default is "URL-PLACEHOLDER" (without quotes). */ void cmd_timeline_rss(void){ Stmt q; int nLine=0; char *zPubDate, *zProjectName, *zProjectDescr, *zFreeProjectName=0; Blob bSQL; | | | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 | ** The default is "URL-PLACEHOLDER" (without quotes). */ void cmd_timeline_rss(void){ Stmt q; int nLine=0; char *zPubDate, *zProjectName, *zProjectDescr, *zFreeProjectName=0; Blob bSQL; const char *zType = find_option("type","y",1); /* Type of events;All if NULL*/ const char *zTicketUuid = find_option("tkt",NULL,1); const char *zTag = find_option("tag",NULL,1); const char *zFilename = find_option("name",NULL,1); const char *zWiki = find_option("wiki",NULL,1); const char *zLimit = find_option("limit", "n",1); const char *zBaseURL = find_option("url", NULL, 1); int nLimit = atoi( (zLimit && *zLimit) ? zLimit : "20" ); |
︙ | ︙ | |||
330 331 332 333 334 335 336 | }else if( nTagId!=0 ){ blob_append_sql(&bSQL, " AND (EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid))", nTagId); } if( zFilename ){ blob_append_sql(&bSQL, | | > | | | > | 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 | }else if( nTagId!=0 ){ blob_append_sql(&bSQL, " AND (EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid))", nTagId); } if( zFilename ){ blob_append_sql(&bSQL, " AND (SELECT mlink.fnid FROM mlink WHERE event.objid=mlink.mid) " " IN (SELECT fnid FROM filename WHERE name=%Q %s)", zFilename, filename_collation() ); } blob_append( &bSQL, " ORDER BY event.mtime DESC", -1 ); zProjectName = db_get("project-name", 0); if( zProjectName==0 ){ zFreeProjectName = zProjectName = mprintf("Fossil source repository for: %s", zBaseURL); } zProjectDescr = db_get("project-description", 0); if( zProjectDescr==0 ){ zProjectDescr = zProjectName; } zPubDate = cgi_rfc822_datestamp(time(NULL)); fossil_print("<?xml version=\"1.0\"?>"); fossil_print("<rss xmlns:dc=\"http://purl.org/dc/elements/1.1/\" " " version=\"2.0\">"); fossil_print("<channel>\n"); fossil_print("<title>%h</title>\n", zProjectName); fossil_print("<link>%s</link>\n", zBaseURL); fossil_print("<description>%h</description>\n", zProjectDescr); fossil_print("<pubDate>%s</pubDate>\n", zPubDate); fossil_print("<generator>Fossil version %s %s</generator>\n", MANIFEST_VERSION, MANIFEST_DATE); |
︙ | ︙ |
Changes to src/search.c.
︙ | ︙ | |||
581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 | ** option can be used to output all matches, regardless of their search ** score. The -limit option can be used to limit the number of entries ** returned. The -width option can be used to set the output width used ** when printing matches. ** ** Options: ** -a|--all Output all matches, not just best matches ** -n|--limit N Limit output to N matches ** -W|--width WIDTH Set display width to WIDTH columns, 0 for ** unlimited. Defaults the terminal's width. */ void search_cmd(void){ Blob pattern; int i; Blob sql = empty_blob; Stmt q; int iBest; | > > > > | < < > > | < > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | | | | | | | | | | | | | | | | | | | | | > | 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 | ** option can be used to output all matches, regardless of their search ** score. The -limit option can be used to limit the number of entries ** returned. The -width option can be used to set the output width used ** when printing matches. ** ** Options: ** -a|--all Output all matches, not just best matches ** --debug Show additional debug content on --fts search ** --fts Use the full-text search mechanism (testing only) ** -n|--limit N Limit output to N matches ** --scope SCOPE Scope of search. Valid for --fts only. 
One or ** more of: all, c, d, e, f, t, w. Defaults to all. ** -W|--width WIDTH Set display width to WIDTH columns, 0 for ** unlimited. Defaults the terminal's width. */ void search_cmd(void){ Blob pattern; int i; Blob sql = empty_blob; Stmt q; int iBest; char fAll = NULL != find_option("all", "a", 0); const char *zLimit = find_option("limit","n",1); const char *zWidth = find_option("width","W",1); const char *zScope = find_option("scope",0,1); int bDebug = find_option("debug",0,0)!=0; int nLimit = zLimit ? atoi(zLimit) : -1000; int width; int bFts = find_option("fts",0,0)!=0; if( zWidth ){ width = atoi(zWidth); if( (width!=0) && (width<=20) ){ fossil_fatal("-W|--width value must be >20 or 0"); } }else{ width = -1; } db_find_and_open_repository(0, 0); if( g.argc<3 ) return; blob_init(&pattern, g.argv[2], -1); for(i=3; i<g.argc; i++){ blob_appendf(&pattern, " %s", g.argv[i]); } if( bFts ){ /* Search using FTS */ Blob com; Blob snip; const char *zPattern = blob_str(&pattern); int srchFlags; unsigned int j; if( zScope==0 ){ srchFlags = SRCH_ALL; }else{ srchFlags = 0; for(i=0; zScope[i]; i++){ switch( zScope[i] ){ case 'a': srchFlags = SRCH_ALL; break; case 'c': srchFlags |= SRCH_CKIN; break; case 'd': srchFlags |= SRCH_DOC; break; case 'e': srchFlags |= SRCH_TECHNOTE; break; case 'f': srchFlags |= SRCH_FORUM; break; case 't': srchFlags |= SRCH_TKT; break; case 'w': srchFlags |= SRCH_WIKI; break; } } } search_sql_setup(g.db); add_content_sql_commands(g.db); db_multi_exec( "CREATE TEMP TABLE x(label,url,score,id,date,snip);" ); if( !search_index_exists() ){ search_fullscan(zPattern, srchFlags); /* Full-scan search */ }else{ search_update_index(srchFlags); /* Update the index */ search_indexed(zPattern, srchFlags); /* Indexed search */ } db_prepare(&q, "SELECT snip, label, score, id, date" " FROM x" " ORDER BY score DESC, date DESC;"); blob_init(&com, 0, 0); blob_init(&snip, 0, 0); if( width<0 ) width = 80; while( db_step(&q)==SQLITE_ROW ){ const char *zSnippet = 
db_column_text(&q, 0); const char *zLabel = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 4); const char *zScore = db_column_text(&q, 2); const char *zId = db_column_text(&q, 3); blob_appendf(&snip, "%s", zSnippet); for(j=0; j<snip.nUsed; j++){ if( snip.aData[j]=='\n' ){ if( j>0 && snip.aData[j-1]=='\r' ) snip.aData[j-1] = ' '; snip.aData[j] = ' '; } } blob_appendf(&com, "%s\n%s\n%s", zLabel, blob_str(&snip), zDate); if( bDebug ){ blob_appendf(&com," score: %s id: %s", zScore, zId); } comment_print(blob_str(&com), 0, 5, width, COMMENT_PRINT_TRIM_CRLF | COMMENT_PRINT_WORD_BREAK | COMMENT_PRINT_TRIM_SPACE); blob_reset(&com); blob_reset(&snip); if( nLimit>=1 ){ nLimit--; if( nLimit==0 ) break; } } db_finalize(&q); blob_reset(&pattern); }else{ /* Legacy timeline search (the default) */ (void)search_init(blob_str(&pattern),"*","*","...",SRCHFLG_STATIC); blob_reset(&pattern); search_sql_setup(g.db); db_multi_exec( "CREATE TEMP TABLE srch(rid,uuid,date,comment,x);" "CREATE INDEX srch_idx1 ON srch(x);" "INSERT INTO srch(rid,uuid,date,comment,x)" " SELECT blob.rid, uuid, datetime(event.mtime,toLocal())," " coalesce(ecomment,comment)," " search_score()" " FROM event, blob" " WHERE blob.rid=event.objid" " AND search_match(coalesce(ecomment,comment));" ); iBest = db_int(0, "SELECT max(x) FROM srch"); blob_append(&sql, "SELECT rid, uuid, date, comment, 0, 0 FROM srch " "WHERE 1 ", -1); if(!fAll){ blob_append_sql(&sql,"AND x>%d ", iBest/3); } blob_append(&sql, "ORDER BY x DESC, date DESC ", -1); db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); print_timeline(&q, nLimit, width, 0, 0); db_finalize(&q); } } #if INTERFACE /* What to search for */ #define SRCH_CKIN 0x0001 /* Search over check-in comments */ #define SRCH_DOC 0x0002 /* Search over embedded documents */ #define SRCH_TKT 0x0004 /* Search over tickets */ |
︙ | ︙ | |||
704 705 706 707 708 709 710 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using a full-scan search. ** ** The companion indexed search routine is search_indexed(). */ | | | 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using a full-scan search. ** ** The companion indexed search routine is search_indexed(). */ LOCAL void search_fullscan( const char *zPattern, /* The query pattern */ unsigned int srchFlags /* What to search over */ ){ search_init(zPattern, "<mark>", "</mark>", " ... ", SRCHFLG_STATIC|SRCHFLG_HTML); if( (srchFlags & SRCH_DOC)!=0 ){ char *zDocGlob = db_get("doc-glob",""); |
︙ | ︙ | |||
912 913 914 915 916 917 918 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using FTS indexed search. ** ** The companion full-scan search routine is search_fullscan(). */ | | | 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using FTS indexed search. ** ** The companion full-scan search routine is search_fullscan(). */ LOCAL void search_indexed( const char *zPattern, /* The query pattern */ unsigned int srchFlags /* What to search over */ ){ Blob sql; char *zPat = mprintf("%s",zPattern); int i; static const char *zSnippetCall; |
︙ | ︙ | |||
1077 1078 1079 1080 1081 1082 1083 | } nRow++; @ <li><p><a href='%R%s(zUrl)'>%h(zLabel)</a> if( fDebug ){ @ (%e(db_column_double(&q,3)), %s(db_column_text(&q,4)) } @ <br><span class='snippet'>%z(cleanSnippet(zSnippet)) \ | | | 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 | } nRow++; @ <li><p><a href='%R%s(zUrl)'>%h(zLabel)</a> if( fDebug ){ @ (%e(db_column_double(&q,3)), %s(db_column_text(&q,4)) } @ <br><span class='snippet'>%z(cleanSnippet(zSnippet)) \ if( zLabel && zDate && zDate[0] && strstr(zLabel,zDate)==0 ){ @ <small>(%h(zDate))</small> } @ </span></li> if( nLimit && nRow>=nLimit ) break; } db_finalize(&q); if( nRow ){ |
︙ | ︙ | |||
1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 | } /* ** This is a helper function for search_stext(). Writing into pOut ** the search text obtained from pIn according to zMimetype. ** ** The title of the document is the first line of text. All subsequent ** lines are the body. If the document has no title, the first line ** is blank. */ static void get_stext_by_mimetype( Blob *pIn, const char *zMimetype, Blob *pOut ){ Blob html, title; blob_init(&html, 0, 0); | > > > > > > | > > > > > > | | | | | | | | | > | > | < | < | < < < | < > | < | 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 | } /* ** This is a helper function for search_stext(). Writing into pOut ** the search text obtained from pIn according to zMimetype. ** ** If a title is not specified in zTitle (e.g. for wiki pages that do not ** include the title in the body), it is determined from the page content. ** ** The title of the document is the first line of text. All subsequent ** lines are the body. If the document has no title, the first line ** is blank. 
*/ static void get_stext_by_mimetype( Blob *pIn, const char *zMimetype, const char *zTitle, Blob *pOut ){ Blob html, title; Blob *pHtml = &html; blob_init(&html, 0, 0); if( zTitle==0 ){ blob_init(&title, 0, 0); }else{ blob_init(&title, zTitle, -1); } if( zMimetype==0 ) zMimetype = "text/plain"; if( fossil_strcmp(zMimetype,"text/x-fossil-wiki")==0 ){ if( blob_size(&title) ){ wiki_convert(pIn, &html, 0); }else{ Blob tail; blob_init(&tail, 0, 0); if( wiki_find_title(pIn, &title, &tail) ){ blob_appendf(pOut, "%s\n", blob_str(&title)); wiki_convert(&tail, &html, 0); blob_reset(&tail); }else{ blob_append(pOut, "\n", 1); wiki_convert(pIn, &html, 0); } } html_to_plaintext(blob_str(&html), pOut); }else if( fossil_strcmp(zMimetype,"text/x-markdown")==0 ){ markdown_to_html(pIn, blob_size(&title) ? NULL : &title, &html); }else if( fossil_strcmp(zMimetype,"text/html")==0 ){ if( blob_size(&title)==0 ) doc_is_embedded_html(pIn, &title); pHtml = pIn; } blob_appendf(pOut, "%s\n", blob_str(&title)); if( blob_size(pHtml) ){ html_to_plaintext(blob_str(pHtml), pOut); }else{ blob_append(pOut, blob_buffer(pIn), blob_size(pIn)); } blob_reset(&html); blob_reset(&title); } /* |
︙ | ︙ | |||
1303 1304 1305 1306 1307 1308 1309 | if( fossil_strcmp(zMime,"text/plain")==0 ) zMime = 0; }else if( zMime==0 || eType!=SQLITE_TEXT ){ blob_appendf(pAccum, "%s: %s |\n", zColName, db_column_text(pQuery,i)); }else{ Blob txt; blob_init(&txt, db_column_text(pQuery,i), -1); blob_appendf(pAccum, "%s: ", zColName); | | | 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 | if( fossil_strcmp(zMime,"text/plain")==0 ) zMime = 0; }else if( zMime==0 || eType!=SQLITE_TEXT ){ blob_appendf(pAccum, "%s: %s |\n", zColName, db_column_text(pQuery,i)); }else{ Blob txt; blob_init(&txt, db_column_text(pQuery,i), -1); blob_appendf(pAccum, "%s: ", zColName); get_stext_by_mimetype(&txt, zMime, NULL, pAccum); blob_append(pAccum, " |", 2); blob_reset(&txt); } } } |
︙ | ︙ | |||
1342 1343 1344 1345 1346 1347 1348 | ){ blob_init(pOut, 0, 0); switch( cType ){ case 'd': { /* Documents */ Blob doc; content_get(rid, &doc); blob_to_utf8_no_bom(&doc, 0); | | | 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 | ){ blob_init(pOut, 0, 0); switch( cType ){ case 'd': { /* Documents */ Blob doc; content_get(rid, &doc); blob_to_utf8_no_bom(&doc, 0); get_stext_by_mimetype(&doc, mimetype_from_name(zName), NULL, pOut); blob_reset(&doc); break; } case 'f': /* Forum messages */ case 'e': /* Tech Notes */ case 'w': { /* Wiki */ Manifest *pWiki = manifest_get(rid, |
︙ | ︙ | |||
1364 1365 1366 1367 1368 1369 1370 | blob_appendf(&wiki, "<h1>%h</h1>\n", pWiki->zThreadTitle); } blob_appendf(&wiki, "From %s:\n\n%s", pWiki->zUser, pWiki->zWiki); }else{ blob_init(&wiki, pWiki->zWiki, -1); } get_stext_by_mimetype(&wiki, wiki_filter_mimetypes(pWiki->zMimetype), | | | 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 | blob_appendf(&wiki, "<h1>%h</h1>\n", pWiki->zThreadTitle); } blob_appendf(&wiki, "From %s:\n\n%s", pWiki->zUser, pWiki->zWiki); }else{ blob_init(&wiki, pWiki->zWiki, -1); } get_stext_by_mimetype(&wiki, wiki_filter_mimetypes(pWiki->zMimetype), cType=='w' ? pWiki->zWikiTitle : NULL, pOut); blob_reset(&wiki); manifest_destroy(pWiki); break; } case 'c': { /* Check-in Comments */ static Stmt q; static int isPlainText = -1; |
︙ | ︙ | |||
1394 1395 1396 1397 1398 1399 1400 | blob_append(pOut, "\n", 1); if( isPlainText ){ db_column_blob(&q, 0, pOut); }else{ Blob x; blob_init(&x,0,0); db_column_blob(&q, 0, &x); | | | 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 | blob_append(pOut, "\n", 1); if( isPlainText ){ db_column_blob(&q, 0, pOut); }else{ Blob x; blob_init(&x,0,0); db_column_blob(&q, 0, &x); get_stext_by_mimetype(&x, "text/x-fossil-wiki", NULL, pOut); blob_reset(&x); } } db_reset(&q); break; } case 't': { /* Tickets */ |
︙ | ︙ | |||
1505 1506 1507 1508 1509 1510 1511 | */ void test_convert_stext(void){ Blob in, out; db_find_and_open_repository(0,0); if( g.argc!=4 ) usage("FILENAME MIMETYPE"); blob_read_from_file(&in, g.argv[2], ExtFILE); blob_init(&out, 0, 0); | | | 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 | */ void test_convert_stext(void){ Blob in, out; db_find_and_open_repository(0,0); if( g.argc!=4 ) usage("FILENAME MIMETYPE"); blob_read_from_file(&in, g.argv[2], ExtFILE); blob_init(&out, 0, 0); get_stext_by_mimetype(&in, g.argv[3], NULL, &out); fossil_print("%s\n",blob_str(&out)); blob_reset(&in); blob_reset(&out); } /* ** The schema for the full-text index. The %s part must be an empty |
︙ | ︙ | |||
2292 2293 2294 2295 2296 2297 2298 | return rc; } /* ** Argument f should be a flag accepted by matchinfo() (a valid character | | | 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 | return rc; } /* ** Argument f should be a flag accepted by matchinfo() (a valid character ** in the string passed as the second argument). If it is not, -1 is ** returned. Otherwise, if f is a valid matchinfo flag, the value returned ** is the number of 32-bit integers added to the output array if the ** table has nCol columns and the query nPhrase phrases. */ static int fts5MatchinfoFlagsize(int nCol, int nPhrase, char f){ int ret = -1; switch( f ){ |
︙ | ︙ |
Changes to src/security_audit.c.
︙ | ︙ | |||
334 335 336 337 338 339 340 | } /* Anonymous users probably should not be allowed act as moderators ** for wiki or tickets. */ if( hasAnyCap(zAnonCap, "lq5") ){ @ <li><p><b>WARNING:</b> | | | | 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 | } /* Anonymous users probably should not be allowed act as moderators ** for wiki or tickets. */ if( hasAnyCap(zAnonCap, "lq5") ){ @ <li><p><b>WARNING:</b> @ Anonymous users can act as moderators for wiki, tickets, or @ forum posts. This defeats the whole purpose of moderation. @ Fix this by removing the "Mod-Wiki", "Mod-Tkt", and "Mod-Forum" @ privileges (<a href="%R/setup_ucap_list">capabilities</a> "fq5") @ from users "anonymous" and "nobody" @ on the <a href="setup_ulist">User Configuration</a> page. } /* Check to see if any TH1 scripts are configured to run on a sync */ if( db_exists("SELECT 1 FROM config WHERE name GLOB 'xfer-*-script'" " AND length(value)>0") ){ @ <li><p><b>WARNING:</b> @ TH1 scripts might be configured to run on any sync, push, pull, or @ clone operation. See the the <a href="%R/xfersetup">/xfersetup</a> @ page for more information. These TH1 scripts are a potential @ security concern and so should be carefully audited by a human. } /* The strict-manifest-syntax setting should be on. */ if( db_get_boolean("strict-manifest-syntax",1)==0 ){ @ <li><p><b>WARNING:</b> @ The "strict-manifest-syntax" flag is off. This is a security @ risk. Turn this setting on (its default) to protect the users @ of this repository. |
︙ | ︙ | |||
580 581 582 583 584 585 586 | }else { double r = atof(db_get("max-loadavg", 0)); if( r<=0.0 ){ @ <li><p> @ Load average limiting is turned off. This can cause the server @ to bog down if many requests for expensive services (such as @ large diffs or tarballs) arrive at about the same time. | | | 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 | }else { double r = atof(db_get("max-loadavg", 0)); if( r<=0.0 ){ @ <li><p> @ Load average limiting is turned off. This can cause the server @ to bog down if many requests for expensive services (such as @ large diffs or tarballs) arrive at about the same time. @ To fix this, set the @ <a href='%R/setup_access#slal'>"Server Load Average Limit"</a> on the @ <a href='%R/setup_access'>Access Control</a> page to the approximate @ the number of available cores on your server, or maybe just a little @ less. }else if( r>=8.0 ){ @ <li><p> @ The <a href='%R/setup_access#slal'>"Server Load Average Limit"</a> on |
︙ | ︙ | |||
602 603 604 605 606 607 608 | @ <li><p> @ The server error log is disabled. @ To set up an error log, if( fossil_strcmp(g.zCmdName, "cgi")==0 ){ @ make an entry like "errorlog: <i>FILENAME</i>" in the @ CGI script at %h(P("SCRIPT_FILENAME")). }else{ | | | 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | @ <li><p> @ The server error log is disabled. @ To set up an error log, if( fossil_strcmp(g.zCmdName, "cgi")==0 ){ @ make an entry like "errorlog: <i>FILENAME</i>" in the @ CGI script at %h(P("SCRIPT_FILENAME")). }else{ @ add the "--errorlog <i>FILENAME</i>" option to the @ "%h(g.argv[0]) %h(g.zCmdName)" command that launched this server. } }else{ FILE *pTest = fossil_fopen(g.zErrlog,"a"); if( pTest==0 ){ @ <li><p> @ <b>Error:</b> |
︙ | ︙ | |||
633 634 635 636 637 638 639 | @ <li><p> CGI Extensions are enabled with a document root @ at <a href='%R/extfilelist'>%h(g.zExtRoot)</a> holding @ %d(nCgi) CGIs and %d(nFile-nCgi) static content and data files. } if( fileedit_glob()!=0 ){ @ <li><p><a href='%R/fileedit'>Online File Editing</a> is enabled | | | | 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 | @ <li><p> CGI Extensions are enabled with a document root @ at <a href='%R/extfilelist'>%h(g.zExtRoot)</a> holding @ %d(nCgi) CGIs and %d(nFile-nCgi) static content and data files. } if( fileedit_glob()!=0 ){ @ <li><p><a href='%R/fileedit'>Online File Editing</a> is enabled @ for this repository. Clear the @ <a href='%R/setup_settings'>"fileedit-glob" setting</a> to @ disable online editing.</p> } @ <li><p> User capability summary: capability_summary(); azCSP = parse_content_security_policy(); if( azCSP==0 ){ @ <li><p> WARNING: No Content Security Policy (CSP) is specified in the @ header. Though not required, a strong CSP is recommended. Fossil will @ automatically insert an appropriate CSP if you let it generate the @ HTML <tt><head></tt> element by omitting <tt><body></tt> @ from the header configuration in your customized skin. @ }else{ int ii; @ <li><p> Content Security Policy: @ <ol type="a"> for(ii=0; azCSP[ii]; ii++){ @ <li>%h(azCSP[ii]) } |
︙ | ︙ | |||
785 786 787 788 789 790 791 | @ <li><p> @ If the server is running as CGI, then create a line in the CGI file @ like this: @ <blockquote><pre> @ errorlog: <i>FILENAME</i> @ </pre></blockquote> @ <li><p> | | | 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 | @ <li><p> @ If the server is running as CGI, then create a line in the CGI file @ like this: @ <blockquote><pre> @ errorlog: <i>FILENAME</i> @ </pre></blockquote> @ <li><p> @ If the server is running using one of @ the "fossil http" or "fossil server" commands then add @ a command-line option "--errorlog <i>FILENAME</i>" to that @ command. @ </ol> style_finish_page(); return; } |
︙ | ︙ |
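The hunk above rewords the /errorlog instructions: for a CGI deployment the log is enabled with an `errorlog:` line in the CGI control file, for "fossil http"/"fossil server" with the `--errorlog` option. For reference, a hypothetical CGI control file with that line added — the repository and log paths here are placeholders, not taken from this diff:

```
#!/usr/bin/fossil
repository: /home/www/museum/repo.fossil
errorlog: /home/www/logs/fossil-errors.txt
```

The user running the web server needs write permission on the named log file.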
Changes to src/setup.c.
︙ | ︙ | |||
141 142 143 144 145 146 147 | "Configure URL aliases"); if( setup_user ){ setup_menu_entry("Notification", "setup_notification", "Automatic notifications of changes via outbound email"); setup_menu_entry("Transfers", "xfersetup", "Configure the transfer system for this repository"); } | | | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | "Configure URL aliases"); if( setup_user ){ setup_menu_entry("Notification", "setup_notification", "Automatic notifications of changes via outbound email"); setup_menu_entry("Transfers", "xfersetup", "Configure the transfer system for this repository"); } setup_menu_entry("Skins", "setup_skin_admin", "Select and/or modify the web interface \"skins\""); setup_menu_entry("Moderation", "setup_modreq", "Enable/Disable requiring moderator approval of Wiki and/or Ticket" " changes and attachments."); setup_menu_entry("Ad-Unit", "setup_adunit", "Edit HTML text for an ad unit inserted after the menu bar"); setup_menu_entry("URLs & Checkouts", "urllist", |
︙ | ︙ | |||
586 587 588 589 590 591 592 | @ for users who are not logged in. (Property: "require-captcha")</p> @ <hr> entry_attribute("Public pages", 30, "public-pages", "pubpage", "", 0); @ <p>A comma-separated list of glob patterns for pages that are accessible @ without needing a login and using the privileges given by the | | | 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 | @ for users who are not logged in. (Property: "require-captcha")</p> @ <hr> entry_attribute("Public pages", 30, "public-pages", "pubpage", "", 0); @ <p>A comma-separated list of glob patterns for pages that are accessible @ without needing a login and using the privileges given by the @ "Default privileges" setting below. @ @ <p>Example use case: Set this field to "/doc/trunk/www/*" and set @ the "Default privileges" to include the "o" privilege @ to give anonymous users read-only permission to the @ latest version of the embedded documentation in the www/ folder without @ allowing them to see the rest of the source code. @ (Property: "public-pages") |
︙ | ︙ | |||
1199 1200 1201 1202 1203 1204 1205 | @ choices (such as the hamburger button) to the menu that are not shown @ on this list. (Property: mainmenu) @ <p> if(P("resetMenu")!=0){ db_unset("mainmenu", 0); cgi_delete_parameter("mmenu"); } | | | 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 | @ choices (such as the hamburger button) to the menu that are not shown @ on this list. (Property: mainmenu) @ <p> if(P("resetMenu")!=0){ db_unset("mainmenu", 0); cgi_delete_parameter("mmenu"); } textarea_attribute("Main Menu", 12, 80, "mainmenu", "mmenu", style_default_mainmenu(), 0); @ </p> @ <p><input type='checkbox' id='cbResetMenu' name='resetMenu' value='1'> @ <label for='cbResetMenu'>Reset menu to default value</label> @ </p> @ <hr> @ <p>Extra links to appear on the <a href="%R/sitemap">/sitemap</a> page, |
︙ | ︙ | |||
1227 1228 1229 1230 1231 1232 1233 | @ If capexpr evaluates to true, then the entry is shown. If not, @ the entry is omitted. "*" is always true. @ </ol> @ @ <p>The default value is blank, meaning no added entries. @ (Property: sitemap-extra) @ <p> | | | 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 | @ If capexpr evaluates to true, then the entry is shown. If not, @ the entry is omitted. "*" is always true. @ </ol> @ @ <p>The default value is blank, meaning no added entries. @ (Property: sitemap-extra) @ <p> textarea_attribute("Custom Sitemap Entries", 8, 80, "sitemap-extra", "smextra", "", 0); @ <hr> @ <p><input type="submit" name="submit" value="Apply Changes"></p> @ </div></form> db_end_transaction(0); style_finish_page(); } |
︙ | ︙ |
Changes to src/setupuser.c.
︙ | ︙ | |||
808 809 810 811 812 813 814 | @ subscript suffix @ indicates the privileges of <span class="usertype">anonymous</span> that @ are inherited by all logged-in users. @ </p></li> @ @ <li><p> @ The "<span class="ueditInheritDeveloper"><sub>D</sub></span>" | | | 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 | @ subscript suffix @ indicates the privileges of <span class="usertype">anonymous</span> that @ are inherited by all logged-in users. @ </p></li> @ @ <li><p> @ The "<span class="ueditInheritDeveloper"><sub>D</sub></span>" @ subscript suffix indicates the privileges of @ <span class="usertype">developer</span> that @ are inherited by all users with the @ <span class="capability">Developer</span> privilege. @ </p></li> @ @ <li><p> @ The "<span class="ueditInheritReader"><sub>R</sub></span>" subscript suffix |
︙ | ︙ |
Changes to src/sha1.c.
︙ | ︙ | |||
30 31 32 33 34 35 36 | ** ** Downloaded on 2017-03-01 then repackaged to work with Fossil ** and makeheaders. */ #if FOSSIL_HARDENED_SHA1 #if INTERFACE | | > | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 | ** ** Downloaded on 2017-03-01 then repackaged to work with Fossil ** and makeheaders. */ #if FOSSIL_HARDENED_SHA1 #if INTERFACE typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*); struct SHA1_CTX { uint64_t total; uint32_t ihv[5]; unsigned char buffer[64]; int bigendian; int found_collision; int safe_hash; |
︙ | ︙ |
Changes to src/sha1hard.c.
︙ | ︙ | |||
71 72 73 74 75 76 77 | void sha1_message_expansion(uint32_t W[80]); void sha1_compression(uint32_t ihv[5], const uint32_t m[16]); void sha1_compression_W(uint32_t ihv[5], const uint32_t W[80]); void sha1_compression_states(uint32_t ihv[5], const uint32_t W[80], uint32_t states[80][5]); extern sha1_recompression_type sha1_recompression_step[80]; typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*); typedef struct { | | | | | | | | | | | | | | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | void sha1_message_expansion(uint32_t W[80]); void sha1_compression(uint32_t ihv[5], const uint32_t m[16]); void sha1_compression_W(uint32_t ihv[5], const uint32_t W[80]); void sha1_compression_states(uint32_t ihv[5], const uint32_t W[80], uint32_t states[80][5]); extern sha1_recompression_type sha1_recompression_step[80]; typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*); typedef struct { uint64_t total; uint32_t ihv[5]; unsigned char buffer[64]; int bigendian; int found_collision; int safe_hash; int detect_coll; int ubc_check; int reduced_round_coll; collision_block_callback callback; uint32_t ihv1[5]; uint32_t ihv2[5]; uint32_t m1[80]; uint32_t m2[80]; uint32_t states[80][5]; } SHA1_CTX; /******************** File: lib/ubc_check.c **************************/ /*** * Copyright 2017 Marc Stevens <marc@marc-stevens.nl>, Dan Shumow <danshu@microsoft.com> * Distributed under the MIT Software License. * See accompanying file LICENSE.txt or copy at |
︙ | ︙ |
Changes to src/sha3.c.
︙ | ︙ | |||
414 415 416 417 418 419 420 | static void SHA3Update( SHA3Context *p, const unsigned char *aData, unsigned int nData ){ unsigned int i = 0; #if SHA3_BYTEORDER==1234 | | | 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 | static void SHA3Update( SHA3Context *p, const unsigned char *aData, unsigned int nData ){ unsigned int i = 0; #if SHA3_BYTEORDER==1234 if( (p->nLoaded % 8)==0 && (((intptr_t)aData)&7)==0 ){ for(; i+7<nData; i+=8){ p->u.s[p->nLoaded/8] ^= *(u64*)&aData[i]; p->nLoaded += 8; if( p->nLoaded>=p->nRate ){ KeccakF1600Step(p); p->nLoaded = 0; } |
︙ | ︙ |
Changes to src/shun.c.
︙ | ︙ | |||
45 46 47 48 49 50 51 52 53 54 55 56 57 58 | void shun_page(void){ Stmt q; int cnt = 0; const char *zUuid = P("uuid"); const char *zShun = P("shun"); const char *zAccept = P("accept"); const char *zRcvid = P("rcvid"); int nRcvid = 0; int numRows = 3; char *zCanonical = 0; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); | > | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | void shun_page(void){ Stmt q; int cnt = 0; const char *zUuid = P("uuid"); const char *zShun = P("shun"); const char *zAccept = P("accept"); const char *zRcvid = P("rcvid"); int reviewList = P("review")!=0; int nRcvid = 0; int numRows = 3; char *zCanonical = 0; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); |
︙ | ︙ | |||
83 84 85 86 87 88 89 | } i++; } zCanonical[j+1] = zCanonical[j] = 0; p = zCanonical; while( *p ){ int nUuid = strlen(p); | | | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | } i++; } zCanonical[j+1] = zCanonical[j] = 0; p = zCanonical; while( *p ){ int nUuid = strlen(p); if( !(reviewList || hname_validate(p, nUuid)) ){ @ <p class="generalError">Error: Bad artifact IDs.</p> fossil_free(zCanonical); zCanonical = 0; break; }else{ canonical16(p, nUuid); p += nUuid+1; |
︙ | ︙ | |||
153 154 155 156 157 158 159 160 161 162 163 164 165 166 | for( p = zUuid ; *p ; p += strlen(p)+1 ){ @ <a href="%R/artifact/%s(p)">%s(p)</a><br> } @ have been shunned. They will no longer be pushed.
@ They will be removed from the repository the next time the repository @ is rebuilt using the <b>fossil rebuild</b> command-line</p> } if( zUuid && reviewList ){ const char *p; int nTotal = 0; int nOk = 0; @ <table class="shun-review"><tbody><tr><td> for( p = zUuid ; *p ; p += strlen(p)+1 ){ int rid = symbolic_name_to_rid(p, 0); nTotal++; if( rid < 0 ){ @ Ambiguous<br> }else if( rid == 0 ){ if( !hname_validate(p, strlen(p)) ){ @ Bad artifact<br> }else if(db_int(0, "SELECT 1 FROM shun WHERE uuid=%Q", p)){ @ Already shunned<br> }else{ @ Unknown<br> } }else{ char *zCmpUuid = db_text(0, "SELECT uuid" " FROM blob, rcvfrom" " WHERE rid=%d" " AND rcvfrom.rcvid=blob.rcvid", rid); if( fossil_strcmp(p, zCmpUuid)==0 ){ nOk++; @ OK</br> }else{ @ Abbreviated<br> } } } @ </td><td> for( p = zUuid ; *p ; p += strlen(p)+1 ){ int rid = symbolic_name_to_rid(p, 0); if( rid > 0 ){ @ <a href="%R/artifact/%s(p)">%s(p)</a><br> }else{ @ %s(p)<br> } } @ </td></tr></tbody></table> @ <p class="shunned"> if( nOk < nTotal){ @ <b>Warning:</b> Not all artifacts }else if( nTotal==1 ){ @ The artifact is present and }else{ @ All %i(nOk) artifacts are present and } @ can be shunned with its hash above.</p> } if( zRcvid ){ nRcvid = atoi(zRcvid); numRows = db_int(0, "SELECT min(count(), 10) FROM blob WHERE rcvid=%d", nRcvid); } @ <p>A shunned artifact will not be pushed nor accepted in a pull and the @ artifact content will be purged from the repository the next time the |
︙ | ︙ | |||
194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 | }else if( nRcvid ){ db_prepare(&q, "SELECT uuid FROM blob WHERE rcvid=%d", nRcvid); while( db_step(&q)==SQLITE_ROW ){ @ %s(db_column_text(&q, 0)) } db_finalize(&q); } } @ </textarea> @ <input type="submit" name="add" value="Shun"> @ </div></form> @ </blockquote> @ @ <a name="delshun"></a> @ <p>Enter the UUIDs of previously shunned artifacts to cause them to be @ accepted again in the repository. The artifact's content is not @ restored because the content is unknown. The only change is that | > > > > > > | 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 | }else if( nRcvid ){ db_prepare(&q, "SELECT uuid FROM blob WHERE rcvid=%d", nRcvid); while( db_step(&q)==SQLITE_ROW ){ @ %s(db_column_text(&q, 0)) } db_finalize(&q); } }else if( zUuid && reviewList ){ const char *p; for( p = zUuid ; *p ; p += strlen(p)+1 ){ @ %s(p) } } @ </textarea> @ <input type="submit" name="add" value="Shun"> @ <input type="submit" name="review" value="Review"> @ </div></form> @ </blockquote> @ @ <a name="delshun"></a> @ <p>Enter the UUIDs of previously shunned artifacts to cause them to be @ accepted again in the repository. The artifact's content is not @ restored because the content is unknown. The only change is that
︙ | ︙ |
Changes to src/sitemap.c.
︙ | ︙ | |||
79 80 81 82 83 84 85 | g.jsHref = 0; } srchFlags = search_restrict(SRCH_ALL); if( !isPopup ){ style_header("Site Map"); style_adunit_config(ADUNIT_RIGHT_OK); } | | | 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | g.jsHref = 0; } srchFlags = search_restrict(SRCH_ALL); if( !isPopup ){ style_header("Site Map"); style_adunit_config(ADUNIT_RIGHT_OK); } @ <ul id="sitemap" class="columns" style="column-width:20em"> if( (e&1)==0 ){ @ <li>%z(href("%R/home"))Home Page</a> } #if 0 /* Removed 2021-01-26 */ for(i=0; i<sizeof(aExtra)/sizeof(aExtra[0]); i++){ |
︙ | ︙ | |||
150 151 152 153 154 155 156 | } @ <li>%z(href("%R/docsrch"))Documentation Search</a></li> } #endif if( inSublist ){ @ </ul> | | | 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | } @ <li>%z(href("%R/docsrch"))Documentation Search</a></li> } #endif if( inSublist ){ @ </ul> inSublist = 0; } @ </li> if( g.perm.Read ){ const char *zEditGlob = db_get("fileedit-glob",""); @ <li>%z(href("%R/tree"))File Browser</a> @ <ul> @ <li>%z(href("%R/tree?type=tree&ci=trunk"))Tree-view, |
︙ | ︙ |
Changes to src/skins.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ** ** Implementation of the Setup page for "skins". */ #include "config.h" #include <assert.h> #include "skins.h" /* ** An array of available built-in skins. ** ** To add new built-in skins: ** ** 1. Pick a name for the new skin. (Here we use "xyzzy"). ** | > > > > > > > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | ** ** Implementation of the Setup page for "skins". */ #include "config.h" #include <assert.h> #include "skins.h" /* ** SETTING: default-skin width=16 ** ** If the text value of this setting is the name of a built-in skin ** then the named skin becomes the default skin for the repository. */ /* ** An array of available built-in skins. ** ** To add new built-in skins: ** ** 1. Pick a name for the new skin. (Here we use "xyzzy"). **
︙ | ︙ | |||
43 44 45 46 47 48 49 50 51 52 53 54 55 56 | } aBuiltinSkin[] = { { "Default", "default", 0 }, { "Ardoise", "ardoise", 0 }, { "Black & White", "black_and_white", 0 }, { "Blitz", "blitz", 0 }, { "Dark Mode", "darkmode", 0 }, { "Eagle", "eagle", 0 }, { "Khaki", "khaki", 0 }, { "Original", "original", 0 }, { "Plain Gray", "plain_gray", 0 }, { "Xekri", "xekri", 0 }, }; /* | > | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 | } aBuiltinSkin[] = { { "Default", "default", 0 }, { "Ardoise", "ardoise", 0 }, { "Black & White", "black_and_white", 0 }, { "Blitz", "blitz", 0 }, { "Dark Mode", "darkmode", 0 }, { "Eagle", "eagle", 0 }, { "Étienne", "etienne", 0 }, { "Khaki", "khaki", 0 }, { "Original", "original", 0 }, { "Plain Gray", "plain_gray", 0 }, { "Xekri", "xekri", 0 }, }; /* |
︙ | ︙ | |||
73 74 75 76 77 78 79 | static char *zAltSkinDir = 0; static int iDraftSkin = 0; /* ** Used by skin_use_alternative() to store the current skin rank ** so that the /skins page can, if warranted, warn the user that skin ** changes won't have any effect. */ | | > > > > > > > > > > > > > > > > > | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | static char *zAltSkinDir = 0; static int iDraftSkin = 0; /* ** Used by skin_use_alternative() to store the current skin rank ** so that the /skins page can, if warranted, warn the user that skin ** changes won't have any effect. */ static int nSkinRank = 6; /* ** How the specific skin being used was chosen */ #if INTERFACE #define SKIN_FROM_DRAFT 0 /* The "draftN" prefix on the PATH_INFO */ #define SKIN_FROM_CMDLINE 1 /* --skin option to server command-line */ #define SKIN_FROM_CGI 2 /* skin: parameter in CGI script */ #define SKIN_FROM_QPARAM 3 /* skin= query parameter */ #define SKIN_FROM_COOKIE 4 /* skin= from fossil_display_settings cookie*/ #define SKIN_FROM_SETTING 5 /* Built-in named by "default-skin" setting */ #define SKIN_FROM_CUSTOM 6 /* Skin values in CONFIG table */ #define SKIN_FROM_DEFAULT 7 /* The built-in named "default" */ #define SKIN_FROM_UNKNOWN 8 /* Do not yet know which skin to use */ #endif /* INTERFACE */ static int iSkinSource = SKIN_FROM_UNKNOWN; /* ** Skin details are a set of key/value pairs that define display ** attributes of the skin that cannot be easily specified using CSS ** or that need to be known on the server-side. ** ** The following array holds the value for all known skin details.
︙ | ︙ | |||
122 123 124 125 126 127 128 | ** preferred ranking, making it otherwise more invasive to tell the ** internals "the --skin flag ranks higher than a URL parameter" (the ** former gets initialized before both URL parameters and the /draft ** path determination). ** ** The rankings were initially defined in ** https://fossil-scm.org/forum/forumpost/caf8c9a8bb | | | > | > | | | > > > > | > | > | > > > > > > > > > > > > | > | 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 | ** preferred ranking, making it otherwise more invasive to tell the ** internals "the --skin flag ranks higher than a URL parameter" (the ** former gets initialized before both URL parameters and the /draft ** path determination). ** ** The rankings were initially defined in ** https://fossil-scm.org/forum/forumpost/caf8c9a8bb ** but were subsequently revised: ** ** 0) A skin name matching the glob pattern "draft[1-9]" at the start of ** the PATH_INFO. ** ** 1) The --skin flag for commands like "fossil ui", "fossil server", or ** "fossil http", or the "skin:" CGI config setting. ** ** 2) The "skin" display setting cookie or URL argument, in that ** order. If the "skin" URL argument is provided and refers to a legal ** skin then that will update the display cookie. If the skin name is ** illegal it is silently ignored. ** ** 3) The built-in skin identified by the "default-skin" setting, if such ** a setting exists and matches one of the built-in skin names.
** ** 4) Skin properties (settings "css", "details", "footer", "header", ** and "js") from the CONFIG db table ** ** 5) The built-in skin named "default" ** ** The iSource integer provides additional detail about where the skin ** came from. ** ** As a special case, a NULL or empty name resets zAltSkinDir and ** pAltSkin to 0 to indicate that the current config-side skin should ** be used (rank 3, above), then returns 0. */ char *skin_use_alternative(const char *zName, int rank, int iSource){ int i; Blob err = BLOB_INITIALIZER; if(rank > nSkinRank) return 0; nSkinRank = rank; if( zName && 1==rank && strchr(zName, '/')!=0 ){ zAltSkinDir = fossil_strdup(zName); iSkinSource = iSource; return 0; } if( zName && sqlite3_strglob("draft[1-9]", zName)==0 ){ skin_use_draft(zName[5] - '0'); iSkinSource = iSource; return 0; } if(!zName || !*zName){ pAltSkin = 0; zAltSkinDir = 0; return 0; } if( fossil_strcmp(zName, "custom")==0 ){ pAltSkin = 0; zAltSkinDir = 0; iSkinSource = iSource; return 0; } for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zLabel, zName)==0 ){ pAltSkin = &aBuiltinSkin[i]; iSkinSource = iSource; return 0; } } blob_appendf(&err, "available skins: %s", aBuiltinSkin[0].zLabel); for(i=1; i<count(aBuiltinSkin); i++){ blob_append(&err, " ", 1); blob_append(&err, aBuiltinSkin[i].zLabel, -1); } return blob_str(&err); } /* ** Look for the --skin command-line option and process it. Or ** call fossil_fatal() if an unknown skin is specified. ** ** This routine is called during command-line parsing for commands ** like "fossil ui" and "fossil http". */ void skin_override(void){ const char *zSkin = find_option("skin",0,1); if( zSkin ){ char *zErr = skin_use_alternative(zSkin, 1, SKIN_FROM_CMDLINE); if( zErr ) fossil_fatal("%s", zErr); } } /* ** Use one of the draft skins.
*/ void skin_use_draft(int i){ iDraftSkin = i; iSkinSource = SKIN_FROM_DRAFT; } /* ** The following routines return the various components of the skin ** that should be used for the current run. ** ** zWhat is one of: "css", "header", "footer", "details", "js" |
︙ | ︙ | |||
216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 | if( file_isfile(z, ExtFILE) ){ Blob x; blob_read_from_file(&x, z, ExtFILE); fossil_free(z); return blob_str(&x); } fossil_free(z); } if( pAltSkin ){ z = mprintf("skins/%s/%s.txt", pAltSkin->zLabel, zWhat); zOut = builtin_text(z); fossil_free(z); }else{ zOut = db_get(zWhat, 0); if( zOut==0 ){ z = mprintf("skins/default/%s.txt", zWhat); zOut = builtin_text(z); fossil_free(z); } } return zOut; } /* ** Return the command-line option used to set the skin, or return NULL | > > > > > > > > > > > > > > > > | 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 | if( file_isfile(z, ExtFILE) ){ Blob x; blob_read_from_file(&x, z, ExtFILE); fossil_free(z); return blob_str(&x); } fossil_free(z); } if( iSkinSource==SKIN_FROM_UNKNOWN ){ const char *zDflt = db_get("default-skin", 0); iSkinSource = SKIN_FROM_DEFAULT; if( zDflt!=0 ){ int i; for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zLabel, zDflt)==0 ){ pAltSkin = &aBuiltinSkin[i]; iSkinSource = SKIN_FROM_SETTING; break; } } } } if( pAltSkin ){ z = mprintf("skins/%s/%s.txt", pAltSkin->zLabel, zWhat); zOut = builtin_text(z); fossil_free(z); }else{ zOut = db_get(zWhat, 0); if( zOut==0 ){ z = mprintf("skins/default/%s.txt", zWhat); zOut = builtin_text(z); fossil_free(z); }else if( iSkinSource==SKIN_FROM_DEFAULT ){ iSkinSource = SKIN_FROM_CUSTOM; } } return zOut; } /* ** Return the command-line option used to set the skin, or return NULL |
︙ | ︙ | |||
501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 | "VALUES('skin:%q',%Q,now())", zNewName, zCurrent ); db_protect_pop(); return 0; } /* ** WEBPAGE: setup_skin_admin ** ** Administrative actions on skins. For administrators only. */ void setup_skin_admin(void){ const char *z; char *zName; char *zErr = 0; const char *zCurrent = 0; /* Current skin */ int i; /* Loop counter */ Stmt q; | > > > > > > > > > > < > > > > | | > > > > | > > > > > > > > > > > > > > > > > > > | | | | | | | | | | | | | | | | | | | | > > > > > | > > > > > > | < < < < | > > > | | | > > > | | > | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | < | < < < < < | | < < < < < < < < < < < | < > > > | | | | | | > | > | 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 
869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 | "VALUES('skin:%q',%Q,now())", zNewName, zCurrent ); db_protect_pop(); return 0; } /* ** Return true if a custom skin exists */ static int skin_exists_custom(void){ return db_exists("SELECT 1 FROM config WHERE name IN" " ('css','details','footer','header','js')"); } static void skin_publish(int); /* Forward reference */ /* ** WEBPAGE: setup_skin_admin ** ** Administrative actions on skins. For administrators only. */ void setup_skin_admin(void){ const char *z; char *zName; char *zErr = 0; const char *zCurrent = 0; /* Current skin */ int i; /* Loop counter */ Stmt q; int once; const char *zOverride = 0; const char *zDfltSkin = 0; int seenDefault = 0; int hasCustom; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } db_begin_transaction(); zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ aBuiltinSkin[i].zSQL = getSkin(aBuiltinSkin[i].zLabel); } style_set_current_feature("skins"); if( cgi_csrf_safe(2) ){ /* Process requests to delete a user-defined skin */ if( P("del1") && P("sn")!=0 ){ style_header("Confirm Custom Skin Delete"); @ <form action="%R/setup_skin_admin" method="post"><div> @ <p>Deletion of a custom skin is a permanent action that cannot @ be undone. 
Please confirm that this is what you want to do:</p> @ <input type="hidden" name="sn" value="%h(P("sn"))"> @ <input type="submit" name="del2" value="Confirm - Delete The Skin"> @ <input type="submit" name="cancel" value="Cancel - Do Not Delete"> login_insert_csrf_secret(); @ </div></form> style_finish_page(); db_end_transaction(1); return; } if( P("del2")!=0 ){ db_unprotect(PROTECT_CONFIG); if( fossil_strcmp(P("sn"),"custom")==0 ){ db_multi_exec("DELETE FROM config WHERE name IN" "('css','details','footer','header','js')"); }else if( (zName = skinVarName(P("sn"), 1))!=0 ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); } db_protect_pop(); } if( P("draftdel")!=0 ){ const char *zDraft = P("name"); if( sqlite3_strglob("draft[1-9]",zDraft)==0 ){ db_unprotect(PROTECT_CONFIG); db_multi_exec("DELETE FROM config WHERE name GLOB '%q-*'", zDraft); db_protect_pop(); } } if( P("editdraft")!=0 ){ db_end_transaction(0); cgi_redirectf("%R/setup_skin"); return; } if( skinRename() || skinSave(zCurrent) ){ db_end_transaction(0); return; } if( P("setdflt") && (z = P("bisl"))!=0 ){ if( z[0] ){ db_set("default-skin", z, 0); }else{ db_unset("default-skin", 0); } db_end_transaction(0); cgi_redirectf("%R/setup_skin_admin"); return; } /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; /* Check to see if the current skin is already saved. 
If it is, there ** is no need to create a backup */ hasCustom = skin_exists_custom(); if( hasCustom ){ zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ seen = 1; break; } } if( !seen ){ seen = db_exists("SELECT 1 FROM config WHERE name GLOB 'skin:*'" " AND value=%Q", zCurrent); if( !seen ){ db_unprotect(PROTECT_CONFIG); db_multi_exec( "INSERT INTO config(name,value,mtime) VALUES(" " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," " %Q,now())", zCurrent ); db_protect_pop(); } } } seen = 0; if( z[0]>='1' && z[0]<='9' && z[1]==0 ){ skin_publish(z[0]-'0'); seen = 1; } for(i=0; seen==0 && i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zDesc, z)==0 ){ seen = 1; zCurrent = aBuiltinSkin[i].zSQL; db_unprotect(PROTECT_CONFIG); db_multi_exec("%s", zCurrent/*safe-for-%s*/); db_protect_pop(); break; } } if( !seen ){ zName = skinVarName(z,0); zCurrent = db_get(zName, 0); db_unprotect(PROTECT_CONFIG); db_multi_exec("%s", zCurrent/*safe-for-%s*/); db_protect_pop(); } } } zDfltSkin = db_get("default-skin",0); hasCustom = skin_exists_custom(); if( !hasCustom && zDfltSkin==0 ){ zDfltSkin = "default"; } style_header("Skins"); if( zErr ){ @ <p style="color:red">%h(zErr)</p> } @ <table border="0"> @ <tr><td colspan=4><h2>Built-in Skins:</h2></td></tr> for(i=0; i<count(aBuiltinSkin); i++){ z = aBuiltinSkin[i].zDesc; @ <tr><td>%d(i+1).<td>%h(z)<td> <td> @ <form action="%R/setup_skin_admin" method="POST"> login_insert_csrf_secret(); if( zDfltSkin==0 || fossil_strcmp(aBuiltinSkin[i].zLabel, zDfltSkin)!=0 ){ /* vvvv--- mnemonic: Built-In Skin Label */ @ <input type="hidden" name="bisl" value="%h(aBuiltinSkin[i].zLabel)"> @ <input type="submit" name="setdflt" value="Set"> }else{ @ (Selected) seenDefault = 1; } if( pAltSkin==&aBuiltinSkin[i] && iSkinSource!=SKIN_FROM_SETTING ){ @ (Override) zOverride = z; } @ </form></td></tr> } if( zOverride ){ @ <tr><td> <td colspan="3"> @ <p>Note: Built-in skin 
"%h(zOverride)" is currently being used because of switch( iSkinSource ){ case SKIN_FROM_CMDLINE: @ the --skin command-line option. break; case SKIN_FROM_CGI: @ the "skin:" option on CGI script. break; case SKIN_FROM_QPARAM: @ the "skin=NAME" query parameter. break; case SKIN_FROM_COOKIE: @ the "skin" value of the @ <a href='./fdscookie'>fossil_display_setting</a> cookie. break; case SKIN_FROM_SETTING: @ the "default-skin" setting. break; default: @ reasons unknown. (Fix me!) break; } @ </tr> } i++; @ <tr><td colspan=4><h2>Custom skin:</h2></td></tr> @ <tr><td>%d(i). if( hasCustom ){ @ <td>Custom<td> <td> }else{ @ <td><i>(None)</i><td> <td> } @ <form method="post"> login_insert_csrf_secret(); if( hasCustom ){ @ <input type="submit" name="save" value="Backup"> @ <input type="submit" name="editdraft" value="Edit"> if( !seenDefault ){ @ (Selected) }else{ @ <input type="hidden" name="bisl" value=""> @ <input type="submit" name="setdflt" value="Set"> @ <input type="submit" name="del1" value="Delete"> @ <input type="hidden" name="sn" value="custom"> } }else{ @ <input type="submit" name="editdraft" value="Create"> } @ </form> @ </td></tr> db_prepare(&q, "SELECT substr(name, 6) FROM config" " WHERE name GLOB 'skin:*'" " ORDER BY name" ); once = 1; while( db_step(&q)==SQLITE_ROW ){ const char *zN = db_column_text(&q, 0); i++; if( once ){ once = 0; @ <tr><td colspan=4><h2>Backups of past custom skins:</h2></td></tr> } @ <tr><td>%d(i).<td>%h(zN)<td> <td> @ <form action="%R/setup_skin_admin" method="post"> login_insert_csrf_secret(); @ <input type="submit" name="load" value="Install"> @ <input type="submit" name="del1" value="Delete"> @ <input type="submit" name="rename" value="Rename"> @ <input type="hidden" name="sn" value="%h(zN)"> @ </form></tr> } db_finalize(&q); db_prepare(&q, "SELECT DISTINCT substr(name, 1, 6) FROM config" " WHERE name GLOB 'draft[1-9]-*'" " ORDER BY name" ); once = 1; while( db_step(&q)==SQLITE_ROW ){ const char *zN = db_column_text(&q, 0); i++; if( 
once ){ once = 0; @ <tr><td colspan=4><h2>Draft skins:</h2></td></tr> } @ <tr><td>%d(i).<td>%h(zN)<td> <td> @ <form action="%R/setup_skin_admin" method="post"> login_insert_csrf_secret(); @ <input type="submit" name="load" value="Install"> @ <input type="submit" name="draftdel" value="Delete"> @ <input type="hidden" name="name" value="%h(zN)"> @ <input type="hidden" name="sn" value="%h(zN+5)"> @ </form></tr> } db_finalize(&q); @ </table> style_finish_page(); db_end_transaction(0); } /* ** Generate HTML for a <select> that lists all the available skin names, ** except for zExcept if zExcept!=NULL. */ static void skin_emit_skin_selector( const char *zVarName, /* Variable name for the <select> */ const char *zDefault, /* The default value, if not NULL */ const char *zExcept /* Omit this skin if not NULL */ ){ int i; Stmt s; @ <select size='1' name='%s(zVarName)'> if( fossil_strcmp(zExcept, "current")!=0 && skin_exists_custom() ){ @ <option value='current'>Current Custom Skin</option> } for(i=0; i<count(aBuiltinSkin); i++){ const char *zName = aBuiltinSkin[i].zLabel; if( fossil_strcmp(zName, zExcept)==0 ) continue; if( fossil_strcmp(zDefault, zName)==0 ){ @ <option value='%s(zName)' selected>\ @ %h(aBuiltinSkin[i].zDesc)</option> }else{ @ <option value='%s(zName)'>\ @ %h(aBuiltinSkin[i].zDesc)</option> } } db_prepare(&s, "SELECT DISTINCT substr(name,1,6) FROM config" " WHERE name GLOB 'draft[1-9]-*' ORDER BY 1"); while( db_step(&s)==SQLITE_ROW ){ const char *zName = db_column_text(&s, 0); if( fossil_strcmp(zName, zExcept)==0 ) continue; if( fossil_strcmp(zDefault, zName)==0 ){ @ <option value='%s(zName)' selected>%s(zName)</option> }else{ @ <option value='%s(zName)'>%s(zName)</option> } } db_finalize(&s); @ </select> } /* ** Return the text of one of the skin files. */ static const char *skin_file_content(const char *zLabel, const char *zFile){ |
(release lines 883-889 → trunk lines 1032-1046)

  DiffConfig DCfg;
  construct_diff_flags(1, &DCfg);
  DCfg.diffFlags |= DIFF_STRIP_EOLCR;
  if( P("sbsdiff")!=0 ) DCfg.diffFlags |= DIFF_SIDEBYSIDE;
  blob_init(&to, zContent, -1);
  blob_init(&from, skin_file_content(zBasis, zFile), -1);
  blob_zero(&out);
  DCfg.diffFlags |= DIFF_HTML | DIFF_NOTTOOBIG;
  if( DCfg.diffFlags & DIFF_SIDEBYSIDE ){
    text_diff(&from, &to, &out, &DCfg);
    @ %s(blob_str(&out))
  }else{
    DCfg.diffFlags |= DIFF_LINENO;
    text_diff(&from, &to, &out, &DCfg);
    @ <pre class="udiff">
(release lines 954-965 → trunk lines 1103-1124)

  }

  /* Publish draft iSkin */
  for(i=0; i<count(azSkinFile); i++){
    char *zNew = db_get_mprintf("", "draft%d-%s", iSkin, azSkinFile[i]);
    db_set(azSkinFile[i]/*works-like:"x"*/, zNew, 0);
  }
  db_unset("default-skin", 0);
}

/*
** WEBPAGE: setup_skin
**
** Generate a page showing the steps needed to create or edit
** a custom skin.
*/
void setup_skin(void){
  int i;                   /* Loop counter */
  int iSkin;               /* Which draft skin is being edited */
  int isSetup;             /* True for an administrator */
  int isEditor;            /* Others authorized to make edits */
  char *zAllowedEditors;   /* Who may edit the draft skin */
(release lines 1018-1031 → trunk lines 1169-1185)

  /* Publish the draft skin */
  if( P("pub7")!=0 && PB("pub7ck1") && PB("pub7ck2") ){
    skin_publish(iSkin);
  }

  style_set_current_feature("skins");
  style_header("Customize Skin");
  if( g.perm.Admin ){
    style_submenu_element("Skin-Admin", "%R/setup_skin_admin");
  }
  @ <p>Customize the look of this Fossil repository by making changes
  @ to the CSS, Header, Footer, and Detail Settings in one of nine "draft"
  @ configurations.  Then, after verifying that all is working correctly,
  @ publish the draft to become the new main Skin.  Users can select a skin
  @ of their choice from the built-in ones or the locally-edited one via
  @ <a href='%R/skins'>the /skins page</a>.</p>
(release lines 1083-1096 → trunk lines 1237-1260)

  @ <a name='step3'></a>
  @ <h1>Step 3: Initialize The Draft</h1>
  @
  if( !isEditor ){
    @ <p>You are not allowed to initialize draft%d(iSkin).  Contact
    @ the administrator for this repository for more information.
  }else{
    char *zDraft = mprintf("draft%d", iSkin);
    @ <p>Initialize the draft%d(iSkin) skin to one of the built-in skins
    @ or a preexisting skin, to use as a baseline.</p>
    @
    @ <form method='POST' action='%R/setup_skin#step4' id='f03'>
    @ <p class='skinInput'>
    @ <input type='hidden' name='sk' value='%d(iSkin)'>
    @ Initialize skin <b>draft%d(iSkin)</b> using
    skin_emit_skin_selector("initskin", 0, zDraft);
    fossil_free(zDraft);
    @ <input type='submit' name='init3' value='Go'>
    @ </p>
    @ </form>
  }
  @
  @ <a name='step4'></a>
  @ <h1>Step 4: Make Edits</h1>
(release lines 1194-1210 → trunk lines 1350-1441)

** Show a list of all of the built-in skins, plus the repository skin,
** and provide the user with an opportunity to change to any of them.
*/
void skins_page(void){
  int i;
  char *zBase = fossil_strdup(g.zTop);
  size_t nBase = strlen(zBase);
  login_check_credentials();
  if( iDraftSkin && sqlite3_strglob("*/draft?", zBase)==0 ){
    nBase -= 7;
    zBase[nBase] = 0;
  }else if( pAltSkin ){
    char *zPattern = mprintf("*/skn_%s", pAltSkin->zLabel);
    if( sqlite3_strglob(zPattern, zBase)==0 ){
      nBase -= strlen(zPattern)-1;
      zBase[nBase] = 0;
    }
    fossil_free(zPattern);
  }
  style_header("Skins");
  if( iDraftSkin || nSkinRank<=1 ){
    @ <p class="warning">Warning:
    if( iDraftSkin>0 ){
      @ you are using a draft skin,
    }else{
      @ this fossil instance was started with a hard-coded skin
      @ value
    }
    @ which supersedes any option selected below.  A skin selected
    @ below will be recorded in your
    @ "%z(href("%R/fdscookie"))fossil_display_settings</a>" cookie
    @ but will not be used so long as the site has a
    @ higher-priority skin in place.
    @ </p>
  }
  @ <p>The following skins are available for this repository:</p>
  @ <ul>
  for(i=0; i<count(aBuiltinSkin); i++){
    if( pAltSkin==&aBuiltinSkin[i] ){
      @ <li> %h(aBuiltinSkin[i].zDesc) ← <i>Currently in use</i>
    }else{
      char *zUrl = href("%R/skins?skin=%T", aBuiltinSkin[i].zLabel);
      @ <li> %z(zUrl)%h(aBuiltinSkin[i].zDesc)</a>
    }
  }
  if( skin_exists_custom() ){
    if( pAltSkin==0 && zAltSkinDir==0 && iDraftSkin==0 ){
      @ <li> Custom skin for this repository ← <i>Currently in use</i>
    }else{
      @ <li> %z(href("%R/skins?skin=custom"))\
      @ Custom skin for this repository</a>
    }
  }
  @ </ul>
  if( iSkinSource<SKIN_FROM_CUSTOM ){
    @ <p>The current skin is selected by
    switch( iSkinSource ){
      case SKIN_FROM_DRAFT:
        @ the "debugN" prefix on the PATH_INFO portion of the URL.
        break;
      case SKIN_FROM_CMDLINE:
        @ the "--skin" command-line option on the Fossil server.
        break;
      case SKIN_FROM_CGI:
        @ the "skin:" property in the CGI script that runs the Fossil server.
        break;
      case SKIN_FROM_QPARAM:
        @ the "skin=NAME" query parameter on the URL.
        break;
      case SKIN_FROM_COOKIE:
        @ the "skin" property in the
        @ "%z(href("%R/fdscookie"))fossil_display_settings</a>" cookie.
        break;
      case SKIN_FROM_SETTING:
        @ the "default-skin" setting on the repository.
        break;
    }
  }
  if( iSkinSource==SKIN_FROM_COOKIE || iSkinSource==SKIN_FROM_QPARAM ){
    @ <ul>
    @ <li> %z(href("%R/skins?skin="))<i>Let Fossil choose \
    @ which skin to use</i></a>
    @ </ul>
  }
  style_finish_page();
  if( P("skin")!=0 ){
    sqlite3_uint64 x;
    sqlite3_randomness(sizeof(x), &x);
    cgi_redirectf("%R/skins/%llx", x);
  }
  fossil_free(zBase);
}
Changes to src/smtp.c.
(release lines 17-23 → trunk lines 17-32)

**
** Implementation of SMTP (Simple Mail Transport Protocol) according
** to RFC 5321.
*/
#include "config.h"
#include "smtp.h"
#include <assert.h>
#if (HAVE_DN_EXPAND || HAVE___NS_NAME_UNCOMPRESS || HAVE_NS_NAME_UNCOMPRESS) \
    && (HAVE_NS_PARSERR || HAVE___NS_PARSERR) && !defined(FOSSIL_OMIT_DNS)
# include <sys/types.h>
# include <netinet/in.h>
# if defined(HAVE_BIND_RESOLV_H)
#  include <bind/resolv.h>
#  include <bind/arpa/nameser_compat.h>
# else
#  include <arpa/nameser.h>
Changes to src/sqlcmd.c.
(release lines 382-388 → trunk lines 382-396)

**    files_of_checkin(X)  A table-valued function that returns info on
**                         all files contained in check-in X.  Example:
**
**                             SELECT * FROM files_of_checkin('trunk');
**
**    helptext             A virtual table with one row for each command,
**                         webpage, and setting together with the built-in
**                         help text.
**
**    now()                Return the number of seconds since 1970.
**
**    obscure(T)           Obfuscate the text password T so that its
**                         original value is not readily visible.  Fossil
**                         uses this same algorithm when storing passwords
**                         of remote URLs.
Changes to src/stash.c.
(release lines 425-445 → trunk lines 425-448)

    int rid = db_column_int(&q, 0);
    int isRemoved = db_column_int(&q, 1);
    int isLink = db_column_int(&q, 3);
    const char *zOrig = db_column_text(&q, 4);
    const char *zNew = db_column_text(&q, 5);
    char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig);
    Blob a, b;
    pCfg->diffFlags &= (~DIFF_FILE_MASK);
    if( rid==0 ){
      db_ephemeral_blob(&q, 6, &a);
      if( !bWebpage ) fossil_print("ADDED %s\n", zNew);
      pCfg->diffFlags |= DIFF_FILE_ADDED;
      diff_print_index(zNew, pCfg, 0);
      diff_file_mem(&empty, &a, zNew, pCfg);
    }else if( isRemoved ){
      if( !bWebpage) fossil_print("DELETE %s\n", zOrig);
      pCfg->diffFlags |= DIFF_FILE_DELETED;
      diff_print_index(zNew, pCfg, 0);
      if( fBaseline ){
        content_get(rid, &a);
        diff_file_mem(&a, &empty, zOrig, pCfg);
      }
    }else{
      Blob delta;
(release lines 567-573 → trunk lines 570-584)

  stash_tables_exist_and_current();
  if( g.argc<=2 ){
    zCmd = "save";
  }else{
    zCmd = g.argv[2];
  }
  nCmd = strlen(zCmd);
  if( strncmp(zCmd, "save", nCmd)==0 ){
    if( unsaved_changes(0)==0 ){
      fossil_fatal("nothing to stash");
    }
    stashid = stash_create();
    undo_disable();
    if( g.argc>=2 ){
      int nFile = db_int(0, "SELECT count(*) FROM stashfile WHERE stashid=%d",
(release lines 598-604 → trunk lines 601-618)

    ** we have a copy of the changes before deleting them. */
    db_commit_transaction();
    g.argv[1] = "revert";
    revert_cmd();
    fossil_print("stash %d saved\n", stashid);
    return;
  }else
  if( strncmp(zCmd, "snapshot", nCmd)==0 ){
    stash_create();
  }else
  if( strncmp(zCmd, "list", nCmd)==0 || strncmp(zCmd, "ls", nCmd)==0 ){
    Stmt q, q2;
    int n = 0, width;
    int verboseFlag = find_option("verbose","v",0)!=0;
    const char *zWidth = find_option("width","W",1);
    if( zWidth ){
      width = atoi(zWidth);
(release lines 665-671 → trunk lines 668-682)

        db_reset(&q2);
      }
    }
    db_finalize(&q);
    if( verboseFlag ) db_finalize(&q2);
    if( n==0 ) fossil_print("empty stash\n");
  }else
  if( strncmp(zCmd, "drop", nCmd)==0 || strncmp(zCmd, "rm", nCmd)==0 ){
    int allFlag = find_option("all", "a", 0)!=0;
    if( allFlag ){
      Blob ans;
      char cReply;
      prompt_user("This action is not undoable.  Continue (y/N)? ", &ans);
      cReply = blob_str(&ans)[0];
      if( cReply=='y' || cReply=='Y' ){
(release lines 691-697 → trunk lines 694-708)

    }else{
      undo_begin();
      undo_save_stash(0);
      stash_drop(stashid);
      undo_finish();
    }
  }else
  if( strncmp(zCmd, "pop", nCmd)==0 || strncmp(zCmd, "apply", nCmd)==0 ){
    char *zCom = 0, *zDate = 0, *zHash = 0;
    int popped = *zCmd=='p';
    if( popped ){
      if( g.argc>3 ) usage("pop");
      stashid = stash_get_id(0);
    }else{
      if( g.argc>4 ) usage("apply STASHID");
(release lines 720-726 → trunk lines 723-777)

    }
    fossil_free(zCom);
    fossil_free(zDate);
    fossil_free(zHash);
    undo_finish();
    if( popped ) stash_drop(stashid);
  }else
  if( strncmp(zCmd, "goto", nCmd)==0 ){
    int nConflict;
    int vid;
    if( g.argc>4 ) usage("apply STASHID");
    stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0);
    undo_begin();
    vid = db_int(0, "SELECT blob.rid FROM stash,blob"
                    " WHERE stashid=%d AND blob.uuid=stash.hash",
                 stashid);
    nConflict = update_to(vid);
    stash_apply(stashid, nConflict);
    db_multi_exec("UPDATE vfile SET mtime=0 WHERE pathname IN "
                  "(SELECT origname FROM stashfile WHERE stashid=%d)",
                  stashid);
    undo_finish();
  }else
  if( strncmp(zCmd, "diff", nCmd)==0
   || strncmp(zCmd, "gdiff", nCmd)==0
   || strncmp(zCmd, "show", nCmd)==0
   || strncmp(zCmd, "gshow", nCmd)==0
   || strncmp(zCmd, "cat", nCmd)==0
   || strncmp(zCmd, "gcat", nCmd)==0
  ){
    int fBaseline = 0;
    DiffConfig DCfg;
    if( strstr(zCmd,"show")!=0 || strstr(zCmd,"cat")!=0 ){
      fBaseline = 1;
    }
    if( find_option("tk",0,0)!=0 ){
      db_close(0);
      diff_tk(fBaseline ? "stash show" : "stash diff", 3);
      return;
    }
    diff_options(&DCfg, zCmd[0]=='g', 0);
    stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0);
    stash_diff(stashid, fBaseline, &DCfg);
  }else
  if( strncmp(zCmd, "help", nCmd)==0 ){
    g.argv[1] = "help";
    g.argv[2] = "stash";
    g.argc = 3;
    help_cmd();
  }else{
    usage("SUBCOMMAND ARGS...");
  }
  db_end_transaction(0);
}
Changes to src/stat.c.
(release lines 555-561 → trunk lines 555-569)

    }else{
      @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td>
      @ <td><nobr>%h(db_column_text(&q,1))</nobr></td></tr>
    }
    cnt++;
  }
  db_finalize(&q);
  if( nOmitted ){
    @ <tr><td><a href="urllist?all"><i>Show %d(nOmitted) more...</i></a>
  }
  if( cnt ){
    @ </table>
    total += cnt;
  }
(release lines 712-725 → trunk lines 712-741)

*/
void repo_schema_page(void){
  Stmt q;
  Blob sql;
  const char *zArg = P("n");
  login_check_credentials();
  if( !g.perm.Admin ){ login_needed(0); return; }
  if( zArg!=0 && db_table_exists("repository",zArg) && cgi_csrf_safe(1) ){
    if( P("analyze")!=0 ){
      db_multi_exec("ANALYZE \"%w\"", zArg);
    }else if( P("analyze200")!=0 ){
      db_multi_exec("PRAGMA analysis_limit=200; ANALYZE \"%w\"", zArg);
    }else if( P("deanalyze")!=0 ){
      db_unprotect(PROTECT_ALL);
      db_multi_exec("DELETE FROM repository.sqlite_stat1"
                    " WHERE tbl LIKE %Q", zArg);
      db_protect_pop();
    }
  }
  style_set_current_feature("stat");
  style_header("Repository Schema");
  style_adunit_config(ADUNIT_RIGHT_OK);
  style_submenu_element("Stat", "stat");
  style_submenu_element("URLs", "urllist");
  if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){
(release lines 757-785 → trunk lines 773-862)

    }
    @ </pre>
    db_finalize(&q);
  }else{
    style_submenu_element("Stat1","repo_stat1");
  }
  }
  @ <hr><form method="POST">
  @ <input type="submit" name="analyze" value="Run ANALYZE"><br />
  @ <input type="submit" name="analyze200"\
  @  value="Run ANALYZE with limit=200"><br />
  @ <input type="submit" name="deanalyze" value="De-ANALYZE">
  @ </form>
  style_finish_page();
}

/*
** WEBPAGE: repo_stat1
**
** Show the sqlite_stat1 table for the repository schema
*/
void repo_stat1_page(void){
  int bTabular;
  login_check_credentials();
  if( !g.perm.Admin ){ login_needed(0); return; }
  bTabular = PB("tabular");
  if( P("analyze")!=0 && cgi_csrf_safe(1) ){
    db_multi_exec("ANALYZE");
  }else if( P("analyze200")!=0 && cgi_csrf_safe(1) ){
    db_multi_exec("PRAGMA analysis_limit=200; ANALYZE;");
  }else if( P("deanalyze")!=0 && cgi_csrf_safe(1) ){
    db_unprotect(PROTECT_ALL);
    db_multi_exec("DELETE FROM repository.sqlite_stat1;");
    db_protect_pop();
  }
  style_set_current_feature("stat");
  style_header("Repository STAT1 Table");
  style_adunit_config(ADUNIT_RIGHT_OK);
  style_submenu_element("Stat", "stat");
  style_submenu_element("Schema", "repo_schema");
  style_submenu_checkbox("tabular", "Tabular", 0, 0);
  if( db_table_exists("repository","sqlite_stat1") ){
    Stmt q;
    db_prepare(&q, "SELECT tbl, idx, stat FROM repository.sqlite_stat1"
                   " ORDER BY tbl, idx");
    if( bTabular ){
      @ <table border="1" cellpadding="0" cellspacing="0">
      @ <tr><th>Table<th>Index<th>Stat
    }else{
      @ <pre>
    }
    while( db_step(&q)==SQLITE_ROW ){
      const char *zTab = db_column_text(&q,0);
      const char *zIdx = db_column_text(&q,1);
      const char *zStat = db_column_text(&q,2);
      char *zUrl = href("%R/repo_schema?n=%t",zTab);
      if( bTabular ){
        @ <tr><td>%z(zUrl)%h(zTab)</a><td>%h(zIdx)<td>%h(zStat)
      }else{
        @ INSERT INTO sqlite_stat1 \
        @ VALUES('%z(zUrl)%h(zTab)</a>','%h(zIdx)','%h(zStat)');
      }
    }
    if( bTabular ){
      @ </table>
    }else{
      @ </pre>
    }
    db_finalize(&q);
  }
  @ <p><form method="POST">
  if( bTabular ){
    @ <input type="hidden" name="tabular" value="1">
  }
  @ <input type="submit" name="analyze" value="Run ANALYZE"><br />
  @ <input type="submit" name="analyze200"\
  @  value="Run ANALYZE with limit=200"><br>
  @ <input type="submit" name="deanalyze"\
  @  value="De-ANALYZE">
  @ </form>
  style_finish_page();
}

/*
** WEBPAGE: repo-tabsize
**
** Show relative sizes of tables in the repository database.
(release lines 873-879 → trunk lines 932-996)

/*
** Gather statistics on artifact types, counts, and sizes.
**
** Only populate the artstat.atype field if the bWithTypes parameter is true.
*/
void gather_artifact_stats(int bWithTypes){
  static const char zSql[] =
    @ CREATE TEMP TABLE artstat(
    @   id INTEGER PRIMARY KEY,   -- Corresponds to BLOB.RID
    @   atype TEXT,               -- 'data', 'manifest', 'tag', 'wiki', etc.
    @   isDelta BOOLEAN,          -- true if stored as a delta
    @   szExp,                    -- expanded, uncompressed size
    @   szCmpr                    -- size as stored on disk
    @ );
    @ INSERT INTO artstat(id,atype,isDelta,szExp,szCmpr)
    @   SELECT blob.rid, NULL,
    @          delta.rid IS NOT NULL,
    @          size, octet_length(content)
    @     FROM blob LEFT JOIN delta ON blob.rid=delta.rid
    @    WHERE content IS NOT NULL;
  ;
  static const char zSql2[] =
    @ UPDATE artstat SET atype='file'
    @  WHERE +id IN (SELECT fid FROM mlink);
    @ UPDATE artstat SET atype='manifest'
    @  WHERE id IN (SELECT objid FROM event WHERE type='ci') AND atype IS NULL;
    @ UPDATE artstat SET atype='forum'
    @  WHERE id IN (SELECT objid FROM event WHERE type='f') AND atype IS NULL;
    @ UPDATE artstat SET atype='cluster'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT rid FROM tagxref
    @                WHERE tagid=(SELECT tagid FROM tag
    @                              WHERE tagname='cluster'));
    @ UPDATE artstat SET atype='ticket'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT rid FROM tagxref
    @                WHERE tagid IN (SELECT tagid FROM tag
    @                                 WHERE tagname GLOB 'tkt-*'));
    @ UPDATE artstat SET atype='wiki'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT rid FROM tagxref
    @                WHERE tagid IN (SELECT tagid FROM tag
    @                                 WHERE tagname GLOB 'wiki-*'));
    @ UPDATE artstat SET atype='technote'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT rid FROM tagxref
    @                WHERE tagid IN (SELECT tagid FROM tag
    @                                 WHERE tagname GLOB 'event-*'));
    @ UPDATE artstat SET atype='attachment'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT attachid FROM attachment UNION
    @               SELECT blob.rid FROM attachment JOIN blob ON uuid=src);
    @ UPDATE artstat SET atype='tag'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT srcid FROM tagxref);
    @ UPDATE artstat SET atype='tag'
    @  WHERE atype IS NULL
    @    AND id IN (SELECT objid FROM event WHERE type='g');
    @ UPDATE artstat SET atype='unused' WHERE atype IS NULL;
  ;
  db_multi_exec("%s", zSql/*safe-for-%s*/);
  if( bWithTypes ){
    db_multi_exec("%s", zSql2/*safe-for-%s*/);
  }
Changes to src/statrep.c.
(release lines 128-134 → trunk lines 128-142)

    const char *zNot = rc=='n' ? "NOT" : "";
    statsReportTimelineYFlag = "ci";
    db_multi_exec(
      "CREATE TEMP VIEW v_reports AS "
      "SELECT * FROM event WHERE type='ci' AND %s"
      " AND objid %s IN (SELECT cid FROM plink WHERE NOT isprim)",
      zTimeSpan/*safe-for-%s*/, zNot/*safe-for-%s*/
    );
  }
  return statsReportType = rc;
}

/*
** Returns a string suitable (for a given value of suitable) for
** use in a label with the header of the /reports pages, dependent
(release lines 307-313 → trunk lines 307-321)

                 zTimeframe, (char)statsReportType);
      if( zUserName ){
        cgi_printf("&u=%t", zUserName);
      }
      cgi_printf("'>%s</a>", zTimeframe);
    }
    @ </td><td>%d(nCount)</td>
    @ <td style='white-space: nowrap;'>
    if( strcmp(zTimeframe, zCurrentTF)==0
     && rNowFraction>0.05
     && nCount>0
     && nMaxEvents>0
    ){
      /* If the timespan covered by this row contains "now", then project
      ** the number of changes until the completion of the timespan and
(release lines 738-744 → trunk lines 738-752)

                 statsReportTimelineYFlag);
    if( zUserName ){
      cgi_printf("&u=%t",zUserName);
    }
    cgi_printf("'>%s</a></td>",zWeek);
    cgi_printf("<td>%d</td>",nCount);
    cgi_printf("<td style='white-space: nowrap;'>");
    if( nCount ){
      if( zCurrentWeek!=0
       && strcmp(zWeek, zCurrentWeek)==0
       && rNowFraction>0.05
       && nMaxEvents>0
      ){
        /* If the timespan covered by this row contains "now", then project
Changes to src/style.c.
(release lines 450-456 → trunk lines 450-484)

  ** or after any updates to the CSS files */
  blob_appendf(&url, "?id=%x", skin_id("css"));
  if( P("once")!=0 && P("skin")!=0 ){
    blob_appendf(&url, "&skin=%s&once", skin_in_use());
  }

  /* Generate the CSS URL variable */
  Th_Store("stylesheet_url", blob_str(&url));
  blob_reset(&url);
}

/*
** Create a TH1 variable containing the URL for the specified image.
** The resulting variable name will be of the form $[zImageName]_image_url.
** The value will be a URL that includes an id= query parameter that
** changes if the underlying resource changes or if a different skin
** is selected.
*/
static void image_url_var(const char *zImageName){
  char *zVarName;     /* Name of the new TH1 variable */
  char *zResource;    /* Name of CONFIG entry holding content */
  char *zUrl;         /* The URL */
  zResource = mprintf("%s-image", zImageName);
  zUrl = mprintf("%R/%s?id=%x", zImageName, skin_id(zResource));
  free(zResource);
  zVarName = mprintf("%s_image_url", zImageName);
  Th_Store(zVarName, zUrl);
  free(zVarName);
  free(zUrl);
}

/*
** Output TEXT with a click-to-copy button next to it.  Loads the copybtn.js
(release lines 595-601 → trunk lines 595-609)

** The text '$nonce' is replaced by style_nonce() if and wherever it
** occurs in the input string.
**
** The string returned is obtained from fossil_malloc() and
** should be released by the caller.
*/
char *style_csp(int toHeader){
  static const char zBackupCSP[] =
    "default-src 'self' data:; "
    "script-src 'self' 'nonce-$nonce'; "
    "style-src 'self' 'unsafe-inline'; "
    "img-src * data:";
  const char *zFormat;
  Blob csp;
  char *zNonce;
(release lines 631-657 → trunk lines 631-657)

  return zCsp;
}

/*
** Disable content security policy for the current page.
** WARNING:  Do not do this lightly!
**
** This routine must be called before the CSP is used by
** style_header().
*/
void style_disable_csp(void){
  disableCSP = 1;
}

/*
** Default HTML page header text through <body>.  If the repository-specific
** header template lacks a <body> tag, then all of the following is
** prepended.
*/
static const char zDfltHeader[] =
  @ <html>
  @ <head>
  @ <meta charset="UTF-8">
  @ <base href="$baseurl/$current_page">
  @ <meta http-equiv="Content-Security-Policy" content="$default_csp">
  @ <meta name="viewport" content="width=device-width, initial-scale=1.0">
  @ <title>$<project_name>: $<title></title>
(release lines 668-674 → trunk lines 668-682)

const char *get_default_header(){
  return zDfltHeader;
}

/*
** The default TCL list that defines the main menu.
*/
static const char zDfltMainMenu[] =
  @ Home      /home         *             {}
  @ Timeline  /timeline     {o r j}       {}
  @ Files     /dir?ci=tip   oh            desktoponly
  @ Branches  /brlist       o             wideonly
  @ Tags      /taglist      o             wideonly
  @ Forum     /forum        {@2 3 4 5 6}  wideonly
  @ Chat      /chat         C             wideonly
(release lines 793-799 → trunk lines 793-808)

  if( !login_is_nobody() ){
    Th_Store("login", g.zLogin);
  }
  Th_MaybeStore("current_feature",
                feature_from_page_path(local_zCurrentPage) );
  if( g.ftntsIssues[0] || g.ftntsIssues[1]
   || g.ftntsIssues[2] || g.ftntsIssues[3] ){
    char buf[80];
    sqlite3_snprintf(sizeof(buf), buf, "%i %i %i %i", g.ftntsIssues[0],
                     g.ftntsIssues[1], g.ftntsIssues[2], g.ftntsIssues[3]);
    Th_Store("footnotes_issues_counters", buf);
  }
}

/*
** Draw the header.
*/
(release lines 1283-1289 → trunk lines 1283-1297)

**    *  $basename
**    *  $secureurl
**    *  $home
**    *  $logo
**    *  $background
**
** The output from TH1 becomes the style sheet.  Fossil always reports
** that the style sheet is cacheable.
*/
void page_style_css(void){
  Blob css = empty_blob;
  int i;
  const char * zDefaults;
  const char *zSkin;
(release lines 1323-1329 → trunk lines 1323-1337)

  /* Tell CGI that the content returned by this page is considered cacheable */
  g.isConst = 1;
}

/*
** All possible capabilities
*/
static const char allCap[] =
  "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKL";

/*
** Compute the current login capabilities
*/
static char *find_capabilities(char *zCap){
  int i, j;
︙ | ︙ | |||
1477 1478 1479 1480 1481 1482 1483 | break; } default: { @ CSRF safety = unsafe<br> break; } } | | | 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 | break; } default: { @ CSRF safety = unsafe<br> break; } } @ fossil_exe_id() = %h(fossil_exe_id())<br> if( g.perm.Admin ){ int k; for(k=0; g.argvOrig[k]; k++){ Blob t; blob_init(&t, 0, 0); blob_append_escaped_arg(&t, g.argvOrig[k], 0); |
︙ | ︙ | |||
1649 1650 1651 1652 1653 1654 1655 | ** Example: ** ** style_select_list_int("my-grapes", "my_grapes", "Grapes", ** "Select the number of grapes", ** atoi(PD("my_field","0")), ** "", 1, "2", 2, "Three", 3, ** NULL); | | | 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 | ** Example: ** ** style_select_list_int("my-grapes", "my_grapes", "Grapes", ** "Select the number of grapes", ** atoi(PD("my_field","0")), ** "", 1, "2", 2, "Three", 3, ** NULL); ** */ void style_select_list_int(const char * zWrapperId, const char *zFieldName, const char * zLabel, const char * zToolTip, int selectedVal, ... ){ char * zLabelID = style_next_input_id(); va_list vargs; |
︙ | ︙ | |||
1773 1774 1775 1776 1777 1778 1779 | if( z[0]=='/' || z[0]=='\\' ){ zOrigin = z+1; } } CX("<script nonce='%s'>/* %s:%d */\n", style_nonce(), zOrigin, iLine); } | | | 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 | if( z[0]=='/' || z[0]=='\\' ){ zOrigin = z+1; } } CX("<script nonce='%s'>/* %s:%d */\n", style_nonce(), zOrigin, iLine); } /* Generate the closing </script> tag */ void style_script_end(void){ CX("</script>\n"); } /* ** Emits a NOSCRIPT tag with an error message stating that JS is |
︙ | ︙ |
Changes to src/style.fileedit.css.
︙ | ︙ | |||
74 75 76 77 78 79 80 81 82 83 84 85 86 87 | overflow: auto; } body.fileedit #fileedit-tab-preview-wrapper > pre { margin: 0; } body.fileedit #fileedit-tab-fileselect > h1 { margin: 0; } body.fileedit .fileedit-options.commit-message > div { display: flex; flex-direction: column; align-items: stretch; font-family: monospace; } | > > > | 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | overflow: auto; } body.fileedit #fileedit-tab-preview-wrapper > pre { margin: 0; } body.fileedit #fileedit-tab-fileselect > h1 { margin: 0; } body.fileedit .fileedit-options > div > * { margin: 0.25em; } body.fileedit .fileedit-options.commit-message > div { display: flex; flex-direction: column; align-items: stretch; font-family: monospace; } |
︙ | ︙ | |||
103 104 105 106 107 108 109 | margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > input { vertical-align: middle; margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > .input-with-label { | < | | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > input { vertical-align: middle; margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > .input-with-label { margin: 0 0.5em 0.25em 0.5em; } body.fileedit .fileedit-options > div > * { margin: 0.25em; } body.fileedit .fileedit-options.flex-container.flex-row { align-items: first baseline; } |
︙ | ︙ |
Changes to src/style.wikiedit.css.
︙ | ︙ | |||
41 42 43 44 45 46 47 | margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > input { vertical-align: middle; margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > .input-with-label { | < | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > input { vertical-align: middle; margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > .input-with-label { margin: 0 0.5em 0.25em 0.5em; } body.wikiedit label { display: inline; /* some skins set label display to block! */ } body.wikiedit .wikiedit-options > div > * { margin: 0.25em; |
︙ | ︙ |
Changes to src/sync.c.
︙ | ︙ | |||
50 51 52 53 54 55 56 | */ static int client_sync_all_urls( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ const char *zAltPCode /* Alternative project code (usually NULL) */ ){ | | | | > > > > | > > > | | | > > > > > > > > > | | > > | > > > | > | | > > > > > > | > > > | | | | | | | | | > > > > > > > > > > > | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 | */ static int client_sync_all_urls( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ const char *zAltPCode /* Alternative project code (usually NULL) */ ){ int nErr = 0; /* Number of errors seen */ int nOther; /* Number of extra remote URLs */ char **azOther; /* Text of extra remote URLs */ int i; /* Loop counter */ int iEnd; /* Loop termination point */ int nextIEnd; /* Loop termination point for next pass */ int iPass; /* Which pass through the remotes. 0 or 1 */ int nPass; /* Number of passes to make.
1 or 2 */ Stmt q; /* An SQL statement */ UrlData baseUrl; /* Saved parse of the default remote */ sync_explain(syncFlags); if( (syncFlags & SYNC_ALLURL)==0 ){ /* Common-case: Only sync with the remote identified by g.url */ nErr = client_sync(syncFlags, configRcvMask, configSendMask, zAltPCode, 0); if( nErr==0 ) url_remember(); return nErr; } /* If we reach this point, it means we want to sync with all remotes */ memset(&baseUrl, 0, sizeof(baseUrl)); url_move_parse(&baseUrl, &g.url); nOther = 0; azOther = 0; db_prepare(&q, "SELECT substr(name,10) FROM config" " WHERE name glob 'sync-url:*'" " AND value<>(SELECT value FROM config WHERE name='last-sync-url')" ); while( db_step(&q)==SQLITE_ROW ){ const char *zUrl = db_column_text(&q, 0); azOther = fossil_realloc(azOther, sizeof(*azOther)*(nOther+1)); azOther[nOther++] = fossil_strdup(zUrl); } db_finalize(&q); iEnd = nOther+1; nextIEnd = 0; nPass = 1 + ((syncFlags & (SYNC_PUSH|SYNC_PULL))==(SYNC_PUSH|SYNC_PULL)); for(iPass=0; iPass<nPass; iPass++){ for(i=0; i<iEnd; i++){ int rc; int nRcvd; if( i==0 ){ url_move_parse(&g.url, &baseUrl); /* Load canonical URL */ }else{ /* Load an auxiliary remote URL */ url_parse(azOther[i-1], URL_PROMPT_PW|URL_ASK_REMEMBER_PW|URL_USE_CONFIG); } if( i>0 || iPass>0 ) sync_explain(syncFlags); rc = client_sync(syncFlags, configRcvMask, configSendMask, zAltPCode, &nRcvd); if( nRcvd>0 ){ /* If new artifacts were received, we want to repeat all prior ** remotes on the second pass */ nextIEnd = i; } nErr += rc; if( rc==0 && iPass==0 ){ if( i==0 ){ url_remember(); }else if( (g.url.flags & URL_REMEMBER_PW)!=0 ){ char *zKey = mprintf("sync-pw:%s", azOther[i-1]); char *zPw = obscure(g.url.passwd); if( zPw && zPw[0] ){ db_set(zKey/*works-like:""*/, zPw, 0); } fossil_free(zPw); fossil_free(zKey); } } if( i==0 ){ url_move_parse(&baseUrl, &g.url); /* Don't forget canonical URL */ }else{ url_unparse(&g.url); /* Delete auxiliary URL parses */ } } iEnd = nextIEnd; } for(i=0; i<nOther; i++){ 
fossil_free(azOther[i]); azOther[i] = 0; } fossil_free(azOther); url_move_parse(&g.url, &baseUrl); /* Restore the canonical URL parse */ return nErr; } /* ** If the repository is configured for autosyncing, then do an ** autosync. Bits of the "flags" parameter determine details of behavior: |
︙ | ︙ | |||
126 127 128 129 130 131 132 | int configSync = 0; /* configuration changes transferred */ if( g.fNoSync ){ return 0; } zAutosync = db_get_for_subsystem("autosync", zSubsys); if( zAutosync==0 ) zAutosync = "on"; /* defend against misconfig */ if( is_false(zAutosync) ) return 0; | | | | 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | int configSync = 0; /* configuration changes transferred */ if( g.fNoSync ){ return 0; } zAutosync = db_get_for_subsystem("autosync", zSubsys); if( zAutosync==0 ) zAutosync = "on"; /* defend against misconfig */ if( is_false(zAutosync) ) return 0; if( db_get_boolean("dont-push",0) || sqlite3_strglob("*pull*", zAutosync)==0 ){ flags &= ~SYNC_CKIN_LOCK; if( flags & SYNC_PUSH ) return 0; } if( find_option("verbose","v",0)!=0 ) flags |= SYNC_VERBOSE; url_parse(0, URL_REMEMBER|URL_USE_CONFIG); if( g.url.protocol==0 ) return 0; if( g.url.user!=0 && g.url.passwd==0 ){ g.url.passwd = unobscure(db_get("last-sync-pw", 0)); g.url.flags |= URL_PROMPT_PW; url_prompt_for_password(); } g.zHttpAuth = get_httpauth(); if( sqlite3_strglob("*all*", zAutosync)==0 ){ rc = client_sync_all_urls(flags|SYNC_ALLURL, configSync, 0, 0); }else{ url_remember(); sync_explain(flags); url_enable_proxy("via proxy: "); rc = client_sync(flags, configSync, 0, 0, 0); } return rc; } /* ** This routine will try a number of times to perform autosync with a ** 0.5 second sleep between attempts. The number of attempts is determined |
︙ | ︙ | |||
232 233 234 235 236 237 238 239 240 241 242 243 244 245 | } } if( find_option("private",0,0)!=0 ){ *pSyncFlags |= SYNC_PRIVATE; } if( find_option("verbose","v",0)!=0 ){ *pSyncFlags |= SYNC_VERBOSE; } if( find_option("no-http-compression",0,0)!=0 ){ *pSyncFlags |= SYNC_NOHTTPCOMPRESS; } if( find_option("all",0,0)!=0 ){ *pSyncFlags |= SYNC_ALLURL; } | > > > | 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 | } } if( find_option("private",0,0)!=0 ){ *pSyncFlags |= SYNC_PRIVATE; } if( find_option("verbose","v",0)!=0 ){ *pSyncFlags |= SYNC_VERBOSE; if( find_option("verbose","v",0)!=0 ){ *pSyncFlags |= SYNC_XVERBOSE; } } if( find_option("no-http-compression",0,0)!=0 ){ *pSyncFlags |= SYNC_NOHTTPCOMPRESS; } if( find_option("all",0,0)!=0 ){ *pSyncFlags |= SYNC_ALLURL; } |
︙ | ︙ | |||
298 299 300 301 302 303 304 305 306 307 308 309 310 311 | if( g.url.protocol==0 ){ if( urlOptional ) fossil_exit(0); usage("URL"); } user_select(); url_enable_proxy("via proxy: "); *pConfigFlags |= configSync; } /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? | > > > > > > | 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 | if( g.url.protocol==0 ){ if( urlOptional ) fossil_exit(0); usage("URL"); } user_select(); url_enable_proxy("via proxy: "); *pConfigFlags |= configSync; if( (*pSyncFlags & SYNC_ALLURL)==0 && zUrl==0 ){ const char *zAutosync = db_get_for_subsystem("autosync", "sync"); if( sqlite3_strglob("*all*", zAutosync)==0 ){ *pSyncFlags |= SYNC_ALLURL; } } } /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? |
︙ | ︙ | |||
332 333 334 335 336 337 338 | ** --project-code CODE Use CODE as the project code ** --proxy PROXY Use the specified HTTP proxy ** -R|--repository REPO Local repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move messages ** between client and server | | > | 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | ** --project-code CODE Use CODE as the project code ** --proxy PROXY Use the specified HTTP proxy ** -R|--repository REPO Local repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move messages ** between client and server ** -v|--verbose Additional (debugging) output - use twice to ** also trace network traffic. ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: [[clone]], [[config]], [[push]], [[remote]], [[sync]] */ void pull_cmd(void){ unsigned configFlags = 0; |
︙ | ︙ | |||
384 385 386 387 388 389 390 | ** --proxy PROXY Use the specified HTTP proxy ** --private Push private branches too ** -R|--repository REPO Local repository to push from ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to communicate with ** the server | | > | 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 | ** --proxy PROXY Use the specified HTTP proxy ** --private Push private branches too ** -R|--repository REPO Local repository to push from ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to communicate with ** the server ** -v|--verbose Additional (debugging) output - use twice for ** network debugging ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: [[clone]], [[config]], [[pull]], [[remote]], [[sync]] */ void push_cmd(void){ unsigned configFlags = 0; |
︙ | ︙ | |||
433 434 435 436 437 438 439 | ** --private Sync private branches too ** -R|--repository REPO Local repository to sync with ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move message ** between the client and the server ** -u|--unversioned Also sync unversioned content | | > | 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 | ** --private Sync private branches too ** -R|--repository REPO Local repository to sync with ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move message ** between the client and the server ** -u|--unversioned Also sync unversioned content ** -v|--verbose Additional (debugging) output - use twice to ** get network debug info ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: [[clone]], [[pull]], [[push]], [[remote]] */ void sync_cmd(void){ unsigned configFlags = 0; |
︙ | ︙ | |||
466 467 468 469 470 471 472 | ** commands. */ void sync_unversioned(unsigned syncFlags){ unsigned configFlags = 0; (void)find_option("uv-noop",0,0); process_sync_args(&configFlags, &syncFlags, 1, 0); verify_all_options(); | | | 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 | ** commands. */ void sync_unversioned(unsigned syncFlags){ unsigned configFlags = 0; (void)find_option("uv-noop",0,0); process_sync_args(&configFlags, &syncFlags, 1, 0); verify_all_options(); client_sync(syncFlags, 0, 0, 0, 0); } /* ** COMMAND: remote ** COMMAND: remote-url* ** ** Usage: %fossil remote ?SUBCOMMAND ...? |
︙ | ︙ | |||
519 520 521 522 523 524 525 | ** ** > fossil remote list|ls ** ** Show all remote repository URLs. ** ** > fossil remote off ** | | | 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 | ** ** > fossil remote list|ls ** ** Show all remote repository URLs. ** ** > fossil remote off ** ** Forget the default URL. This disables autosync. ** ** This is a convenient way to enter "airplane mode". To enter ** airplane mode, first save the current default URL, then turn the ** default off. Perhaps like this: ** ** fossil remote add main default ** fossil remote off |
︙ | ︙ | |||
581 582 583 584 585 586 587 | ** ** The last-sync-url is called "default" for the display list. ** ** The last-sync-url might be duplicated into one of the sync-url:NAME ** entries. Thus, when doing a "fossil sync --all" or an autosync with ** autosync=all, each sync-url:NAME entry is checked to see if it is the ** same as last-sync-url and if it is then that entry is skipped. | | | 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 | ** ** The last-sync-url is called "default" for the display list. ** ** The last-sync-url might be duplicated into one of the sync-url:NAME ** entries. Thus, when doing a "fossil sync --all" or an autosync with ** autosync=all, each sync-url:NAME entry is checked to see if it is the ** same as last-sync-url and if it is then that entry is skipped. */ if( g.argc==2 ){ /* "fossil remote" with no arguments: Show the last sync URL. */ zUrl = db_get("last-sync-url", 0); if( zUrl==0 ){ fossil_print("off\n"); }else{ |
︙ | ︙ |
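The sync.c changes above replace the single-remote sync with a two-pass loop over all configured remotes. The control flow can be sketched as follows. This is a hypothetical model, not Fossil's code: `sync_all()`, `aRcvd`, and `aCalls` are invented stand-ins for `client_sync()` and its bookkeeping.

```c
/* Hypothetical model of the two-pass "--all" sync loop: pass one
** contacts every remote; pass two revisits only the remotes that came
** before the last one to deliver new artifacts, so freshly pulled
** content can still be pushed back to earlier remotes. */
static int sync_all(const int *aRcvd, int nRemote, int *aCalls){
  int iPass, i;
  int iEnd = nRemote;    /* pass one covers every remote */
  int nextIEnd = 0;      /* where the follow-up pass should stop */
  int nCall = 0;
  for(iPass=0; iPass<2 && iEnd>0; iPass++){
    for(i=0; i<iEnd; i++){
      aCalls[nCall++] = i;            /* record one sync exchange */
      if( aRcvd[i]>0 ) nextIEnd = i;  /* remote i delivered artifacts */
    }
    iEnd = nextIEnd;     /* shrink (or skip) the follow-up pass */
  }
  return nCall;
}
```

If no remote delivers anything new, `nextIEnd` stays 0 and the second pass never runs, which matches the diff's use of `iEnd = nextIEnd` as the loop terminator.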
Changes to src/tag.c.
︙ | ︙ | |||
42 43 44 45 46 47 48 | PQueue queue; /* Queue of check-ins to be tagged */ Stmt s; /* Query the children of :pid to which to propagate */ Stmt ins; /* INSERT INTO tagxref */ Stmt eventupdate; /* UPDATE event */ assert( tagType==0 || tagType==2 ); pqueuex_init(&queue); | | | 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | PQueue queue; /* Queue of check-ins to be tagged */ Stmt s; /* Query the children of :pid to which to propagate */ Stmt ins; /* INSERT INTO tagxref */ Stmt eventupdate; /* UPDATE event */ assert( tagType==0 || tagType==2 ); pqueuex_init(&queue); pqueuex_insert(&queue, pid, 0.0); /* Query for children of :pid to which to propagate the tag. ** Three returns: (1) rid of the child. (2) timestamp of child. ** (3) True to propagate or false to block. */ db_prepare(&s, "SELECT cid, plink.mtime," |
︙ | ︙ | |||
77 78 79 80 81 82 83 | ); } if( tagid==TAG_BGCOLOR ){ db_prepare(&eventupdate, "UPDATE event SET bgcolor=%Q WHERE objid=:rid", zValue ); } | | | | 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | ); } if( tagid==TAG_BGCOLOR ){ db_prepare(&eventupdate, "UPDATE event SET bgcolor=%Q WHERE objid=:rid", zValue ); } while( (pid = pqueuex_extract(&queue))!=0 ){ db_bind_int(&s, ":pid", pid); while( db_step(&s)==SQLITE_ROW ){ int doit = db_column_int(&s, 2); if( doit ){ int cid = db_column_int(&s, 0); double mtime = db_column_double(&s, 1); pqueuex_insert(&queue, cid, mtime); db_bind_int(&ins, ":rid", cid); db_step(&ins); db_reset(&ins); if( tagid==TAG_BGCOLOR ){ db_bind_int(&eventupdate, ":rid", cid); db_step(&eventupdate); db_reset(&eventupdate); |
︙ | ︙ | |||
638 639 640 641 642 643 644 | const char *zTagPrefix = find_option("prefix","",1); int nTagType = fRaw ? -1 : 0; if( zTagType!=0 ){ int l = strlen(zTagType); if( strncmp(zTagType,"cancel",l)==0 ){ nTagType = 0; | | | | 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 | const char *zTagPrefix = find_option("prefix","",1); int nTagType = fRaw ? -1 : 0; if( zTagType!=0 ){ int l = strlen(zTagType); if( strncmp(zTagType,"cancel",l)==0 ){ nTagType = 0; }else if( strncmp(zTagType,"singleton",l)==0 ){ nTagType = 1; }else if( strncmp(zTagType,"propagated",l)==0 ){ nTagType = 2; }else{ fossil_fatal("unrecognized tag type"); } } if( g.argc==3 ){ const int nTagPrefix = zTagPrefix ? (int)strlen(zTagPrefix) : 0; |
︙ | ︙ |
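The first tag.c hunk seeds `tag_propagate()`'s priority queue with an mtime of 0.0 so the starting check-in is always extracted first, after which check-ins are visited in chronological order. A toy model of that walk (hypothetical code: the five-node tree, `allowProp`, and `propagate_tag()` are invented for illustration, and a naive linear extract-min stands in for the real priority queue):

```c
#include <string.h>

/* Toy model of propagating a tag through a check-in DAG in mtime
** order, the way tag_propagate() uses its priority queue.  A child
** whose allowProp entry is 0 blocks the tag (as a check-in carrying
** its own override would), cutting off its descendants as well. */
#define N 5
static const int parentOf[N]   = { -1, 0, 0, 1, 2 };
static const double mtimeOf[N] = { 1.0, 2.0, 3.0, 4.0, 5.0 };
static const int allowProp[N]  = { 1, 1, 0, 1, 1 };  /* node 2 blocks */

static void propagate_tag(int start, int *tagged){
  double key[N];
  int inQueue[N];
  int i;
  memset(inQueue, 0, sizeof(inQueue));
  inQueue[start] = 1;
  key[start] = 0.0;               /* extract the seed first */
  for(;;){
    int best = -1;                /* naive extract-min over the queue */
    for(i=0; i<N; i++){
      if( inQueue[i] && (best<0 || key[i]<key[best]) ) best = i;
    }
    if( best<0 ) break;
    inQueue[best] = 0;
    for(i=0; i<N; i++){           /* visit children of the extracted node */
      if( parentOf[i]==best && allowProp[i] && !tagged[i] ){
        tagged[i] = 1;            /* tag the child... */
        inQueue[i] = 1;           /* ...and queue it by its mtime */
        key[i] = mtimeOf[i];
      }
    }
  }
}
```

Node 2 refuses the tag, so node 4 (its only child) is never reached even though nodes 1 and 3 are.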
Changes to src/tar.c.
︙ | ︙ | |||
242 243 244 245 246 247 248 | n /= 10; } /* adding the length extended the length field? */ if(blen > next10){ blen++; } /* build the string */ | | > | 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 | n /= 10; } /* adding the length extended the length field? */ if(blen > next10){ blen++; } /* build the string */ blob_appendf(&tball.pax, "%d %s=%*.*s\n", blen, zField, nValue, nValue, zValue); /* this _must_ be right */ if((int)blob_size(&tball.pax) != blen){ fossil_panic("internal error: PAX tar header has bad length"); } } |
︙ | ︙ |
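The tar.c hunk fixes the field width used when appending a PAX value. The subtlety in that code is that a pax extended-header record's leading length field counts its own digits. A hedged sketch of that fixed-point computation (`pax_record()` is a hypothetical helper, not Fossil's code):

```c
#include <stdio.h>
#include <string.h>

/* Sketch of building one POSIX pax extended-header record,
** "LEN key=value\n", where LEN counts every byte of the record
** including LEN's own digits.  Adding those digits can push the total
** across a power of ten, so the length is recomputed until it is
** self-consistent -- the same fixed point tar.c's blen loop seeks. */
static int pax_record(char *zOut, int nOut,
                      const char *zKey, const char *zVal){
  int nBody = (int)strlen(zKey) + (int)strlen(zVal) + 3; /* ' ','=','\n' */
  int nTotal = nBody + 1;            /* first guess: a one-digit length */
  for(;;){
    int nDigits = snprintf(0, 0, "%d", nTotal);  /* digits in LEN */
    if( nDigits + nBody == nTotal ) break;       /* self-consistent */
    nTotal = nDigits + nBody;
  }
  return snprintf(zOut, nOut, "%d %s=%s\n", nTotal, zKey, zVal);
}
```

This mirrors the "adding the length extended the length field?" check in the diff: the returned record's byte count always equals the number it begins with.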
Changes to src/terminal.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 27 28 29 30 31 32 33 | #include "config.h" #include "terminal.h" #include <assert.h> #ifdef _WIN32 # include <windows.h> #else #include <sys/ioctl.h> #include <stdio.h> #include <unistd.h> #endif | > > > | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 | #include "config.h" #include "terminal.h" #include <assert.h> #ifdef _WIN32 # include <windows.h> #else #ifdef __EXTENSIONS__ #include <termio.h> #endif #include <sys/ioctl.h> #include <stdio.h> #include <unistd.h> #endif |
︙ | ︙ |
Changes to src/th.c.
︙ | ︙ | |||
2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 | /* ** Set the result of the interpreter to the th1 representation of ** the integer iVal and return TH_OK. */ int Th_SetResultInt(Th_Interp *interp, int iVal){ int isNegative = 0; char zBuf[32]; char *z = &zBuf[32]; if( iVal<0 ){ isNegative = 1; | > | | | | | 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 | /* ** Set the result of the interpreter to the th1 representation of ** the integer iVal and return TH_OK. */ int Th_SetResultInt(Th_Interp *interp, int iVal){ int isNegative = 0; unsigned int uVal = iVal; char zBuf[32]; char *z = &zBuf[32]; if( iVal<0 ){ isNegative = 1; uVal = iVal * -1; } *(--z) = '\0'; *(--z) = (char)(48+(uVal%10)); while( (uVal = (uVal/10))>0 ){ *(--z) = (char)(48+(uVal%10)); assert(z>zBuf); } if( isNegative ){ *(--z) = '-'; } return Th_SetResult(interp, z, -1); |
︙ | ︙ |
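The th.c hunk makes `Th_SetResultInt` run its digit loop on an unsigned copy of the value. The point, shown here with a hypothetical helper rather than Fossil's exact code, is that negating in unsigned arithmetic is well defined even for the most negative int:

```c
/* Convert an int to decimal text, building the string backwards in
** the caller's buffer and returning a pointer to its first character.
** The magnitude is held in an unsigned int and negated as 0u - uVal,
** which is well defined even when iVal is INT_MIN, where the naive
** signed negation -iVal would overflow. */
static char *int_to_decimal(int iVal, char *zBuf, int nBuf){
  unsigned int uVal = (unsigned int)iVal;
  char *z = &zBuf[nBuf];
  if( iVal<0 ) uVal = 0u - uVal;   /* well-defined unsigned negation */
  *(--z) = '\0';
  do{
    *(--z) = (char)('0' + uVal%10);
    uVal /= 10;
  }while( uVal>0 );
  if( iVal<0 ) *(--z) = '-';
  return z;
}
```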
Changes to src/th_main.c.
︙ | ︙ | |||
29 30 31 32 33 34 35 | */ #define TH_INIT_NONE ((u32)0x00000000) /* No flags. */ #define TH_INIT_NEED_CONFIG ((u32)0x00000001) /* Open configuration first? */ #define TH_INIT_FORCE_TCL ((u32)0x00000002) /* Force Tcl to be enabled? */ #define TH_INIT_FORCE_RESET ((u32)0x00000004) /* Force TH1 commands re-added? */ #define TH_INIT_FORCE_SETUP ((u32)0x00000008) /* Force eval of setup script? */ #define TH_INIT_NO_REPO ((u32)0x00000010) /* Skip opening repository. */ | | > | 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | */ #define TH_INIT_NONE ((u32)0x00000000) /* No flags. */ #define TH_INIT_NEED_CONFIG ((u32)0x00000001) /* Open configuration first? */ #define TH_INIT_FORCE_TCL ((u32)0x00000002) /* Force Tcl to be enabled? */ #define TH_INIT_FORCE_RESET ((u32)0x00000004) /* Force TH1 commands re-added? */ #define TH_INIT_FORCE_SETUP ((u32)0x00000008) /* Force eval of setup script? */ #define TH_INIT_NO_REPO ((u32)0x00000010) /* Skip opening repository. */ #define TH_INIT_NO_ENCODE ((u32)0x00000020) /* Do not html-encode sendText()*/ /* output. */ #define TH_INIT_MASK ((u32)0x0000003F) /* All possible init flags. */ /* ** Useful and/or "well-known" combinations of flag values. */ #define TH_INIT_DEFAULT (TH_INIT_NONE) /* Default flags. */ #define TH_INIT_HOOK (TH_INIT_NEED_CONFIG | TH_INIT_FORCE_SETUP) |
︙ | ︙ |
Changes to src/th_tcl.c.
︙ | ︙ | |||
1162 1163 1164 1165 1166 1167 1168 | Tcl_DeleteInterp(tclInterp); /* TODO: Redundant? */ tclInterp = 0; return TH_ERROR; } tclContext->interp = tclInterp; if( Tcl_Init(tclInterp)!=TCL_OK ){ Th_ErrorMessage(interp, | | > | > | 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 | Tcl_DeleteInterp(tclInterp); /* TODO: Redundant? */ tclInterp = 0; return TH_ERROR; } tclContext->interp = tclInterp; if( Tcl_Init(tclInterp)!=TCL_OK ){ Th_ErrorMessage(interp, "Tcl initialization error:", Tcl_GetString(Tcl_GetObjResult(tclInterp)), -1); Tcl_DeleteInterp(tclInterp); tclContext->interp = tclInterp = 0; return TH_ERROR; } if( setTclArguments(tclInterp, argc, argv)!=TCL_OK ){ Th_ErrorMessage(interp, "Tcl error setting arguments:", Tcl_GetString(Tcl_GetObjResult(tclInterp)), -1); Tcl_DeleteInterp(tclInterp); tclContext->interp = tclInterp = 0; return TH_ERROR; } /* ** Determine (and cache) if an objProc can be called directly for a Tcl ** command invoked via the tclInvoke TH1 command. |
︙ | ︙ | |||
1192 1193 1194 1195 1196 1197 1198 | Tcl_CallWhenDeleted(tclInterp, Th1DeleteProc, interp); Tcl_CreateObjCommand(tclInterp, "th1Eval", Th1EvalObjCmd, interp, NULL); Tcl_CreateObjCommand(tclInterp, "th1Expr", Th1ExprObjCmd, interp, NULL); /* If necessary, evaluate the custom Tcl setup script. */ setup = tclContext->setup; if( setup && Tcl_EvalEx(tclInterp, setup, -1, 0)!=TCL_OK ){ Th_ErrorMessage(interp, | | > | 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 | Tcl_CallWhenDeleted(tclInterp, Th1DeleteProc, interp); Tcl_CreateObjCommand(tclInterp, "th1Eval", Th1EvalObjCmd, interp, NULL); Tcl_CreateObjCommand(tclInterp, "th1Expr", Th1ExprObjCmd, interp, NULL); /* If necessary, evaluate the custom Tcl setup script. */ setup = tclContext->setup; if( setup && Tcl_EvalEx(tclInterp, setup, -1, 0)!=TCL_OK ){ Th_ErrorMessage(interp, "Tcl setup script error:", Tcl_GetString(Tcl_GetObjResult(tclInterp)), -1); Tcl_DeleteInterp(tclInterp); tclContext->interp = tclInterp = 0; return TH_ERROR; } return TH_OK; } |
︙ | ︙ |
Changes to src/timeline.c.
︙ | ︙ | |||
33 34 35 36 37 38 39 40 41 42 43 44 45 46 | */ #define TIMELINE_MODE_NONE 0 #define TIMELINE_MODE_BEFORE 1 #define TIMELINE_MODE_AFTER 2 #define TIMELINE_MODE_CHILDREN 3 #define TIMELINE_MODE_PARENTS 4 /* ** Add an appropriate tag to the output if "rid" is unpublished (private) */ #define UNPUB_TAG "<em>(unpublished)</em>" void tag_private_status(int rid){ if( content_is_private(rid) ){ cgi_printf(" %s", UNPUB_TAG); | > > > > > > > | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | */ #define TIMELINE_MODE_NONE 0 #define TIMELINE_MODE_BEFORE 1 #define TIMELINE_MODE_AFTER 2 #define TIMELINE_MODE_CHILDREN 3 #define TIMELINE_MODE_PARENTS 4 #define TIMELINE_FMT_ONELINE \ "%h %c" #define TIMELINE_FMT_MEDIUM \ "Commit: %h%nDate: %d%nAuthor: %a%nComment: %c" #define TIMELINE_FMT_FULL \ "Commit: %H%nDate: %d%nAuthor: %a%nComment: %c%n"\ "Branch: %b%nTags: %t%nPhase: %p" /* ** Add an appropriate tag to the output if "rid" is unpublished (private) */ #define UNPUB_TAG "<em>(unpublished)</em>" void tag_private_status(int rid){ if( content_is_private(rid) ){ cgi_printf(" %s", UNPUB_TAG); |
︙ | ︙ | |||
144 145 146 147 148 149 150 | db_bind_int(&q, "$rid", rid); res = db_step(&q)==SQLITE_ROW; db_reset(&q); return res; } /* | | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 | db_bind_int(&q, "$rid", rid); res = db_step(&q)==SQLITE_ROW; db_reset(&q); return res; } /* ** Return the text of the unformatted ** forum post given by the RID in the argument. */ static void forum_post_content_function( sqlite3_context *context, int argc, sqlite3_value **argv ){ |
︙ | ︙ | |||
357 358 359 360 361 362 363 | int isClosed = 0; if( is_ticket(zTktid, &isClosed) && isClosed ){ zExtraClass = " tktTlClosed"; }else{ zExtraClass = " tktTlOpen"; } fossil_free(zTktid); | | | 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 | int isClosed = 0; if( is_ticket(zTktid, &isClosed) && isClosed ){ zExtraClass = " tktTlClosed"; }else{ zExtraClass = " tktTlOpen"; } fossil_free(zTktid); } } if( zType[0]=='e' && tagid ){ if( bTimestampLinksToInfo ){ char *zId; zId = db_text(0, "SELECT substr(tagname, 7) FROM tag WHERE tagid=%d", tagid); zDateLink = href("%R/technote/%s",zId); |
︙ | ︙ | |||
667 668 669 670 671 672 673 | cgi_printf(" tags: %h", zTagList); } } if( tmFlags & TIMELINE_SHOWRID ){ int srcId = delta_source_rid(rid); if( srcId ){ | | | 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 | cgi_printf(" tags: %h", zTagList); } } if( tmFlags & TIMELINE_SHOWRID ){ int srcId = delta_source_rid(rid); if( srcId ){ cgi_printf(" id: %z%d←%d</a>", href("%R/deltachain/%d",rid), rid, srcId); }else{ cgi_printf(" id: %z%d</a>", href("%R/deltachain/%d",rid), rid); } } tag_private_status(rid); |
︙ | ︙ | |||
1415 1416 1417 1418 1419 1420 1421 | zIntro = "regular expression "; }else/* if( matchStyle==MS_BRLIST )*/{ zStart = "tagname IN ('sym-"; zDelimiter = "','sym-"; zEnd = "')"; zPrefix = ""; zSuffix = ""; | | | 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 | zIntro = "regular expression "; }else/* if( matchStyle==MS_BRLIST )*/{ zStart = "tagname IN ('sym-"; zDelimiter = "','sym-"; zEnd = "')"; zPrefix = ""; zSuffix = ""; zIntro = ""; } /* Convert the list of matches into an SQL expression and text description. */ blob_zero(&expr); blob_zero(&desc); blob_zero(&err); while( 1 ){ |
︙ | ︙ | |||
1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 | } zEDate[j] = 0; /* It looks like this may be a date. Return it with punctuation added. */ return zEDate; } /* ** WEBPAGE: timeline ** ** Query parameters: ** ** a=TIMEORTAG Show events after TIMEORTAG ** b=TIMEORTAG Show events before TIMEORTAG ** c=TIMEORTAG Show events that happen "circa" TIMEORTAG ** cf=FILEHASH Show events around the time of the first use of ** the file with FILEHASH ** m=TIMEORTAG Highlight the event at TIMEORTAG, or the closest available ** event if TIMEORTAG is not part of the timeline. If ** the t= or r= is used, the m event is added to the timeline ** if it isn't there already. ** sel1=TIMEORTAG Highlight the check-in at TIMEORTAG if it is part of ** the timeline. Similar to m= except TIMEORTAG must ** match a check-in that is already in the timeline. ** sel2=TIMEORTAG Like sel1= but use the secondary highlight. ** n=COUNT Maximum number of events. "all" for no limit ** n1=COUNT Same as "n" but doesn't set the display-preference cookie ** Use "n1=COUNT" for a one-time display change ** p=CHECKIN Parents and ancestors of CHECKIN ** bt=PRIOR ... going back to PRIOR ** d=CHECKIN Children and descendants of CHECKIN ** ft=DESCENDANT ... going forward to DESCENDANT ** dp=CHECKIN Same as 'd=CHECKIN&p=CHECKIN' ** df=CHECKIN Same as 'd=CHECKIN&n1=all&nd'.
Mnemonic: "Derived From" | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > | > > > > > > | 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 | } zEDate[j] = 0; /* It looks like this may be a date. Return it with punctuation added. */ return zEDate; } /* ** Find the first check-in encountered with a particular tag ** when moving either forwards are backwards in time from a ** particular starting point (iFrom). Return the rid of that ** first check-in. If there are no check-ins in the decendent ** or ancestor set of check-in iFrom that match the tag, then ** return 0. 
*/ static int timeline_endpoint( int iFrom, /* Starting point */ const char *zEnd, /* Tag we are searching for */ int bForward /* 1: forwards in time (descendents) 0: backwards */ ){ int tagId; int endId = 0; Stmt q; int ans = 0; tagId = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zEnd); if( tagId==0 ){ endId = symbolic_name_to_rid(zEnd, "ci"); if( endId==0 ) return 0; } if( bForward ){ if( tagId ){ db_prepare(&q, "WITH RECURSIVE dx(id,mtime) AS (" " SELECT %d, event.mtime FROM event WHERE objid=%d" " UNION" " SELECT plink.cid, plink.mtime" " FROM dx, plink" " WHERE plink.pid=dx.id" " AND plink.mtime<=(SELECT max(event.mtime) FROM tagxref, event" " WHERE tagxref.tagid=%d AND tagxref.tagtype>0" " AND event.objid=tagxref.rid)" " ORDER BY plink.mtime)" "SELECT id FROM dx, tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=id LIMIT 1", iFrom, iFrom, tagId, tagId ); }else{ db_prepare(&q, "WITH RECURSIVE dx(id,mtime) AS (" " SELECT %d, event.mtime FROM event WHERE objid=%d" " UNION" " SELECT plink.cid, plink.mtime" " FROM dx, plink" " WHERE plink.pid=dx.id" " AND plink.mtime<=(SELECT mtime FROM event WHERE objid=%d)" " ORDER BY plink.mtime)" "SELECT id FROM dx WHERE id=%d", iFrom, iFrom, endId, endId ); } }else{ if( tagId ){ db_prepare(&q, "WITH RECURSIVE dx(id,mtime) AS (" " SELECT %d, event.mtime FROM event WHERE objid=%d" " UNION" " SELECT plink.pid, event.mtime" " FROM dx, plink, event" " WHERE plink.cid=dx.id AND event.objid=plink.pid" " AND event.mtime>=(SELECT min(event.mtime) FROM tagxref, event" " WHERE tagxref.tagid=%d AND tagxref.tagtype>0" " AND event.objid=tagxref.rid)" " ORDER BY event.mtime DESC)" "SELECT id FROM dx, tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=id LIMIT 1", iFrom, iFrom, tagId, tagId ); }else{ db_prepare(&q, "WITH RECURSIVE dx(id,mtime) AS (" " SELECT %d, event.mtime FROM event WHERE objid=%d" " UNION" " SELECT plink.pid, event.mtime" " FROM dx, plink, event" " WHERE plink.cid=dx.id AND event.objid=plink.pid" " AND 
event.mtime>=(SELECT mtime FROM event WHERE objid=%d)" " ORDER BY event.mtime DESC)" "SELECT id FROM dx WHERE id=%d", iFrom, iFrom, endId, endId ); } } if( db_step(&q)==SQLITE_ROW ){ ans = db_column_int(&q, 0); } db_finalize(&q); return ans; } /* ** COMMAND: test-endpoint ** ** Usage: fossil test-endpoint BASE TAG ?OPTIONS? ** ** Show the first check-in with TAG that is a descendant or ancestor ** of BASE. The first descendant check-in is shown by default. Use ** the --backto option to see the first ancestor check-in. ** ** Options: ** ** --backto Show ancestors. Otherwise defaults to descendants. */ void timeline_test_endpoint(void){ int bForward = find_option("backto",0,0)==0; int from_rid; int ans; db_find_and_open_repository(0, 0); verify_all_options(); if( g.argc!=4 ){ usage("BASE-CHECKIN TAG ?--backto?"); } from_rid = symbolic_name_to_rid(g.argv[2],"ci"); ans = timeline_endpoint(from_rid, g.argv[3], bForward); if( ans ){ fossil_print("Result: %d (%S)\n", ans, rid_to_uuid(ans)); }else{ fossil_print("No path found\n"); } } /* ** WEBPAGE: timeline ** ** Query parameters: ** ** a=TIMEORTAG Show events after TIMEORTAG ** b=TIMEORTAG Show events before TIMEORTAG ** c=TIMEORTAG Show events that happen "circa" TIMEORTAG ** cf=FILEHASH Show events around the time of the first use of ** the file with FILEHASH ** m=TIMEORTAG Highlight the event at TIMEORTAG, or the closest available ** event if TIMEORTAG is not part of the timeline. If ** the t= or r= is used, the m event is added to the timeline ** if it isn't there already. ** x=HASHLIST Show all check-ins in the comma-separated HASHLIST ** in addition to check-ins specified by t= or r= ** sel1=TIMEORTAG Highlight the check-in at TIMEORTAG if it is part of ** the timeline. Similar to m= except TIMEORTAG must ** match a check-in that is already in the timeline. ** sel2=TIMEORTAG Like sel1= but use the secondary highlight. ** n=COUNT Maximum number of events. 
"all" for no limit ** n1=COUNT Same as "n" but doesn't set the display-preference cookie ** Use "n1=COUNT" for a one-time display change ** p=CHECKIN Parents and ancestors of CHECKIN ** bt=PRIOR ... going back to PRIOR ** d=CHECKIN Children and descendants of CHECKIN ** ft=DESCENDANT ... going forward to DESCENDANT ** dp=CHECKIN Same as 'd=CHECKIN&p=CHECKIN' ** df=CHECKIN Same as 'd=CHECKIN&n1=all&nd'. Mnemonic: "Derived From" ** bt=CHECKIN "Back To". Show ancenstors going back to CHECKIN ** p=CX ... from CX back to time of CHECKIN ** from=CX ... shortest path from CX back to CHECKIN ** ft=CHECKIN "Forward To": Show decendents forward to CHECKIN ** d=CX ... from CX up to the time of CHECKIN ** from=CX ... shortest path from CX up to CHECKIN ** t=TAG Show only check-ins with the given TAG ** r=TAG Show check-ins related to TAG, equivalent to t=TAG&rel ** tl=TAGLIST Shorthand for t=TAGLIST&ms=brlist ** rl=TAGLIST Shorthand for r=TAGLIST&ms=brlist ** rel Show related check-ins as well as those matching t=TAG ** mionly Limit rel to show ancestors but not descendants ** nowiki Do not show wiki associated with branch or tag ** ms=MATCHSTYLE Set tag match style to EXACT, GLOB, LIKE, REGEXP ** u=USER Only show items associated with USER ** y=TYPE 'ci', 'w', 't', 'n', 'e', 'f', or 'all'. ** ss=VIEWSTYLE c: "Compact", v: "Verbose", m: "Modern", j: "Columnar", ** x: "Classic". ** advm Use the "Advanced" or "Busy" menu design. ** ng No Graph. ** ncp Omit cherrypick merges ** nd Do not highlight the focus check-in ** nsm Omit the submenu ** nc Omit all graph colors other than highlights ** v Show details of files changed ** vfx Show complete text of forum messages ** f=CHECKIN Show family (immediate parents and children) of CHECKIN ** from=CHECKIN Path from... ** to=CHECKIN ... to this ** to2=CHECKIN ... backup name if to= doesn't resolve ** shortest ... show only the shortest path ** rel ... also show related checkins ** bt=PRIOR ... 
path from CHECKIN back to PRIOR ** ft=LATER ... path from CHECKIN forward to LATER ** uf=FILE_HASH Show only check-ins that contain the given file version ** All qualifying check-ins are shown unless there is ** also an n= or n1= query parameter. ** chng=GLOBLIST Show only check-ins that involve changes to a file whose ** name matches one of the comma-separate GLOBLIST ** brbg Background color determined by branch name ** ubg Background color determined by user |
︙ | ︙ | |||
1696 1697 1698 1699 1700 1701 1702 | const char *zBisect = P("bid"); /* Bisect description */ int cpOnly = PB("cherrypicks"); /* Show all cherrypick checkins */ int tmFlags = 0; /* Timeline flags */ const char *zThisTag = 0; /* Suppress links to this tag */ const char *zThisUser = 0; /* Suppress links to this user */ HQuery url; /* URL for various branch links */ int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */ | > | > | 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 | const char *zBisect = P("bid"); /* Bisect description */ int cpOnly = PB("cherrypicks"); /* Show all cherrypick checkins */ int tmFlags = 0; /* Timeline flags */ const char *zThisTag = 0; /* Suppress links to this tag */ const char *zThisUser = 0; /* Suppress links to this user */ HQuery url; /* URL for various branch links */ int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */ const char *zTo2 = 0; int to_rid = name_choice("to","to2",&zTo2); /* to= for path timelines */ int noMerge = P("shortest")==0; /* Follow merge links if shorter */ int me_rid = name_to_typed_rid(P("me"),"ci"); /* me= for common ancestory */ int you_rid = name_to_typed_rid(P("you"),"ci");/* you= for common ancst */ int pd_rid; double rBefore, rAfter, rCirca; /* Boundary times */ const char *z; char *zOlderButton = 0; /* URL for Older button at the bottom */ char *zOlderButtonLabel = 0; /* Label for the Older Button */ char *zNewerButton = 0; /* URL for Newer button at the top */ char *zNewerButtonLabel = 0; /* Label for the Newer button */ int selectedRid = 0; /* Show a highlight on this RID */ int secondaryRid = 0; /* Show secondary highlight */ int disableY = 0; /* Disable type selector on submenu */ int advancedMenu = 0; /* Use the advanced menu design */ char *zPlural; /* Ending for plural forms */ int showCherrypicks = 1; /* True to show cherrypick merges */ 
int haveParameterN; /* True if n= query parameter present */ int from_to_mode = 0; /* 0: from,to. 1: from,ft 2: from,bt */ url_initialize(&url, "timeline"); cgi_query_parameters_to_url(&url); (void)P_NoBot("ss") /* "ss" is processed via the udc but at least one spider likes to ** try to SQL inject via this argument, so let's catch that. */; |
︙ | ︙ | |||
1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 | } /* Undocumented query parameter to set JS mode */ builtin_set_js_delivery_mode(P("jsmode"),1); secondaryRid = name_to_typed_rid(P("sel2"),"ci"); selectedRid = name_to_typed_rid(P("sel1"),"ci"); tmFlags |= timeline_ss_submenu(); cookie_link_parameter("advm","advm","0"); advancedMenu = atoi(PD("advm","0")); /* Omit all cherry-pick merge lines if the "ncp" query parameter is ** present or if this repository lacks a "cherrypick" table. */ if( PB("ncp") || !db_table_exists("repository","cherrypick") ){ | > > > > | 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 | } /* Undocumented query parameter to set JS mode */ builtin_set_js_delivery_mode(P("jsmode"),1); secondaryRid = name_to_typed_rid(P("sel2"),"ci"); selectedRid = name_to_typed_rid(P("sel1"),"ci"); if( from_rid!=0 && to_rid!=0 ){ if( selectedRid==0 ) selectedRid = from_rid; if( secondaryRid==0 ) secondaryRid = to_rid; } tmFlags |= timeline_ss_submenu(); cookie_link_parameter("advm","advm","0"); advancedMenu = atoi(PD("advm","0")); /* Omit all cherry-pick merge lines if the "ncp" query parameter is ** present or if this repository lacks a "cherrypick" table. */ if( PB("ncp") || !db_table_exists("repository","cherrypick") ){ |
︙ | ︙ | |||
1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 | " FROM mlink, event" " WHERE mlink.fid=(SELECT rid FROM blob WHERE uuid LIKE '%q%%')" " AND event.objid=mlink.mid" " ORDER BY event.mtime LIMIT 1", P("cf") ); } /* Convert r=TAG to t=TAG&rel in order to populate the UI style widgets. */ if( zBrName && !related ){ cgi_delete_query_parameter("r"); cgi_set_query_parameter("t", zBrName); (void)P("t"); cgi_set_query_parameter("rel", "1"); zTagName = zBrName; | > > > > > > > > > > > > > > | 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 | " FROM mlink, event" " WHERE mlink.fid=(SELECT rid FROM blob WHERE uuid LIKE '%q%%')" " AND event.objid=mlink.mid" " ORDER BY event.mtime LIMIT 1", P("cf") ); } /* Check for tl=TAGLIST and rl=TAGLIST which are abbreviations for ** t=TAGLIST&ms=brlist and r=TAGLIST&ms=brlist respectively. */ if( zBrName==0 && zTagName==0 ){ const char *z; if( (z = P("tl"))!=0 ){ zTagName = z; zMatchStyle = "brlist"; } if( (z = P("rl"))!=0 ){ zBrName = z; zMatchStyle = "brlist"; } } /* Convert r=TAG to t=TAG&rel in order to populate the UI style widgets. */ if( zBrName && !related ){ cgi_delete_query_parameter("r"); cgi_set_query_parameter("t", zBrName); (void)P("t"); cgi_set_query_parameter("rel", "1"); zTagName = zBrName;
︙ | ︙ | |||
2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 | } if( (tmFlags & TIMELINE_UNHIDE)==0 ){ blob_append_sql(&sql, " AND NOT EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", TAG_HIDDEN ); } if( ((from_rid && to_rid) || (me_rid && you_rid)) && g.perm.Read ){ /* If from= and to= are present, display all nodes on a path connecting ** the two */ PathNode *p = 0; const char *zFrom = 0; const char *zTo = 0; Blob ins; int nNodeOnPath = 0; if( from_rid && to_rid ){ | > > > > > > > > > > > > > > > > > > > > > | > > > > > | | 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 | } if( (tmFlags & TIMELINE_UNHIDE)==0 ){ blob_append_sql(&sql, " AND NOT EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", TAG_HIDDEN ); } if( from_rid && !to_rid && (P("ft")!=0 || P("bt")!=0) ){ const char *zTo = P("ft"); if( zTo ){ from_to_mode = 1; to_rid = timeline_endpoint(from_rid, zTo, 1); }else{ from_to_mode = 2; zTo = P("bt"); to_rid = timeline_endpoint(from_rid, zTo, 0); } if( to_rid ){ cgi_replace_parameter("to", zTo); if( selectedRid==0 ) selectedRid = from_rid; if( secondaryRid==0 ) secondaryRid = to_rid; }else{ to_rid = from_rid; blob_appendf(&desc, "There is no path from %h %s to %h.<br>Instead: ", P("from"), from_to_mode==1 ? 
"forward" : "back", zTo); } } if( ((from_rid && to_rid) || (me_rid && you_rid)) && g.perm.Read ){ /* If from= and to= are present, display all nodes on a path connecting ** the two */ PathNode *p = 0; const char *zFrom = 0; const char *zTo = 0; Blob ins; int nNodeOnPath = 0; if( from_rid && to_rid ){ if( from_to_mode==0 ){ p = path_shortest(from_rid, to_rid, noMerge, 0, 0); }else if( from_to_mode==1 ){ p = path_shortest(from_rid, to_rid, 0, 1, 0); }else{ p = path_shortest(to_rid, from_rid, 0, 1, 0); } zFrom = P("from"); zTo = zTo2 ? zTo2 : P("to"); }else{ if( path_common_ancestor(me_rid, you_rid) ){ p = path_first(); } zFrom = P("me"); zTo = P("you"); } |
︙ | ︙ | |||
2091 2092 2093 2094 2095 2096 2097 | } tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; db_multi_exec("%s", blob_sql_text(&sql)); if( advancedMenu ){ style_submenu_checkbox("v", "Files", (zType[0]!='a' && zType[0]!='c'),0); } nNodeOnPath = db_int(0, "SELECT count(*) FROM temp.pathnode"); | > > > > > | > > > > > > > > | > > > | > | | | | | > | 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 | } tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; db_multi_exec("%s", blob_sql_text(&sql)); if( advancedMenu ){ style_submenu_checkbox("v", "Files", (zType[0]!='a' && zType[0]!='c'),0); } nNodeOnPath = db_int(0, "SELECT count(*) FROM temp.pathnode"); if( nNodeOnPath==1 && from_to_mode>0 ){ blob_appendf(&desc,"Check-in "); }else if( from_to_mode>0 ){ blob_appendf(&desc, "%d check-ins on the shortest path from ",nNodeOnPath); }else{ blob_appendf(&desc, "%d check-ins going from ", nNodeOnPath); } if( from_rid==selectedRid ){ blob_appendf(&desc, "<span class='timelineSelected'>"); } blob_appendf(&desc, "%z%h</a>", href("%R/info/%h", zFrom), zFrom); if( from_rid==selectedRid ) blob_appendf(&desc, "</span>"); if( nNodeOnPath==1 && from_to_mode>0 ){ blob_appendf(&desc, " only"); }else{ blob_append(&desc, " to ", -1); if( to_rid==secondaryRid ){ blob_appendf(&desc,"<span class='timelineSelected timelineSecondary'>"); } blob_appendf(&desc, "%z%h</a>", href("%R/info/%h",zTo), zTo); if( to_rid==secondaryRid ) blob_appendf(&desc, "</span>"); if( related ){ int nRelated = db_int(0, "SELECT count(*) FROM timeline") - nNodeOnPath; if( nRelated>0 ){ blob_appendf(&desc, " and %d related check-in%s", nRelated, nRelated>1 ? "s" : ""); } } } addFileGlobDescription(zChng, &desc); }else if( (p_rid || d_rid) && g.perm.Read && zTagSql==0 ){ /* If p= or d= is present, ignore all other parameters other than n= */ char *zUuid; const char *zCiName;
︙ | ︙ | |||
2195 2196 2197 2198 2199 2200 2201 | } blob_appendf(&desc, " of %z%h</a>", href("%R/info?name=%h", zCiName), zCiName); if( ridBackTo ){ if( np==0 ){ blob_reset(&desc); | | | | 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 | } blob_appendf(&desc, " of %z%h</a>", href("%R/info?name=%h", zCiName), zCiName); if( ridBackTo ){ if( np==0 ){ blob_reset(&desc); blob_appendf(&desc, "Check-in %z%h</a> only (%z%h</a> is not an ancestor)", href("%R/info?name=%h",zCiName), zCiName, href("%R/info?name=%h",zBackTo), zBackTo); }else{ blob_appendf(&desc, " back to %z%h</a>", href("%R/info?name=%h",zBackTo), zBackTo); if( ridFwdTo && zFwdTo ){ blob_appendf(&desc, " and up to %z%h</a>", href("%R/info?name=%h",zFwdTo), zFwdTo); } } }else if( ridFwdTo ){ if( nd==0 ){ blob_reset(&desc); blob_appendf(&desc, "Check-in %z%h</a> only (%z%h</a> is not a descendant)", href("%R/info?name=%h",zCiName), zCiName, href("%R/info?name=%h",zFwdTo), zFwdTo); }else{ blob_appendf(&desc, " up to %z%h</a>", href("%R/info?name=%h",zFwdTo), zFwdTo); }
︙ | ︙ | |||
2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 | ); if( zMark ){ /* If the t=release option is used with m=UUID, then also ** include the UUID check-in in the display list */ int ridMark = name_to_rid(zMark); db_multi_exec( "INSERT OR IGNORE INTO selected_nodes(rid) VALUES(%d)", ridMark); } if( !related ){ blob_append_sql(&cond, " AND blob.rid IN selected_nodes"); }else{ db_multi_exec( "CREATE TEMP TABLE related_nodes(rid INTEGER PRIMARY KEY);" "INSERT INTO related_nodes SELECT rid FROM selected_nodes;" | > > > > > > > > > > > > > > > > > | 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 2689 2690 2691 2692 2693 2694 2695 2696 2697 | ); if( zMark ){ /* If the t=release option is used with m=UUID, then also ** include the UUID check-in in the display list */ int ridMark = name_to_rid(zMark); db_multi_exec( "INSERT OR IGNORE INTO selected_nodes(rid) VALUES(%d)", ridMark); } if( P("x")!=0 ){ char *zX = fossil_strdup(P("x")); int ii; int ridX; while( zX[0] ){ char c; if( zX[0]==',' || zX[0]==' ' ){ zX++; continue; } for(ii=1; zX[ii] && zX[ii]!=',' && zX[ii]!=' '; ii++){} c = zX[ii]; zX[ii] = 0; ridX = name_to_rid(zX); db_multi_exec( "INSERT OR IGNORE INTO selected_nodes(rid) VALUES(%d)", ridX); zX[ii] = c; zX += ii; } } if( !related ){ blob_append_sql(&cond, " AND blob.rid IN selected_nodes"); }else{ db_multi_exec( "CREATE TEMP TABLE related_nodes(rid INTEGER PRIMARY KEY);" "INSERT INTO related_nodes SELECT rid FROM selected_nodes;" |
︙ | ︙ | |||
2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 | } if( PB("showsql") ){ @ <pre>%h(blob_sql_text(&sql2))</pre> } db_multi_exec("%s", blob_sql_text(&sql2)); if( nEntry>0 ){ nEntry -= db_int(0,"select count(*) from timeline"); } blob_reset(&sql2); blob_append_sql(&sql, " AND event.mtime<=%f ORDER BY event.mtime DESC", rCirca ); if( zMark==0 ) zMark = zCirca; | > | 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 | } if( PB("showsql") ){ @ <pre>%h(blob_sql_text(&sql2))</pre> } db_multi_exec("%s", blob_sql_text(&sql2)); if( nEntry>0 ){ nEntry -= db_int(0,"select count(*) from timeline"); if( nEntry<=0 ) nEntry = 1; } blob_reset(&sql2); blob_append_sql(&sql, " AND event.mtime<=%f ORDER BY event.mtime DESC", rCirca ); if( zMark==0 ) zMark = zCirca; |
︙ | ︙ | |||
2690 2691 2692 2693 2694 2695 2696 | tmFlags |= TIMELINE_CHPICK|TIMELINE_DISJOINT; } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; } if( zTagSql ){ | | | 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 2923 2924 2925 2926 | tmFlags |= TIMELINE_CHPICK|TIMELINE_DISJOINT; } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; } if( zTagSql ){ if( matchStyle==MS_EXACT || matchStyle==MS_BRLIST ){ if( related ){ blob_appendf(&desc, " related to %h", zMatchDesc); }else{ blob_appendf(&desc, " tagged with %h", zMatchDesc); } }else{ if( related ){ |
︙ | ︙ | |||
2973 2974 2975 2976 2977 2978 2979 | ** 6. mtime ** 7. branch ** 8. event-type: 'ci', 'w', 't', 'f', and so forth. ** 9. comment ** 10. user ** 11. tags */ | | > > > > > > > > | 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 | ** 6. mtime ** 7. branch ** 8. event-type: 'ci', 'w', 't', 'f', and so forth. ** 9. comment ** 10. user ** 11. tags */ void print_timeline(Stmt *q, int nLimit, int width, const char *zFormat, int verboseFlag){ int nAbsLimit = (nLimit >= 0) ? nLimit : -nLimit; int nLine = 0; int nEntry = 0; char zPrevDate[20]; const char *zCurrentUuid = 0; int fchngQueryInit = 0; /* True if fchngQuery is initialized */ Stmt fchngQuery; /* Query for file changes on check-ins */ int rc; /* True: separate entries with a newline after file listing */ int bVerboseNL = (zFormat && (fossil_strcmp(zFormat, TIMELINE_FMT_ONELINE)!=0)); /* True: separate entries with a newline even with no file listing */ int bNoVerboseNL = (zFormat && (fossil_strcmp(zFormat, TIMELINE_FMT_MEDIUM)==0 || fossil_strcmp(zFormat, TIMELINE_FMT_FULL)==0)); zPrevDate[0] = 0; if( g.localOpen ){ int rid = db_lget_int("checkout", 0); zCurrentUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); } |
︙ | ︙ | |||
3073 3074 3075 3076 3077 3078 3079 | if( zFormat ){ char *zEntry; int nEntryLine = 0; if( nChild==0 ){ sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*LEAF* "); } | | | > | 3303 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 3317 3318 3319 | if( zFormat ){ char *zEntry; int nEntryLine = 0; if( nChild==0 ){ sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*LEAF* "); } zEntry = timeline_entry_subst(zFormat, &nEntryLine, zId, zDate, zUserShort, zComShort, zBranch, zTags, zPrefix); nLine += nEntryLine; fossil_print("%s\n", zEntry); fossil_free(zEntry); } else{ /* record another X lines */ nLine += comment_print(zFree, zCom, 9, width, get_comment_format()); |
︙ | ︙ | |||
3114 3115 3116 3117 3118 3119 3120 3121 | fossil_print(" DELETED %s\n",zFilename); }else{ fossil_print(" EDITED %s\n", zFilename); } nLine++; /* record another line */ } db_reset(&fchngQuery); } | > > > < < < < | | 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 | fossil_print(" DELETED %s\n",zFilename); }else{ fossil_print(" EDITED %s\n", zFilename); } nLine++; /* record another line */ } db_reset(&fchngQuery); if( bVerboseNL ) fossil_print("\n"); }else{ if( bNoVerboseNL ) fossil_print("\n"); } nEntry++; /* record another complete entry */ } if( rc==SQLITE_DONE ){ /* Did the underlying query actually have all entries? */ if( nAbsLimit==0 ){ fossil_print("+++ end of timeline (%d) +++\n", nEntry); }else{ |
︙ | ︙ | |||
3163 3164 3165 3166 3167 3168 3169 | @ event.type @ , coalesce(ecomment,comment) AS comment0 @ , coalesce(euser,user,'?') AS user0 @ , (SELECT case when length(x)>0 then x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid | | | 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 | @ event.type @ , coalesce(ecomment,comment) AS comment0 @ , coalesce(euser,user,'?') AS user0 @ , (SELECT case when length(x)>0 then x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0)) AS tags @ FROM tag CROSS JOIN event CROSS JOIN blob @ LEFT JOIN tagxref ON tagxref.tagid=tag.tagid @ AND tagxref.tagtype>0 @ AND tagxref.rid=blob.rid @ WHERE blob.rid=event.objid @ AND tag.tagname='branch' ; |
︙ | ︙ | |||
3224 3225 3226 3227 3228 3229 3230 | ** means UTC. ** ** ** Options: ** -b|--branch BRANCH Show only items on the branch named BRANCH ** -c|--current-branch Show only items on the current branch ** -F|--format Entry format. Values "oneline", "medium", and "full" | | | | 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 3465 3466 3467 3468 3469 3470 3471 3472 3473 3474 3475 3476 | ** means UTC. ** ** ** Options: ** -b|--branch BRANCH Show only items on the branch named BRANCH ** -c|--current-branch Show only items on the current branch ** -F|--format Entry format. Values "oneline", "medium", and "full" ** get mapped to the full options below. Otherwise a ** string which can contain these placeholders: ** %n newline ** %% a raw % ** %H commit hash ** %h abbreviated commit hash ** %a author name ** %d date ** %c comment (NL, TAB replaced by space, LF erased) ** %b branch ** %t tags ** %p phase: zero or more of *CURRENT*, *MERGE*, ** *FORK*, *UNPUBLISHED*, *LEAF*, *BRANCH* ** --oneline Show only short hash and comment for each entry ** --medium Medium-verbose entry formatting ** --full Extra verbose entry formatting |
︙ | ︙ | |||
3304 3305 3306 3307 3308 3309 3310 | fossil_fatal("not within an open check-out"); }else{ int vid = db_lget_int("checkout", 0); zBr = db_text(0, "SELECT value FROM tagxref WHERE rid=%d AND tagid=%d", vid, TAG_BRANCH); } } | | | > | | > | | < > | 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 | fossil_fatal("not within an open check-out"); }else{ int vid = db_lget_int("checkout", 0); zBr = db_text(0, "SELECT value FROM tagxref WHERE rid=%d AND tagid=%d", vid, TAG_BRANCH); } } if( find_option("oneline",0,0)!= 0 || fossil_strcmp(zFormat,"oneline")==0 ){ zFormat = TIMELINE_FMT_ONELINE; } if( find_option("medium",0,0)!= 0 || fossil_strcmp(zFormat,"medium")==0 ){ zFormat = TIMELINE_FMT_MEDIUM; } if( find_option("full",0,0)!= 0 || fossil_strcmp(zFormat,"full")==0 ){ zFormat = TIMELINE_FMT_FULL; } showSql = find_option("sql",0,0)!=0; if( !zLimit ){ zLimit = find_option("count",0,1); } if( zLimit ){ n = atoi(zLimit); |
︙ | ︙ |
Changes to src/tkt.c.
︙ | ︙ | |||
555 556 557 558 559 560 561 | case SQLITE_CREATE_VIEW: case SQLITE_CREATE_TABLE: { if( sqlite3_stricmp(z2,"main")!=0 && sqlite3_stricmp(z2,"repository")!=0 ){ goto ticket_schema_error; } | | | 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 | case SQLITE_CREATE_VIEW: case SQLITE_CREATE_TABLE: { if( sqlite3_stricmp(z2,"main")!=0 && sqlite3_stricmp(z2,"repository")!=0 ){ goto ticket_schema_error; } if( sqlite3_strnicmp(z0,"ticket",6)!=0 && sqlite3_strnicmp(z0,"fx_",3)!=0 ){ goto ticket_schema_error; } break; } case SQLITE_DROP_INDEX: |
︙ | ︙ | |||
1211 1212 1213 1214 1215 1216 1217 | } /* ** WEBPAGE: tkttimeline ** URL: /tkttimeline/TICKETUUID ** ** Show the change history for a single ticket in timeline format. | | | 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 | } /* ** WEBPAGE: tkttimeline ** URL: /tkttimeline/TICKETUUID ** ** Show the change history for a single ticket in timeline format. ** ** Query parameters: ** ** y=ci Show only check-ins associated with the ticket */ void tkttimeline_page(void){ char *zTitle; const char *zUuid; |
︙ | ︙ |
Changes to src/unicode.c.
︙ | ︙ | |||
238 239 240 241 242 243 244 | iLo = iTest+1; }else{ iHi = iTest-1; } } assert( key>=aDia[iRes] ); if( bComplex==0 && (aChar[iRes] & 0x80) ) return c; | | > | 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | iLo = iTest+1; }else{ iHi = iTest-1; } } assert( key>=aDia[iRes] ); if( bComplex==0 && (aChar[iRes] & 0x80) ) return c; return (c > (aDia[iRes]>>3) + (aDia[iRes]&0x07)) ? c : ((int)aChar[iRes] & 0x7F); } /* ** Return true if the argument interpreted as a unicode codepoint ** is a diacritical modifier character. */ |
︙ | ︙ |
Changes to src/unversioned.c.
︙ | ︙ | |||
306 307 308 309 310 311 312 | nCmd = (int)strlen(zCmd); if( zMtime==0 ){ mtime = time(0); }else{ mtime = db_int(0, "SELECT strftime('%%s',%Q)", zMtime); if( mtime<=0 ) fossil_fatal("bad timestamp: %Q", zMtime); } | | | 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 | nCmd = (int)strlen(zCmd); if( zMtime==0 ){ mtime = time(0); }else{ mtime = db_int(0, "SELECT strftime('%%s',%Q)", zMtime); if( mtime<=0 ) fossil_fatal("bad timestamp: %Q", zMtime); } if( strncmp(zCmd, "add", nCmd)==0 ){ const char *zError = 0; const char *zIn; const char *zAs; Blob file; int i; zAs = find_option("as",0,1); |
︙ | ︙ | |||
338 339 340 341 342 343 344 | } blob_init(&file,0,0); blob_read_from_file(&file, g.argv[i], ExtFILE); unversioned_write(zIn, &file, mtime); blob_reset(&file); } db_end_transaction(0); | | | | 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 | } blob_init(&file,0,0); blob_read_from_file(&file, g.argv[i], ExtFILE); unversioned_write(zIn, &file, mtime); blob_reset(&file); } db_end_transaction(0); }else if( strncmp(zCmd, "cat", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ Blob content; if( unversioned_content(g.argv[i], &content)!=0 ){ blob_write_to_file(&content, "-"); } blob_reset(&content); } db_end_transaction(0); }else if( strncmp(zCmd, "edit", nCmd)==0 ){ const char *zEditor; /* Name of the text-editor command */ const char *zTFile; /* Temporary file */ const char *zUVFile; /* Name of the unversioned file */ char *zCmd; /* Command to run the text editor */ Blob content; /* Content of the unversioned file */ verify_all_options(); |
︙ | ︙ | |||
393 394 395 396 397 398 399 | blob_to_lf_only(&content); #endif file_delete(zTFile); if( zMtime==0 ) mtime = time(0); unversioned_write(zUVFile, &content, mtime); db_end_transaction(0); blob_reset(&content); | | | | | 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | blob_to_lf_only(&content); #endif file_delete(zTFile); if( zMtime==0 ) mtime = time(0); unversioned_write(zUVFile, &content, mtime); db_end_transaction(0); blob_reset(&content); }else if( strncmp(zCmd, "export", nCmd)==0 ){ Blob content; verify_all_options(); if( g.argc!=5 ) usage("export UVFILE OUTPUT"); if( unversioned_content(g.argv[3], &content)==0 ){ fossil_fatal("no such uv-file: %Q", g.argv[3]); } blob_write_to_file(&content, g.argv[4]); blob_reset(&content); }else if( strncmp(zCmd, "hash", nCmd)==0 ){ /* undocumented */ /* Show the hash value used during uv sync */ int debugFlag = find_option("debug",0,0)!=0; fossil_print("%s\n", unversioned_content_hash(debugFlag)); }else if( strncmp(zCmd, "list", nCmd)==0 || strncmp(zCmd, "ls", nCmd)==0 ){ Stmt q; int allFlag = find_option("all","a",0)!=0; int longFlag = find_option("l",0,0)!=0 || (nCmd>1 && zCmd[1]=='i'); char *zPattern = sqlite3_mprintf("true"); const char *zGlob; zGlob = find_option("glob",0,1); if( zGlob ){ |
︙ | ︙ | |||
462 463 464 465 466 467 468 | db_column_text(&q,4), zNoContent ); } } db_finalize(&q); sqlite3_free(zPattern); | | | | | 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 | db_column_text(&q,4), zNoContent ); } } db_finalize(&q); sqlite3_free(zPattern); }else if( strncmp(zCmd, "revert", nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED|SYNC_UV_REVERT); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( strncmp(zCmd, "remove", nCmd)==0 || strncmp(zCmd, "rm", nCmd)==0 || strncmp(zCmd, "delete", nCmd)==0 ){ int i; const char *zGlob; db_begin_transaction(); while( (zGlob = find_option("glob",0,1))!=0 ){ db_multi_exec( "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name GLOB %Q", |
︙ | ︙ | |||
497 498 499 500 501 502 503 | "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name=%Q", mtime, g.argv[i] ); } db_unset("uv-hash", 0); db_end_transaction(0); | | | | 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 | "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name=%Q", mtime, g.argv[i] ); } db_unset("uv-hash", 0); db_end_transaction(0); }else if( strncmp(zCmd,"sync",nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( strncmp(zCmd, "touch", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ db_multi_exec( "UPDATE unversioned SET mtime=%lld WHERE name=%Q", mtime, g.argv[i] |
︙ | ︙ | |||
569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 | ); iNow = db_int64(0, "SELECT strftime('%%s','now');"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); sqlite3_int64 mtime = db_column_int(&q, 1); const char *zHash = db_column_text(&q, 2); int isDeleted = zHash==0; int fullSize = db_column_int(&q, 3); char *zAge = human_readable_age((iNow - mtime)/86400.0); const char *zLogin = db_column_text(&q, 4); int rcvid = db_column_int(&q,5); if( zLogin==0 ) zLogin = ""; if( (n++)==0 ){ style_table_sorter(); @ <div class="uvlist"> @ <table cellpadding="2" cellspacing="0" border="1" class='sortable' \ @ data-column-types='tkKttn' data-init-sort='1'> @ <thead><tr> @ <th> Name @ <th> Age @ <th> Size @ <th> User @ <th> Hash if( g.perm.Admin ){ @ <th> rcvid } @ </tr></thead> @ <tbody> } @ <tr> | > > > > | 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 | ); iNow = db_int64(0, "SELECT strftime('%%s','now');"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); sqlite3_int64 mtime = db_column_int(&q, 1); const char *zHash = db_column_text(&q, 2); int isDeleted = zHash==0; const char *zAlgo; int fullSize = db_column_int(&q, 3); char *zAge = human_readable_age((iNow - mtime)/86400.0); const char *zLogin = db_column_text(&q, 4); int rcvid = db_column_int(&q,5); if( isDeleted ) zAlgo = "deleted"; else zAlgo = hname_alg(strlen(zHash)); if( zLogin==0 ) zLogin = ""; if( (n++)==0 ){ style_table_sorter(); @ <div class="uvlist"> @ <table cellpadding="2" cellspacing="0" border="1" class='sortable' \ @ data-column-types='tkKttn' data-init-sort='1'> @ <thead><tr> @ <th> Name @ <th> Age @ <th> Size @ <th> User @ <th> Hash @ <th> Algo if( g.perm.Admin ){ @ <th> rcvid } @ </tr></thead> @ <tbody> } @ <tr> |
︙ | ︙ | |||
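The new Algo column in the hunk above gets its label from `hname_alg(strlen(zHash))`, i.e. the algorithm is inferred from the hex-digest length. A minimal sketch of that inference; the label strings here are illustrative assumptions, not necessarily what the real `hname_alg()` (defined elsewhere in the Fossil tree) returns:

```c
#include <stddef.h>
#include <string.h>

/* Sketch only: infer the hash algorithm from the hex-digest length,
** as hname_alg(strlen(zHash)) does in the hunk above.  The label
** strings are illustrative assumptions, not Fossil's exact output. */
static const char *hash_algo_sketch(size_t nHexDigit){
  if( nHexDigit==40 ) return "SHA1";      /* 160-bit digest = 40 hex chars */
  if( nHexDigit==64 ) return "SHA3-256";  /* 256-bit digest = 64 hex chars */
  return "unknown";
}
```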
606 607 608 609 610 611 612 | iTotalSz += fullSize; cnt++; @ <td> <a href='%R/uv/%T(zName)'>%h(zName)</a> </td> } @ <td data-sortkey='%016llx(-mtime)'> %s(zAge) </td> @ <td data-sortkey='%08x(fullSize)'> %s(zSzName) </td> @ <td> %h(zLogin) </td> | | > > | 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 | iTotalSz += fullSize; cnt++; @ <td> <a href='%R/uv/%T(zName)'>%h(zName)</a> </td> } @ <td data-sortkey='%016llx(-mtime)'> %s(zAge) </td> @ <td data-sortkey='%08x(fullSize)'> %s(zSzName) </td> @ <td> %h(zLogin) </td> @ <td><code> %h(zHash) </code></td> @ <td> %s(zAlgo) </td> if( g.perm.Admin ){ if( rcvid ){ @ <td> <a href="%R/rcvfrom?rcvid=%d(rcvid)">%d(rcvid)</a> }else{ @ <td> } } @ </tr> fossil_free(zAge); } db_finalize(&q); if( n ){ approxSizeName(sizeof(zSzName), zSzName, iTotalSz); @ </tbody> @ <tfoot><tr><td><b>Total for %d(cnt) files</b><td><td>%s(zSzName) @ <td><td> if( g.perm.Admin ){ @ <td> } @ <td> @ </tfoot> @ </table></div> }else{ @ No unversioned files on this server. } style_finish_page(); } |
︙ | ︙ |
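The listing above emits `data-sortkey='%016llx(-mtime)'` on the Age cells. The trick: negate the timestamp and print it as fixed-width hex, so a plain lexicographic sort of the keys puts the newest rows first. A standalone sketch (the helper name is illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Format a text sort key for a Unix timestamp, mirroring the
** %016llx(-mtime) idiom above: negating mtime before the fixed-width
** hex conversion makes lexicographic order equal newest-first order. */
static void age_sortkey(char *zBuf, size_t nBuf, long long mtime){
  snprintf(zBuf, nBuf, "%016llx",
           (unsigned long long)0 - (unsigned long long)mtime);
}
```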
Changes to src/update.c.
︙ | ︙ | |||
565 566 567 568 569 570 571 | db_finalize(&q); db_finalize(&mtimeXfer); fossil_print("%.79c\n",'-'); if( nUpdate==0 ){ show_common_info(tid, "checkout:", 1, 0); fossil_print("%-13s None. Already up-to-date\n", "changes:"); }else{ | | | 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | db_finalize(&q); db_finalize(&mtimeXfer); fossil_print("%.79c\n",'-'); if( nUpdate==0 ){ show_common_info(tid, "checkout:", 1, 0); fossil_print("%-13s None. Already up-to-date\n", "changes:"); }else{ fossil_print("%-13s %.40s %s\n", "updated-from:", rid_to_uuid(vid), db_text("", "SELECT datetime(mtime) || ' UTC' FROM event " " WHERE objid=%d", vid)); show_common_info(tid, "updated-to:", 1, 0); fossil_print("%-13s %d file%s modified.\n", "changes:", nUpdate, nUpdate>1 ? "s" : ""); } |
︙ | ︙ |
Changes to src/url.c.
︙ | ︙ | |||
31 32 33 34 35 36 37 | #endif #endif #if INTERFACE /* ** Flags for url_parse() */ | | | | | | | | | > > > | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 | #endif #endif #if INTERFACE /* ** Flags for url_parse() */ #define URL_PROMPT_PW 0x0001 /* Prompt for password if needed */ #define URL_REMEMBER 0x0002 /* Remember the url for later reuse */ #define URL_ASK_REMEMBER_PW 0x0004 /* Ask whether to remember prompted pw */ #define URL_REMEMBER_PW 0x0008 /* Should remember pw */ #define URL_PROMPTED 0x0010 /* Prompted for PW already */ #define URL_OMIT_USER 0x0020 /* Omit the user name from URL */ #define URL_USE_CONFIG 0x0040 /* Use remembered URLs from CONFIG table */ #define URL_USE_PARENT 0x0080 /* Use the URL of the parent project */ #define URL_SSH_PATH 0x0100 /* Include PATH= on SSH syncs */ #define URL_SSH_RETRY 0x0200 /* This a retry of an SSH */ #define URL_SSH_EXE 0x0400 /* ssh: URL contains fossil= query param*/ /* ** The URL related data used with this subsystem. */ struct UrlData { int isFile; /* True if a "file:" url */ int isHttps; /* True if a "https:" url */ |
︙ | ︙ | |||
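The renumbered flag constants above are independent bits, so callers OR them together when calling `url_parse()`. A small illustration using the values from this hunk; the checking helper is hypothetical, not part of Fossil:

```c
#include <assert.h>

/* Flag bits copied from the hunk above (src/url.c). */
#define URL_PROMPT_PW       0x0001
#define URL_REMEMBER        0x0002
#define URL_ASK_REMEMBER_PW 0x0004
#define URL_REMEMBER_PW     0x0008
#define URL_PROMPTED        0x0010
#define URL_OMIT_USER       0x0020
#define URL_USE_CONFIG      0x0040
#define URL_USE_PARENT      0x0080
#define URL_SSH_PATH        0x0100
#define URL_SSH_RETRY       0x0200
#define URL_SSH_EXE         0x0400

/* Hypothetical helper: URL_USE_PARENT only has meaning when
** URL_USE_CONFIG is also set, per the url_parse_local() comments. */
static int wants_parent_url(unsigned int urlFlags){
  unsigned int need = URL_USE_CONFIG|URL_USE_PARENT;
  return (urlFlags & need)==need;
}
```

Because every flag is a distinct power of two, masks like `URL_SSH_PATH|URL_SSH_RETRY` can be tested and cleared independently.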
87 88 89 90 91 92 93 | ** path Path name for HTTP or HTTPS. ** user Userid. ** passwd Password. ** hostname HOST:PORT or just HOST if port is the default. ** canonical The URL in canonical form, omitting the password ** ** If URL_USECONFIG is set and zUrl is NULL or "default", then parse the | | | | 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 | ** path Path name for HTTP or HTTPS. ** user Userid. ** passwd Password. ** hostname HOST:PORT or just HOST if port is the default. ** canonical The URL in canonical form, omitting the password ** ** If URL_USECONFIG is set and zUrl is NULL or "default", then parse the ** URL stored in last-sync-url and last-sync-pw of the CONFIG table. Or if ** URL_USE_PARENT is also set, then use parent-project-url and ** parent-project-pw from the CONFIG table instead of last-sync-url ** and last-sync-pw. ** ** If URL_USE_CONFIG is set and zUrl is a symbolic name, then look up ** the URL in sync-url:%Q and sync-pw:%Q elements of the CONFIG table where ** %Q is the symbolic name. ** ** This routine differs from url_parse() in that this routine stores the ** results in pUrlData and does not change the values of global variables. ** The url_parse() routine puts its result in g.url. */ void url_parse_local( const char *zUrl, unsigned int urlFlags, UrlData *pUrlData ){ int i, j, c; char *zFile = 0; memset(pUrlData, 0, sizeof(*pUrlData)); if( urlFlags & URL_USE_CONFIG ){ if( zUrl==0 || strcmp(zUrl,"default")==0 ){ const char *zPwConfig = "last-sync-pw"; if( urlFlags & URL_USE_PARENT ){ zUrl = db_get("parent-project-url", 0); if( zUrl==0 ){ zUrl = db_get("last-sync-url",0); |
︙ | ︙ | |||
157 158 159 160 161 162 163 | || strncmp(zUrl, "ssh://", 6)==0 ){ int iStart; char *zLogin; char *zExe; char cQuerySep = '?'; | < < | 160 161 162 163 164 165 166 167 168 169 170 171 172 173 | || strncmp(zUrl, "ssh://", 6)==0 ){ int iStart; char *zLogin; char *zExe; char cQuerySep = '?'; if( zUrl[4]=='s' ){ pUrlData->isHttps = 1; pUrlData->protocol = "https"; pUrlData->dfltPort = 443; iStart = 8; }else if( zUrl[0]=='s' ){ pUrlData->isSsh = 1; |
︙ | ︙ | |||
253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | while( pUrlData->path[i] && pUrlData->path[i]!='&' ){ i++; } } if( pUrlData->path[i] ){ pUrlData->path[i] = 0; i++; } if( fossil_strcmp(zName,"fossil")==0 ){ pUrlData->fossil = fossil_strdup(zValue); dehttpize(pUrlData->fossil); fossil_free(zExe); zExe = mprintf("%cfossil=%T", cQuerySep, pUrlData->fossil); cQuerySep = '&'; } } dehttpize(pUrlData->path); if( pUrlData->dfltPort==pUrlData->port ){ pUrlData->canonical = mprintf( "%s://%s%T%T%z", | > > | 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 | while( pUrlData->path[i] && pUrlData->path[i]!='&' ){ i++; } } if( pUrlData->path[i] ){ pUrlData->path[i] = 0; i++; } if( fossil_strcmp(zName,"fossil")==0 ){ fossil_free(pUrlData->fossil); pUrlData->fossil = fossil_strdup(zValue); dehttpize(pUrlData->fossil); fossil_free(zExe); zExe = mprintf("%cfossil=%T", cQuerySep, pUrlData->fossil); cQuerySep = '&'; urlFlags |= URL_SSH_EXE; } } dehttpize(pUrlData->path); if( pUrlData->dfltPort==pUrlData->port ){ pUrlData->canonical = mprintf( "%s://%s%T%T%z", |
︙ | ︙ | |||
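The `cQuerySep` variable threaded through the hunk above is a small idiom worth noting: the separator starts as `'?'` and flips to `'&'` once the first query parameter has been appended, so parameters can be added in any order without special-casing the first one. A standalone sketch (the function name is illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Append NAME=VALUE to zUrl using the cQuerySep idiom from the hunk
** above: '?' before the first parameter, '&' before every later one. */
static void append_query_param(char *zUrl, size_t nUrl, char *pcSep,
                               const char *zName, const char *zValue){
  size_t n = strlen(zUrl);
  snprintf(zUrl+n, nUrl-n, "%c%s=%s", *pcSep, zName, zValue);
  *pcSep = '&';
}
```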
315 316 317 318 319 320 321 | free(zFile); zFile = 0; pUrlData->protocol = "file"; pUrlData->path = mprintf(""); pUrlData->name = mprintf("%b", &cfile); pUrlData->canonical = mprintf("file://%T", pUrlData->name); blob_reset(&cfile); | | | 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 | free(zFile); zFile = 0; pUrlData->protocol = "file"; pUrlData->path = mprintf(""); pUrlData->name = mprintf("%b", &cfile); pUrlData->canonical = mprintf("file://%T", pUrlData->name); blob_reset(&cfile); }else if( pUrlData->user!=0 && pUrlData->passwd==0 && (urlFlags & URL_PROMPT_PW)!=0 ){ url_prompt_for_password_local(pUrlData); }else if( pUrlData->user!=0 && ( urlFlags & URL_ASK_REMEMBER_PW ) ){ if( isatty(fileno(stdin)) && ( urlFlags & URL_REMEMBER_PW )==0 ){ if( save_password_prompt(pUrlData->passwd) ){ pUrlData->flags = urlFlags |= URL_REMEMBER_PW; }else{ |
︙ | ︙ | |||
409 410 411 412 413 414 415 416 417 418 419 420 421 422 | fossil_free(p->path); fossil_free(p->user); fossil_free(p->passwd); fossil_free(p->fossil); fossil_free(p->pwConfig); memset(p, 0, sizeof(*p)); } /* ** Parse the given URL, which describes a sync server. Populate variables ** in the global "g.url" structure as shown below. If zUrl is NULL, then ** parse the URL given in the last-sync-url setting, taking the password ** form last-sync-pw. ** | > > > > > > > > > | 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 | fossil_free(p->path); fossil_free(p->user); fossil_free(p->passwd); fossil_free(p->fossil); fossil_free(p->pwConfig); memset(p, 0, sizeof(*p)); } /* ** Move a URL parse from one UrlData object to another. */ void url_move_parse(UrlData *pTo, UrlData *pFrom){ url_unparse(pTo); memcpy(pTo, pFrom, sizeof(*pTo)); memset(pFrom, 0, sizeof(*pFrom)); } /* ** Parse the given URL, which describes a sync server. Populate variables ** in the global "g.url" structure as shown below. If zUrl is NULL, then ** parse the URL given in the last-sync-url setting, taking the password ** form last-sync-pw. ** |
︙ | ︙ | |||
451 452 453 454 455 456 457 458 459 460 461 462 463 464 | ** password is taken from the CONFIG table, the g.url.pwConfig field is ** set to the CONFIG.NAME value from which that password is taken. Otherwise, ** g.url.pwConfig is NULL. */ void url_parse(const char *zUrl, unsigned int urlFlags){ url_parse_local(zUrl, urlFlags, &g.url); } /* ** COMMAND: test-urlparser ** ** Usage: %fossil test-urlparser URL ?options? ** ** --prompt-pw Prompt for password if missing | > > > > > > > > > > > > > > > > > > > > > > > > > > | 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 | ** password is taken from the CONFIG table, the g.url.pwConfig field is ** set to the CONFIG.NAME value from which that password is taken. Otherwise, ** g.url.pwConfig is NULL. */ void url_parse(const char *zUrl, unsigned int urlFlags){ url_parse_local(zUrl, urlFlags, &g.url); } /* ** Print the content of g.url */ void urlparse_print(int showPw){ fossil_print("g.url.isFile = %d\n", g.url.isFile); fossil_print("g.url.isHttps = %d\n", g.url.isHttps); fossil_print("g.url.isSsh = %d\n", g.url.isSsh); fossil_print("g.url.protocol = %s\n", g.url.protocol); fossil_print("g.url.name = %s\n", g.url.name); fossil_print("g.url.port = %d\n", g.url.port); fossil_print("g.url.dfltPort = %d\n", g.url.dfltPort); fossil_print("g.url.hostname = %s\n", g.url.hostname); fossil_print("g.url.path = %s\n", g.url.path); fossil_print("g.url.user = %s\n", g.url.user); if( showPw || g.url.pwConfig==0 ){ fossil_print("g.url.passwd = %s\n", g.url.passwd); }else{ fossil_print("g.url.passwd = ************\n"); } fossil_print("g.url.pwConfig = %s\n", g.url.pwConfig); fossil_print("g.url.canonical = %s\n", g.url.canonical); fossil_print("g.url.fossil = %s\n", g.url.fossil); fossil_print("g.url.flags = 0x%04x\n", g.url.flags); fossil_print("url_full(g.url) = %z\n", url_full(&g.url)); } /* ** COMMAND: test-urlparser ** ** 
Usage: %fossil test-urlparser URL ?options? ** ** --prompt-pw Prompt for password if missing |
︙ | ︙ | |||
480 481 482 483 484 485 486 | if( find_option("show-pw",0,0) ) showPw = 1; if( (fg & URL_USE_CONFIG)==0 ) showPw = 1; if( g.argc!=3 && g.argc!=4 ){ usage("URL"); } url_parse(g.argv[2], fg); for(i=0; i<2; i++){ | < < < < < < < < < < | < < < < < < < < < | 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 | if( find_option("show-pw",0,0) ) showPw = 1; if( (fg & URL_USE_CONFIG)==0 ) showPw = 1; if( g.argc!=3 && g.argc!=4 ){ usage("URL"); } url_parse(g.argv[2], fg); for(i=0; i<2; i++){ urlparse_print(showPw); if( g.url.isFile || g.url.isSsh ) break; if( i==0 ){ fossil_print("********\n"); url_enable_proxy("Using proxy: "); } url_unparse(0); } |
︙ | ︙ | |||
789 790 791 792 793 794 795 | ** Given a URL for a remote repository clone point, try to come up with a ** reasonable basename of a local clone of that repository. ** ** * If the URL has a path, use the tail of the path, with any suffix ** elided. ** ** * If the URL is just a domain name, without a path, then use the | | | 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 | ** Given a URL for a remote repository clone point, try to come up with a ** reasonable basename of a local clone of that repository. ** ** * If the URL has a path, use the tail of the path, with any suffix ** elided. ** ** * If the URL is just a domain name, without a path, then use the ** first element of the domain name, except skip over "www." if ** present and if there is a ".com" or ".org" or similar suffix. ** ** The string returned is obtained from fossil_malloc(). NULL might be ** returned if there is an error. */ char *url_to_repo_basename(const char *zUrl){ const char *zTail = 0; |
︙ | ︙ |
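The reworded comment above describes `url_to_repo_basename()`'s naming heuristic in prose. A self-contained sketch of those rules follows; this is an illustration, not Fossil's implementation, and the `':'` port handling is an added assumption:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the basename heuristic described above:
** use the tail of the URL path with any suffix elided, or, for a bare
** domain, its first element, skipping "www." when a further suffix
** such as ".com" or ".org" follows.  Returns a malloc()ed string. */
static char *repo_basename_sketch(const char *zUrl){
  const char *z = strstr(zUrl, "://");
  const char *zHost, *zPath, *zTail, *zEnd;
  size_t n;
  char *zOut;
  zHost = z ? z+3 : zUrl;
  zPath = strchr(zHost, '/');
  if( zPath && zPath[1] ){
    zTail = strrchr(zPath, '/') + 1;          /* tail of the path */
    zEnd = strrchr(zTail, '.');               /* elide any suffix */
    n = zEnd ? (size_t)(zEnd-zTail) : strlen(zTail);
  }else{
    if( strncmp(zHost, "www.", 4)==0 && strchr(zHost+4, '.') ){
      zHost += 4;                             /* skip over "www." */
    }
    zTail = zHost;
    zEnd = strchr(zTail, '.');
    if( zEnd==0 ) zEnd = strchr(zTail, ':');  /* stop at a port, too */
    n = zEnd ? (size_t)(zEnd-zTail) : strlen(zTail);
  }
  zOut = malloc(n+1);
  if( zOut ){ memcpy(zOut, zTail, n); zOut[n] = 0; }
  return zOut;
}
```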
Changes to src/user.c.
︙ | ︙ | |||
397 398 399 400 401 402 403 | } if( g.localOpen ){ db_lset("default-user", g.argv[3]); }else{ db_set("default-user", g.argv[3], 0); } } | | > | 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 | } if( g.localOpen ){ db_lset("default-user", g.argv[3]); }else{ db_set("default-user", g.argv[3], 0); } } }else if(( n>=2 && strncmp(g.argv[2],"list",n)==0 ) || ( n>=2 && strncmp(g.argv[2],"ls",n)==0 )){ Stmt q; db_prepare(&q, "SELECT login, info FROM user ORDER BY login"); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%-12s %s\n", db_column_text(&q, 0), db_column_text(&q, 1)); } db_finalize(&q); }else if( n>=2 && strncmp(g.argv[2],"password",2)==0 ){ |
︙ | ︙ |
Changes to src/vfile.c.
︙ | ︙ | |||
408 409 410 411 412 413 414 | "original", "output", }; int i, j, n; if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ | > | | 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | "original", "output", }; int i, j, n; if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ return 1; } if( zName[0]!='-' ) continue; for(i=0; i<count(azTemp); i++){ n = (int)strlen(azTemp[i]); if( memcmp(azTemp[i], zName+1, n) ) continue; if( zName[n+1]==0 ) return 1; |
︙ | ︙ | |||
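`is_temporary_file()` in the hunk above leans on `sqlite3_strglob()` with the pattern `ci-comment-????????????.txt`. For readers unfamiliar with that GLOB dialect, here is a minimal matcher covering just the `'?'` wildcard this pattern uses; the real `sqlite3_strglob()` also handles `'*'` and character classes:

```c
/* Match zStr against a pattern in which '?' matches exactly one
** character and every other character matches literally.  This covers
** only the subset of GLOB that "ci-comment-????????????.txt" needs. */
static int qglob_match(const char *zPat, const char *zStr){
  while( *zPat ){
    if( *zStr==0 ) return 0;
    if( *zPat!='?' && *zPat!=*zStr ) return 0;
    zPat++;
    zStr++;
  }
  return *zStr==0;
}
```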
752 753 754 755 756 757 758 | md5sum_step_text(" 0\n", -1); continue; } fseek(in, 0L, SEEK_END); sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", ftell(in)); fseek(in, 0L, SEEK_SET); md5sum_step_text(zBuf, -1); | | | 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 | md5sum_step_text(" 0\n", -1); continue; } fseek(in, 0L, SEEK_END); sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", ftell(in)); fseek(in, 0L, SEEK_SET); md5sum_step_text(zBuf, -1); /*printf("%s %s %s",md5sum_current_state(),zName,zBuf);fflush(stdout);*/ for(;;){ int n; n = fread(zBuf, 1, sizeof(zBuf), in); if( n<=0 ) break; md5sum_step_text(zBuf, n); } fclose(in); |
︙ | ︙ | |||
1039 1040 1041 1042 1043 1044 1045 | /* Add RID values for merged-in files */ db_multi_exec( "INSERT OR IGNORE INTO idMap(oldrid, newrid)" " SELECT vfile.mrid, blob.rid FROM vfile, blob" " WHERE blob.uuid=vfile.mhash;" ); | | | | 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 | /* Add RID values for merged-in files */ db_multi_exec( "INSERT OR IGNORE INTO idMap(oldrid, newrid)" " SELECT vfile.mrid, blob.rid FROM vfile, blob" " WHERE blob.uuid=vfile.mhash;" ); if( dryRun ){ Stmt q; db_prepare(&q, "SELECT oldrid, newrid, blob.uuid" " FROM idMap, blob WHERE blob.rid=idMap.newrid"); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%8d -> %8d %.25s\n", db_column_int(&q,0), db_column_int(&q,1), db_column_text(&q,2)); } db_finalize(&q); } |
︙ | ︙ | |||
1067 1068 1069 1070 1071 1072 1073 | " UNION SELECT %d" ")" "SELECT group_concat(x,' ') FROM allrid" " WHERE x<>0 AND x NOT IN (SELECT oldrid FROM idMap);", oldVid ); if( zUnresolved[0] ){ | | > > > > > > > | 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 | " UNION SELECT %d" ")" "SELECT group_concat(x,' ') FROM allrid" " WHERE x<>0 AND x NOT IN (SELECT oldrid FROM idMap);", oldVid ); if( zUnresolved[0] ){ fossil_fatal("Unresolved RID values: %s\n" "\n" "Local check-out database is out of sync with repository file:\n" "\n" " %s\n" "\n" "Has the repository file been replaced?\n", zUnresolved, db_repository_filename()); } /* Make the changes to the VFILE and VMERGE tables */ if( !dryRun ){ db_multi_exec( "UPDATE vfile" " SET rid=(SELECT newrid FROM idMap WHERE oldrid=vfile.rid)" |
︙ | ︙ |
Changes to src/wiki.c.
︙ | ︙ | |||
83 84 85 86 87 88 89 | } int wiki_tagid2(const char *zPrefix, const char *zPageName){ return db_int(0, "SELECT tagid FROM tag WHERE tagname='wiki-%q/%q'", zPrefix, zPageName); } /* | | | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | } int wiki_tagid2(const char *zPrefix, const char *zPageName){ return db_int(0, "SELECT tagid FROM tag WHERE tagname='wiki-%q/%q'", zPrefix, zPageName); } /* ** Return the RID of the next or previous version of a wiki page. ** Return 0 if rid is the last/first version. */ int wiki_next(int tagid, double mtime){ return db_int(0, "SELECT srcid FROM tagxref" " WHERE tagid=%d AND mtime>%.16g" " ORDER BY mtime ASC LIMIT 1", |
︙ | ︙ | |||
202 203 204 205 206 207 208 209 210 211 212 | }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(pWiki, 0, &tail); safe_html(&tail); @ %s(blob_str(&tail)) blob_reset(&tail); }else if( fossil_strcmp(zMimetype, "text/x-pikchr")==0 ){ const char *zPikchr = blob_str(pWiki); int w, h; char *zOut = pikchr(zPikchr, "pikchr", 0, &w, &h); if( w>0 ){ | > > > | > > | > | 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 | }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(pWiki, 0, &tail); safe_html(&tail); @ %s(blob_str(&tail)) blob_reset(&tail); }else if( fossil_strcmp(zMimetype, "text/x-pikchr")==0 ){ int isPopup = P("popup")!=0; const char *zPikchr = blob_str(pWiki); int w, h; char *zOut = pikchr(zPikchr, "pikchr", 0, &w, &h); if( w>0 ){ if( isPopup ) cgi_set_content_type("image/svg+xml"); else{ @ <div class="pikchr-svg" style="max-width:%d(w)px"> } @ %s(zOut) if( !isPopup){ @ </div> } }else{ @ <pre class='error'> @ %h(zOut) @ </pre> } free(zOut); }else{ |
︙ | ︙ | |||
411 412 413 414 415 416 417 | /* ** Figure out what type of wiki page we are dealing with. */ int wiki_page_type(const char *zPageName){ if( db_get_boolean("wiki-about",1)==0 ){ return WIKITYPE_NORMAL; }else | | | 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 | /* ** Figure out what type of wiki page we are dealing with. */ int wiki_page_type(const char *zPageName){ if( db_get_boolean("wiki-about",1)==0 ){ return WIKITYPE_NORMAL; }else if( sqlite3_strglob("checkin/*", zPageName)==0 && db_exists("SELECT 1 FROM blob WHERE uuid=%Q",zPageName+8) ){ return WIKITYPE_CHECKIN; }else if( sqlite3_strglob("branch/*", zPageName)==0 ){ return WIKITYPE_BRANCH; }else |
︙ | ︙ | |||
445 446 447 448 449 450 451 | /* ** Add an appropriate style_header() for either the /wiki or /wikiedit page ** for zPageName. zExtra is an empty string for /wiki but has the text ** "Edit: " for /wikiedit. ** ** If the page is /wiki and the page is one of the special times (check-in, | | | 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | /* ** Add an appropriate style_header() for either the /wiki or /wikiedit page ** for zPageName. zExtra is an empty string for /wiki but has the text ** "Edit: " for /wikiedit. ** ** If the page is /wiki and the page is one of the special times (check-in, ** branch, or tag) and the "p" query parameter is omitted, then do a ** redirect to the display of the check-in, branch, or tag rather than ** continuing to the plain wiki display. */ static int wiki_page_header( int eType, /* Page type. Might be WIKITYPE_UNKNOWN */ const char *zPageName, /* Name of the page */ const char *zExtra /* Extra prefix text on the page header */ |
︙ | ︙ | |||
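`wiki_page_type()` above routes wiki page names by reserved prefix. A simplified standalone sketch of that classification: glob matching is reduced to prefix tests, the `WT_*` names are illustrative stand-ins for the `WIKITYPE_*` constants, and the real function additionally verifies that a `checkin/` page names an existing artifact and returns the normal type for everything when the wiki-about setting is off:

```c
#include <string.h>

/* Classify a wiki page name by its reserved prefix, mirroring the
** wiki_page_type() logic above in simplified form. */
enum { WT_NORMAL, WT_CHECKIN, WT_BRANCH, WT_TAG };

static int wiki_page_type_sketch(const char *zPageName){
  if( strncmp(zPageName, "checkin/", 8)==0 ) return WT_CHECKIN;
  if( strncmp(zPageName, "branch/", 7)==0 ) return WT_BRANCH;
  if( strncmp(zPageName, "tag/", 4)==0 ) return WT_TAG;
  return WT_NORMAL;
}
```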
467 468 469 470 471 472 473 | } case WIKITYPE_CHECKIN: { zPageName += 8; if( zExtra[0]==0 && !P("p") ){ cgi_redirectf("%R/info/%s",zPageName); }else{ style_header("Notes About Check-in %S", zPageName); | | > | 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 | } case WIKITYPE_CHECKIN: { zPageName += 8; if( zExtra[0]==0 && !P("p") ){ cgi_redirectf("%R/info/%s",zPageName); }else{ style_header("Notes About Check-in %S", zPageName); style_submenu_element("Check-in Timeline","%R/timeline?f=%s", zPageName); style_submenu_element("Check-in Info","%R/info/%s", zPageName); } break; } case WIKITYPE_BRANCH: { zPageName += 7; if( zExtra[0]==0 && !P("p") ){ |
︙ | ︙ | |||
549 550 551 552 553 554 555 556 557 558 559 560 561 562 | int isPopup = P("popup")!=0; char *zBody = mprintf("%s","<i>Empty Page</i>"); int noSubmenu = P("nsm")!=0 || g.isHome; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } zPageName = P("name"); cgi_check_for_malice(); if( zPageName==0 ){ if( search_restrict(SRCH_WIKI)!=0 ){ wiki_srchpage(); }else{ wiki_helppage(); } | > | 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 | int isPopup = P("popup")!=0; char *zBody = mprintf("%s","<i>Empty Page</i>"); int noSubmenu = P("nsm")!=0 || g.isHome; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } zPageName = P("name"); (void)P("s")/*for cgi_check_for_malice(). "s" == search stringy*/; cgi_check_for_malice(); if( zPageName==0 ){ if( search_restrict(SRCH_WIKI)!=0 ){ wiki_srchpage(); }else{ wiki_helppage(); } |
︙ | ︙ | |||
741 742 743 744 745 746 747 | ** Note that the sandbox is a special case: it is a pseudo-page with ** no rid and the /wikiajax API does not allow anyone to actually save ** a sandbox page, but it is reported as writable here (with rid 0). */ static int wiki_ajax_can_write(const char *zPageName, int * pRid){ int rid = 0; const char * zErr = 0; | | | 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 | ** Note that the sandbox is a special case: it is a pseudo-page with ** no rid and the /wikiajax API does not allow anyone to actually save ** a sandbox page, but it is reported as writable here (with rid 0). */ static int wiki_ajax_can_write(const char *zPageName, int * pRid){ int rid = 0; const char * zErr = 0; if(pRid) *pRid = 0; if(!zPageName || !*zPageName || !wiki_name_is_wellformed((unsigned const char *)zPageName)){ zErr = "Invalid page name."; }else if(is_sandbox(zPageName)){ return 1; }else{ |
︙ | ︙ | |||
764 765 766 767 768 769 770 | }else if(!rid && !g.perm.NewWiki){ zErr = "Requires new-wiki permissions."; }else{ zErr = "Cannot happen! Please report this as a bug."; } } ajax_route_error(403, "%s", zErr); | | | 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 | }else if(!rid && !g.perm.NewWiki){ zErr = "Requires new-wiki permissions."; }else{ zErr = "Cannot happen! Please report this as a bug."; } } ajax_route_error(403, "%s", zErr); return 0; } /* ** Emits an array of attachment info records for the given wiki page ** artifact. ** |
︙ | ︙ | |||
1010 1011 1012 1013 1014 1015 1016 | ** ** Responds with JSON. On error, an object in the form documented by ** ajax_route_error(). On success, an object in the form documented ** for wiki_ajax_emit_page_object(). */ static void wiki_ajax_route_fetch(void){ const char * zPageName = P("page"); | | | 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 | ** ** Responds with JSON. On error, an object in the form documented by ** ajax_route_error(). On success, an object in the form documented ** for wiki_ajax_emit_page_object(). */ static void wiki_ajax_route_fetch(void){ const char * zPageName = P("page"); if( zPageName==0 || zPageName[0]==0 ){ ajax_route_error(400,"Missing page name."); return; } cgi_set_content_type("application/json"); wiki_ajax_emit_page_object(zPageName, 1); } |
︙ | ︙ | |||
1201 1202 1203 1204 1205 1206 1207 | } /* ** WEBPAGE: wikiajax hidden ** ** An internal dispatcher for wiki AJAX operations. Not for direct ** client use. All routes defined by this interface are app-internal, | | | 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 | } /* ** WEBPAGE: wikiajax hidden ** ** An internal dispatcher for wiki AJAX operations. Not for direct ** client use. All routes defined by this interface are app-internal, ** subject to change */ void wiki_ajax_page(void){ const char * zName = P("name"); AjaxRoute routeName = {0,0,0,0}; const AjaxRoute * pRoute = 0; const AjaxRoute routes[] = { /* Keep these sorted by zName (for bsearch()) */ |
︙ | ︙ | |||
1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 | "CSRF violation (make sure sending of HTTP " "Referer headers is enabled for XHR " "connections)."); return; } pRoute->xCallback(); } /* ** WEBPAGE: wikiedit ** URL: /wikedit?name=PAGENAME ** ** The main front-end for the Ajax-based wiki editor app. Passing ** in the name of an unknown page will trigger the creation | > > > > > > > > > > > > > > > > | 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 | "CSRF violation (make sure sending of HTTP " "Referer headers is enabled for XHR " "connections)."); return; } pRoute->xCallback(); } /* ** Emits a preview-toggle option widget for /wikiedit and /fileedit. */ void wikiedit_emit_toggle_preview(void){ CX("<div class='input-with-label'>" "<input type='checkbox' id='edit-shift-enter-preview' " "></input><label for='edit-shift-enter-preview'>" "Shift-enter previews</label>" "<div class='help-buttonlet'>" "When enabled, shift-enter switches between preview and edit modes. " "Some software-based keyboards misinteract with this, so it can be " "disabled when needed." "</div>" "</div>"); } /* ** WEBPAGE: wikiedit ** URL: /wikedit?name=PAGENAME ** ** The main front-end for the Ajax-based wiki editor app. Passing ** in the name of an unknown page will trigger the creation |
︙ | ︙ | |||
1309 1310 1311 1312 1313 1314 1315 | "Status messages will go here.</div>\n" /* will be moved into the tab container via JS */); CX("<div id='wikiedit-edit-status''>" "<span class='name'></span>" "<span class='links'></span>" "</div>"); | | | | 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 | "Status messages will go here.</div>\n" /* will be moved into the tab container via JS */); CX("<div id='wikiedit-edit-status''>" "<span class='name'></span>" "<span class='links'></span>" "</div>"); /* Main tab container... */ CX("<div id='wikiedit-tabs' class='tab-container'>Loading...</div>"); /* The .hidden class on the following tab elements is to help lessen the FOUC effect of the tabs before JS re-assembles them. */ /******* Page list *******/ { CX("<div id='wikiedit-tab-pages' " "data-tab-parent='wikiedit-tabs' " "data-tab-label='Wiki Page List' " "class='hidden'" ">"); CX("<div>Loading wiki pages list...</div>"); CX("</div>"/*#wikiedit-tab-pages*/); } /******* Content tab *******/ { CX("<div id='wikiedit-tab-content' " "data-tab-parent='wikiedit-tabs' " "data-tab-label='Editor' " "class='hidden'" ">"); |
︙ | ︙ | |||
1369 1370 1371 1372 1373 1374 1375 | "<div class='help-buttonlet'>" "Reload the file from the server, discarding " "any local edits. To help avoid accidental loss of " "edits, it requires confirmation (a second click) within " "a few seconds or it will not reload." "</div>" "</div>"); | | | 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 | "<div class='help-buttonlet'>" "Reload the file from the server, discarding " "any local edits. To help avoid accidental loss of " "edits, it requires confirmation (a second click) within " "a few seconds or it will not reload." "</div>" "</div>"); wikiedit_emit_toggle_preview(); CX("</div>"); CX("<div class='flex-container flex-column stretch'>"); CX("<textarea name='content' id='wikiedit-content-editor' " "class='wikiedit' rows='25'>"); CX("</textarea>"); CX("</div>"/*textarea wrapper*/); CX("</div>"/*#tab-file-content*/); |
︙ | ︙ | |||
1887 1888 1889 1890 1891 1892 1893 | ** wsort Sort names by this label ** wrid rid of the most recent version of the page ** wmtime time most recent version was created ** wcnt Number of versions of this wiki page ** ** The wrid value is zero for deleted wiki pages. */ | | | 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 | ** wsort Sort names by this label ** wrid rid of the most recent version of the page ** wmtime time most recent version was created ** wcnt Number of versions of this wiki page ** ** The wrid value is zero for deleted wiki pages. */ static const char listAllWikiPages[] = @ SELECT @ substr(tag.tagname, 6) AS wname, @ lower(substr(tag.tagname, 6)) AS sortname, @ tagxref.value+0 AS wrid, @ max(tagxref.mtime) AS wmtime, @ count(*) AS wcnt @ FROM |
︙ | ︙ | |||
2115 2116 2117 2118 2119 2120 2121 | if( !rid ) { /* ** At present, technote tags are prefixed with 'sym-', which shouldn't ** be the case, so we check for both with and without the prefix until ** such time as tags have the errant prefix dropped. */ rid = db_int(0, "SELECT e.objid" | | | | | | | | | 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 | if( !rid ) { /* ** At present, technote tags are prefixed with 'sym-', which shouldn't ** be the case, so we check for both with and without the prefix until ** such time as tags have the errant prefix dropped. */ rid = db_int(0, "SELECT e.objid" " FROM event e, tag t, tagxref tx" " WHERE e.type='e'" " AND e.tagid IS NOT NULL" " AND e.objid IN" " (SELECT rid FROM tagxref" " WHERE tagid=(SELECT tagid FROM tag" " WHERE tagname GLOB '%q'))" " OR e.objid IN" " (SELECT rid FROM tagxref" " WHERE tagid=(SELECT tagid FROM tag" " WHERE tagname GLOB 'sym-%q'))" " ORDER BY e.mtime DESC LIMIT 1", zETime, zETime); } return rid; } /* ** COMMAND: wiki* ** |
︙ | ︙ | |||
2464 2465 2466 2467 2468 2469 2470 | } while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const int wrid = db_column_int(&q, 2); if(!showAll && !wrid){ continue; } | | | 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 | } while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const int wrid = db_column_int(&q, 2); if(!showAll && !wrid){ continue; } if( !showCkBr && (sqlite3_strglob("checkin/*", zName)==0 || sqlite3_strglob("branch/*", zName)==0) ){ continue; } if( showIds ){ const char *zUuid = db_column_text(&q, 1); fossil_print("%s ",zUuid); |
︙ | ︙ |
Changes to src/wikiformat.c.
︙ | ︙ | |||
458 459 460 461 462 463 464 465 466 467 468 469 470 471 | int state; /* Flag that govern rendering */ unsigned renderFlags; /* Flags from the client */ int wikiList; /* Current wiki list type */ int inVerbatim; /* True in <verbatim> mode */ int preVerbState; /* Value of state prior to verbatim */ int wantAutoParagraph; /* True if a <p> is desired */ int inAutoParagraph; /* True if within an automatic paragraph */ const char *zVerbatimId; /* The id= attribute of <verbatim> */ int nStack; /* Number of elements on the stack */ int nAlloc; /* Space allocated for aStack */ struct sStack { short iCode; /* Markup code */ short allowWiki; /* ALLOW_WIKI if wiki allowed before tag */ const char *zId; /* ID attribute or NULL */ | > | 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 | int state; /* Flag that govern rendering */ unsigned renderFlags; /* Flags from the client */ int wikiList; /* Current wiki list type */ int inVerbatim; /* True in <verbatim> mode */ int preVerbState; /* Value of state prior to verbatim */ int wantAutoParagraph; /* True if a <p> is desired */ int inAutoParagraph; /* True if within an automatic paragraph */ int pikchrHtmlFlags; /* Flags for pikchr_to_html() */ const char *zVerbatimId; /* The id= attribute of <verbatim> */ int nStack; /* Number of elements on the stack */ int nAlloc; /* Space allocated for aStack */ struct sStack { short iCode; /* Markup code */ short allowWiki; /* ALLOW_WIKI if wiki allowed before tag */ const char *zId; /* ID attribute or NULL */ |
︙ | ︙ | |||
1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 | }else if( markup.iType==MUTYPE_TD ){ if( backupToType(p, MUTYPE_TABLE|MUTYPE_TR) ){ if( stackTopType(p)==MUTYPE_TABLE ){ pushStack(p, MARKUP_TR); blob_append_string(p->pOut, "<tr>"); } pushStack(p, markup.iCode); renderMarkup(p->pOut, &markup); } }else if( markup.iType==MUTYPE_HYPERLINK ){ if( !isButtonHyperlink(p, &markup, z, &n) ){ popStackToTag(p, markup.iCode); | > | 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 | }else if( markup.iType==MUTYPE_TD ){ if( backupToType(p, MUTYPE_TABLE|MUTYPE_TR) ){ if( stackTopType(p)==MUTYPE_TABLE ){ pushStack(p, MARKUP_TR); blob_append_string(p->pOut, "<tr>"); } p->wantAutoParagraph = 0; pushStack(p, markup.iCode); renderMarkup(p->pOut, &markup); } }else if( markup.iType==MUTYPE_HYPERLINK ){ if( !isButtonHyperlink(p, &markup, z, &n) ){ popStackToTag(p, markup.iCode); |
︙ | ︙ | |||
1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 | ** Options: ** --buttons Set the WIKI_BUTTONS flag ** --htmlonly Set the WIKI_HTMLONLY flag ** --linksonly Set the WIKI_LINKSONLY flag ** --nobadlinks Set the WIKI_NOBADLINKS flag ** --inline Set the WIKI_INLINE flag ** --noblock Set the WIKI_NOBLOCK flag */ void test_wiki_render(void){ Blob in, out; int flags = 0; if( find_option("buttons",0,0)!=0 ) flags |= WIKI_BUTTONS; if( find_option("htmlonly",0,0)!=0 ) flags |= WIKI_HTMLONLY; if( find_option("linksonly",0,0)!=0 ) flags |= WIKI_LINKSONLY; if( find_option("nobadlinks",0,0)!=0 ) flags |= WIKI_NOBADLINKS; if( find_option("inline",0,0)!=0 ) flags |= WIKI_INLINE; if( find_option("noblock",0,0)!=0 ) flags |= WIKI_NOBLOCK; db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); verify_all_options(); if( g.argc!=3 ) usage("FILE"); blob_zero(&out); blob_read_from_file(&in, g.argv[2], ExtFILE); wiki_convert(&in, &out, flags); blob_write_to_file(&out, "-"); } /* ** COMMAND: test-markdown-render ** ** Usage: %fossil test-markdown-render FILE ... ** ** Render markdown in FILE as HTML on stdout. 
** Options: ** ** --safe Restrict the output to use only "safe" HTML ** --lint-footnotes Print stats for footnotes-related issues */ void test_markdown_render(void){ Blob in, out; int i; int bSafe = 0, bFnLint = 0; db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); bSafe = find_option("safe",0,0)!=0; bFnLint = find_option("lint-footnotes",0,0)!=0; verify_all_options(); for(i=2; i<g.argc; i++){ blob_zero(&out); blob_read_from_file(&in, g.argv[i], ExtFILE); if( g.argc>3 ){ fossil_print("<!------ %h ------->\n", g.argv[i]); } | > > > > > > > > | 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 | ** Options: ** --buttons Set the WIKI_BUTTONS flag ** --htmlonly Set the WIKI_HTMLONLY flag ** --linksonly Set the WIKI_LINKSONLY flag ** --nobadlinks Set the WIKI_NOBADLINKS flag ** --inline Set the WIKI_INLINE flag ** --noblock Set the WIKI_NOBLOCK flag ** --dark-pikchr Render pikchrs in dark mode */ void test_wiki_render(void){ Blob in, out; int flags = 0; if( find_option("buttons",0,0)!=0 ) flags |= WIKI_BUTTONS; if( find_option("htmlonly",0,0)!=0 ) flags |= WIKI_HTMLONLY; if( find_option("linksonly",0,0)!=0 ) flags |= WIKI_LINKSONLY; if( find_option("nobadlinks",0,0)!=0 ) flags |= WIKI_NOBADLINKS; if( find_option("inline",0,0)!=0 ) flags |= WIKI_INLINE; if( find_option("noblock",0,0)!=0 ) flags |= WIKI_NOBLOCK; if( find_option("dark-pikchr",0,0)!=0 ){ pikchr_to_html_add_flags( PIKCHR_PROCESS_DARK_MODE ); } db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); verify_all_options(); if( g.argc!=3 ) usage("FILE"); blob_zero(&out); blob_read_from_file(&in, g.argv[2], ExtFILE); wiki_convert(&in, &out, flags); blob_write_to_file(&out, "-"); } /* ** COMMAND: test-markdown-render ** ** Usage: %fossil 
test-markdown-render FILE ... ** ** Render markdown in FILE as HTML on stdout. ** Options: ** ** --safe Restrict the output to use only "safe" HTML ** --lint-footnotes Print stats for footnotes-related issues ** --dark-pikchr Render pikchrs in dark mode */ void test_markdown_render(void){ Blob in, out; int i; int bSafe = 0, bFnLint = 0; db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); bSafe = find_option("safe",0,0)!=0; bFnLint = find_option("lint-footnotes",0,0)!=0; if( find_option("dark-pikchr",0,0)!=0 ){ pikchr_to_html_add_flags( PIKCHR_PROCESS_DARK_MODE ); } verify_all_options(); for(i=2; i<g.argc; i++){ blob_zero(&out); blob_read_from_file(&in, g.argv[i], ExtFILE); if( g.argc>3 ){ fossil_print("<!------ %h ------->\n", g.argv[i]); } |
︙ | ︙ | |||
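Both test commands above wire the new --dark-pikchr option through a call to pikchr_to_html_add_flags(PIKCHR_PROCESS_DARK_MODE), which implies a process-wide flag accumulator that later pikchr_to_html() calls consult. A minimal sketch of that pattern follows; the numeric flag value and the getter name are assumptions for illustration, since the diff shows only the setter's call site:

```c
#include <assert.h>

/* Value assumed for illustration; the real constant lives in fossil's
** pikchr glue code, which this diff does not show. */
#define PIKCHR_PROCESS_DARK_MODE 0x01

static unsigned int mPikchrFlags = 0;  /* process-wide extra pikchr flags */

/* Sketch of pikchr_to_html_add_flags(): OR extra flags into the set
** consulted by every subsequent pikchr rendering call. */
static void pikchr_to_html_add_flags(unsigned int m){
  mPikchrFlags |= m;
}

/* Hypothetical getter used here only to make the sketch testable. */
static unsigned int pikchr_to_html_extra_flags(void){
  return mPikchrFlags;
}
```

Because the accumulator only ORs bits in, repeating the option (as both --dark-pikchr handlers may) is harmless.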
2217 2218 2219 2220 2221 2222 2223 | iMatchCnt = 1; }else if( n==1 && zStart[0]=='=' && iMatchCnt==1 ){ iMatchCnt = 2; }else if( iMatchCnt==2 ){ if( (zStart[0]=='"' || zStart[0]=='\'') && zStart[n-1]==zStart[0] ){ zStart++; n -= 2; | | | 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 | iMatchCnt = 1; }else if( n==1 && zStart[0]=='=' && iMatchCnt==1 ){ iMatchCnt = 2; }else if( iMatchCnt==2 ){ if( (zStart[0]=='"' || zStart[0]=='\'') && zStart[n-1]==zStart[0] ){ zStart++; n -= 2; } *pLen = n; return zStart; }else{ iMatchCnt = 0; } } return 0; |
︙ | ︙ |
Changes to src/winhttp.c.
︙ | ︙ | |||
666 667 668 669 670 671 672 | fossil_panic("unable to get path to the temporary directory."); } /* Use a subdirectory for temp files (can then be excluded from virus scan) */ zTempSubDirPath = mprintf("%s%s\\",fossil_path_to_utf8(zTmpPath),zTempSubDir); if ( !file_mkdir(zTempSubDirPath, ExtFILE, 0) || file_isdir(zTempSubDirPath, ExtFILE)==1 ){ wcscpy(zTmpPath, fossil_utf8_to_path(zTempSubDirPath, 1)); | | | 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 | fossil_panic("unable to get path to the temporary directory."); } /* Use a subdirectory for temp files (can then be excluded from virus scan) */ zTempSubDirPath = mprintf("%s%s\\",fossil_path_to_utf8(zTmpPath),zTempSubDir); if ( !file_mkdir(zTempSubDirPath, ExtFILE, 0) || file_isdir(zTempSubDirPath, ExtFILE)==1 ){ wcscpy(zTmpPath, fossil_utf8_to_path(zTempSubDirPath, 1)); } if( g.fHttpTrace ){ zTempPrefix = mprintf("httptrace"); }else{ zTempPrefix = mprintf("%sfossil_server_P%d", fossil_unicode_to_utf8(zTmpPath), iPort); } fossil_print("Temporary files: %s*\n", zTempPrefix); |
︙ | ︙ | |||
1370 1371 1372 1373 1374 1375 1376 | if( !hScm ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), SERVICE_ALL_ACCESS); if( !hSvc ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); QueryServiceStatus(hSvc, &sstat); if( sstat.dwCurrentState!=SERVICE_RUNNING ){ fossil_print("Starting service '%s'", zSvcName); | | | | | | | | 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 | if( !hScm ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), SERVICE_ALL_ACCESS); if( !hSvc ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); QueryServiceStatus(hSvc, &sstat); if( sstat.dwCurrentState!=SERVICE_RUNNING ){ fossil_print("Starting service '%s'", zSvcName); if( sstat.dwCurrentState!=SERVICE_START_PENDING ){ if( !StartServiceW(hSvc, 0, NULL) ){ winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); } QueryServiceStatus(hSvc, &sstat); } while( sstat.dwCurrentState==SERVICE_START_PENDING || sstat.dwCurrentState==SERVICE_STOPPED ){ Sleep(100); fossil_print("."); QueryServiceStatus(hSvc, &sstat); } if( sstat.dwCurrentState==SERVICE_RUNNING ){ |
︙ | ︙ |
Changes to src/xfer.c.
︙ | ︙ | |||
353 354 355 356 357 358 359 | } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ if( !isWriter ){ | | | 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 | } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ if( !isWriter ){ blob_appendf(&pXfer->err,"Write permissions for unversioned files missing"); goto end_accept_unversioned_file; } /* Make sure we have a valid g.rcvid marker */ content_rcvid_init(0); /* Check to see if current content really should be overwritten. Ideally, |
︙ | ︙ | |||
1187 1188 1189 1190 1191 1192 1193 | /* ** The CGI/HTTP preprocessor always redirects requests with a content-type ** of application/x-fossil or application/x-fossil-debug to this page, ** regardless of what path was specified in the HTTP header. This allows ** clone clients to specify a URL that omits default pathnames, such ** as "http://fossil-scm.org/" instead of "http://fossil-scm.org/index.cgi". ** | | | 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 | /* ** The CGI/HTTP preprocessor always redirects requests with a content-type ** of application/x-fossil or application/x-fossil-debug to this page, ** regardless of what path was specified in the HTTP header. This allows ** clone clients to specify a URL that omits default pathnames, such ** as "http://fossil-scm.org/" instead of "http://fossil-scm.org/index.cgi". ** ** WEBPAGE: xfer raw-content loadavg-exempt ** ** This is the transfer handler on the server side. The transfer ** message has been uncompressed and placed in the g.cgiIn blob. ** Process this message and form an appropriate reply. */ void page_xfer(void){ int isPull = 0; |
︙ | ︙ | |||
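The "pragma NAME VALUE..." cards handled above are parsed into xfer.aToken/xfer.nToken, and the comment is explicit that unknown pragmas are requests only, silently ignored. A rough stand-alone sketch of that dispatch shape; fossil's real tokenizer works on Blob objects, so plain C strings stand in here, and "send-private" is the only pragma modeled:

```c
#include <assert.h>
#include <string.h>

/* Split a protocol line into whitespace-separated tokens (up to mxTok),
** returning the count.  Modeled loosely on fossil's aToken/nToken. */
static int tokenize(char *zLine, char *azTok[], int mxTok){
  int n = 0;
  char *z = strtok(zLine, " \t\r\n");
  while( z && n<mxTok ){
    azTok[n++] = z;
    z = strtok(0, " \t\r\n");
  }
  return n;
}

/* Return 1 for a pragma we understand, 0 for an unknown pragma
** (silently ignored, per the protocol comment), -1 if the card is
** not a pragma at all. */
static int classify_pragma(char *zLine){
  char *azTok[8];
  int nTok = tokenize(zLine, azTok, 8);
  if( nTok<2 || strcmp(azTok[0], "pragma")!=0 ) return -1;
  if( strcmp(azTok[1], "send-private")==0 ) return 1;
  return 0;  /* unknown pragmas: ignore, never error */
}
```

Ignoring unknown pragmas is what lets newer clients talk to older servers: an unrecognized request degrades to a no-op instead of aborting the sync.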
1582 1583 1584 1585 1586 1587 1588 | xfer.nextIsPrivate = 1; } }else /* pragma NAME VALUE... ** | | | 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 | xfer.nextIsPrivate = 1; } }else /* pragma NAME VALUE... ** ** The client issues pragmas to try to influence the behavior of the ** server. These are requests only. Unknown pragmas are silently ** ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma send-private ** |
︙ | ︙ | |||
1832 1833 1834 1835 1836 1837 1838 | const char *zArg = db_column_text(&q, 1); i64 iMtime = db_column_int64(&q, 2); memset(&x, 0, sizeof(x)); url_parse_local(zUrl, URL_OMIT_USER, &x); if( x.name!=0 && sqlite3_strlike("%localhost%", x.name, 0)!=0 ){ @ pragma link %F(x.canonical) %F(zArg) %lld(iMtime) } | | | 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 | const char *zArg = db_column_text(&q, 1); i64 iMtime = db_column_int64(&q, 2); memset(&x, 0, sizeof(x)); url_parse_local(zUrl, URL_OMIT_USER, &x); if( x.name!=0 && sqlite3_strlike("%localhost%", x.name, 0)!=0 ){ @ pragma link %F(x.canonical) %F(zArg) %lld(iMtime) } url_unparse(&x); } db_finalize(&q); } /* Send the server timestamp last, in case prior processing happened ** to use up a significant fraction of our time window. */ |
︙ | ︙ | |||
1857 1858 1859 1860 1861 1862 1863 | ** ** Usage: %fossil test-xfer ?OPTIONS? XFERFILE ** ** Pass the sync-protocol input file XFERFILE into the server-side sync ** protocol handler. Generate a reply on standard output. ** ** This command was originally created to help debug the server side of | | 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 | ** ** Usage: %fossil test-xfer ?OPTIONS? XFERFILE ** ** Pass the sync-protocol input file XFERFILE into the server-side sync ** protocol handler. Generate a reply on standard output. ** ** This command was originally created to help debug the server side of ** sync messages. The XFERFILE is the uncompressed content of an ** "xfer" HTTP request from client to server. This command interprets ** that message and generates the content of an HTTP reply (without any ** encoding and without the HTTP reply headers) and writes that reply ** on standard output. ** ** One possible usage scenario is to capture some XFERFILE examples ** using a command like: |
︙ | ︙ | |||
1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 | #define SYNC_UV_TRACE 0x00400 /* Describe UV activities */ #define SYNC_UV_DRYRUN 0x00800 /* Do not actually exchange files */ #define SYNC_IFABLE 0x01000 /* Inability to sync is not fatal */ #define SYNC_CKIN_LOCK 0x02000 /* Lock the current check-in */ #define SYNC_NOHTTPCOMPRESS 0x04000 /* Do not compress HTTP messages */ #define SYNC_ALLURL 0x08000 /* The --all flag - sync to all URLs */ #define SYNC_SHARE_LINKS 0x10000 /* Request alternate repo links */ #endif /* ** Floating-point absolute value */ static double fossil_fabs(double x){ return x>0.0 ? x : -x; | > | 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 | #define SYNC_UV_TRACE 0x00400 /* Describe UV activities */ #define SYNC_UV_DRYRUN 0x00800 /* Do not actually exchange files */ #define SYNC_IFABLE 0x01000 /* Inability to sync is not fatal */ #define SYNC_CKIN_LOCK 0x02000 /* Lock the current check-in */ #define SYNC_NOHTTPCOMPRESS 0x04000 /* Do not compress HTTP messages */ #define SYNC_ALLURL 0x08000 /* The --all flag - sync to all URLs */ #define SYNC_SHARE_LINKS 0x10000 /* Request alternate repo links */ #define SYNC_XVERBOSE 0x20000 /* Extra verbose. Network traffic */ #endif /* ** Floating-point absolute value */ static double fossil_fabs(double x){ return x>0.0 ? x : -x; |
︙ | ︙ | |||
1946 1947 1948 1949 1950 1951 1952 | ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ int client_sync( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ | | > | 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 | ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ int client_sync( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ const char *zAltPCode, /* Alternative project code (usually NULL) */ int *pnRcvd /* Set to # received artifacts, if not NULL */ ){ int go = 1; /* Loop until zero */ int nCardSent = 0; /* Number of cards sent */ int nCardRcvd = 0; /* Number of cards received */ int nCycle = 0; /* Number of round trips to the server */ int size; /* Size of a config value or uvfile */ int origConfigRcvMask; /* Original value of configRcvMask */ |
︙ | ︙ | |||
1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 | int nUvFileRcvd = 0; /* Number of uvfile cards received on this cycle */ sqlite3_int64 mtime; /* Modification time on a UV file */ int autopushFailed = 0; /* Autopush following commit failed if true */ const char *zCkinLock; /* Name of check-in to lock. NULL for none */ const char *zClientId; /* A unique identifier for this check-out */ unsigned int mHttpFlags;/* Flags for the http_exchange() subsystem */ if( db_get_boolean("dont-push", 0) ) syncFlags &= ~SYNC_PUSH; if( (syncFlags & (SYNC_PUSH|SYNC_PULL|SYNC_CLONE|SYNC_UNVERSIONED))==0 && configRcvMask==0 && configSendMask==0 ){ return 0; /* Nothing to do */ } | > | 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 | int nUvFileRcvd = 0; /* Number of uvfile cards received on this cycle */ sqlite3_int64 mtime; /* Modification time on a UV file */ int autopushFailed = 0; /* Autopush following commit failed if true */ const char *zCkinLock; /* Name of check-in to lock. NULL for none */ const char *zClientId; /* A unique identifier for this check-out */ unsigned int mHttpFlags;/* Flags for the http_exchange() subsystem */ if( pnRcvd ) *pnRcvd = 0; if( db_get_boolean("dont-push", 0) ) syncFlags &= ~SYNC_PUSH; if( (syncFlags & (SYNC_PUSH|SYNC_PULL|SYNC_CLONE|SYNC_UNVERSIONED))==0 && configRcvMask==0 && configSendMask==0 ){ return 0; /* Nothing to do */ } |
︙ | ︙ | |||
2258 2259 2260 2261 2262 2263 2264 | ** messages unique so that that the login-card nonce will always ** be unique. */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); free(zRandomness); | | > > > > > | 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 | ** messages unique so that that the login-card nonce will always ** be unique. */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); free(zRandomness); if( (syncFlags & SYNC_VERBOSE)!=0 && (syncFlags & SYNC_XVERBOSE)==0 ){ fossil_print("waiting for server..."); } fflush(stdout); /* Exchange messages with the server */ if( (syncFlags & SYNC_CLONE)!=0 && nCycle==0 ){ /* Do not send a login card on the first round-trip of a clone */ mHttpFlags = 0; }else{ mHttpFlags = HTTP_USE_LOGIN; } if( syncFlags & SYNC_NOHTTPCOMPRESS ){ mHttpFlags |= HTTP_NOCOMPRESS; } if( syncFlags & SYNC_XVERBOSE ){ mHttpFlags |= HTTP_VERBOSE; } /* Do the round-trip to the server */ if( http_exchange(&send, &recv, mHttpFlags, MAX_REDIRECTS, 0) ){ nErr++; go = 2; break; |
︙ | ︙ | |||
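The new hunk above maps sync-level flags onto http_exchange() flags: no login card on the first round-trip of a clone, HTTP_NOCOMPRESS when HTTP compression is disabled, and the new SYNC_XVERBOSE (set when -v is repeated on "fossil sync") onto HTTP_VERBOSE wire tracing. A sketch of that mapping in isolation; SYNC_NOHTTPCOMPRESS and SYNC_XVERBOSE values come from the diff, but the SYNC_CLONE and HTTP_* numeric values are placeholders, not taken from fossil:

```c
#include <assert.h>

#define SYNC_CLONE          0x00004  /* placeholder value */
#define SYNC_NOHTTPCOMPRESS 0x04000  /* from the diff */
#define SYNC_XVERBOSE       0x20000  /* new in this change */

#define HTTP_USE_LOGIN  0x01  /* placeholder value */
#define HTTP_NOCOMPRESS 0x02  /* placeholder value */
#define HTTP_VERBOSE    0x04  /* placeholder value */

/* Derive the http_exchange() flag mask the way the new hunk does:
** skip the login card on the first clone round-trip, then OR in the
** compression and wire-tracing options. */
static unsigned http_flags(unsigned syncFlags, int nCycle){
  unsigned m;
  m = ((syncFlags & SYNC_CLONE)!=0 && nCycle==0) ? 0 : HTTP_USE_LOGIN;
  if( syncFlags & SYNC_NOHTTPCOMPRESS ) m |= HTTP_NOCOMPRESS;
  if( syncFlags & SYNC_XVERBOSE ) m |= HTTP_VERBOSE;
  return m;
}
```

Keeping the mapping in one place means later round-trips of the same clone (nCycle>0) automatically regain the login card.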
2520 2521 2522 2523 2524 2525 2526 | if( iStatus>=4 && uvPullOnly==1 ){ fossil_warning( "Warning: uv-pull-only \n" " Unable to push unversioned content because you lack\n" " sufficient permission on the server\n" ); uvPullOnly = 2; | | | 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 | if( iStatus>=4 && uvPullOnly==1 ){ fossil_warning( "Warning: uv-pull-only \n" " Unable to push unversioned content because you lack\n" " sufficient permission on the server\n" ); uvPullOnly = 2; } if( iStatus<=3 || uvPullOnly ){ db_multi_exec("DELETE FROM uv_tosend WHERE name=%Q", zName); }else if( iStatus==4 ){ db_multi_exec("UPDATE uv_tosend SET mtimeOnly=1 WHERE name=%Q",zName); }else if( iStatus==5 ){ db_multi_exec("REPLACE INTO uv_tosend(name,mtimeOnly) VALUES(%Q,0)", zName); |
︙ | ︙ | |||
2637 2638 2639 2640 2641 2642 2643 | ** The server can send pragmas to try to convey meta-information to ** the client. These are informational only. Unknown pragmas are ** silently ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma server-version VERSION ?DATE? ?TIME? ** | | | | 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 | ** The server can send pragmas to try to convey meta-information to ** the client. These are informational only. Unknown pragmas are ** silently ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma server-version VERSION ?DATE? ?TIME? ** ** The server announces to the client what version of Fossil it ** is running. The DATE and TIME are a pure numeric ISO8601 time ** for the specific check-in of the server. */ if( xfer.nToken>=3 && blob_eq(&xfer.aToken[1], "server-version") ){ xfer.remoteVersion = atoi(blob_str(&xfer.aToken[2])); if( xfer.nToken>=5 ){ xfer.remoteDate = atoi(blob_str(&xfer.aToken[3])); xfer.remoteTime = atoi(blob_str(&xfer.aToken[4])); } } /* pragma uv-pull-only ** pragma uv-push-ok ** ** If the server is unwilling to accept new unversioned content (because ** this client lacks the necessary permissions) then it sends a ** "uv-pull-only" pragma so that the client will know not to waste ** bandwidth trying to upload unversioned content. If the server ** does accept new unversioned content, it sends "uv-push-ok". */ else if( syncFlags & SYNC_UNVERSIONED ){ if( blob_eq(&xfer.aToken[1], "uv-pull-only") ){ |
︙ | ︙ | |||
2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 | }else{ manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); } db_end_transaction(0); }; transport_stats(&nSent, &nRcvd, 1); if( (rSkew*24.0*3600.0) > 10.0 ){ fossil_warning("*** time skew *** server is fast by %s", db_timespan_name(rSkew)); g.clockSkewSeen = 1; }else if( rSkew*24.0*3600.0 < -10.0 ){ fossil_warning("*** time skew *** server is slow by %s", db_timespan_name(-rSkew)); | > | 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 | }else{ manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); } db_end_transaction(0); }; transport_stats(&nSent, &nRcvd, 1); if( pnRcvd ) *pnRcvd = nArtifactRcvd; if( (rSkew*24.0*3600.0) > 10.0 ){ fossil_warning("*** time skew *** server is fast by %s", db_timespan_name(rSkew)); g.clockSkewSeen = 1; }else if( rSkew*24.0*3600.0 < -10.0 ){ fossil_warning("*** time skew *** server is slow by %s", db_timespan_name(-rSkew)); |
︙ | ︙ | |||
2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 | zOpType, nSent, nRcvd, g.zIpAddr); } } if( syncFlags & SYNC_VERBOSE ){ fossil_print( "Uncompressed payload sent: %lld received: %lld\n", nUncSent, nUncRcvd); } transport_close(&g.url); transport_global_shutdown(&g.url); if( nErr && go==2 ){ db_multi_exec("DROP TABLE onremote; DROP TABLE unk;"); manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); db_end_transaction(0); | > > | 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 | zOpType, nSent, nRcvd, g.zIpAddr); } } if( syncFlags & SYNC_VERBOSE ){ fossil_print( "Uncompressed payload sent: %lld received: %lld\n", nUncSent, nUncRcvd); } blob_reset(&send); blob_reset(&recv); transport_close(&g.url); transport_global_shutdown(&g.url); if( nErr && go==2 ){ db_multi_exec("DROP TABLE onremote; DROP TABLE unk;"); manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); db_end_transaction(0); |
︙ | ︙ |
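The clock-skew warning near the end of client_sync() works in Julian-day units: rSkew is a fraction of a day, so rSkew*24.0*3600.0 converts it to seconds, and the warning fires only when the server clock is more than 10 seconds fast or slow. The check in isolation:

```c
#include <assert.h>

/* rSkew is a time difference in days (Julian-day arithmetic).  Return
** +1 when the server clock is more than 10 seconds fast, -1 when it is
** more than 10 seconds slow, and 0 when it is within tolerance --
** mirroring the fossil_warning() conditions in client_sync(). */
static int skew_direction(double rSkewDays){
  double sec = rSkewDays*24.0*3600.0;
  if( sec > 10.0 ) return 1;    /* server is fast */
  if( sec < -10.0 ) return -1;  /* server is slow */
  return 0;                     /* within tolerance */
}
```

The 10-second dead band keeps routine NTP-level drift from producing a warning on every sync.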
Changes to src/xfersetup.c.
︙ | ︙ | |||
78 79 80 81 82 83 84 | @ <input type="submit" name="sync" value="%h(zButton)"> @ </div></form> @ if( P("sync") ){ user_select(); url_enable_proxy(0); @ <pre class="xfersetup"> | | | 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | @ <input type="submit" name="sync" value="%h(zButton)"> @ </div></form> @ if( P("sync") ){ user_select(); url_enable_proxy(0); @ <pre class="xfersetup"> client_sync(syncFlags, 0, 0, 0, 0); @ </pre> } } style_finish_page(); } |
︙ | ︙ |
Changes to src/zip.c.
︙ | ︙ | |||
136 137 138 139 140 141 142 | return 512; } static int archiveDeviceCharacteristics(sqlite3_file *pFile){ return 0; } static int archiveOpen( | | | 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | return 512; } static int archiveDeviceCharacteristics(sqlite3_file *pFile){ return 0; } static int archiveOpen( sqlite3_vfs *pVfs, const char *zName, sqlite3_file *pFile, int flags, int *pOutFlags ){ static struct sqlite3_io_methods methods = { 1, /* iVersion */ archiveClose, archiveRead, archiveWrite, |
︙ | ︙ | |||
245 246 247 248 249 250 251 | ** Append a single file to a growing ZIP archive. ** ** pFile is the file to be appended. zName is the name ** that the file should be saved as. */ static void zip_add_file_to_zip( Archive *p, | | | | 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | ** Append a single file to a growing ZIP archive. ** ** pFile is the file to be appended. zName is the name ** that the file should be saved as. */ static void zip_add_file_to_zip( Archive *p, const char *zName, const Blob *pFile, int mPerm ){ z_stream stream; int nameLen; int toOut = 0; int iStart; unsigned long iCRC = 0; |
︙ | ︙ | |||
372 373 374 375 376 377 378 | put16(&zExTime[2], 5); blob_append(&toc, zExTime, 9); nEntry++; } static void zip_add_file_to_sqlar( Archive *p, | | | | | | | | 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 | put16(&zExTime[2], 5); blob_append(&toc, zExTime, 9); nEntry++; } static void zip_add_file_to_sqlar( Archive *p, const char *zName, const Blob *pFile, int mPerm ){ int nName = (int)strlen(zName); if( p->db==0 ){ assert( p->vfs.zName==0 ); p->vfs.zName = (const char*)mprintf("archivevfs%p", (void*)p); p->vfs.iVersion = 1; p->vfs.szOsFile = sizeof(ArchiveFile); p->vfs.mxPathname = 512; p->vfs.pAppData = (void*)p->pBlob; p->vfs.xOpen = archiveOpen; p->vfs.xDelete = archiveDelete; p->vfs.xAccess = archiveAccess; p->vfs.xFullPathname = archiveFullPathname; p->vfs.xRandomness = archiveRandomness; p->vfs.xSleep = archiveSleep; p->vfs.xCurrentTime = archiveCurrentTime; p->vfs.xGetLastError = archiveGetLastError; sqlite3_vfs_register(&p->vfs, 0); sqlite3_open_v2("file:xyz.db", &p->db, SQLITE_OPEN_CREATE|SQLITE_OPEN_READWRITE, p->vfs.zName ); assert( p->db ); blob_zero(&p->tmp); sqlite3_exec(p->db, "PRAGMA page_size=512;" "PRAGMA journal_mode = off;" "PRAGMA cache_spill = off;" "BEGIN;" "CREATE TABLE sqlar(" "name TEXT PRIMARY KEY, -- name of the file\n" "mode INT, -- access permissions\n" "mtime INT, -- last modification time\n" "sz INT, -- original file size\n" "data BLOB -- compressed content\n" ");", 0, 0, 0 ); sqlite3_prepare(p->db, "INSERT INTO sqlar VALUES(?, ?, ?, ?, ?)", -1, &p->pInsert, 0 ); assert( p->pInsert ); sqlite3_bind_int64(p->pInsert, 3, unixTime); blob_zero(p->pBlob); } |
︙ | ︙ | |||
435 436 437 438 439 440 441 | sqlite3_bind_int(p->pInsert, 4, 0); sqlite3_bind_null(p->pInsert, 5); }else{ sqlite3_bind_text(p->pInsert, 1, zName, nName, SQLITE_STATIC); if( mPerm==PERM_LNK ){ sqlite3_bind_int(p->pInsert, 2, 0120755); sqlite3_bind_int(p->pInsert, 4, -1); | | | | | | | 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 | sqlite3_bind_int(p->pInsert, 4, 0); sqlite3_bind_null(p->pInsert, 5); }else{ sqlite3_bind_text(p->pInsert, 1, zName, nName, SQLITE_STATIC); if( mPerm==PERM_LNK ){ sqlite3_bind_int(p->pInsert, 2, 0120755); sqlite3_bind_int(p->pInsert, 4, -1); sqlite3_bind_text(p->pInsert, 5, blob_buffer(pFile), blob_size(pFile), SQLITE_STATIC ); }else{ unsigned int nIn = blob_size(pFile); unsigned long int nOut = nIn; sqlite3_bind_int(p->pInsert, 2, mPerm==PERM_EXE ? 0100755 : 0100644); sqlite3_bind_int(p->pInsert, 4, nIn); zip_blob_minsize(&p->tmp, nIn); compress( (unsigned char*) blob_buffer(&p->tmp), &nOut, (unsigned char*)blob_buffer(pFile), nIn ); if( nOut>=(unsigned long)nIn ){ sqlite3_bind_blob(p->pInsert, 5, blob_buffer(pFile), blob_size(pFile), SQLITE_STATIC ); }else{ sqlite3_bind_blob(p->pInsert, 5, blob_buffer(&p->tmp), nOut, SQLITE_STATIC ); } } } sqlite3_step(p->pInsert); sqlite3_reset(p->pInsert); } static void zip_add_file( Archive *p, const char *zName, const Blob *pFile, int mPerm ){ if( p->eType==ARCHIVE_ZIP ){ zip_add_file_to_zip(p, zName, pFile, mPerm); }else{ zip_add_file_to_sqlar(p, zName, pFile, mPerm); } |
︙ | ︙ | |||
784 785 786 787 788 789 790 | " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } | | | 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 | " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } zip_of_checkin(eType, rid, zOut ? &zip : 0, zName, pInclude, pExclude, listFlag); glob_free(pInclude); glob_free(pExclude); if( zOut ){ blob_write_to_file(&zip, zOut); blob_reset(&zip); } |
︙ | ︙ | |||
945 946 947 948 949 950 951 | zInclude = P("in"); if( zInclude ) pInclude = glob_create(zInclude); zExclude = P("ex"); if( zExclude ) pExclude = glob_create(zExclude); if( zInclude==0 && zExclude==0 ){ etag_check_for_invariant_name(z); } | | | | 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 | zInclude = P("in"); if( zInclude ) pInclude = glob_create(zInclude); zExclude = P("ex"); if( zExclude ) pExclude = glob_create(zExclude); if( zInclude==0 && zExclude==0 ){ etag_check_for_invariant_name(z); } if( eType==ARCHIVE_ZIP && nName>4 && fossil_strcmp(&zName[nName-4], ".zip")==0 ){ /* Special case: Remove the ".zip" suffix. */ nName -= 4; zName[nName] = 0; }else if( eType==ARCHIVE_SQLAR && nName>6 && fossil_strcmp(&zName[nName-6], ".sqlar")==0 ){ /* Special case: Remove the ".sqlar" suffix. */ nName -= 6; zName[nName] = 0; }else{ |
︙ | ︙ |
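The /zip and /sqlar page handlers above strip a matching ".zip" or ".sqlar" suffix from the requested name before resolving the check-in, so "release.zip" and "release" name the same archive. A simplified stand-alone version of that logic; the eType dispatch is dropped, and plain strcmp stands in for fossil_strcmp, which behaves the same for these ASCII suffixes:

```c
#include <assert.h>
#include <string.h>

/* Strip a trailing ".zip" or ".sqlar" from an archive name in place
** and return the resulting length.  Names consisting only of the
** suffix (n<=4 or n<=6) are left alone, as in the page handlers. */
static int strip_archive_suffix(char *zName){
  int n = (int)strlen(zName);
  if( n>4 && strcmp(&zName[n-4], ".zip")==0 ){
    n -= 4;
  }else if( n>6 && strcmp(&zName[n-6], ".sqlar")==0 ){
    n -= 6;
  }
  zName[n] = 0;
  return n;
}
```

Stripping the suffix before the lookup is also what lets etag_check_for_invariant_name() treat the differently-suffixed URLs as one cacheable resource.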
Changes to test/amend.test.
︙ | ︙ | |||
304 305 306 307 308 309 310 | set t5exp "*" foreach tag $tagt { lappend tags -tag $tag lappend cancels -cancel $tag } foreach res $result { append t1exp ", $res" | < > > > | 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 | set t5exp "*" foreach tag $tagt { lappend tags -tag $tag lappend cancels -cancel $tag } foreach res $result { append t1exp ", $res" append t3exp "Add*tag*\"$res\".*" append t5exp "Cancel*tag*\"$res\".*" } foreach res [lsort -nocase $result] { append t2exp "sym-$res*" } eval fossil amend $HASH $tags test amend-tag-$tc.1 {[string match "*hash:*$HASH*tags:*$t1exp*" $RESULT]} fossil tag ls --raw $HASH test amend-tag-$tc.2 {[string match $t2exp $RESULT]} fossil timeline -n 1 test amend-tag-$tc.3 {[string match $t3exp $RESULT]} |
︙ | ︙ |
Changes to test/commit-warning.test.
︙ | ︙ | |||
170 171 172 173 174 175 176 | # of source files that MUST NEVER BE TEXT. # test_block_in_checkout pre-commit-warnings-fossil-1 { fossil test-commit-warning --no-settings } { test pre-commit-warnings-fossil-1 {[normalize_result] eq \ [subst -nocommands -novariables [string trim { | < < < < < < < < < < < < | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 | # of source files that MUST NEVER BE TEXT. # test_block_in_checkout pre-commit-warnings-fossil-1 { fossil test-commit-warning --no-settings } { test pre-commit-warnings-fossil-1 {[normalize_result] eq \ [subst -nocommands -novariables [string trim { 1\tcompat/zlib/contrib/blast/test.pk\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.build\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib.chm\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.sln\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/Deflater.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/Inflater.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/LICENSE_1_0.txt\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/gcc_gvmat64/gvmat64.S\tCR/LF line endings 1\tcompat/zlib/contrib/puff/zeros.raw\tbinary data 1\tcompat/zlib/contrib/testzlib/testzlib.c\tCR/LF line endings 1\tcompat/zlib/contrib/testzlib/testzlib.txt\tCR/LF line endings 
1\tcompat/zlib/contrib/vstudio/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/minizip.vcxproj\tCR/LF line endings |
︙ | ︙ | |||
241 242 243 244 245 246 247 248 249 250 251 252 253 254 | 1\tcompat/zlib/contrib/vstudio/vc9/zlibstat.vcproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.def\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.sln\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.vcproj\tCR/LF line endings 1\tcompat/zlib/win32/zlib.def\tCR/LF line endings 1\tcompat/zlib/zlib.3.pdf\tbinary data 1\tcompat/zlib/zlib.map\tCR/LF line endings 1\tskins/blitz/arrow_project.png\tbinary data 1\tskins/blitz/dir.png\tbinary data 1\tskins/blitz/file.png\tbinary data 1\tskins/blitz/fossil_100.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan_text.png\tbinary data 1\tskins/blitz/rss_20.png\tbinary data | > < | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 | 1\tcompat/zlib/contrib/vstudio/vc9/zlibstat.vcproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.def\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.sln\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.vcproj\tCR/LF line endings 1\tcompat/zlib/win32/zlib.def\tCR/LF line endings 1\tcompat/zlib/zlib.3.pdf\tbinary data 1\tcompat/zlib/zlib.map\tCR/LF line endings 1\textsrc/pikchr.wasm\tbinary data 1\tskins/blitz/arrow_project.png\tbinary data 1\tskins/blitz/dir.png\tbinary data 1\tskins/blitz/file.png\tbinary data 1\tskins/blitz/fossil_100.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan_text.png\tbinary data 1\tskins/blitz/rss_20.png\tbinary data 1\tsrc/alerts/bflat2.wav\tbinary data 1\tsrc/alerts/bflat3.wav\tbinary data 1\tsrc/alerts/bloop.wav\tbinary data 1\tsrc/alerts/plunk.wav\tbinary data 1\tsrc/sounds/0.wav\tbinary data 1\tsrc/sounds/1.wav\tbinary data 1\tsrc/sounds/2.wav\tbinary data |
︙ | ︙ |
Changes to test/delta1.test.
︙ | ︙ | |||
23 24 25 26 27 28 29 | # Use test script files as the basis for this test. # # For each test, copy the file intact to "./t1". Make # some random changes in "./t2". Then call test-delta on the # two files to make sure that deltas between these two files # work properly. # | | | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | # Use test script files as the basis for this test. # # For each test, copy the file intact to "./t1". Make # some random changes in "./t2". Then call test-delta on the # two files to make sure that deltas between these two files # work properly. # set filelist [lsort [glob $testdir/*]] foreach f $filelist { if {[file isdir $f]} continue set base [file root [file tail $f]] set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { write_file t2 [random_changes $f1 1 1 0 0.1] |
︙ | ︙ |
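The delta1.test hunk above exercises Fossil's delta routines by round-tripping randomly mutated copies of each test file through `fossil test-delta`. As an illustration only — using a toy delta built on Python's `difflib`, not Fossil's actual delta format — the same round-trip idea looks like this:

```python
import difflib
import random
import string

def make_delta(src, tgt):
    # Toy "delta": record difflib opcodes plus the inserted text.
    sm = difflib.SequenceMatcher(a=src, b=tgt)
    return [(op, i1, i2, tgt[j1:j2]) for op, i1, i2, j1, j2 in sm.get_opcodes()]

def apply_delta(src, delta):
    # Rebuild the target from the source and the recorded opcodes.
    out = []
    for op, i1, i2, ins in delta:
        out.append(src[i1:i2] if op == "equal" else ins)
    return "".join(out)

def random_changes(text, n=5):
    # Mutate a few characters, loosely mimicking the test harness.
    s = list(text)
    for _ in range(n):
        s[random.randrange(len(s))] = random.choice(string.ascii_letters)
    return "".join(s)

random.seed(0)
original = "\n".join(f"line {i} of the sample file" for i in range(50))
for _ in range(100):
    mutated = random_changes(original)
    # The round-trip must reconstruct the mutated file exactly.
    assert apply_delta(original, make_delta(original, mutated)) == mutated
```

The real test does the analogous check with Fossil's binary delta encoder, which additionally verifies checksums and size headers.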
Changes to test/diff.test.
︙ | ︙ | |||
106 107 108 109 110 111 112 113 114 115 116 | fossil diff file5.dat test diff-file5-1 {[normalize_result] eq {Index: file5.dat ================================================================== --- file5.dat +++ file5.dat cannot compute difference between binary files}} ############################################################################### test_cleanup | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 | fossil diff file5.dat test diff-file5-1 {[normalize_result] eq {Index: file5.dat ================================================================== --- file5.dat +++ file5.dat cannot compute difference between binary files}} ############################################################################### write_file file6a.dat "{\n \"abc\": {\n \"def\": false,\n \"ghi\": false\n }\n}\n" write_file file6b.dat "{\n \"abc\": {\n \"def\": false,\n \"ghi\": false\n },\n \"jkl\": {\n \"mno\": {\n \"pqr\": false\n }\n }\n}\n" fossil xdiff -y -W 16 file6a.dat file6b.dat test diff-file-6-1 {[normalize_result] eq {========== file6a.dat ===== versus ===== file6b.dat ===== 1 { 1 { 2 "abc": { 2 "abc": { 3 "def": false, 3 "def": false, 4 "ghi": false 4 "ghi": false > 5 }, > 6 "jkl": { > 7 "mno": { > 8 "pqr": false > 9 } 5 } 10 } 6 } 11 }}} ############################################################################### fossil rm file1.dat fossil diff -v file1.dat test diff-deleted-file-1 {[normalize_result] eq {DELETED file1.dat Index: file1.dat ================================================================== --- file1.dat +++ /dev/null @@ -1,1 +0,0 @@ -test file 1 (one line no term).}} ############################################################################### write_file file6.dat "test file 6 (one 
line no term)." fossil add file6.dat fossil diff -v file6.dat test diff-added-file-1 {[normalize_result] eq {ADDED file6.dat Index: file6.dat ================================================================== --- /dev/null +++ file6.dat @@ -0,0 +1,1 @@ +test file 6 (one line no term).}} ############################################################################### test_cleanup |
Changes to test/fake-editor.tcl.
︙ | ︙ | |||
47 48 49 50 51 52 53 54 55 56 57 58 59 60 | close $channel return "" } ############################################################################### set fileName [lindex $argv 0] if {[file exists $fileName]} { set data [readFile $fileName] } else { set data "" } | > > > > > | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 | close $channel return "" } ############################################################################### set fileName [lindex $argv 0] if {[regexp {^CYGWIN} $::tcl_platform(os)]} { # Under Cygwin, we get a Windows path but must access using the unix path. set fileName [exec cygpath --unix $fileName] } if {[file exists $fileName]} { set data [readFile $fileName] } else { set data "" } |
︙ | ︙ |
Changes to test/json.test.
︙ | ︙ | |||
175 176 177 178 179 180 181 182 183 184 185 186 187 188 | proc test_json_payload {testname okfields badfields} { test_dict_keys $testname [dict get $::JR payload] $okfields $badfields } #### VERSION AKA HAI # The JSON API generally assumes we have a respository, so let it have one. test_setup # Stop backoffice from running during this test as it can cause hangs. fossil settings backoffice-disable 1 # Check for basic envelope fields in the result with an error fossil_json -expectError | > > > > > > > > | 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 | proc test_json_payload {testname okfields badfields} { test_dict_keys $testname [dict get $::JR payload] $okfields $badfields } #### VERSION AKA HAI # The JSON API generally assumes we have a respository, so let it have one. # Set FOSSIL_USER to ensure consistent results in "json user list" set _fossil_user "" if [info exists env(FOSSIL_USER)] { set _fossil_user $env(FOSSIL_USER) } set ::env(FOSSIL_USER) "JSON-TEST-USER" test_setup # Stop backoffice from running during this test as it can cause hangs. fossil settings backoffice-disable 1 # Check for basic envelope fields in the result with an error fossil_json -expectError |
︙ | ︙ | |||
274 275 276 277 278 279 280 | test_json_payload json-login-a {authToken name capabilities loginCookieName} {} set AuthAnon [dict get $JR payload] proc test_hascaps {testname need caps} { foreach n [split $need {}] { test $testname-$n {[string first $n $caps] >= 0} } } | | | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 | test_json_payload json-login-a {authToken name capabilities loginCookieName} {} set AuthAnon [dict get $JR payload] proc test_hascaps {testname need caps} { foreach n [split $need {}] { test $testname-$n {[string first $n $caps] >= 0} } } test_hascaps json-login-c "hz" [dict get $AuthAnon capabilities] fossil user new U1 User-1 Uone fossil user capabilities U1 s write_file u1 { { "command":"login", "payload":{ |
︙ | ︙ | |||
885 886 887 888 889 890 891 | # Fossil repository db file could not be found. fossil close fossil_json HAI -expectError test json-RC-4102-CLI-exit {$CODE != 0} test_json_envelope json-RC-4102-CLI-exit {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4102 {[dict get $JR resultCode] eq "FOSSIL-4102"} | < > > > > > > | 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 | # Fossil repository db file could not be found. fossil close fossil_json HAI -expectError test json-RC-4102-CLI-exit {$CODE != 0} test_json_envelope json-RC-4102-CLI-exit {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4102 {[dict get $JR resultCode] eq "FOSSIL-4102"} # FOSSIL-4103 FSL_JSON_E_DB_NOT_VALID # Fossil repository db file is not valid. write_file nope.fossil { This is not a fossil repo. It ought to be a SQLite db with a well-known schema, but it is actually just a block of text. } fossil_json HAI -R nope.fossil -expectError test json-RC-4103-CLI-exit {$CODE != 0} if { $JR ne "" } { test_json_envelope json-RC-4103-CLI {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4103 {[dict get $JR resultCode] eq "FOSSIL-4103"} } else { test json-RC-4103 0 knownBug } ############################################################################### test_cleanup if { $_fossil_user eq "" } { unset ::env(FOSSIL_USER) } else { set ::env(FOSSIL_USER) $_fossil_user } |
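The json.test change above saves any existing `FOSSIL_USER` value, overrides it for the duration of the test, and restores (or unsets) it afterward. A hypothetical Python equivalent of that save/override/restore pattern, written as a context manager — illustrative only, not part of the Fossil test suite:

```python
import os
from contextlib import contextmanager

@contextmanager
def env_var(name, value):
    # Temporarily set an environment variable, restoring the previous
    # value on exit, or removing the variable if it was previously unset.
    saved = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if saved is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = saved

with env_var("FOSSIL_USER", "JSON-TEST-USER"):
    assert os.environ["FOSSIL_USER"] == "JSON-TEST-USER"
```

The `try`/`finally` guarantees restoration even if the wrapped test raises, which is the same property the Tcl code gets by restoring at the end of the script.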
Changes to test/merge1.test.
︙ | ︙ | |||
71 72 73 74 75 76 77 | 111 - This is line one OF the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { | | | | | | | | | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | 111 - This is line one OF the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<< (line 1) 111 - This is line ONE of the demo program - 1111 ||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||| (line 1) 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows =============================== (line 1) 111 - This is line one OF the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<< (line 1) 111 - This is line one OF the demo program - 1111 ||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||| (line 1) 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows =============================== (line 1) 111 - This is line ONE of the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 
4444 555 - we think it well and other stuff too - 5555 } fossil 3-way-merge t1 t3 t2 a32 -expectError test merge1-2.1 {[same_file t32 a32]} fossil 3-way-merge t1 t2 t3 a23 -expectError test merge1-2.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 |
︙ | ︙ | |||
156 157 158 159 160 161 162 | write_file_indented t3 { 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { | | | | | | | | | | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | write_file_indented t3 { 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<< (line 1) ||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||| (line 1) 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows =============================== (line 1) 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<< (line 1) 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 ||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||| (line 1) 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows =============================== (line 1) >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we 
think it well and other stuff too - 5555 } fossil 3-way-merge t1 t3 t2 a32 -expectError test merge1-4.1 {[same_file t32 a32]} fossil 3-way-merge t1 t2 t3 a23 -expectError test merge1-4.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 |
︙ | ︙ | |||
295 296 297 298 299 300 301 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd | | | | | | | 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<< (line 2) efgh 2 ijkl 2 mnop 2 qrst uvwx yzAB 2 CDEF 2 GHIJ 2 ||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||| (line 2) efgh ijkl mnop qrst uvwx yzAB CDEF GHIJ ======= MERGED IN content follows =============================== (line 2) efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } fossil 3-way-merge t1 t2 t3 a23 -expectError test merge1-7.1 {[same_file t23 a23]} write_file_indented t2 { abcd efgh 2 ijkl 2 mnop |
︙ | ︙ | |||
363 364 365 366 367 368 369 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd | | | | | | | 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<< (line 2) efgh 2 ijkl 2 mnop qrst uvwx yzAB 2 CDEF 2 GHIJ 2 ||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||| (line 2) efgh ijkl mnop qrst uvwx yzAB CDEF GHIJ ======= MERGED IN content follows =============================== (line 2) efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } fossil 3-way-merge t1 t2 t3 a23 -expectError test merge1-7.2 {[same_file t23 a23]} ############################################################################### test_cleanup |
Changes to test/merge2.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # Tests of the delta mechanism. # test_setup "" | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ############################################################################ # # Tests of the delta mechanism. # test_setup "" set filelist [lsort [glob $testdir/*]] foreach f $filelist { if {[file isdir $f]} continue set base [file root [file tail $f]] if {[string match "utf16*" $base]} continue set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { |
︙ | ︙ |
Changes to test/merge3.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # Tests of the 3-way merge # test_setup "" | | | > | | > | > > | > > | > | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | ############################################################################ # # Tests of the 3-way merge # test_setup "" proc merge-test {testid basis v1 v2 result {fossil_args ""}} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n fossil 3-way-merge t1 t2 t3 t4 {*}$fossil_args set x [read_file t4] regsub -all \ {<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <+ \(line \d+\)} \ $x {MINE:} x regsub -all \ {\|\|\|\|\|\|\| COMMON ANCESTOR content follows \|+ \(line \d+\)} \ $x {COM:} x regsub -all \ {======= MERGED IN content follows =+ \(line \d+\)} \ $x {YOURS:} x regsub -all \ {>>>>>>> END MERGE CONFLICT >+} \ $x {END} x set x [split [string trim $x] \n] set result [string trim $result] if {$x!=$result} { protOut " Expected \[$result\]" protOut " Got \[$x\]" test merge3-$testid 0 } else { |
︙ | ︙ | |||
65 66 67 68 69 70 71 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b COM: 3 4 5 YOURS: 3 4 5c END 6 7 8 9 | | | | | | | | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b COM: 3 4 5 YOURS: 3 4 5c END 6 7 8 9 } -expectError merge-test 4 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b 6b COM: 3 4 5 6 YOURS: 3 4 5c 6 END 7 8 9 } -expectError merge-test 5 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6c 7c 8 9 } { 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8 9 } -expectError merge-test 6 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8 9 } { 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8b 9 } -expectError merge-test 7 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8c 9 } { 1 2 MINE: 3b 4b 5b 6b 7 8b COM: 3 4 5 6 7 8 YOURS: 3 4 5c 6c 7c 8c END 9 } -expectError merge-test 8 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9b } { 1 2 3 4 5c 6c 7c 8c 9 } { 1 2 MINE: 3b 4b 5b 6b 7 8b 9b COM: 3 4 5 6 7 8 9 YOURS: 3 4 5c 6c 7c 8c 9 END } -expectError merge-test 9 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3 4 5c 6c 7c 8 9 } { |
︙ | ︙ | |||
138 139 140 141 142 143 144 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { 1 2 MINE: 3b 4b COM: 3 4 YOURS: 3b 4c END 5 6c 7c 8b 9b | | | 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { 1 2 MINE: 3b 4b COM: 3 4 YOURS: 3b 4c END 5 6c 7c 8b 9b } -expectError merge-test 12 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b4b 5 6 7 8b 9b } { 1 2 3b4b 5 6c 7c 8 9 } { |
︙ | ︙ | |||
193 194 195 196 197 198 199 | 1 2 3 4 5 6 7 8 9 } { 1 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 | | | | 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | 1 2 3 4 5 6 7 8 9 } { 1 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } -expectError merge-test 25 { 1 2 3 4 5 6 7 8 9 } { 1 7 8 9 } { 1 2 3 9 } { 1 MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } -expectError merge-test 30 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 } { 1 3 4 5 6 7 8 9 |
︙ | ︙ | |||
248 249 250 251 252 253 254 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 9 | | | | 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 9 } -expectError merge-test 35 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 7 8 9 } { 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END 9 } -expectError merge-test 40 { 2 3 4 5 6 7 8 } { 3 4 5 6 7 8 } { 2 3 4 5 6 7 |
︙ | ︙ | |||
303 304 305 306 307 308 309 | 2 3 4 5 6 7 8 } { 6 7 8 } { 2 3 4 } { MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END | | | | 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 | 2 3 4 5 6 7 8 } { 6 7 8 } { 2 3 4 } { MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } -expectError merge-test 45 { 2 3 4 5 6 7 8 } { 7 8 } { 2 3 } { MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } -expectError merge-test 50 { 2 3 4 5 6 7 8 } { 2 3 4 5 6 7 } { 3 4 5 6 7 8 |
︙ | ︙ | |||
357 358 359 360 361 362 363 | 2 3 4 5 6 7 8 } { 2 3 4 } { 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END | | | | 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 | 2 3 4 5 6 7 8 } { 2 3 4 } { 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END } -expectError merge-test 55 { 2 3 4 5 6 7 8 } { 2 3 } { 7 8 } { MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END } -expectError merge-test 60 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 |
︙ | ︙ | |||
412 413 414 415 416 417 418 | 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 | | | | 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 | 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } -expectError merge-test 65 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 2 3 9 } { 1 MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } -expectError merge-test 70 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 } { 1 2b 3 4 5 6 7 8 9 |
︙ | ︙ | |||
467 468 469 470 471 472 473 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END 9 | | | | 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END 9 } -expectError merge-test 75 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END 9 } -expectError merge-test 80 { 2 3 4 5 6 7 8 } { 2b 3 4 5 6 7 8 } { 2 3 4 5 6 7 |
︙ | ︙ | |||
522 523 524 525 526 527 528 | 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END | | | | 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 | 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } -expectError merge-test 85 { 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6b 7 8 } { 2 3 } { MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } -expectError merge-test 90 { 2 3 4 5 6 7 8 } { 2 3 4 5 6 7 } { 2b 3 4 5 6 7 8 |
︙ | ︙ | |||
577 578 579 580 581 582 583 | 2 3 4 5 6 7 8 } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END | | | | 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 | 2 3 4 5 6 7 8 } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END } -expectError merge-test 95 { 2 3 4 5 6 7 8 } { 2 3 } { 2b 3b 4b 5b 6b 7 8 } { MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END } -expectError merge-test 100 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3 4 5 7 8 9 a b c d e } { 1 2b 3 4 5 7 8 9 a b c d e |
︙ | ︙ | |||
623 624 625 626 627 628 629 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 MINE: 9b COM: 9 YOURS: 9b a b c d e END | | | | 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 MINE: 9b COM: 9 YOURS: 9b a b c d e END } -expectError merge-test 104 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 MINE: 9b a b c d e COM: 9 YOURS: 9b END } -expectError ############################################################################### test_cleanup |
Changes to test/merge4.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # Tests of the 3-way merge # test_setup "" | | | | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ############################################################################ # # Tests of the 3-way merge # test_setup "" proc merge-test {testid basis v1 v2 result1 result2 {fossil_args ""}} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n fossil 3-way-merge t1 t2 t3 t4 {*}$fossil_args fossil 3-way-merge t1 t3 t2 t5 {*}$fossil_args set x [read_file t4] regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<< \(line \d+\)} $x {>} x regsub -all {\|\|\|\|\|\|\|.*======= \(line \d+\)} $x {=} x regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $x {<} x set x [split [string trim $x] \n] set y [read_file t5] regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<< \(line \d+\)} $y {>} y regsub -all {\|\|\|\|\|\|\|.*======= \(line \d+\)} $y {=} y regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $y {<} y set y [split [string trim $y] \n] set result1 [string trim $result1] if {$x!=$result1} { protOut " Expected \[$result1\]" protOut " Got \[$x\]" test merge4-$testid 0 |
︙ | ︙ | |||
59 60 61 62 63 64 65 | 1 2b 3b 4b 5 6b 7b 8b 9 } { 1 2 3 4c 5c 6c 7 8 9 } { 1 > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < 9 } { 1 > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < 9 | | | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | 1 2b 3b 4b 5 6b 7b 8b 9 } { 1 2 3 4c 5c 6c 7 8 9 } { 1 > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < 9 } { 1 > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < 9 } -expectError merge-test 1001 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4 5 6 7b 8b 9 } { 1 2 3 4c 5c 6c 7 8 9 } { |
︙ | ︙ | |||
81 82 83 84 85 86 87 | 2b 3b 4b 5 6b 7b 8b } { 2 3 4c 5c 6c 7 8 } { > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < } { > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | 2b 3b 4b 5 6b 7b 8b } { 2 3 4c 5c 6c 7 8 } { > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < } { > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < } -expectError merge-test 1003 { 2 3 4 5 6 7 8 } { 2b 3b 4 5 6 7b 8b } { 2 3 4c 5c 6c 7 8 } { |
︙ | ︙ |
Changes to test/merge5.test.
︙ | ︙ | |||
14 15 16 17 18 19 20 | # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the "merge" command # | > | > | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the "merge" command # if {! $::QUIET} { puts "Skipping Merge5 tests" } protOut { fossil sqlite3 --no-repository reacts badly to SQL dumped from repositories created from fossil older than version 2.0. } test merge5-sqlite3-issue false knownBug test_cleanup_then_return |
︙ | ︙ |
Changes to test/merge_renames.test.
︙ | ︙ | |||
260 261 262 263 264 265 266 | fossil update trunk write_file f1 "f1.2" fossil add f1 fossil commit -b b2 -m "add f1" fossil update trunk fossil merge b1 | | > | | > | | 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 | fossil update trunk write_file f1 "f1.2" fossil add f1 fossil commit -b b2 -m "add f1" fossil update trunk fossil merge b1 fossil merge b2 -expectError test_status_list merge_renames-8-1 $RESULT { MERGE f1 WARNING: 1 merge conflicts } fossil revert fossil merge --integrate b1 fossil merge b2 -expectError test_status_list merge_renames-8-2 $RESULT { MERGE f1 WARNING: 1 merge conflicts } ############################################# # Test 9 # # Merging a delete/rename/add combination # ############################################# |
︙ | ︙ | |||
306 307 308 309 310 311 312 | ADDED f1 } test_status_list merge_renames-9-1 $RESULT $expectedMerge fossil changes test_status_list merge_renames-9-2 $RESULT " MERGED_WITH [commit_id b] ADDED_BY_MERGE f1 | | | | | 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 | ADDED f1 } test_status_list merge_renames-9-1 $RESULT $expectedMerge fossil changes test_status_list merge_renames-9-2 $RESULT " MERGED_WITH [commit_id b] ADDED_BY_MERGE f1 RENAMED f1 -> f2 DELETED f2 -> f2 (overwritten by rename) " test_file_contents merge_renames-9-3 f1 "f1.1" test_file_contents merge_renames-9-4 f2 "f1" # Undo and ensure a dry run merge results in no changes fossil undo test_status_list merge_renames-9-5 $RESULT { UNDO f1 UNDO f2 } fossil merge -n b -expectError test_status_list merge_renames-9-6 $RESULT " $expectedMerge REMINDER: this was a dry run - no files were actually changed. " test merge_renames-9-7 {[fossil changes] eq ""} ################################################################### |
︙ | ︙ | |||
366 367 368 369 370 371 372 | test_status_list merge_renames-10-4 $RESULT { RENAME f1 -> f2 RENAME f2 -> f1 } test_file_contents merge_renames-10-5 f1 "f1" test_file_contents merge_renames-10-6 f2 "f2" test_status_list merge_renames-10-7 [fossil changes] " | | | | 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 | test_status_list merge_renames-10-4 $RESULT { RENAME f1 -> f2 RENAME f2 -> f1 } test_file_contents merge_renames-10-5 f1 "f1" test_file_contents merge_renames-10-6 f2 "f2" test_status_list merge_renames-10-7 [fossil changes] " RENAMED f1 -> f2 RENAMED f2 -> f1 BACKOUT [commit_id trunk] " fossil commit -m "swap back" ;# V fossil merge b test_status_list merge_renames-10-8 $RESULT { UPDATE f1 |
︙ | ︙ | |||
493 494 495 496 497 498 499 | ADD f2 } fossil merge trunk fossil commit -m "merge trunk" --tag c4 fossil mv --hard f2 f2n test_status_list merge_renames-13-3 $RESULT " RENAME f2 f2n | | | 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 | ADD f2 } fossil merge trunk fossil commit -m "merge trunk" --tag c4 fossil mv --hard f2 f2n test_status_list merge_renames-13-3 $RESULT " RENAME f2 f2n MOVED_FILE [file normalize $repoDir]/f2 " fossil commit -m "renamed f2->f2n" --tag c5 fossil update trunk fossil merge b test_status_list merge_renames-13-4 $RESULT {ADDED f2n} fossil commit -m "merge f2n" --tag m1 --tag c6 |
︙ | ︙ |
Changes to test/merge_warn.test.
︙ | ︙ | |||
38 39 40 41 42 43 44 | write_file f4 "f4" fossil add f4 fossil commit -m "add f4" fossil update trunk write_file f1 "f1.1" write_file f3 "f3.1" | | > | | | | < | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | write_file f4 "f4" fossil add f4 fossil commit -m "add f4" fossil update trunk write_file f1 "f1.1" write_file f3 "f3.1" fossil merge --integrate mrg -expectError test_status_list merge_warn-1 $RESULT { WARNING: 1 unmanaged files were overwritten WARNING: 2 merge conflicts DELETE f1 MERGE f2 ADDED f3 (overwrites an unmanaged file), original copy backed up locally WARNING: local edits lost for f1 } test merge_warn-2 { [string first "ignoring --integrate: mrg is not a leaf" $RESULT]>=0 } ############################################################################### |
︙ | ︙ |
Changes to test/release-checklist.wiki.
︙ | ︙ | |||
45 46 47 48 49 50 51 | <li><p> Shift-click on each of the links in [./fileage-test-1.wiki] and verify correct operation of the file-age computation. <li><p> Verify correct name-change tracking behavior (no net changes) for: | < | | | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | <li><p> Shift-click on each of the links in [./fileage-test-1.wiki] and verify correct operation of the file-age computation. <li><p> Verify correct name-change tracking behavior (no net changes) for: <pre><b>fossil test-name-changes --debug b120bc8b262ac 374920b20944b </b></pre> <li><p> Compile for all of the following platforms: <ol type="a"> <li> Linux x86 <li> Linux x86_64 <li> Mac x86 |
︙ | ︙ |
Changes to test/revert.test.
︙ | ︙ | |||
96 97 98 99 100 101 102 | # Test with a single filename argument # revert-test 1-2 f0 { UNMANAGE f0 } -changes { DELETED f1 EDITED f2 | | | | | 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | # Test with a single filename argument # revert-test 1-2 f0 { UNMANAGE f0 } -changes { DELETED f1 EDITED f2 RENAMED f3 -> f3n } -addremove { ADDED f0 } -exists {f0 f2 f3n} -notexists f3 revert-test 1-3 f1 { REVERT f1 } -changes { ADDED f0 EDITED f2 RENAMED f3 -> f3n } -exists {f0 f1 f2 f3n} -notexists f3 revert-test 1-4 f2 { REVERT f2 } -changes { ADDED f0 DELETED f1 RENAMED f3 -> f3n } -exists {f0 f2 f3n} -notexists {f1 f3} # Both files involved in a rename are reverted regardless of which filename # is used as an argument to 'fossil revert' # revert-test 1-5 f3 { REVERT f3 |
︙ | ︙ |
Added test/rewrite-test-output.tcl.
#!/usr/bin/env tclsh
# Script to anonymise test results for comparison.
# - Replaces hashes, pids and similar with fixed strings
# - Rewrites temporary paths to standardise them in output

# Pick up options
set EXTRA 0
set i [lsearch $argv -extra]
while { $i >= 0 } {
  incr EXTRA
  set argv [lreplace $argv $i $i]
  set i [lsearch $argv -extra]
}

# With no arguments or "-", use stdin.
set fname "-"
if { [llength $argv] > 0 } {
  set fname [lindex $argv 0]
}

# Any -options, or an empty first argument, is an error.
if { [llength $argv] > 1 || [regexp {^-.+} $fname] } {
  puts stderr "Error: argument error"
  puts stderr "usage: \[-extra\] [file tail $argv0] ?FILE?"
  puts stderr "  Rewrite test output to ease comparison of outputs."
  puts stderr "  With -extra, more output is rewritten, as are summaries,"
  puts stderr "  to make diff(1) more useful across runs and platforms."
  exit 1
} elseif { $fname ne "-" && ! [file exists $fname] } {
  puts stderr "File does not exist: '$fname'"
  exit 1
}

proc common_rewrites { line testname } {
  # Normalise the fossil commands with path as just fossil
  regsub {^(?:[A-Z]:)?/.*?/fossil(?:\.exe)? } $line {fossil } line
  if {[string match "Usage: *" $line]} {
    regsub {^(Usage: )/.*?/fossil(?:\.exe)? } $line {\1fossil } line
    regsub {^(Usage: )[A-Z]:\\.*?\\fossil(?:\.exe)? } $line {\1fossil } line
  }

  # Accept 40 and 64 byte hashes as such
  regsub -all {[[:<:]][0-9a-f]{40}[[:>:]]} $line HASH line
  regsub -all {[[:<:]][0-9a-f]{64}[[:>:]]} $line HASH line

  # Date and time
  regsub -all {[[:<:]]\d{4}-\d\d-\d\d \d\d:\d\d:\d\d[[:>:]]} $line {YYYY-mm-dd HH:MM:SS} line
  if { [lsearch -exact {"amend" "wiki"} $testname] >= 0 } {
    # With embedded T and milliseconds
    regsub { \d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d{3}$} $line { YYYY-mm-ddTHH:MM:SS.NNN} line
  }
  if { [lsearch -exact {"amend" "th1-hooks" "wiki"} $testname] >= 0 } {
    regsub {[[:<:]]\d{4}-\d\d-\d\d[[:>:]]} $line {YYYY-mm-dd} line
  }

  # Timelines have HH:MM:SS [HASH], but don't mess with the zero'ed version.
regsub {^(?!00:00:00 \[0000000000\])\d\d:\d\d:\d\d \[[0-9a-f]{10}\] } $line {HH:MM:SS [HASH] } line # Temporary directories regsub -all {(?:[A-Z]:)?/.*?/repo_\d+/\d+_\d+} $line {/TMP/repo_PID/SEC_SEQ} line # Home directories only seem present with .fossil or _fossil. Simplify to .fossil. regsub -all {(?:[A-Z]:)?/.*?/home_\d+/[._]fossil[[:>:]]} $line {/TMP/home_PID/.fossil} line # Users in output regsub { (\(user: )[^\)]*\)$} $line { \1USER)} line return $line } # # tests/tests_unix/tests_windows contain tuples of # # 1. A regular expression to match current line # 2. A substitution for the current line # # Some common patterns applicable to multiples tests are appended below. # # The common_rewrites procedure is run first, so use e.g. HASH as needed. # dict set tests "amend" { {^(fossil artifact) [0-9a-f]{10}} {\1 HASH} {^U [^ ]+$} {U USER} {^Z [0-9a-f]{32}$} {Z CHECKSUM} {^(ed -s \./ci-comment-).*?(\.txt)$} {\1UNIQ\2} {^(fossil amend HASH -date \{?)\d\d/\d\d/\d{4}} {\1dd/mm/YYYY} {^(fossil amend HASH -date \{.* )\d{4}(\})$} {\1YYYY\2} {^(fossil amend HASH -date \{.* )\d\d:} {\1HH:} {^(fossil amend HASH -date \{)[A-Z][a-z]{2} [A-Z][a-z]{2} [ 0-9]\d } {\1Day Mon dd } {(\] Edit \[)[0-9a-f]{16}.[0-9a-f]{10}(\]: )} {\1HASH1|HASH2\2} {(\] Edit \[.*?&dp=)[0-9a-f]{16}} {\1dp=HASH} } dict set tests "cmdline" { {^(fossil test-echo --args) .*/} {\1 /TMP/} {^(g\.nameOfExe =) \[[^\]]+[/\\]fossil(?:\.exe)?\]$} {\1 [/PATH/FOSSILCMD]} {^(argv\[0\] =) \[[^\]]+[/\\]fossil(?:\.exe)?\]$} {\1 [/PATH/FOSSILCMD]} } dict set tests "contains-selector" { {^(fossil test-contains-selector) .*?/(compare-selector.css )} {\1 /TMP/\2} } dict set tests "json" { {^(Content-Length) \d+$} {\1 LENGTH} {^(Cookie: fossil-)[0-9a-f]{16}(\=HASH%2F)\d+\.\d+(%2Fanonymous)$} {\1CODE\2NOW\3} {^(GET /json/cap\?authToken\=HASH)/\d+\.\d+/(anonymous )} {\1/NOW/\2} {^(Cookie: fossil-)[0-9a-f]{16}\=[0-9A-F]{50}%2F[0-9a-f]{16}%2F(.*)$} {\1CODE=SHA1%2FCODE%2F\2} {("authToken":").+?(")} {\1AUTHTOKEN\2} 
{("averageArtifactSize":)\d+()} {\1SIZE\2} {("compiler":").+?(")} {\1COMPILER\2} {("loginCookieName":").+?(")} {\1COOKIE\2} {("manifestVersion":"\[)[0-9a-f]{10}(\]")} {\1HASH\2} {("manifestYear":")\d{4}(")} {\1YYYY\2} {("name":").+?(")} {\1NAME\2} {("password":")[0-9a-f]+(")} {\1PASSWORD\2} {("projectCode":")[0-9a-f]{40}(")} {\1HASH\2} {("procTimeMs":)\d+} {\1MSEC} {("procTimeUs":)\d+} {\1USEC} {("releaseVersion":")\d+\.\d+(")} {\1VERSION\2} {("releaseVersionNumber":")\d+(")} {\1VERSION_NUMBER\2} {("timestamp":)\d+} {\1SEC} {("seed":)\d+()} {\1SEED\2} {("uid":)\d+()} {\1UID\2} {("uncompressedArtifactSize":)\d+()} {\1SIZE\2} {("user":").+?(")} {\1USER\2} {("version":"YYYY-mm-dd HH:MM:SS )\[[0-9a-f]{10}\] \(\d+\.\d+\.\d+\)"} {\1[HASH] (major.minor.patch)} {^(Date:) [A-Z][a-z]{2}, \d\d? [A-Z][a-z]{2} \d{4} \d\d:\d\d:\d\d [-+]\d{4}$} {\1 Day, dd Mon YYYY HH:MM:SS TZ} } dict set tests "merge_renames" { {^(size: {7})\d+( bytes)$} {\1N\2} {^(type: {7}Check-in by ).+?( on YYYY-mm-dd HH:MM:SS)$} {\1USER\2} } dict set tests "set-manifest" { {^(project-code: )[0-9a-f]{40}$} {\1HASH} line } dict set tests "stash" { {^(---|\+\+\+) NUL$} {\1 /dev/null} {(^ 1: \[)[0-9a-f]{14}(\] on YYYY-mm-dd HH:MM:SS)$} {\1HASH\2} {(^ 1: \[)[0-9a-f]{14}(\] from YYYY-mm-dd HH:MM:SS)$} {\1HASH\2} } dict set tests "th1" { {^(fossil test-th-source) (?:[A-Z]:)?.*?/(th1-)\d+([.]th1)$} {\1 /TMP/\2PID\3} {^(?:[A-Z]:)?[/\\].*?[/\\]fossil(?:\.exe)?$} {/PATH/FOSSILCMD} {[[:<:]](Content-Security-Policy[[:>:]].*'nonce-)[0-9a-f]{48}(';)} {\1NONCE\2} {^(<link rel="stylesheet" href="/style.css\?id=)[0-9a-f]+(" type="text/css">)$} {\1ID\2} {^\d+\.\d{3}(s by)$} {N.MMM\1} {^(Fossil) \d+\.\d+ \[[0-9a-f]{10}\] (YYYY-mm-dd HH:MM:SS)$} {\1 N.M [HASH] \2} {^(<script nonce=")[0-9a-f]{48}(">/\* style\.c:)\d+} {\1NONCE\2LINENO} } dict set tests "th1-docs" { {^(check-ins: ).*} {\1COUNT} {^(local-root: ).*} {\1/PATH/} {^(repository: ).*} {\1/PATH/REPO} {^(comment: ).*} {\1/COMMENT/} {^(tags: ).*} {\1/TAGS/} {(--ipaddr 
127\.0\.0\.1) .*? (--localauth)} {\1 REPO \2} } dict set tests "th1-hooks" { {^(?:/[^:]*/fossil|[A-Z]:\\[^:]*\\fossil\.exe): (unknown command:|use \"help\")} {fossil: \1} {^(project-code: )[0-9a-f]{40}$} {\1HASH} } dict set tests "th1-tcl" { {^(fossil test-th-render --open-config) \{?.*?[/\\]test[/\\]([^/\\]*?)\}?$} {\1 /CHECKOUT/test/\2} {^(fossil)(?:\.exe)?( 3 \{test-th-render --open-config )(?:\{[A-Z]:)?[/\\].*?[/\\]test[/\\](th1-tcl9.txt\})\}?$} {\1\2/CHECKOUT/test/\3} {^\d{10}$} {SEC} } dict set tests "unversioned" { {^(fossil user new uvtester.*) \d+$} {\1 PASSWORD} {^(fossil .*http://uvtester:)\d+(@localhost:)\d+} {\1PASSWORD\2PORT} {^(Pull from http://uvtester@localhost:)\d+} {\1PORT} {^(ERROR \(1\): Usage:) .*?[/\\]fossil(?:\.exe)? (unversioned)} {\1 /PATH/fossil \2} {^(Started Fossil server, pid \")\d+(\", port \")\d+} {\1PID\2PORT} {^(Now in client directory \")(?:[A-Z]:)?/.*?/uvtest_\d+_\d+\"} {\1/TMP/uvtest_SEC_SEQ} {^(Stopped Fossil server, pid \")\d+(\", using argument \")(?:\d+|[^\"]*\.stopper)(\")} {\1PID\2PID_OR_SCRIPT\3} {^(This is unversioned file #4\.) \d+ \d+} {\1 PID SEC} {^(This is unversioned file #4\. PID SEC) \d+ \d+} {\1 PID SEC} {^[0-9a-f]{12}( YYYY-mm-dd HH:MM:SS *)(\d+)( *)\2( unversioned4.txt)$} {HASH \1SZ\3SZ\4} {^[0-9a-f]{40}$} {\1HASH} {^((?:Clone|Pull)? 
done, wire bytes sent: )\d+( received: )\d+( remote: )(?:127\.0.0\.1|::1)$} {\1SENT\2RECV\3LOCALIP} {^(project-id: )[0-9a-f]{40}$} {\1HASH} {^(server-id: )[0-9a-f]{40}$} {\1HASH} {^(admin-user: uvtester \(password is ").*("\))$} {\1PASSWORD\2} {^(repository: ).*?/uvtest_\d+_\d+/(uvrepo.fossil)$} {\1/TMP/uvtest_SEC_SEQ/\2} {^(local-root: ).*?/uvtest_\d+_\d+/$} {\1/TMP/uvtest_SEC_SEQ/} {^(project-code: )[0-9a-f]{40}$} {\1HASH} } dict set tests "utf" { {^(fossil test-looks-like-utf) (?:[A-Z]:)?/.*?/([^/\\]*?)\}?$} {\1 /TMP/test/\2} {^(File ")(?:[A-Z]:)?/.*?/(utf-check-\d+-\d+-\d+-\d+.jnk" has \d+ bytes\.)$} {\1/TMP/\2} } dict set tests "wiki" { {^(fossil (?:attachment|wiki) .*--technote )[0-9a-f]{21}$} {\1HASH} {^(fossil (?:attachment|wiki) .* (?:a13|f15|fa) --technote )[0-9a-f]+$} {\1ID} {^[0-9a-f]{40}( YYYY-mm-dd HH:MM:SS)} {HASH\1} {(\] Add attachment \[/artifact/)[0-9a-f]{16}(|)} {\1HASH\2} { (to tech note \[/technote/)[0-9a-f]{16}\|[0-9a-f]{10}(\] \(user:)} {\1HASH1|HASH2\2} {^(ambiguous tech note id: )[0-9a-f]+$} {\1ID} {^(Attached fa to tech note )[0-9a-f]{21}(?:[0-9a-f]{19})?\.$} {\1HASH.} {^(Date:) [A-Z][a-z]{2}, \d\d? 
[A-Z][a-z]{2} \d{4} \d\d:\d\d:\d\d [-+]\d{4}$} {\1 Day, dd Mon YYYY HH:MM:SS TZ} {(Content-Security-Policy.*'nonce-)[0-9a-f]{48}(';)} {\1NONCE\2} {^(<link rel="stylesheet" href="/style.css\?id=)[0-9a-f]+(" type="text/css">)$} {\1ID\2} {^(added by )[^ ]*( on)$} {\1USER\2} {^(<script nonce=['\"])[0-9a-f]{48}(['\"]>/\* [a-z]+\.c:)\d+} {\1NONCE\2LINENO} {^(<script nonce=['\"])[0-9a-f]{48}(['\"]>)$} {\1NONCE\2} {^(projectCode: ")[0-9a-f]{40}(",)$} {\1HASH\2} {^\d+\.\d+(s by)$} {N.SUB\1} {^(window\.fossil.version = ")\d+\.\d+ \[[0-9a-f]{10}\] (YYYY-mm-dd HH:MM:SS(?: UTC";)?)$} {\1N.M [HASH] \2} {^(Fossil) \d+\.\d+ \[[0-9a-f]{10}\]( YYYY-mm-dd HH:MM:SS)$} {\1 N.M [HASH]\2} {^(type: Wiki-edit by ).+?( on YYYY-mm-dd HH:MM:SS)$$} {\1USER\2} {^(size: )\d+( bytes)$} {\1N\2} {^U [^ ]+$} {U USER} {^Z [0-9a-f]{32}$} {Z CHECKSUM} } # # Some pattersn are used in multiple groups # set testnames {"th1" "th1-docs" "th1-hooks"} set pat {^((?:ERROR \(1\): )?/[*]{5} Subprocess) \d+ (exit)} set sub {\1 PID \2} foreach testname $testnames { dict lappend tests $testname $pat $sub } set testnames {"th1-docs" "th1-hooks"} set pat {(?:[A-Z]:)?/.*?/(test-http-(?:in|out))-\d+-\d+-\d+(\.txt)} set sub {/TMP/\1-PID-SEQ-SEC\2} foreach testname $testnames { dict lappend tests $testname $pat $sub } set testnames {"json" "th1" "wiki"} set pat {^(Content-Length:) \d+$} set sub {\1 LENGTH} foreach testname $testnames { dict lappend tests $testname $pat $sub } set testnames {"th1" "wiki"} set pat {^\d+\.\d+(s by)$} set sub {N.SUB\1} foreach testname $testnames { dict lappend tests $testname $pat $sub } # # Main # if { $fname eq "-" } { set fd stdin } else { set fd [open $fname r] } # Platforms we detect set UNKOWN_PLATFORM 0 set UNIX 1 set WINDOWS 2 set CYGWIN 3 # One specific wiki test creates repetitive output of varying length set wiki_f13_cmd1 "fossil wiki create {timestamp of 2399999} f13 --technote 2399999" set wiki_f13_cmd2 "fossil wiki list --technote --show-technote-ids" set wiki_f13_cmd3 "fossil 
wiki export a13 --technote ID" set collecting_f3 0 set collecting_f3_verbose 0 # Collected lines for summaries in --extra mode set amend_ed_lines [list] set amend_ed_failed 0 set symlinks_lines [list] set symlinks_failed 0 set test_simplify_name_lines [list] set test_simplify_name_failed 0 # State information s we progress set check_json_empty_line 0 set lineno 0 set platform $UNKOWN_PLATFORM set prev_line "" set testname "" while { [gets $fd line] >= 0 } { incr lineno if { $lineno == 1 } { if { [string index $line 0] in {"\UFFEF" "\UFEFF"} } { set line [string range $line 1 end] } } # Remove RESULT status while matching (inserted again in output). # If collecting lines of output, include $result_prefix as needed. regexp {^(RESULT \([01]\): )?(.*)} $line match result_prefix line if { [regsub {^\*{5} ([^ ]+) \*{6}$} $line {\1} new_testname] } { # Pick up test name for special handling below set testname "$new_testname" } elseif { [regexp {^\*{5} End of } $line] } { # Test done. Handle --extra before resetting. 
if { $EXTRA } { if { $testname eq "symlinks" } { if { $symlinks_failed } { foreach l $symlinks_lines { puts "$l" } } else { puts "All symlinks tests OK (not run on Windows)" } } regsub {(: )\d+( errors so far)} $line {\1N\2} line } set testname "" } elseif { $testname ne "" } { if { $platform == $UNKOWN_PLATFORM } { if { [regexp {^[A-Z]:/.*?/fossil\.exe } $line] } { set platform $WINDOWS } elseif { [regexp {^/.*?/fossil\.exe } $line] } { # No drive, but still .exe - must be CYGWIN set platform $CYGWIN } elseif { [regexp {^/.*?/fossil } $line] } { set platform $UNIX } } # Do common and per testname rewrites set line [common_rewrites $line $testname] if { [dict exists $tests $testname] } { foreach {pat sub} [dict get $tests $testname] { regsub $pat $line $sub line } } # On Windows, HTTP headers may get printed with an extra newline if { $testname eq "json" } { if { $check_json_empty_line == 1 } { if { "$result_prefix$line" eq "" } { set check_json_empty_line 2 continue } set check_json_empty_line 0 } elseif { [regexp {^(?:$|GET |POST |[A-Z][A-Za-z]*(?:-[A-Z][A-Za-z]*)*: )} $line] } { set check_json_empty_line 1 } else { if { $check_json_empty_line == 2 } { # The empty line we skipped was meant to be followed by a new # HTTP header or empty line, but it was not. 
puts "" } set check_json_empty_line 0 } } # Summarise repetitive output of varying length for f13 in wiki test if { $testname eq "wiki" } { if { $collecting_f3 == 2 } { if { $collecting_f3_verbose == 1 && [regexp {^HASH } $line] } { incr collecting_f3_verbose } elseif { $line eq $wiki_f13_cmd3 } { incr collecting_f3 puts "\[...\]" } else { continue } } elseif { $collecting_f3 == 1 } { if { $line eq $wiki_f13_cmd2 } { incr collecting_f3 } elseif { $collecting_f3_verbose == 0 } { incr collecting_f3_verbose } } elseif { $line eq $wiki_f13_cmd1 } { incr collecting_f3 } } if { $EXTRA } { if { $line eq "ERROR (0): " && $platform == $WINDOWS } { if { [string match "fossil http --in *" $prev_line] } { continue } } if { $testname eq "amend" } { # The amend-comment-5.N tests are not run on Windows if { $line eq "fossil amend {} -close" } { if { $amend_ed_failed } { foreach l $amend_ed_lines { puts "$l" } } else { puts "All amend tests based on ed -s OK (not run on Windows)" } set amend_ed_lines [list] } elseif { [llength $amend_ed_lines] } { if { [regexp {^test amend-comment-5\.\d+ (.*)} $line match status] } { lappend amend_ed_lines "$result_prefix$line" if { $status ne "OK" } { incr amend_ed_failed } continue } elseif { [string range $line 0 4] eq "test " } { # Handle change in tests by simply emitting what we got foreach l $amend_ed_lines { puts "$l" } set amend_ed_lines [list] } else { lappend amend_ed_lines "$result_prefix$line" continue } } elseif { $line eq "fossil settings editor {ed -s}" } { lappend amend_ed_lines "$result_prefix$line" continue } } elseif { $testname eq "cmdline" } { if { [regexp {^(fossil test-echo) (.*)} $line match test args] } { if { ($platform == $UNIX && $args in {"*" "*.*"}) || ($platform == $WINDOWS && $args eq "--args /TMP/fossil-cmd-line-101.txt") || ($platform == $CYGWIN && $args in {"*" "*.*"}) } { set line "$test ARG_FOR_PLATFORM" } } } elseif { $testname eq "commit-warning" } { if { [regexp {^(micro-smile|pale facepalm) .*} $line match 
desc] } { set line "$desc PLATFORM_SPECIFIC_BYTES" } } elseif { $testname eq "file1" } { # test-simplify-name with question marks is specific to Windows # They all immediately preceed "fossil test-relative-name --chdir . ." if { $line eq "fossil test-relative-name --chdir . ." } { if { $test_simplify_name_failed } { foreach l $test_simplify_name_lines { puts "$l" } } else { puts "ALL Windows specific test-relative-name tests OK (if on Windows)" } set test_simplify_name_lines [list] } elseif { [regexp {^fossil test-simplify-name .*([/\\])\?\1} $line] } { lappend test_simplify_name_lines $line continue } elseif { [llength $test_simplify_name_lines] } { if { [regexp {^test simplify-name-\d+ (.*)} $line match status] } { if { $status ne "OK" } { incr test_simplify_name_failed } } lappend test_simplify_name_lines "$result_prefix$line" continue } } elseif { $testname eq "settings-repo" } { if { [regexp {^fossil test-th-eval (?:--open-config )?\{setting case-sensitive\}$} $prev_line] } { if { ($platform == $UNIX && $line eq "on") || ($platform == $WINDOWS && $line eq "off") || ($platform == $CYGWIN && $line eq "off") } { set line "EXPECTED_FOR_PLATFORM" } } } elseif { $testname eq "symlinks" } { # Collect all lines and post-process at the end lappend symlinks_lines "$result_prefix$line" if { [regexp {^test symlinks-[^ ]* (.*)} $line match status] } { if { $status ne "OK" } { #TODO: incr symlinks_failed } } continue } elseif { $testname in {"th1" "th1-docs" "th1-hooks"} } { # Special case that spans a couple of tests # "Subprocess PID exit(0)" is sent on stderr on Unix. 
On Windows, there is no output if { [regexp {^(ERROR \(1\): )?/\*{5} Subprocess PID exit\(0\) \*{5}/$} $line match prefix] } { if { $prefix eq "" } { continue } elseif { $prefix eq "ERROR (1): " } { set line "RESULT (0): " } } elseif { $testname eq "th1" } { if { [regexp {^fossil test-th-eval --vfs ([^ ]+) \{globalState vfs\}$} $line match vfs] } { if { ($platform == $UNIX && $vfs == "unix-dotfile") || ($platform == $WINDOWS && $vfs == "win32-longpath") || ($platform == $CYGWIN && $vfs == "win32-longpath") } { regsub $vfs $line {EXEPECTED_VFS} line } } elseif { $prev_line eq "fossil test-th-eval --vfs EXEPECTED_VFS {globalState vfs}" } { # Replace $vfs from previous line regsub "^$vfs\$" $line {EXEPECTED_VFS} line } elseif { $prev_line eq "fossil test-th-eval {set tcl_platform(platform)}" } { if { $platform == $UNIX } { regsub {^unix$} $line {EXPECTED_PLATFORM} line } elseif { $platform == $WINDOWS } { regsub {^windows$} $line {EXPECTED_PLATFORM} line } elseif { $platform == $CYGWIN } { regsub {^unix$} $line {EXPECTED_PLATFORM} line } } elseif { [string match "fossil test-th-eval --th-trace *" $prev_line] } { if { ($result_prefix eq "RESULT (1): " && $line eq "") || ($result_prefix eq "" && $line eq "ERROR (0): ") } { set result_prefix "" set line "RESULT (0): / ERROR (1): " } } } elseif { $testname eq "th1-docs" } { # In th1-docs, the fossil check-out is exposed in various states. 
regsub {(^project-code:) CE59BB9F186226D80E49D1FA2DB29F935CCA0333} $line {\1 HASH} line if { [regexp {^merged-from: HASH YYYY-mm-dd HH:MM:SS UTC$} $line] } { continue } } } } } elseif { $EXTRA } { # Fix up summaries to be generic and easy to diff(1) if { [regsub {(^\*{5} (Final|Ignored) results: )\d+} $line {\1N} line] } { regsub {\d+} $line {N} line } elseif { [regexp {^(\*{5} (?:Considered failure|Ignored failure|Skipped test))s: (.*)} $line match desc vals] } { if { $vals ne ""} { foreach val [split $vals " "] { puts "$desc: $val" } continue } } } # Not exactly correct if we continue'd, but OK for the purpose set prev_line "$result_prefix$line" puts "$prev_line" } |
Changes to test/set-manifest.test.
︙ | ︙ | |||
44 45 46 47 48 49 50 | test_setup #### Verify classic behavior of the manifest setting # Setting is off by default, and there are no extra files. fossil settings manifest test "set-manifest-1" {[regexp {^manifest *$} $RESULT]} | | | | 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | test_setup #### Verify classic behavior of the manifest setting # Setting is off by default, and there are no extra files. fossil settings manifest test "set-manifest-1" {[regexp {^manifest *$} $RESULT]} set filelist [lsort [glob -nocomplain manifest*]] test "set-manifest-1-n" {[llength $filelist] == 0} # Classic behavior: TRUE value creates manifest and manifest.uuid set truths [list true on 1] foreach v $truths { fossil settings manifest $v test "set-manifest-2-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-2-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [lsort [glob manifest*]] test "set-manifest-2-$v-n" {[llength $filelist] == 2} foreach f $filelist { test "set-manifest-2-$v-f-$f" {[file isfile $f]} } } # ... and manifest.uuid is the checkout's hash |
︙ | ︙ | |||
86 87 88 89 90 91 92 | # Classic behavior: FALSE value removes manifest and manifest.uuid set falses [list false off 0] foreach v $falses { fossil settings manifest $v test "set-manifest-3-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-3-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} | | | | | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | # Classic behavior: FALSE value removes manifest and manifest.uuid set falses [list false off 0] foreach v $falses { fossil settings manifest $v test "set-manifest-3-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-3-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [lsort [glob -nocomplain manifest*]] test "set-manifest-3-$v-n" {[llength $filelist] == 0} } # Classic behavior: unset removes manifest and manifest.uuid fossil unset manifest test "set-manifest-4" {$RESULT eq ""} fossil settings manifest test "set-manifest-4-a" {[regexp {^manifest *$} $RESULT]} set filelist [lsort [glob -nocomplain manifest*]] test "set-manifest-4-n" {[llength $filelist] == 0} ##### Tags Manifest feature extends the manifest setting # Manifest Tags: use letters r, u, and t to select each of manifest, # manifest.uuid, and manifest.tags files. set truths [list r u t ru ut rt rut] foreach v $truths { fossil settings manifest $v test "set-manifest-5-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-5-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [lsort [glob manifest*]] test "set-manifest-5-$v-n" {[llength $filelist] == [string length $v]} foreach f $filelist { test "set-manifest-5-$v-f-$f" {[file isfile $f]} } } # Quick check for tags applied in trunk |
︙ | ︙ |
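A recurring pattern in the set-manifest.test changes above is counting generated files with `glob -nocomplain` plus `llength` instead of testing each file individually, so a missing file cannot raise a glob error. A self-contained sketch of that check — the scratch file names are illustrative only and are cleaned up at the end:

```tcl
# Create two scratch files, then verify the glob/llength counting
# pattern used by the set-manifest tests.
close [open manifest w]
close [open manifest.uuid w]

# -nocomplain returns an empty list (rather than erroring) when
# nothing matches, which is why the tests use it after "unset".
set filelist [lsort [glob -nocomplain manifest*]]
puts [llength $filelist]   ;# 2

# Remove the scratch files so the sketch leaves nothing behind.
file delete manifest manifest.uuid
```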
Changes to test/settings-repo.test.
︙ | ︙ | |||
38 39 40 41 42 43 44 | set all_settings [get_all_settings] foreach name $all_settings { # # HACK: Make 100% sure that there are no non-default setting values # present anywhere. # | > > > | > | | | | | | | | | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | set all_settings [get_all_settings] foreach name $all_settings { # # HACK: Make 100% sure that there are no non-default setting values # present anywhere. # if {$name eq "manifest"} { fossil unset $name --exact --global -expectError } else { fossil unset $name --exact --global } fossil unset $name --exact # # NOTE: Query for the hard-coded default value of this setting and # save it. # fossil test-th-eval "setting $name" set defaults($name) [normalize_result] } ############################################################################### fossil settings bad-setting some_value -expectError test settings-set-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil settings bad-setting some_value --global -expectError test settings-set-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### fossil unset bad-setting -expectError test settings-unset-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil unset bad-setting --global -expectError test settings-unset-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### fossil settings ssl some_value -expectError test settings-set-ambiguous-local { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } fossil settings ssl some_value --global -expectError test settings-set-ambiguous-global { 
[normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } ############################################################################### fossil unset ssl -expectError test settings-unset-ambiguous-local { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } fossil unset ssl --global -expectError test settings-unset-ambiguous-global { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } ############################################################################### |
︙ | ︙ | |||
240 241 242 243 244 245 246 | [regexp -- [string map [list %name% $name] $pattern(5)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-versionable-$name { | | | 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 | [regexp -- [string map [list %name% $name] $pattern(5)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-versionable-$name { $data eq "" } file delete $fileName fossil settings $name --exact set data [normalize_result] |
︙ | ︙ |
Changes to test/settings.test.
︙ | ︙ | |||
90 91 92 93 94 95 96 | set data [normalize_result] test settings-query-local-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } | > > > | > | | | 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 | set data [normalize_result] test settings-query-local-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } if {$name eq "manifest"} { fossil settings $name --exact --global -expectError } else { fossil settings $name --exact --global } set data [normalize_result] if {$name eq "manifest"} { test settings-query-global-$name { $data eq "cannot set 'manifest' globally" } } else { test settings-query-global-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } } } ############################################################################### fossil settings bad-setting -expectError test settings-query-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil settings bad-setting --global -expectError test settings-query-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### test_cleanup |
Changes to test/stash.test.
︙ | ︙ | |||
139 140 141 142 143 144 145 | test stash-1-list-1 {[regexp {^1: \[[0-9a-z]+\] on } [first_data_line]]} test stash-1-list-2 {[regexp {^\s+stash 1\s*$} [second_data_line]]} set diff_stash_1 {DELETE f1 Index: f1 ================================================================== --- f1 | | | | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 | test stash-1-list-1 {[regexp {^1: \[[0-9a-z]+\] on } [first_data_line]]} test stash-1-list-2 {[regexp {^\s+stash 1\s*$} [second_data_line]]} set diff_stash_1 {DELETE f1 Index: f1 ================================================================== --- f1 +++ /dev/null @@ -1,1 +0,0 @@ -f1 CHANGED f2 --- f2 +++ f2 @@ -1,1 +1,1 @@ -f2 +f2.1 CHANGED f3n --- f3n +++ f3n ADDED f0 Index: f0 ================================================================== --- /dev/null +++ f0 @@ -0,0 +1,1 @@ +f0} ######## # fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? # fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS? |
︙ | ︙ | |||
183 184 185 186 187 188 189 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 | | | | 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 RENAMED f3 -> f3n } -addremove { DELETED f1 } -exists {f0 f2 f3n} -notexists {f1 f3} # Confirm there is no longer a stash saved fossil stash list test stash-2-list {[first_data_line] eq "empty stash"} # Test stashed mv without touching the file system # Issue reported by email to fossil-users # from Warren Young, dated Tue, 9 Feb 2016 01:22:54 -0700 # with checkin [b8c7af5bd9] plus a local patch on CentOS 5 # 64 bit intel, 8-byte pointer, 4-byte integer # Stashed renamed file said: # fossil: ./src/delta.c:231: checksum: Assertion '...' failed. # Should be triggered by this stash-WY-1 test. fossil checkout --force c1 fossil clean fossil mv --soft f1 f1new stash-test WY-1 {-expectError save -m "Reported 2016-02-09"} { REVERT f1 DELETE f1new } -changes { } -addremove { } -exists {f1 f2 f3} -notexists {f1new} -knownbugs {-code -result} # TODO: add tests that verify the saved stash is sensible. Possibly # by applying it and checking results. But until the SQLITE_CONSTRAINT |
︙ | ︙ | |||
263 264 265 266 267 268 269 | ADDED f3 } -exists {f1 f2 f3} -notexists {} #fossil status fossil stash show test stash-3-1-show {[normalize_result] eq {ADDED f3 Index: f3 ================================================================== | | | 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 | ADDED f3 } -exists {f1 f2 f3} -notexists {} #fossil status fossil stash show test stash-3-1-show {[normalize_result] eq {ADDED f3 Index: f3 ================================================================== --- /dev/null +++ f3 @@ -0,0 +1,1 @@ +f3}} stash-test 3-1-pop {pop} { ADDED f3 } -changes { ADDED f3 |
︙ | ︙ | |||
290 291 292 293 294 295 296 | fossil commit -m "baseline" fossil mv --hard f2 f2n test_result_state stash-3-2-mv "mv --hard f2 f2n" [concat { RENAME f2 f2n MOVED_FILE} [file normalize f2] { }] -changes { | | | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 | fossil commit -m "baseline" fossil mv --hard f2 f2n test_result_state stash-3-2-mv "mv --hard f2 f2n" [concat { RENAME f2 f2n MOVED_FILE} [file normalize f2] { }] -changes { RENAMED f2 -> f2n } -addremove { } -exists {f1 f2n} -notexists {f2} stash-test 3-2 {save -m f2n} { REVERT f2 DELETE f2n } -exists {f1 f2} -notexists {f2n} -knownbugs {-result} fossil stash show test stash-3-2-show-1 {![regexp {\sf1} $RESULT]} knownBug test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} stash-test 3-2-pop {pop} { UPDATE f1 UPDATE f2n } -changes { RENAMED f2 -> f2n } -addremove { } -exists {f1 f2n} -notexists {f2} ######## # fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? |
︙ | ︙ | |||
366 367 368 369 370 371 372 | file rename -force f3 f3n fossil mv f3 f3n stash-test 4-3 {snapshot -m "snap 3"} { } -changes { ADDED f0 DELETED f1 EDITED f2 | | | 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 | file rename -force f3 f3n fossil mv f3 f3n stash-test 4-3 {snapshot -m "snap 3"} { } -changes { ADDED f0 DELETED f1 EDITED f2 RENAMED f3 -> f3n } -addremove { } -exists {f0 f2 f3n} -notexists {f1 f3} fossil stash diff test stash-4-3-diff-CODE {!$::CODE} knownBug fossil stash show test stash-4-3-show-1 {[regexp {DELETE f1} $RESULT]} test stash-4-3-show-2 {[regexp {CHANGED f2} $RESULT]} |
︙ | ︙ |
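The stash-1-list assertions in the hunks above accept any stash id by matching the `fossil stash list` output against an anchored regular expression instead of a literal string. A minimal standalone sketch of that pattern check, using a fabricated output line (assumes a `tclsh` interpreter is available):

```shell
tclsh <<'EOF'
# Fabricated first data line of "fossil stash list" output.
set line {1: [a1b2c3d4] on 2016-02-09 01:22:54}
# Same anchored pattern as test stash-1-list-1: stash number,
# a lowercase-hex id in brackets, then the word "on".
puts [regexp {^1: \[[0-9a-z]+\] on } $line]
EOF
```

Matching only the stable parts keeps the test independent of the repository's actual hashes and timestamps; `regexp` returns 1 on a match, which is what the `test` helper checks.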
Changes to test/symlinks.test.
︙ | ︙ | |||
21 22 23 24 25 26 27 | set path [file dirname [info script]] if {$is_windows} { puts "Symlinks are not supported on Windows." test_cleanup_then_return } | < < < < < < < > > > | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | set path [file dirname [info script]] if {$is_windows} { puts "Symlinks are not supported on Windows." test_cleanup_then_return } require_no_open_checkout ############################################################################### test_setup; set rootDir [file normalize [pwd]] # Using tempHomePath, allow-symlinks will always be off at this point. fossil set allow-symlinks on fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return |
︙ | ︙ | |||
60 61 62 63 64 65 66 | test symlinks-dir-1 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-2 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-3 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-4 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} fossil add [file join $rootDir symdirA f1.txt] | > > > | > > > | | | | | | 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | test symlinks-dir-1 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-2 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-3 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-4 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} fossil add [file join $rootDir symdirA f1.txt] test symlinks-skip-dir-traversal {[normalize_result] eq \ "SKIP symdirA/f1.txt"} fossil commit -m "c1" -expectError test symlinks-empty-commit {[normalize_result] eq \ "nothing has changed; use --allow-empty to override"} ############################################################################### fossil ls test symlinks-dir-5 {[normalize_result] eq ""} ############################################################################### fossil extras test symlinks-dir-6 {[normalize_result] eq \ "subdirA/f1.txt\nsubdirA/f2.txt\nsymdirA"} ############################################################################### fossil close file delete [file join $rootDir subdirA f1.txt] test symlinks-dir-7 {[file exists [file join $rootDir subdirA f1.txt]] eq 0} test symlinks-dir-8 {[file exists [file join $rootDir symdirA f1.txt]] eq 0} test symlinks-dir-9 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-10 {[file exists [file join $rootDir symdirA f2.txt]] eq 1}
############################################################################### fossil open --force $repository set code [catch {file readlink [file join $rootDir symdirA]} result] test symlinks-dir-11 {$code == 0} test symlinks-dir-12 {$result eq [file join $rootDir subdirA]} test symlinks-dir-13 {[file exists [file join $rootDir subdirA f1.txt]] eq 0} test symlinks-dir-14 {[file exists [file join $rootDir symdirA f1.txt]] eq 0} test symlinks-dir-15 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-16 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} ############################################################################### # # TODO: Add tests for symbolic links as files here, including tests with the # "allow-symlinks" setting on and off. # ############################################################################### test_cleanup
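The symlinks-dir-11/12 checks above round-trip a directory symlink through `file readlink` and compare the result to the original target. A minimal standalone sketch of the same round trip (the /tmp scratch path is illustrative; assumes a Unix host with `tclsh` installed):

```shell
tclsh <<'EOF'
# Create a scratch area, a real directory, and a symlink pointing at it.
set root [file join /tmp symdemo[pid]]
file mkdir [file join $root subdirA]
cd $root
file link -symbolic symdirA subdirA
# file readlink returns the link target, as tests symlinks-dir-11/12 expect.
puts [file readlink symdirA]
cd /tmp
file delete -force $root
EOF
```

Note that `file link` requires the target to already exist when the link is created, which is why the directory is made first.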
Changes to test/tester.tcl.
︙ | ︙ | |||
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 | # This is the main test script. To run a regression test, do this: # # tclsh ../test/tester.tcl ../bld/fossil # # Where ../test/tester.tcl is the name of this file and ../bld/fossil # is the name of the executable to be tested. # # We use some things introduced in 8.6 such as lmap. auto.def should # have found us a suitable Tcl installation. package require Tcl 8.6 set testfiledir [file normalize [file dirname [info script]]] set testrundir [pwd] set testdir [file normalize [file dirname $argv0]] set fossilexe [file normalize [lindex $argv 0]] set is_windows [expr {$::tcl_platform(platform) eq "windows"}] if {$::is_windows} { if {[string length [file extension $fossilexe]] == 0} { append fossilexe .exe } set outside_fossil_repo [expr ![file exists "$::testfiledir\\..\\_FOSSIL_"]] } else { | > > > > > > | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | # This is the main test script. To run a regression test, do this: # # tclsh ../test/tester.tcl ../bld/fossil # # Where ../test/tester.tcl is the name of this file and ../bld/fossil # is the name of the executable to be tested. # # To run a subset of tests (i.e. only one or more of the test/*.test # scripts), append the script base names as arguments: # # tclsh ../test/tester.tcl ../bld/fossil <script-basename>... # # We use some things introduced in 8.6 such as lmap. auto.def should # have found us a suitable Tcl installation.
package require Tcl 8.6 set testfiledir [file normalize [file dirname [info script]]] set testrundir [pwd] set testdir [file normalize [file dirname $argv0]] set fossilexe [file normalize [lindex $argv 0]] set is_windows [expr {$::tcl_platform(platform) eq "windows"}] set is_cygwin [regexp {^CYGWIN} $::tcl_platform(os)] if {$::is_windows} { if {[string length [file extension $fossilexe]] == 0} { append fossilexe .exe } set outside_fossil_repo [expr ![file exists "$::testfiledir\\..\\_FOSSIL_"]] } else {
︙ | ︙ | |||
276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 | # set result [list \ access-log \ admin-log \ allow-symlinks \ auto-captcha \ auto-hyperlink \ auto-shun \ autosync \ autosync-tries \ backoffice-disable \ backoffice-logfile \ backoffice-nodelay \ binary-glob \ case-sensitive \ chat-alert-sound \ chat-initial-history \ chat-inline-images \ chat-keep-count \ chat-keep-days \ chat-poll-timeout \ clean-glob \ clearsign \ comment-format \ crlf-glob \ crnl-glob \ default-csp \ default-perms \ diff-binary \ diff-command \ dont-push \ dotfiles \ editor \ email-admin \ email-renew-interval \ email-self \ email-send-command \ email-send-db \ email-send-dir \ email-send-method \ email-send-relayhost \ email-subname \ email-url \ empty-dirs \ encoding-glob \ exec-rel-paths \ fileedit-glob \ forbid-delta-manifests \ gdiff-command \ gmerge-command \ hash-digits \ hooks \ http-port \ https-login \ ignore-glob \ keep-glob \ localauth \ lock-timeout \ main-branch \ mainmenu \ manifest \ max-cache-entry \ max-loadavg \ max-upload \ mimetypes \ mtime-changes \ pgp-command \ preferred-diff-type \ proxy \ redirect-to-https \ relative-paths \ repo-cksum \ repolist-skin \ safe-html \ self-register \ sitemap-extra \ ssh-command \ ssl-ca-location \ ssl-identity \ tclsh \ th1-setup \ | > > > > > > > > > | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 | # set result [list \ access-log \ admin-log \
allow-symlinks \ auto-captcha \ auto-hyperlink \ auto-hyperlink-delay \ auto-hyperlink-mouseover \ auto-shun \ autosync \ autosync-tries \ backoffice-disable \ backoffice-logfile \ backoffice-nodelay \ binary-glob \ case-sensitive \ chat-alert-sound \ chat-initial-history \ chat-inline-images \ chat-keep-count \ chat-keep-days \ chat-poll-timeout \ chat-timeline-user \ clean-glob \ clearsign \ comment-format \ crlf-glob \ crnl-glob \ default-csp \ default-perms \ diff-binary \ diff-command \ dont-commit \ dont-push \ dotfiles \ editor \ email-admin \ email-listid \ email-renew-interval \ email-self \ email-send-command \ email-send-db \ email-send-dir \ email-send-method \ email-send-relayhost \ email-subname \ email-url \ empty-dirs \ encoding-glob \ exec-rel-paths \ fileedit-glob \ forbid-delta-manifests \ forum-close-policy \ gdiff-command \ gmerge-command \ hash-digits \ hooks \ http-port \ https-login \ ignore-glob \ keep-glob \ large-file-size \ localauth \ lock-timeout \ main-branch \ mainmenu \ manifest \ max-cache-entry \ max-loadavg \ max-upload \ mimetypes \ mtime-changes \ mv-rm-files \ pgp-command \ preferred-diff-type \ proxy \ redirect-to-https \ relative-paths \ repo-cksum \ repolist-skin \ safe-html \ self-pw-reset \ self-register \ sitemap-extra \ ssh-command \ ssl-ca-location \ ssl-identity \ tclsh \ th1-setup \
︙ | ︙ | |||
431 432 433 434 435 436 437 438 439 440 441 442 443 444 | proc require_no_open_checkout {} { if {[info exists ::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT)] && \ $::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT) eq "YES_DO_IT"} { return } catch {exec $::fossilexe info} res if {[regexp {local-root:} $res]} { set projectName <unknown> set localRoot <unknown> regexp -line -- {^project-name: (.*)$} $res dummy projectName set projectName [string trim $projectName] regexp -line -- {^local-root: (.*)$} $res dummy localRoot set localRoot [string trim $localRoot] error "Detected an open checkout of project \"$projectName\",\ | > > | 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 | proc require_no_open_checkout {} { if {[info exists ::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT)] && \ $::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT) eq "YES_DO_IT"} { return } catch {exec $::fossilexe info} res if {[regexp {local-root:} $res]} { global skipped_tests testfile lappend skipped_tests $testfile set projectName <unknown> set localRoot <unknown> regexp -line -- {^project-name: (.*)$} $res dummy projectName set projectName [string trim $projectName] regexp -line -- {^local-root: (.*)$} $res dummy localRoot set localRoot [string trim $localRoot] error "Detected an open checkout of project \"$projectName\",\ |
︙ | ︙ | |||
468 469 470 471 472 473 474 475 476 477 478 479 | } after [expr {$try * 100}] } error "Could not delete \"$path\", error: $error" } proc test_cleanup_then_return {} { uplevel 1 [list test_cleanup] return -code return } proc test_cleanup {} { | > > | > > > > | 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 | } after [expr {$try * 100}] } error "Could not delete \"$path\", error: $error" } proc test_cleanup_then_return {} { global skipped_tests testfile lappend skipped_tests $testfile uplevel 1 [list test_cleanup] return -code return } proc test_cleanup {} { if {$::KEEP} { # To avoid errors with require_no_open_checkout, cd out of here. if {[info exists ::tempSavedPwd]} {cd $::tempSavedPwd; unset ::tempSavedPwd} return } if {![info exists ::tempRepoPath]} {return} if {![file exists $::tempRepoPath]} {return} if {![file isdirectory $::tempRepoPath]} {return} set tempPathEnd [expr {[string length $::tempPath] - 1}] if {[string length $::tempPath] == 0 || \ [string range $::tempRepoPath 0 $tempPathEnd] ne $::tempPath} { error "Temporary repository path has wrong parent during cleanup." |
︙ | ︙ | |||
502 503 504 505 506 507 508 | # Finally, attempt to gracefully delete the temporary home directory, # unless forbidden by external forces. if {![info exists ::tempKeepHome]} {delete_temporary_home} } proc delete_temporary_home {} { if {$::KEEP} {return}; # All cleanup disabled? | | | 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 | # Finally, attempt to gracefully delete the temporary home directory, # unless forbidden by external forces. if {![info exists ::tempKeepHome]} {delete_temporary_home} } proc delete_temporary_home {} { if {$::KEEP} {return}; # All cleanup disabled? if {$::is_windows || $::is_cygwin} { robust_delete [file join $::tempHomePath _fossil] } else { robust_delete [file join $::tempHomePath .fossil] } robust_delete $::tempHomePath } |
︙ | ︙ | |||
829 830 831 832 833 834 835 836 837 838 839 840 841 842 | lappend bad_test $name if {$::HALT} {exit 1} } } } set bad_test {} set ignored_test {} # Return a random string N characters long. # set vocabulary 01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" append vocabulary " ()*^!.eeeeeeeeaaaaattiioo " set nvocabulary [string length $vocabulary] proc rand_str {N} { | > | 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 | lappend bad_test $name if {$::HALT} {exit 1} } } } set bad_test {} set ignored_test {} set skipped_tests {} # Return a random string N characters long. # set vocabulary 01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" append vocabulary " ()*^!.eeeeeeeeaaaaattiioo " set nvocabulary [string length $vocabulary] proc rand_str {N} { |
︙ | ︙ | |||
993 994 995 996 997 998 999 | set inFileName [file join $::tempPath [appendArgs test-http-in- $suffix]] set outFileName [file join $::tempPath [appendArgs test-http-out- $suffix]] set data [subst [read_file $dataFileName]] write_file $inFileName $data fossil http --in $inFileName --out $outFileName --ipaddr 127.0.0.1 \ | | | 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 | set inFileName [file join $::tempPath [appendArgs test-http-in- $suffix]] set outFileName [file join $::tempPath [appendArgs test-http-out- $suffix]] set data [subst [read_file $dataFileName]] write_file $inFileName $data fossil http --in $inFileName --out $outFileName --ipaddr 127.0.0.1 \ $repository --localauth --th-trace -expectError set result [expr {[file exists $outFileName] ? [read_file $outFileName] : ""}] if {1} { catch {file delete $inFileName} catch {file delete $outFileName} } |
︙ | ︙ | |||
1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 | } error] != 0} { error "Could not write file \"$tempFile\" in directory \"$tempPath\",\ please set TEMP variable in environment, error: $error" } set tempHomePath [file join $tempPath home_[pid]] if {[catch { file mkdir $tempHomePath } error] != 0} { error "Could not make directory \"$tempHomePath\",\ please set TEMP variable in environment, error: $error" } protInit $fossilexe set ::tempKeepHome 1 foreach testfile $argv { protOut "***** $testfile ******" if { [catch {source $testdir/$testfile.test} testerror testopts] } { test test-framework-$testfile 0 protOut "!!!!! $testfile: $testerror" protOutDict $testopts } else { test test-framework-$testfile 1 } protOut "***** End of $testfile: [llength $bad_test] errors so far ******" } unset ::tempKeepHome; delete_temporary_home set nErr [llength $bad_test] if {$nErr>0 || !$::QUIET} { protOut "***** Final results: $nErr errors out of $test_count tests" 1 } if {$nErr>0} { protOut "***** Considered failures: $bad_test" 1 } set nErr [llength $ignored_test] if {$nErr>0 || !$::QUIET} { protOut "***** Ignored results: $nErr ignored errors out of $test_count tests" 1 } if {$nErr>0} { protOut "***** Ignored failures: $ignored_test" 1 } | > > > > > > > > > > > > > > > > > > > > > > > > > > | 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 | } error] != 0} { error "Could not write file \"$tempFile\" in directory \"$tempPath\",\ please set TEMP variable in environment, error: $error" } set tempHomePath [file join $tempPath
home_[pid]] # Close stdin to avoid errors on wrapped text for narrow terminals. # Closing stdin means that terminal detection returns 0 width, in turn # causing the relevant strings to be printed on a single line. # However, closing stdin makes file descriptor 0 available on some systems # and/or TCL implementations, which triggers fossil to complain about opening # databases using fd 0. Avoid this by opening the script, consuming fd 0. close stdin set possibly_fd0 [open [info script] r] if {[catch { file mkdir $tempHomePath } error] != 0} { error "Could not make directory \"$tempHomePath\",\ please set TEMP variable in environment, error: $error" } protInit $fossilexe set ::tempKeepHome 1 # Start in tempHomePath to help avoid errors with require_no_open_checkout set startPwd [pwd] cd $tempHomePath foreach testfile $argv { protOut "***** $testfile ******" if { [catch {source $testdir/$testfile.test} testerror testopts] } { test test-framework-$testfile 0 protOut "!!!!! $testfile: $testerror" protOutDict $testopts } else { test test-framework-$testfile 1 } protOut "***** End of $testfile: [llength $bad_test] errors so far ******" } cd $startPwd unset ::tempKeepHome; delete_temporary_home # Clean up the file descriptor close $possibly_fd0 set nErr [llength $bad_test] if {$nErr>0 || !$::QUIET} { protOut "***** Final results: $nErr errors out of $test_count tests" 1 } if {$nErr>0} { protOut "***** Considered failures: $bad_test" 1 } set nErr [llength $ignored_test] if {$nErr>0 || !$::QUIET} { protOut "***** Ignored results: $nErr ignored errors out of $test_count tests" 1 } if {$nErr>0} { protOut "***** Ignored failures: $ignored_test" 1 } set nSkipped [llength $skipped_tests] if {$nSkipped>0} { protOut "***** Skipped tests: $skipped_tests" 1 } if {[llength $bad_test]>0} { exit 1 }
Changes to test/th1-docs.test.
︙ | ︙ | |||
28 29 30 31 32 33 34 | fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with Tcl support." test_cleanup_then_return } | < < < < < < < < | < | < > > > > > > > > > < | | | < | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with Tcl support." test_cleanup_then_return } ############################################################################### test_setup ############################################################################### set env(TH1_ENABLE_DOCS) 1; # TH1 docs must be enabled for this test. set env(TH1_ENABLE_TCL) 1; # Tcl integration must be enabled for this test. ############################################################################### set data [fossil info] regexp -line -- {^repository: (.*)$} $data dummy repository if {[string length $repository] == 0 || ![file exists $repository]} { error "unable to locate repository" } set dataFileName [file join $::testdir th1-docs-input.txt] set origFileStat [file join $::testdir fileStat.th1] if {![file exists $origFileStat]} { error "unable to locate [$origFileStat]" } file copy $origFileStat fileStat.th1 fossil add fileStat.th1 fossil commit -m "Add fileStat.th1" ############################################################################### set RESULT [test_fossil_http \ $repository $dataFileName /doc/trunk/fileStat.th1] test th1-docs-1a {[regexp {<title>Unnamed Fossil Project: fileStat.th1</title>} $RESULT]} test th1-docs-1b {[regexp {>\[[0-9a-f]{40,64}\]<} $RESULT]} test th1-docs-1c {[regexp { contains \d+ files\.} $RESULT]} ############################################################################### test_cleanup
Changes to test/th1-hooks.test.
︙ | ︙ | |||
144 145 146 147 148 149 150 | test th1-cmd-hooks-1b {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> +++ some stuff here +++ <h1><b>command_hook timeline command_notify timeline</b></h1>}} ############################################################################### | | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | test th1-cmd-hooks-1b {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> +++ some stuff here +++ <h1><b>command_hook timeline command_notify timeline</b></h1>}} ############################################################################### fossil timeline custom3 -expectError; # NOTE: Bad "WHEN" argument. test th1-cmd-hooks-1c {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> unknown check-in or invalid date: custom3}} ############################################################################### |
︙ | ︙ | |||
194 195 196 197 198 199 200 | fossil test3 test th1-custom-cmd-3a {[string trim $RESULT] eq \ {<h1><b>command_hook test3</b></h1>}} ############################################################################### | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | fossil test3 test th1-custom-cmd-3a {[string trim $RESULT] eq \ {<h1><b>command_hook test3</b></h1>}} ############################################################################### fossil test4 -expectError test th1-custom-cmd-4a {[first_data_line] eq \ {<h1><b>command_hook test4</b></h1>}} test th1-custom-cmd-4b {[regexp -- \ {: unknown command: test4$} [second_data_line]]} |
︙ | ︙ |
Changes to test/th1-tcl.test.
︙ | ︙ | |||
75 76 77 78 79 80 81 | } ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl3.txt]] | | | | | | | | | > > > > > > > | | 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 | } ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl3.txt]] test th1-tcl-3 {$RESULT eq {<hr><p class="thmainError">ERROR:\ invalid command name "bad_command"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl4.txt]] test th1-tcl-4 {$RESULT eq {<hr><p class="thmainError">ERROR:\ divide by zero</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl5.txt]] test th1-tcl-5 {$RESULT eq {<hr><p class="thmainError">ERROR:\ Tcl command not found: bad_command</p>} || $RESULT eq {<hr><p\ class="thmainError">ERROR: invalid command name "bad_command"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl6.txt]] test th1-tcl-6 {$RESULT eq {<hr><p class="thmainError">ERROR:\ no such command: bad_command</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl7.txt]] test th1-tcl-7 {$RESULT eq {<hr><p class="thmainError">ERROR:\ syntax error in expression: "2**0"</p>}}
############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl8.txt]] test th1-tcl-8 {$RESULT eq {<hr><p class="thmainError">ERROR:\ cannot invoke Tcl command: tailcall</p>} || $RESULT eq {<hr><p\ class="thmainError">ERROR: tailcall can only be called from a proc or\ lambda</p>} || $RESULT eq {<hr><p class="thmainError">ERROR: This test\ requires Tcl 8.6 or higher.</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl9.txt]] # Under cygwin, the printed name with Usage: strips the extension if { $::is_cygwin && [file extension $fossilexe] eq ".exe" } { set fossilexeref [string range $fossilexe 0 end-4] } else { set fossilexeref $fossilexe } test th1-tcl-9 {[string trim $RESULT] eq [list [file tail $fossilexeref] 3 \ [list test-th-render --open-config [file nativename [file join $path \ th1-tcl9.txt]]]]} ############################################################################### fossil test-th-eval "tclMakeSafe a" test th1-tcl-10 {[normalize_result] eq \
︙ | ︙ |
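The cygwin branch added for th1-tcl-9 above (and reused later in unversioned.test) derives the reference program name by slicing off the four-character `.exe` suffix with `string range ... 0 end-4`. A standalone sketch of that suffix strip, using an illustrative path (assumes `tclsh`):

```shell
tclsh <<'EOF'
# Illustrative executable path as cygwin might report it.
set fossilexe /home/user/bin/fossil.exe
if {[file extension $fossilexe] eq ".exe"} {
  # Drop the 4-character ".exe" suffix, as the cygwin branch does.
  set fossilexe [string range $fossilexe 0 end-4]
}
puts [file tail $fossilexe]
EOF
```

Using `file extension` as the guard keeps the slice safe: a path without `.exe` passes through unchanged.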
Changes to test/th1.test.
︙ | ︙ | |||
728 729 730 731 732 733 734 | ############################################################################### fossil test-th-eval "trace {}" test th1-trace-1 {$RESULT eq {}} ############################################################################### | | | | | | | | > > | 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | ############################################################################### fossil test-th-eval "trace {}" test th1-trace-1 {$RESULT eq {}} ############################################################################### fossil test-th-eval --th-trace "trace {}" -expectError set normalized_result [normalize_result] regsub -- {\n/\*\*\*\*\* Subprocess \d+ exit\(\d+\) \*\*\*\*\*/} \ $normalized_result {} normalized_result if {$th1Hooks} { test th1-trace-2 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 0x0<br> ------------------- END TRACE LOG -------------------}} } else { test th1-trace-2 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 0x0<br> th1-setup {} => TH_OK<br> ------------------- END TRACE LOG -------------------}} } ############################################################################### fossil test-th-eval "trace {this is a trace message.}" test th1-trace-3 {$RESULT eq {}} ############################################################################### fossil test-th-eval --th-trace "trace {this is a trace message.}" -expectError set normalized_result [normalize_result] regsub -- {\n/\*\*\*\*\* Subprocess \d+ exit\(\d+\) \*\*\*\*\*/} \ $normalized_result {} normalized_result if {$th1Hooks} { test th1-trace-4 {$normalized_result eq \ {------------------ BEGIN TRACE LOG
------------------ th1-init 0x0 => 0x0<br> this is a trace message. ------------------- END TRACE LOG -------------------}} } else { test th1-trace-4 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 0x0<br> th1-setup {} => TH_OK<br> this is a trace message. ------------------- END TRACE LOG -------------------}} } ############################################################################### fossil test-th-eval "defHeader {Page Title Here}" test th1-defHeader-1 {$RESULT eq \ {TH_ERROR: wrong # args: should be "defHeader"}} ############################################################################### fossil test-th-eval "defHeader" test th1-defHeader-2 {[string match *<body> [normalize_result]] || \ [string match "*<body class=\"\$current_feature\ rpage-\$requested_page\ cpage-\$canonical_page\">" [normalize_result]]} ############################################################################### fossil test-th-eval "styleHeader {Page Title Here}" test th1-header-1 {$RESULT eq {TH_ERROR: repository unavailable}} ###############################################################################
︙ | ︙ | |||
1019 1020 1021 1022 1023 1024 1025 | ############################################################################### fossil test-th-eval "globalState vfs" test th1-globalState-14 {[string length $RESULT] == 0} ############################################################################### | | | 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 | ############################################################################### fossil test-th-eval "globalState vfs" test th1-globalState-14 {[string length $RESULT] == 0} ############################################################################### if {$is_windows || $is_cygwin} { set altVfs win32-longpath } else { set altVfs unix-dotfile } ############################################################################### |
︙ | ︙ | |||
1060 1061 1062 1063 1064 1065 1066 | set sorted_result [lsort $RESULT] protOut "Sorted: $sorted_result" set base_commands {anoncap anycap array artifact break breakpoint \ builtin_request_js capexpr captureTh1 catch cgiHeaderLine checkout \ combobox continue copybtn date decorate defHeader dir enable_htmlify \ enable_output encode64 error expr for foreach getParameter glob_match \ globalState hascap hasfeature html htmlize http httpize if info \ | | | | | | 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 | set sorted_result [lsort $RESULT] protOut "Sorted: $sorted_result" set base_commands {anoncap anycap array artifact break breakpoint \ builtin_request_js capexpr captureTh1 catch cgiHeaderLine checkout \ combobox continue copybtn date decorate defHeader dir enable_htmlify \ enable_output encode64 error expr for foreach getParameter glob_match \ globalState hascap hasfeature html htmlize http httpize if info \ insertCsrf lappend lindex linecount list llength lsearch markdown nonce \ proc puts query randhex redirect regexp reinitialize rename render \ repository return searchable set setParameter setting stime string \ styleFooter styleHeader styleScript submenu tclReady trace unset \ unversioned uplevel upvar utime verifyCsrf verifyLogin wiki} set tcl_commands {tclEval tclExpr tclInvoke tclIsSafe tclMakeSafe} if {$th1Tcl} { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands $tcl_commands"]} } else { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands"]} } |
︙ | ︙ |
Changes to test/unversioned.test.
︙ | ︙
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | puts "The \"sha1\" package is not available." test_cleanup_then_return } require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return } write_file unversioned1.txt "This is unversioned file #1." write_file unversioned2.txt " This is unversioned file #2. " write_file "unversioned space.txt" "\nThis is unversioned file #3.\n" write_file unversioned4.txt "This is unversioned file #4." write_file unversioned5.txt "This is unversioned file #5." set env(VISUAL) [appendArgs \ [info nameofexecutable] " " [file join $path fake-editor.tcl]] ############################################################################### | > > > > > > > > > > | | | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 | puts "The \"sha1\" package is not available." test_cleanup_then_return } require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] # Avoid delays from the backoffice. fossil set backoffice-disable 1 fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return } write_file unversioned1.txt "This is unversioned file #1." write_file unversioned2.txt " This is unversioned file #2. " write_file "unversioned space.txt" "\nThis is unversioned file #3.\n" write_file unversioned4.txt "This is unversioned file #4." write_file unversioned5.txt "This is unversioned file #5." 
set env(VISUAL) [appendArgs \ [info nameofexecutable] " " [file join $path fake-editor.tcl]] ############################################################################### # Under cygwin, the printed name with Usage: strips the extension if { $::is_cygwin && [file extension $fossilexe] eq ".exe" } { set fossilexeref [string range $fossilexe 0 end-4] } else { set fossilexeref $fossilexe } fossil unversioned -expectError test unversioned-1 {[normalize_result] eq \ [string map [list %fossil% [file nativename $fossilexeref]] {Usage: %fossil%\ unversioned add|cat|edit|export|list|revert|remove|sync|touch}]} ############################################################################### fossil unversioned list test unversioned-2 {[normalize_result] eq {}} |
︙ | ︙
310 311 312 313 314 315 316 | fossil user new uvtester "Unversioned Test User" $password fossil user capabilities uvtester oy ############################################################################### foreach {pid port outTmpFile} [test_start_server $repository stopArg] {} | > | > > | > | | | 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 | fossil user new uvtester "Unversioned Test User" $password fossil user capabilities uvtester oy ############################################################################### foreach {pid port outTmpFile} [test_start_server $repository stopArg] {} if {! $::QUIET} { puts [appendArgs "Started Fossil server, pid \"" $pid \" ", port \"" $port \".] } set remote [appendArgs http://uvtester: $password @localhost: $port /] ############################################################################### set clientDir [file join $tempPath [appendArgs \ uvtest_ [string trim [clock seconds] -] _ [getSeqNo]]] set savedPwd [pwd] file mkdir $clientDir; cd $clientDir if {! $::QUIET} { puts [appendArgs "Now in client directory \"" [pwd] \".] } write_file unversioned-client1.txt "This is unversioned client file #1." ############################################################################### fossil clone --save-http-password $remote uvrepo.fossil fossil open -f uvrepo.fossil ############################################################################### fossil unversioned list test unversioned-45 {[normalize_result] eq {}} ############################################################################### fossil_maybe_answer y unversioned sync $remote test unversioned-46 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 \n? 
done, wire bytes sent: \d+ received: \d+ remote: (?:127\.0\.0\.1|::1)} \ [normalize_result]]} ############################################################################### fossil unversioned ls test unversioned-47 {[normalize_result] eq {unversioned2.txt unversioned5.txt}} |
︙ | ︙
387 388 389 390 391 392 393 | fossil_maybe_answer y unversioned revert $remote test unversioned-52 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 | | | 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 | fossil_maybe_answer y unversioned revert $remote test unversioned-52 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 \n? done, wire bytes sent: \d+ received: \d+ remote: (?:127\.0\.0\.1|::1)} \ [normalize_result]]} ############################################################################### fossil unversioned list test unversioned-53 {[regexp \ {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30\ |
︙ | ︙
412 413 414 415 416 417 418 | fossil_maybe_answer y unversioned sync $remote test unversioned-55 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 | | > | > > | > | 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | fossil_maybe_answer y unversioned sync $remote test unversioned-55 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 \n? done, wire bytes sent: \d+ received: \d+ remote: (?:127\.0\.0\.1|::1)} \ [normalize_result]]} ############################################################################### fossil close test unversioned-56 {[normalize_result] eq {}} ############################################################################### cd $savedPwd; unset savedPwd file delete -force $clientDir if {! $::QUIET} { puts [appendArgs "Now in server directory \"" [pwd] \".] } ############################################################################### set stopped [test_stop_server $stopArg $pid $outTmpFile] if {! $::QUIET} { puts [appendArgs \ [expr {$stopped ? "Stopped" : "Could not stop"}] \ " Fossil server, pid \"" $pid "\", using argument \"" \ $stopArg \".] } ############################################################################### fossil unversioned list test unversioned-57 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 35 35\ unversioned-client1\.txt |
︙ | ︙ |
Added test/update.test.
#
# Copyright (c) 2024 Preben Guldberg <preben@guldberg.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the Simplified BSD License (also
# known as the "2-Clause License" or "FreeBSD License".)
#
# This program is distributed in the hope that it will be useful,
# but without any warranty; without even the implied warranty of
# merchantability or fitness for a particular purpose.
#
# Author contact information:
#   drh@hwaci.com
#   http://www.hwaci.com/drh/
#
############################################################################
#
# Tests for the "update" command.
#

# Track number of tests we have set up in test_update_setup. This helps ensure
# that generated files are ordered in `fossil update --verbose` mode.
set UPDATE_TEST 0

proc test_update_setup {desc} {
  global UPDATE_TEST
  incr UPDATE_TEST
  fossil revert
  fossil update
  return [format "test-%02u-%s.txt" $UPDATE_TEST $desc]
}

# The output is in file name order, so massage $RESULT to remove initial UNCHANGED
# files. Only do this if we have the expected branch information.
proc test_update {testname message changes {fossil_args ""}} {
  fossil update --verbose {*}$fossil_args
  if { [regsub {\n-{79}\nupdated-from: [0-9a-z]{40} .*} $::RESULT {} test_result ] } {
    regsub {^(?:UNCHANGED [-a-z0-9.]+\n)*} $test_result {} test_result
  } else {
    set test_result $::RESULT
  }
  test "update-message-$testname" {$message == $test_result}
  fossil changes
  test "update-changes-$testname" {$changes == $::RESULT}
}

# Use a sequence number for file content that is not important for the test.
set UPDATE_SEQ_NO 0

proc write_seq_to_file {fname} {
  global UPDATE_SEQ_NO
  incr UPDATE_SEQ_NO
  write_file $fname "$UPDATE_SEQ_NO\n"
}

# Make sure we are not in an open repository and initialize new repository
test_setup

###############################################################################

fossil update --verbose
test update-already-up-to-date {
  [regexp {^-{79}\ncheckout: .*\nchanges: +None. Already up-to-date$} $RESULT]
}

# Remaining tests are carried out in the order update_cmd() performs checks.
#
# Common approach for tests below:
#   1. Set the testname
#   2. Set the file name, done by calling test_update_setup
#   3. Set message and changes, the expected message and subsequent changes
#   4. Optionally set up and commit a common base for the next steps
#   5. Commit a change to the repository (new tip)
#   6. Update to the previous version
#   7. Make changes
#   8. Call test_update to attempt an update to tip

set testname "conflict-standard"
set fname [test_update_setup $testname]
set message "CONFLICT $fname"
set changes "EDITED $fname"
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
fossil up previous
write_seq_to_file $fname
fossil add $fname
test_update $testname $message $changes -expectError

set testname "add-overwrites"
set fname [test_update_setup $testname]
set message "ADD $fname - overwrites an unmanaged file, original copy backed up locally"
set changes ""
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
fossil up previous
write_seq_to_file $fname
test_update $testname $message $changes -expectError

set testname "add-standard"
set fname [test_update_setup $testname]
set message "ADD $fname"
set changes ""
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
fossil up previous
test_update $testname $message $changes

set testname "update-change"
set fname [test_update_setup $testname]
set message "UPDATE $fname - change to unmanaged file"
set changes "DELETED $fname"
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
write_seq_to_file $fname
fossil commit -m "Update $fname"
fossil up previous
fossil rm --hard $fname
test_update $testname $message $changes

set testname "update-standard"
set fname [test_update_setup $testname]
set message "UPDATE $fname"
set changes ""
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
write_seq_to_file $fname
fossil commit -m "Update $testname"
fossil up previous
test_update $testname $message $changes

set testname "update-missing"
set fname [test_update_setup $testname]
set message "UPDATE $fname"
set changes ""
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
write_seq_to_file $fname
fossil commit -m "Update $fname"
fossil up previous
file delete $fname
test_update $testname $message $changes

set testname "conflict-deleted"
set fname [test_update_setup $testname]
set message "CONFLICT $fname - edited locally but deleted by update"
set changes ""
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
fossil rm --hard $fname
fossil commit -m "Remove $fname"
fossil up previous
file delete $fname
test_update $testname $message $changes -expectError

set testname "remove"
set fname [test_update_setup $testname]
set message "REMOVE $fname"
set changes ""
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
fossil rm --hard $fname
fossil commit -m "Remove $fname"
fossil up previous
test_update $testname $message $changes

set testname "merge-renamed"
set fname [test_update_setup $testname]
set message "MERGE $fname -> $fname.renamed"
set changes "EDITED $fname.renamed"
write_file $fname "center\n"
fossil add $fname
fossil commit -m "Add $fname"
write_file $fname "top\ncenter\n"
fossil mv --hard $fname "$fname.renamed"
fossil commit -m "Update and rename $fname"
fossil up previous
write_file $fname "center\nbelow\n"
test_update $testname $message $changes

set testname "merge-standard"
set fname [test_update_setup $testname]
set message "MERGE $fname"
set changes "EDITED $fname"
write_file $fname "center\n"
fossil add $fname
fossil commit -m "Add $fname"
write_file $fname "top\ncenter\n"
fossil commit -m "Update $fname"
fossil up previous
write_file $fname "center\nbelow\n"
test_update $testname $message $changes

# TODO: test for "Cannot merge symlink" would be platform dependent

set testname "merge-conflict"
set fname [test_update_setup $testname]
set message "MERGE $fname\n***** 1 merge conflicts in $fname"
set changes "CONFLICT $fname"
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
write_seq_to_file $fname
fossil commit -m "Update $fname"
fossil up previous
write_seq_to_file $fname
test_update $testname $message $changes -expectError

# TODO: test for "Cannot merge binary file"?

set testname "edited"
set fname [test_update_setup $testname]
set message "EDITED $fname\nADD $fname.other"
set changes "EDITED $fname"
write_seq_to_file $fname
fossil add $fname
fossil commit -m "Add $fname"
write_seq_to_file "$fname.other"
fossil add $fname.other
fossil commit -m "Add $fname.other"
fossil up previous
write_seq_to_file $fname
test_update $testname $message $changes

set testname "unchanged"
set fname [test_update_setup $testname]
set message "ADD $fname\nUNCHANGED $fname.unchanged"
set changes ""
write_seq_to_file "$fname.unchanged"
fossil add "$fname.unchanged"
fossil commit -m "Add $fname.unchanged"
write_seq_to_file "$fname"
fossil add "$fname"
fossil commit -m "Add $fname"
fossil up previous
test_update $testname $message $changes

###############################################################################

test_cleanup
Changes to tools/codecheck1.c.
︙ | ︙
602 603 604 605 606 607 608 | if( (acType[i]=='s' || acType[i]=='z' || acType[i]=='b') ){ const char *zExpr = azArg[fmtArg+i]; if( never_safe(zExpr) ){ printf("%s:%d: Argument %d to %.*s() is not safe for" " a query parameter\n", zFilename, lnFCall, i+fmtArg, szFName, zFCall); nErr++; | | | 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | if( (acType[i]=='s' || acType[i]=='z' || acType[i]=='b') ){ const char *zExpr = azArg[fmtArg+i]; if( never_safe(zExpr) ){ printf("%s:%d: Argument %d to %.*s() is not safe for" " a query parameter\n", zFilename, lnFCall, i+fmtArg, szFName, zFCall); nErr++; }else if( (fmtFlags & FMT_SQL)!=0 && !is_sql_safe(zExpr) ){ printf("%s:%d: Argument %d to %.*s() not safe for SQL\n", zFilename, lnFCall, i+fmtArg, szFName, zFCall); nErr++; } } } |
︙ | ︙ |
Changes to tools/makeheaders.c.
︙ | ︙
36 37 38 39 40 41 42 | #include <stdlib.h> #include <ctype.h> #include <memory.h> #include <sys/stat.h> #include <assert.h> #include <string.h> | | > | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | #include <stdlib.h> #include <ctype.h> #include <memory.h> #include <sys/stat.h> #include <assert.h> #include <string.h> #if defined( __MINGW32__) || defined(__DMC__) || \ defined(_MSC_VER) || defined(__POCC__) # ifndef WIN32 # define WIN32 # endif #else # include <unistd.h> #endif |
︙ | ︙
2224 2225 2226 2227 2228 2229 2230 | if (pToken->zText[pToken->nText-1] == '\r') { nArg--; } if( nArg==9 && strncmp(zArg,"INTERFACE",9)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Interface); }else if( nArg==16 && strncmp(zArg,"EXPORT_INTERFACE",16)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Export); }else if( nArg==15 && strncmp(zArg,"LOCAL_INTERFACE",15)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Local); | > | | 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 | if (pToken->zText[pToken->nText-1] == '\r') { nArg--; } if( nArg==9 && strncmp(zArg,"INTERFACE",9)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Interface); }else if( nArg==16 && strncmp(zArg,"EXPORT_INTERFACE",16)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Export); }else if( nArg==15 && strncmp(zArg,"LOCAL_INTERFACE",15)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Local); }else if( nArg==15 && strncmp(zArg,"MAKEHEADERS_STOPLOCAL_INTERFACE",15)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Local); }else{ PushIfMacro(0,zArg,nArg,pToken->nLine,0); } }else if( nCmd==5 && strncmp(zCmd,"ifdef",5)==0 ){ /* ** Push an #ifdef. |
︙ | ︙ |
Changes to tools/mkindex.c.
︙ | ︙
36 37 38 39 40 41 42 | ** legacy commands. Test commands are unsupported commands used for testing ** and analysis only. ** ** Commands are 1st-tier by default. If the command name begins with ** "test-" or if the command name has a "test" argument, then it becomes ** a test command. If the command name has a "2nd-tier" argument or ends ** with a "*" character, it is second tier. If the command name has an "alias" | | | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | ** legacy commands. Test commands are unsupported commands used for testing ** and analysis only. ** ** Commands are 1st-tier by default. If the command name begins with ** "test-" or if the command name has a "test" argument, then it becomes ** a test command. If the command name has a "2nd-tier" argument or ends ** with a "*" character, it is second tier. If the command name has an "alias" ** argument or ends with a "#" character, it is an alias: another name ** (a one-to-one replacement) for a command. Examples: ** ** COMMAND: abcde* ** COMMAND: fghij 2nd-tier ** COMMAND: mnopq# ** COMMAND: rstuv alias ** COMMAND: test-xyzzy |
︙ | ︙ |
Changes to tools/skintxt2config.c.
| | | 1 2 3 4 5 6 7 8 | /* -*- Mode: C; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* vim: set ts=2 et sw=2 tw=80: */ /* ** Copyright (c) 2021 Stephan Beal (https://wanderinghorse.net/home/stephan/) ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) |
︙ | ︙
100 101 102 103 104 105 106 | end: fclose(f); if(rc){ free(zMem); }else{ *zContent = zMem; *nContent = fpos; | | | 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | end: fclose(f); if(rc){ free(zMem); }else{ *zContent = zMem; *nContent = fpos; } return rc; } /* ** Expects zFilename to be one of the conventional skin filename ** parts. This routine converts it to config format and emits it to ** App.ostr. |
︙ | ︙ |
Changes to tools/sqlcompattest.c.
︙ | ︙
51 52 53 54 55 56 57 | #error "Must set -DMINIMUM_SQLITE_VERSION=nn.nn.nn in auto.def" #endif #define QUOTE(VAL) #VAL #define STR(MACRO_VAL) QUOTE(MACRO_VAL) char zMinimumVersionNumber[8]="nn.nn.nn"; | | > | > | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 | #error "Must set -DMINIMUM_SQLITE_VERSION=nn.nn.nn in auto.def" #endif #define QUOTE(VAL) #VAL #define STR(MACRO_VAL) QUOTE(MACRO_VAL) char zMinimumVersionNumber[8]="nn.nn.nn"; strncpy((char *)&zMinimumVersionNumber,STR(MINIMUM_SQLITE_VERSION), sizeof(zMinimumVersionNumber)); long major, minor, release, version; sscanf(zMinimumVersionNumber, "%li.%li.%li", &major, &minor, &release); version=(major*1000000)+(minor*1000)+release; int i; static const char *zRequiredOpts[] = { "ENABLE_FTS4", /* Required for repository search */ "ENABLE_DBSTAT_VTAB", /* Required by /repo-tabsize page */ }; /* Check minimum SQLite version number */ if( sqlite3_libversion_number()<version ){ printf("found system SQLite version %s but need %s or later, " "consider removing --disable-internal-sqlite\n", sqlite3_libversion(),STR(MINIMUM_SQLITE_VERSION)); return 1; } for(i=0; i<sizeof(zRequiredOpts)/sizeof(zRequiredOpts[0]); i++){ if( !sqlite3_compileoption_used(zRequiredOpts[i]) ){ printf("system SQLite library omits required build option -DSQLITE_%s\n", |
︙ | ︙ |
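The version check above packs "major.minor.release" into the single integer that sqlite3_libversion_number() also returns: major*1000000 + minor*1000 + release. As a quick illustration, the same encoding can be computed with a hypothetical helper (not part of the Fossil sources):

```python
def sqlite_version_number(version_string):
    """Encode an SQLite "major.minor.release" string the way the
    configure-time check does: major*1000000 + minor*1000 + release."""
    major, minor, release = (int(part) for part in version_string.split("."))
    return major * 1000000 + minor * 1000 + release
```

So a system library reporting 3.44.0 compares as 3044000 against the configured MINIMUM_SQLITE_VERSION.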
Changes to win/Makefile.mingw.
︙ | ︙
575 576 577 578 579 580 581 582 583 584 585 586 587 588 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ $(SRCDIR)/../skins/khaki/css.txt \ $(SRCDIR)/../skins/khaki/details.txt \ $(SRCDIR)/../skins/khaki/footer.txt \ $(SRCDIR)/../skins/khaki/header.txt \ $(SRCDIR)/../skins/original/css.txt \ $(SRCDIR)/../skins/original/details.txt \ $(SRCDIR)/../skins/original/footer.txt \ | > > > > | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ $(SRCDIR)/../skins/etienne/css.txt \ $(SRCDIR)/../skins/etienne/details.txt \ $(SRCDIR)/../skins/etienne/footer.txt \ $(SRCDIR)/../skins/etienne/header.txt \ $(SRCDIR)/../skins/khaki/css.txt \ $(SRCDIR)/../skins/khaki/details.txt \ $(SRCDIR)/../skins/khaki/footer.txt \ $(SRCDIR)/../skins/khaki/header.txt \ $(SRCDIR)/../skins/original/css.txt \ $(SRCDIR)/../skins/original/details.txt \ $(SRCDIR)/../skins/original/footer.txt \ |
︙ | ︙ |
Changes to win/Makefile.msc.
︙ | ︙
533 534 535 536 537 538 539 540 541 542 543 544 545 546 | "$(SRCDIR)\..\skins\default\details.txt" \ "$(SRCDIR)\..\skins\default\footer.txt" \ "$(SRCDIR)\..\skins\default\header.txt" \ "$(SRCDIR)\..\skins\eagle\css.txt" \ "$(SRCDIR)\..\skins\eagle\details.txt" \ "$(SRCDIR)\..\skins\eagle\footer.txt" \ "$(SRCDIR)\..\skins\eagle\header.txt" \ "$(SRCDIR)\..\skins\khaki\css.txt" \ "$(SRCDIR)\..\skins\khaki\details.txt" \ "$(SRCDIR)\..\skins\khaki\footer.txt" \ "$(SRCDIR)\..\skins\khaki\header.txt" \ "$(SRCDIR)\..\skins\original\css.txt" \ "$(SRCDIR)\..\skins\original\details.txt" \ "$(SRCDIR)\..\skins\original\footer.txt" \ | > > > > | 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 | "$(SRCDIR)\..\skins\default\details.txt" \ "$(SRCDIR)\..\skins\default\footer.txt" \ "$(SRCDIR)\..\skins\default\header.txt" \ "$(SRCDIR)\..\skins\eagle\css.txt" \ "$(SRCDIR)\..\skins\eagle\details.txt" \ "$(SRCDIR)\..\skins\eagle\footer.txt" \ "$(SRCDIR)\..\skins\eagle\header.txt" \ "$(SRCDIR)\..\skins\etienne\css.txt" \ "$(SRCDIR)\..\skins\etienne\details.txt" \ "$(SRCDIR)\..\skins\etienne\footer.txt" \ "$(SRCDIR)\..\skins\etienne\header.txt" \ "$(SRCDIR)\..\skins\khaki\css.txt" \ "$(SRCDIR)\..\skins\khaki\details.txt" \ "$(SRCDIR)\..\skins\khaki\footer.txt" \ "$(SRCDIR)\..\skins\khaki\header.txt" \ "$(SRCDIR)\..\skins\original\css.txt" \ "$(SRCDIR)\..\skins\original\details.txt" \ "$(SRCDIR)\..\skins\original\footer.txt" \ |
︙ | ︙
1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 | echo "$(SRCDIR)\../skins/default/details.txt" >> $@ echo "$(SRCDIR)\../skins/default/footer.txt" >> $@ echo "$(SRCDIR)\../skins/default/header.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/css.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/details.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/footer.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/header.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/css.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/details.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/footer.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/header.txt" >> $@ echo "$(SRCDIR)\../skins/original/css.txt" >> $@ echo "$(SRCDIR)\../skins/original/details.txt" >> $@ echo "$(SRCDIR)\../skins/original/footer.txt" >> $@ | > > > > | 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 | echo "$(SRCDIR)\../skins/default/details.txt" >> $@ echo "$(SRCDIR)\../skins/default/footer.txt" >> $@ echo "$(SRCDIR)\../skins/default/header.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/css.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/details.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/footer.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/header.txt" >> $@ echo "$(SRCDIR)\../skins/etienne/css.txt" >> $@ echo "$(SRCDIR)\../skins/etienne/details.txt" >> $@ echo "$(SRCDIR)\../skins/etienne/footer.txt" >> $@ echo "$(SRCDIR)\../skins/etienne/header.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/css.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/details.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/footer.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/header.txt" >> $@ echo "$(SRCDIR)\../skins/original/css.txt" >> $@ echo "$(SRCDIR)\../skins/original/details.txt" >> $@ echo "$(SRCDIR)\../skins/original/footer.txt" >> $@ |
︙ | ︙ |
Changes to win/buildmsvc.bat.
1 2 3 4 5 | @ECHO OFF :: :: buildmsvc.bat -- :: | | > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 | @ECHO OFF :: :: buildmsvc.bat -- :: :: This batch file attempts to build Fossil using the latest version of :: Microsoft Visual Studio installed on this machine. :: :: For VS 2017 and later, it uses the x64 build tools by default; :: pass "x86" as the first argument to use the x86 tools. :: SETLOCAL REM SET __ECHO=ECHO REM SET __ECHO2=ECHO IF NOT DEFINED _AECHO (SET _AECHO=REM) |
︙ | ︙ | |||
48 49 50 51 52 53 54 | ) REM REM Visual Studio 2017 / 2019 / 2022 REM CALL :fn_TryUseVsWhereExe IF NOT DEFINED VSWHEREINSTALLDIR GOTO skip_detectVisualStudio2017 | | > > > | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 | ) REM REM Visual Studio 2017 / 2019 / 2022 REM CALL :fn_TryUseVsWhereExe IF NOT DEFINED VSWHEREINSTALLDIR GOTO skip_detectVisualStudio2017 SET VSVARS32=%VSWHEREINSTALLDIR%\VC\Auxiliary\Build\vcvars64.bat IF "%~1" == "x86" ( SET VSVARS32=%VSWHEREINSTALLDIR%\VC\Auxiliary\Build\vcvars32.bat ) IF EXIST "%VSVARS32%" ( %_AECHO% Using Visual Studio 2017 / 2019 / 2022... GOTO skip_detectVisualStudio ) :skip_detectVisualStudio2017 REM |
︙ | ︙ |
Changes to www/aboutcgi.wiki.
<title>How CGI Works In Fossil</title>

<h2>Introduction</h2>

CGI or "Common Gateway Interface" is a venerable yet reliable technique
for generating dynamic web content.  This article gives a quick
background on how CGI works and describes how Fossil can act as a
CGI service.

This is a "how it works" guide.  This document provides background
information on the CGI protocol so that you can better understand what
is going on behind the scenes.  If you just want to set up Fossil as a
CGI server, see the [./server/ | Fossil Server Setup] page.  Or if you
want to develop CGI-based extensions to Fossil, see the
[./serverext.wiki|CGI Server Extensions] page.

<h2>A Quick Review Of CGI</h2>

An HTTP request is a block of text that is sent by a client application
(usually a web browser) and arrives at the web server over a network
connection.  The HTTP request contains a URL that describes the
information being requested.  The URL in the HTTP request is typically
the same URL that appears in the URL bar at the top of the web browser
that is making the request.  The URL might contain a "?" character
followed by query parameters.  The HTTP request will usually also
contain other information such as the name of the application that made
the request, whether or not the requesting application can accept a
compressed reply, POST parameters from forms, and so forth.

The job of the web server is to interpret the HTTP request and
formulate an appropriate reply.  The web server is free to interpret
the HTTP request in any way it wants.  But most web servers follow a
similar pattern, described below.  (Note: details may vary from one
web server to another.)

Suppose the filename component of the URL in the HTTP request looks
like this:

<pre>/one/two/timeline/four</pre>

Most web servers will search their content area for files that match
some prefix of the URL.  The search starts with <b>/one</b>, then goes
to <b>/one/two</b>, then <b>/one/two/timeline</b>, and finally
<b>/one/two/timeline/four</b> is checked.  The search stops at the
first match.

Suppose the first match is <b>/one/two</b>.  If <b>/one/two</b> is an
ordinary file in the content area, then that file is returned as static
content.  The "<b>/timeline/four</b>" suffix is silently ignored.  If
<b>/one/two</b> is a CGI script (or program), then the web server
executes the <b>/one/two</b> script.  The output generated by the
script is collected and repackaged as the HTTP reply.

Before executing the CGI script, the web server will set up various
environment variables with information useful to the CGI script:

<table>
<tr><th>Variable<th>Meaning
<tr><td>GATEWAY_INTERFACE<td>Always set to "CGI/1.0"
<tr><td>REQUEST_URI <td>The input URL from the HTTP request.
<tr><td>SCRIPT_NAME <td>The prefix of the input URL that matches the
CGI script name.  In this example: "/one/two".
<tr><td>PATH_INFO
︙ | ︙
The CGI script exits as soon as it generates a single reply.  The web
server will (usually) persist and handle multiple HTTP requests, but a
CGI script handles just one HTTP request and then exits.

The above is a rough outline of how CGI works.  There are many details
omitted from this brief discussion.  See other on-line CGI tutorials
for further information.

<h2>How Fossil Acts As A CGI Program</h2>

An appropriate CGI script for running Fossil will look something like
the following:

<pre>
#!/usr/bin/fossil
repository: /home/www/repos/project.fossil
</pre>

The first line of the script is a
"[https://en.wikipedia.org/wiki/Shebang_%28Unix%29|shebang]" that tells
the operating system what program to use as the interpreter for this
script.  On unix, when you execute a script that starts with a shebang,
the operating system runs the program identified by the shebang with a
single argument that is the full pathname of the script itself.
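To make the environment-variable handoff concrete, here is a tiny stand-alone CGI program sketch (illustrative only, not part of Fossil): it reads the variables the web server exports and writes an HTTP-style reply, header block, blank line, then body, on standard output.

```python
import os
import sys

def cgi_response(environ):
    """Build a minimal CGI reply from the environment the server set up.

    A CGI program learns its request context from environment variables
    such as SCRIPT_NAME and PATH_INFO, then prints headers, a blank
    line, and the body.
    """
    body = "SCRIPT_NAME=%s\nPATH_INFO=%s\n" % (
        environ.get("SCRIPT_NAME", ""), environ.get("PATH_INFO", ""))
    header = ("Content-Type: text/plain\r\n"
              "Content-Length: %d\r\n"
              "\r\n" % len(body))
    return header + body

if __name__ == "__main__":
    # Under a real web server, os.environ carries the CGI variables.
    sys.stdout.write(cgi_response(os.environ))
```

Fossil does the same thing in C: it inspects these variables, computes a reply, and writes it to stdout for the web server to relay.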
︙ | ︙ | |||
132 133 134 135 136 137 138 139 140 141 142 143 144 145 | With Fossil, terms of PATH_INFO beyond the webpage name are converted into the "name" query parameter. Hence, the following two URLs mean exactly the same thing to Fossil: <ol type='A'> <li> [https://fossil-scm.org/home/info/c14ecc43] <li> [https://fossil-scm.org/home/info?name=c14ecc43] </ol> In both cases, the CGI script is called "/fossil". For case (A), the PATH_INFO variable will be "info/c14ecc43" and so the "[/help?cmd=/info|/info]" webpage will be generated and the suffix of PATH_INFO will be converted into the "name" query parameter, which identifies the artifact about which information is requested. In case (B), the PATH_INFO is just "info", but the same "name" query parameter is set explicitly by the URL itself. | > | | > | | > | > | > | > | > > > | > > > > > | > | > > > > > | > | < > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > | | < < | | 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | With Fossil, terms of PATH_INFO beyond the webpage name are converted into the "name" query parameter. Hence, the following two URLs mean exactly the same thing to Fossil: <ol type='A'> <li> [https://fossil-scm.org/home/info/c14ecc43] <li> [https://fossil-scm.org/home/info?name=c14ecc43] </ol> In both cases, the CGI script is called "/fossil". 
For case (A), the PATH_INFO variable will be "info/c14ecc43" and so the "[/help?cmd=/info|/info]" webpage will be generated and the suffix of PATH_INFO will be converted into the "name" query parameter, which identifies the artifact about which information is requested. In case (B), the PATH_INFO is just "info", but the same "name" query parameter is set explicitly by the URL itself. <h2>Serving Multiple Fossil Repositories From One CGI Script</h2> The previous example showed how to serve a single Fossil repository using a single CGI script. On a website that wants to serve multiple repositories, one could simply create multiple CGI scripts, one script for each repository. But it is also possible to serve multiple Fossil repositories from a single CGI script. If the CGI script for Fossil contains a "directory:" line instead of a "repository:" line, then the argument to "directory:" is the name of a directory that contains multiple repository files, each ending with ".fossil". For example: <pre> #!/usr/bin/fossil directory: /home/www/repos </pre> Suppose the /home/www/repos directory contains files named <b>one.fossil</b>, <b>two.fossil</b>, and <b>subdir/three.fossil</b>. Further suppose that the name of the CGI script (relative to the root of the webserver document area) is "cgis/example2". Then to see the timeline for the "three.fossil" repository, the URL would be: <pre> http://example.com/cgis/example2/subdir/three/timeline </pre> Here is what happens: <ol> <li> The input URI on the HTTP request is <b>/cgis/example2/subdir/three/timeline</b> <li> The web server searches prefixes of the input URI until it finds the "cgis/example2" script. The web server then sets PATH_INFO to the "subdir/three/timeline" suffix and invokes the "cgis/example2" script. <li> Fossil runs and sees the "directory:" line pointing to "/home/www/repos". Fossil then starts pulling terms off the front of the PATH_INFO looking for a repository. 
It first looks at "/home/www/repos/subdir.fossil" but there is no such repository. So then it looks at "/home/www/repos/subdir/three.fossil" and finds a repository. The PATH_INFO is shortened by removing "subdir/three/" leaving it at just "timeline". <li> Fossil looks at the rest of PATH_INFO to see that the webpage requested is "timeline". </ol> <a id="cgivar"></a> The web server sets many environment variables in step 2 in addition to just PATH_INFO. The following diagram shows a few of these variables and their relationship to the request URL: <verbatim type="pikchr">
charwid = 0.075
thickness = 0
SCHEME: box "https://" mono fit
DOMAIN: box "example.com" mono fit
SCRIPT: box "/cgis/example2" mono fit
PATH: box "/subdir/three/timeline" mono fit
QUERY: box "?c=55d7e1" mono fit
thickness = 0.01
DB: box at 0.3 below DOMAIN "HTTP_HOST" mono fit invis
SB: box at 0.3 below SCRIPT "SCRIPT_NAME" mono fit invis
PB: box at 0.3 below PATH "PATH_INFO" mono fit invis
QB: box at 0.3 below QUERY "QUERY_STRING" mono fit invis
RB: box at 0.5 above PATH "REQUEST_URI" mono fit invis
color = lightgray
box at SCHEME width SCHEME.width height SCHEME.height
line fill 0x7799CC behind QUERY \
   from SCRIPT.nw \
   to RB.sw \
   to RB.se \
   to QUERY.ne \
   close
line fill 0x99CCFF behind DOMAIN \
   from DOMAIN.nw \
   to DOMAIN.sw \
   to DB.n \
   to DOMAIN.se \
   to DOMAIN.ne \
   close
line fill 0xCCEEFF behind SCRIPT \
   from SCRIPT.nw \
   to SCRIPT.sw \
   to SB.n \
   to SCRIPT.se \
   to SCRIPT.ne \
   close
line fill 0x99CCFF behind PATH \
   from PATH.nw \
   to PATH.sw \
   to PB.n \
   to PATH.se \
   to PATH.ne \
   close
line fill 0xCCEEFF behind QUERY \
   from QUERY.nw \
   to QUERY.sw \
   to QB.n \
   to QUERY.se \
   to QUERY.ne \
   close
</verbatim> <h2>Additional CGI Script Options</h2> The CGI script can have additional options used to fine-tune Fossil's behavior. See the [./cgi.wiki|CGI script documentation] for details. 
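The repository-search loop in step 3 can be sketched as follows. This is an illustrative Python model of the algorithm only, not Fossil's actual C implementation; the `resolve_repository` name is invented for the example:

```python
# Sketch of "directory:" repository resolution: pull terms off the
# front of PATH_INFO until <directory>/<terms>.fossil names an
# existing file; whatever remains of PATH_INFO is the webpage name.
import os

def resolve_repository(directory, path_info):
    terms = [t for t in path_info.split("/") if t]
    for i in range(1, len(terms) + 1):
        candidate = os.path.join(directory, *terms[:i]) + ".fossil"
        if os.path.isfile(candidate):      # e.g. repos/subdir/three.fossil
            return candidate, "/".join(terms[i:])
    return None, path_info                 # no repository matched
```

With the example layout above, `resolve_repository("/home/www/repos", "subdir/three/timeline")` first tests `subdir.fossil`, then finds `subdir/three.fossil` and returns it along with the remaining page name `timeline`.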
<h2>Additional Observations</h2> <ol type="I"> <li><p> Fossil does not distinguish between the various HTTP methods (GET, PUT, DELETE, etc). Fossil figures out what it needs to do purely from the webpage term of the URI.</p></li> <li><p> Fossil does not distinguish between query parameters that are part of the URI, application/x-www-form-urlencoded or multipart/form-data encoded |
︙ | ︙ | |||
237 238 239 240 241 242 243 | converted into CGI, then Fossil creates a separate child Fossil process to handle each CGI request.</p></li> <li><p> Fossil is itself often launched using CGI. But Fossil can also then turn around and launch [./serverext.wiki|sub-CGI scripts to implement extensions].</p></li> </ol> | < | 295 296 297 298 299 300 301 | converted into CGI, then Fossil creates a separate child Fossil process to handle each CGI request.</p></li> <li><p> Fossil is itself often launched using CGI. But Fossil can also then turn around and launch [./serverext.wiki|sub-CGI scripts to implement extensions].</p></li> </ol> |
Changes to www/aboutdownload.wiki.
1 | <title>How The Fossil Download Page Works</title> | < | 1 2 3 4 5 6 7 8 | <title>How The Fossil Download Page Works</title> <h2>1.0 Overview</h2> The [/uv/download.html|Download] page for the Fossil self-hosting repository is implemented using [./unvers.wiki|unversioned files]. The "download.html" screen itself, and the various build products are all stored as unversioned content. The download.html page |
︙ | ︙ | |||
41 42 43 44 45 46 47 | Notice how the hyperlinks above use the "mimetype=text/plain" query parameter in order to display the file as plain text instead of the usual HTML or Javascript. The default mimetype for "download.html" is text/html. But because the entire page is enclosed within | | | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 | Notice how the hyperlinks above use the "mimetype=text/plain" query parameter in order to display the file as plain text instead of the usual HTML or Javascript. The default mimetype for "download.html" is text/html. But because the entire page is enclosed within <verbatim><div class='fossil-doc' data-title='Download Page'>...</div></verbatim> Fossil knows to add its standard header and footer information to the document, making it look just like any other page. See "[./embeddeddoc.wiki|embedded documentation]" for further details on how this <div class='fossil-doc'> markup works. With each new release, the "releases" variable in the JavaScript on the [/uv/download.js?mimetype=text/plain|download.js] page is edited (using "[/help?cmd=uv|fossil uv edit download.js]") to add details of the release. When the JavaScript in the "download.js" file runs, it requests a listing of all unversioned content using the /juvlist URL. ([/juvlist|sample /juvlist output]). The content of the download page is constructed by matching unversioned files against regular expressions in the "releases" variable. Build products need to be constructed on different machines. The precompiled binary for Linux is compiled on Linux, the precompiled binary for Windows is compiled on Windows 11, and so forth. 
After a new release is tagged, the release manager goes around to each of the target platforms, checks out the release and compiles it, then runs [/help?cmd=uv|fossil uv add] for the build product followed by [/help?cmd=uv|fossil uv sync] to push the new build product to the [./selfhost.wiki|various servers]. This process is repeated for each build product. |
︙ | ︙ |
Changes to www/adding_code.wiki.
︙ | ︙ | |||
48 49 50 51 52 53 54 | source tree. Suppose one wants to add a new source code file named "xyzzy.c". The first step is to add this file to the various makefiles. Do so by editing the file tools/makemake.tcl and adding "xyzzy" (without the final ".c") to the list of source modules at the top of that script. Save the result and then run the makemake.tcl script using a TCL interpreter. The command to run the makemake.tcl script is: | > | > | | | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | source tree. Suppose one wants to add a new source code file named "xyzzy.c". The first step is to add this file to the various makefiles. Do so by editing the file tools/makemake.tcl and adding "xyzzy" (without the final ".c") to the list of source modules at the top of that script. Save the result and then run the makemake.tcl script using a TCL interpreter. The command to run the makemake.tcl script is: <verbatim> tclsh makemake.tcl </verbatim> The working directory must be src/ when the command above is run. Note that TCL is not normally required to build Fossil, but it is required for this step. If you do not have a TCL interpreter on your system already, they are easy to install. A popular choice is the [http://www.activestate.com/activetcl|Active Tcl] installation from ActiveState. After the makefiles have been updated, create the xyzzy.c source file from the following template: <verbatim> /* ** Copyright boilerplate goes here. ***************************************************** ** High-level description of what this module goes ** here. */ #include "config.h" #include "xyzzy.h" #if INTERFACE /* Exported object (structure) definitions or #defines ** go here */ #endif /* INTERFACE */ /* New code goes here */ </verbatim> Note in particular the <b>#include "xyzzy.h"</b> line near the top. The "xyzzy.h" file is automatically generated by makeheaders. 
Every normal Fossil source file must have a #include at the top that imports its private header file. (Some source files, such as "sqlite3.c" are exceptions to this rule. Don't worry about those exceptions. The files you write will require this #include line.) |
︙ | ︙ | |||
105 106 107 108 109 110 111 | Fossil repository and then [/help/commit|commit] your changes! <h2 id="newcmd">4.0 Creating A New Command</h2> By "commands" we mean the keywords that follow "fossil" when invoking Fossil from the command-line. So, for example, in | > | > | | | | > | | | > | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | Fossil repository and then [/help/commit|commit] your changes! <h2 id="newcmd">4.0 Creating A New Command</h2> By "commands" we mean the keywords that follow "fossil" when invoking Fossil from the command-line. So, for example, in <verbatim> fossil diff xyzzy.c </verbatim> The "command" is "diff". Commands may optionally be followed by arguments and/or options. To create new commands in Fossil, add code (either to an existing source file, or to a new source file created as described above) according to the following template: <verbatim> /* ** COMMAND: xyzzy ** ** Help text goes here. Backslashes must be escaped. */ void xyzzy_cmd(void){ /* Implement the command here */ fossil_print("Hello, World!\n"); } </verbatim> The example above creates a new command named "xyzzy" that prints the message "Hello, World!" on the console. This command is a normal command that will show up in the list of commands from [/help/help|fossil help]. If you add an asterisk to the end of the command name, like this: <verbatim> ** COMMAND: xyzzy* </verbatim> Then the command will only show up if you add the "--all" option to [/help/help|fossil help]. Or, if the command name starts with "test" then the command will be considered experimental and will only show up when the --test option is used with [/help/help|fossil help]. The example above is a fully functioning Fossil command. 
You can add the text shown to an existing Fossil source file, recompile, and then test it out by typing: <verbatim> ./fossil xyzzy ./fossil help xyzzy ./fossil xyzzy --help </verbatim> The name of the C function that implements the command can be anything you like (as long as it does not collide with some other symbol in the Fossil code) but it is traditional to name the function "<i>commandname</i><b>_cmd</b>", as is done in the example. You could also use "printf()" instead of "fossil_print()" to generate
︙ | ︙ | |||
171 172 173 174 175 176 177 | <h2 id="newpage">5.0 Creating A New Web Page</h2> As with commands, new webpages can be added simply by inserting a function that generates the webpage together with a special header comment. A template follows: | | | | 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 | <h2 id="newpage">5.0 Creating A New Web Page</h2> As with commands, new webpages can be added simply by inserting a function that generates the webpage together with a special header comment. A template follows: <verbatim> /* ** WEBPAGE: helloworld */ void helloworld_page(void){ style_header("Hello World!"); @ <p>Hello, World!</p> style_footer(); } </verbatim> Add the code above to a new or existing Fossil source code file, then recompile fossil and run [/help/ui|fossil ui] then enter "http://localhost:8080/helloworld" in your web browser and the routine above will generate a web page that says "Hello World." It really is that simple. |
︙ | ︙ |
Changes to www/alerts.md.
︙ | ︙ | |||
89 90 91 92 93 94 95 | the "From" address above, or it could be a different value like `admin@example.com`. Save your changes. At the command line, say | | | | | | | 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 | the "From" address above, or it could be a different value like `admin@example.com`. Save your changes. At the command line, say $ fossil set email-send-command If that gives a blank value instead of `sendmail -ti`, say $ fossil set email-send-command "sendmail -ti" to force the setting. That works around a [known bug](https://fossil-scm.org/forum/forumpost/840b676410) which may be squished by the time you read this. If you're running Postfix or Exim, you might think that command is wrong, since you aren't running Sendmail. These mail servers provide a `sendmail` command for compatibility with software like Fossil that has no good reason to care exactly which SMTP server implementation is running at a given site. There may be other SMTP servers that also provide a compatible `sendmail` command, in which case they may work with Fossil using the same steps as above. <a id="status"></a> If you reload the Admin → Notification page, the Status section at the top should show: Outgoing Email: Piped to command "sendmail -ti" Pending Alerts: 0 normal, 0 digest Subscribers: 0 active, 0 total Before you move on to the next section, you might like to read up on [some subtleties](#pipe) with the "pipe to a command" method that we did not cover above. <a id="usage"></a> |
︙ | ︙ | |||
153 154 155 156 157 158 159 | by the way: a user can be signed up for email alerts without having a full-fledged Fossil user account. Only when both user names are the same are the two records tied together under the hood. For more on this, see [Users vs Subscribers below](#uvs). If you are seeing the following complaint from Fossil: | < | < < | 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 | by the way: a user can be signed up for email alerts without having a full-fledged Fossil user account. Only when both user names are the same are the two records tied together under the hood. For more on this, see [Users vs Subscribers below](#uvs). If you are seeing the following complaint from Fossil: > Use a different login with greater privilege than FOO to access /subscribe ...then the repository's administrator forgot to give the [**EmailAlert** capability][cap7] to that user or to a user category that the user is a member of. After a subscriber signs up for alerts for the first time, a single verification email is sent to that subscriber's given email address. |
︙ | ︙ | |||
212 213 214 215 216 217 218 | Announcement](/announce)" link at the top of the "Email Notification Setup" page. Put your email address in the "To:" line and a test message below, then press "Send Message" to verify that outgoing email is working. Another method is from the command line: | | | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | Announcement](/announce)" link at the top of the "Email Notification Setup" page. Put your email address in the "To:" line and a test message below, then press "Send Message" to verify that outgoing email is working. Another method is from the command line: $ fossil alerts test-message you@example.com --body README.md --subject Test That should send you an email with "Test" in the subject line and the contents of your project's `README.md` file in the body. That command assumes that your project contains a "readme" file, but of course it does, because you have followed the [Programming Style Guide Checklist][cl], right? Right. |
︙ | ︙ | |||
262 263 264 265 266 267 268 | ### Troubleshooting If email alerts aren't working, there are several useful commands you can give to figure out why. (Be sure to [`cd` into a repo checkout directory](#cd) first!) | | | | | | | | | 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 | ### Troubleshooting If email alerts aren't working, there are several useful commands you can give to figure out why. (Be sure to [`cd` into a repo checkout directory](#cd) first!) $ fossil alerts status This should give much the same information as you saw [above](#status). One difference is that, since you've created a forum post, the `pending-alerts` value should only be zero if you did in fact get the requested email alert. If it's zero, check your mailer's spam folder. If it's nonzero, continue with these troubleshooting steps. $ fossil backoffice That forces Fossil to run its ["back office" process](./backoffice.md). Its only purpose at the time of this writing is to push out alert emails, but it might do other things later. Sometimes it can get stuck and needs to be kicked. For that reason, you might want to set up a crontab entry to make sure it runs occasionally. $ fossil alerts send This should also kick off the backoffice processing, if there are any pending alerts to send out. $ fossil alert pending Show any pending alerts. The number of lines output here should equal the [status output above](#status). $ fossil test-add-alerts f5900 $ fossil alert send Manually create an email alert and push it out immediately. The `f` in the first command's final parameter means you're scheduling a "forum" alert. The integer is the ID of a forum post, which you can find by visiting `/timeline?showid` on your Fossil instance. The second command above is necessary because the `test-add-alerts` command doesn't kick off a backoffice run. 
$ fossil ale send This only does the same thing as the final command above, rather than send you an ale, as you might be hoping. Sorry. <a id="advanced"></a> ## Advanced Email Setups |
︙ | ︙ | |||
422 423 424 425 426 427 428 | corruption][rdbc] if used with a file sharing technology that doesn't use proper file locking. You can start this Tcl script as a daemon automatically on most Unix and Unix-like systems by adding the following line to the `/etc/rc.local` file of the server that hosts the repository sending email alerts: | | | 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 | corruption][rdbc] if used with a file sharing technology that doesn't use proper file locking. You can start this Tcl script as a daemon automatically on most Unix and Unix-like systems by adding the following line to the `/etc/rc.local` file of the server that hosts the repository sending email alerts: /usr/bin/tclsh /home/www/fossil/email-sender.tcl & [cj]: https://en.wikipedia.org/wiki/Chroot [rdbc]: https://www.sqlite.org/howtocorrupt.html#_filesystems_with_broken_or_missing_lock_implementations <a id="dir"></a> ### Method 3: Store in a Directory |
︙ | ︙ | |||
681 682 683 684 685 686 687 | attacker with the `subscriberCode`. Nor can knowledge of the `subscriberCode` lead to an email flood or other annoyance attack, as far as I can see. If the `subscriberCodes` for a Fossil repository are ever compromised, new ones can be generated as follows: | | | 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 | attacker with the `subscriberCode`. Nor can knowledge of the `subscriberCode` lead to an email flood or other annoyance attack, as far as I can see. If the `subscriberCodes` for a Fossil repository are ever compromised, new ones can be generated as follows: UPDATE subscriber SET subscriberCode=randomblob(32); Since this then affects all new email alerts going out from Fossil, your end users may never even realize that they're getting new codes, as long as they don't click on the URLs in the footer of old alert messages. With that in mind, a Fossil server administrator could choose to randomize the `subscriberCodes` periodically, such as just before the |
︙ | ︙ |
Changes to www/backoffice.md.
︙ | ︙ | |||
77 78 79 80 81 82 83 | However, the daily digest of email notifications is handled by the backoffice. If a Fossil server can sometimes go more than a day without being accessed, then the automatic backoffice will never run, and the daily digest might not go out until somebody does visit a webpage. If this is a problem, an administrator can set up a cron job to periodically run: | | | 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 | However, the daily digest of email notifications is handled by the backoffice. If a Fossil server can sometimes go more than a day without being accessed, then the automatic backoffice will never run, and the daily digest might not go out until somebody does visit a webpage. If this is a problem, an administrator can set up a cron job to periodically run: fossil backoffice _REPOSITORY_ That command will cause backoffice processing to occur immediately. Note that this is almost never necessary for an internet-facing Fossil repository, since most repositories will get multiple accesses per day from random robots, which will be sufficient to kick off the daily digest emails. And even for a private server, if there is very little traffic, then the daily digests are probably a no-op anyhow |
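Such a cron job might look like the following crontab entry. The schedule and paths are illustrative; adjust both for your installation:

```
# Run Fossil's backoffice every four hours for one repository
0 */4 * * * /usr/local/bin/fossil backoffice /home/www/museum/repo.fossil
```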
︙ | ︙ | |||
100 101 102 103 104 105 106 | [Fossil Forum](https://fossil-scm.org/forum) so that we can perhaps fix the problem.) For now, the backoffice must be run manually on OpenBSD systems. To set up fully-manual backoffice, first disable the automatic backoffice using the "[backoffice-disable](/help?cmd=backoffice-disable)" setting. | | | | 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | [Fossil Forum](https://fossil-scm.org/forum) so that we can perhaps fix the problem.) For now, the backoffice must be run manually on OpenBSD systems. To set up fully-manual backoffice, first disable the automatic backoffice using the "[backoffice-disable](/help?cmd=backoffice-disable)" setting. fossil setting backoffice-disable on Then arrange to invoke the backoffice separately using a command like this: fossil backoffice --poll 30 _REPOSITORY-LIST_ Multiple repositories can be named. This one command will handle launching the backoffice for all of them. There are additional useful command-line options. See the "[fossil backoffice](/help?cmd=backoffice)" documentation for details. The backoffice processes run manually using the "fossil backoffice" |
︙ | ︙ | |||
145 146 147 148 149 150 151 | "no process". Sometimes the process id will be non-zero even if there is no corresponding process. Fossil knows how to figure out whether or not a process still exists. You can print out a decoded copy of the current backoffice lease using this command: | | | 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | "no process". Sometimes the process id will be non-zero even if there is no corresponding process. Fossil knows how to figure out whether or not a process still exists. You can print out a decoded copy of the current backoffice lease using this command: fossil test-backoffice-lease -R _REPOSITORY_ If a system has been idle for a long time, then there will be no backoffice processes. (Either the process id entries in the lease will be zero, or there will exist no process associated with the process id.) When a new web request comes in, the system sees that no backoffice process is active and so it kicks off a separate process to run backoffice. |
︙ | ︙ | |||
195 196 197 198 199 200 201 | The backoffice should "just work". It should not require administrator attention. However, if you suspect that something is not working right, there are some debugging aids. We have already mentioned the command that shows the backoffice lease for a repository: | | | 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 | The backoffice should "just work". It should not require administrator attention. However, if you suspect that something is not working right, there are some debugging aids. We have already mentioned the command that shows the backoffice lease for a repository: fossil test-backoffice-lease -R _REPOSITORY_ Running that command every few seconds should show what is going on with backoffice processing in a particular repository. There are also settings that control backoffice behavior. The "backoffice-nodelay" setting prevents the "next" process from taking a lease and sleeping. If "backoffice-nodelay" is set, that causes all |
︙ | ︙ |
Changes to www/backup.md.
︙ | ︙ | |||
134 135 136 137 138 139 140 | # <a id="sync-solution"></a> Solution 1: Explicit Pulls The following script solves most of the above problems for the use case where you want a *nearly-complete* clone of the remote repository using nothing but the normal Fossil sync protocol. It only does so if you are logged into the remote as a user with Setup capability, however. | < < < < | 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 | # <a id="sync-solution"></a> Solution 1: Explicit Pulls The following script solves most of the above problems for the use case where you want a *nearly-complete* clone of the remote repository using nothing but the normal Fossil sync protocol. It only does so if you are logged into the remote as a user with Setup capability, however. ``` shell #!/bin/sh fossil sync --unversioned fossil configuration pull all fossil rebuild ``` The last step is needed to ensure that shunned artifacts on the remote are removed from the local clone. The second step includes `fossil conf pull shun`, but until those artifacts are actually rebuilt out of existence, your backup will be “more than complete” in the sense that it will continue to have information that the remote says should not exist any more. That would be not so much a “backup” as an “archive,” which might not be what you want. |
︙ | ︙ | |||
168 169 170 171 172 173 174 | allows you to get a SQL-level backup. This requires Fossil 2.12 or newer, which added [the `backup` command][bu] to take care of locking and transaction isolation, allowing the user to safely back up an in-use repository. If you have SSH access to the remote server, something like this will work: | < < < < < < < < | 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 | allows you to get a SQL-level backup. This requires Fossil 2.12 or newer, which added [the `backup` command][bu] to take care of locking and transaction isolation, allowing the user to safely back up an in-use repository. If you have SSH access to the remote server, something like this will work: ``` shell #!/bin/bash bf=repo-$(date +%Y-%m-%d).fossil ssh example.com "cd museum ; fossil backup -R repo.fossil backups/$bf" && scp example.com:museum/backups/$bf ~/museum/backups ``` Beware that this method does not solve [the intransitive sync problem](#ait), in and of itself: if you do a SQL-level backup of a stale repo DB, you have a *stale backup!* You should therefore run this on every node that may need to serve as a backup so that at least *one* of the backups is also up-to-date. # <a id="enc"></a> Encrypted Off-Site Backups A useful refinement that you can apply to both methods above is encrypted off-site backups. You may wish to store backups of your repositories off-site on a service such as Dropbox, Google Drive, iCloud, or Microsoft OneDrive, where you don’t fully trust the service not to leak your information. 
This addition to the prior scripts will encrypt the resulting backup in such a way that the cloud copy is a useless blob of noise to anyone without the key: ```shell iter=152830 pass="h8TixP6Mt6edJ3d6COaexiiFlvAM54auF2AjT7ZYYn" gd="$HOME/Google Drive/Fossil Backups/$bf.xz.enc" fossil sql -R ~/museum/backups/"$bf" .dump | xz -9 | openssl enc -e -aes-256-cbc -pbkdf2 -iter $iter -pass pass:"$pass" -out "$gd" ``` If you’re adding this to the first script above, remove the “`-R repo-name`” bit so you get a dump of the repository backing the current working directory. Change the `pass` value to some other long random string, and change the `iter` value to something in the hundreds of thousands range. A good source for the first is [here][grcp], and for the second, [here][rint]. |
︙ | ︙ | |||
260 261 262 263 264 265 266 | lacked this capability until Ventura (13.0). If you’re on Monterey (12) or older, we recommend use of the [Homebrew][hb] OpenSSL package rather than give up on the security afforded by use of configurable-iteration PBKDF2. To avoid a conflict with the platform’s `openssl` binary, Homebrew’s installation is [unlinked][hbul] by default, so you have to give an explicit path to it, one of: | | | | 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 | lacked this capability until Ventura (13.0). If you’re on Monterey (12) or older, we recommend use of the [Homebrew][hb] OpenSSL package rather than give up on the security afforded by use of configurable-iteration PBKDF2. To avoid a conflict with the platform’s `openssl` binary, Homebrew’s installation is [unlinked][hbul] by default, so you have to give an explicit path to it, one of: /usr/local/opt/openssl/bin/openssl ... # Intel x86 Macs /opt/homebrew/opt/openssl/bin/openssl ... # ARM Macs (“Apple silicon”) [lssl]: https://www.libressl.org/ ## <a id="rest"></a> Restoring From An Encrypted Backup The “restore” script for the above fragment is basically an inverse of |
︙ | ︙ |
Changes to www/branching.wiki.
︙ | ︙ | |||
244 245 246 247 248 249 250 | branches identified only by the commit ID currently at its tip, being a long string of hex digits. Therefore, Fossil conflates two concepts: branching as intentional forking and the naming of forks as branches. They are in fact separate concepts, but since Fossil is intended to be used primarily by humans, we combine them in Fossil's human user interfaces. | | | | | | | | 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 | branches identified only by the commit ID currently at its tip, being a long string of hex digits. Therefore, Fossil conflates two concepts: branching as intentional forking and the naming of forks as branches. They are in fact separate concepts, but since Fossil is intended to be used primarily by humans, we combine them in Fossil's human user interfaces. <p class="blockquote"> <b>Key Distinction:</b> A branch is a <i>named, intentional</i> fork. </p> Unnamed forks <i>may</i> be intentional, but most of the time, they're accidental and left unnamed. Fossil offers two primary ways to create named, intentional forks, a.k.a. branches. First: <pre> $ fossil commit --branch my-new-branch-name </pre> This is the method we recommend for most cases: it creates a branch as part of a check-in using the version in the current checkout directory as its basis. (This is normally the tip of the current branch, though it doesn't have to be. You can create a branch from an ancestor check-in on a branch as well.) After making this branch-creating check-in, your local working directory is switched to that branch, so that further check-ins occur on that branch as well, as children of the tip check-in on that branch. 
The second, more complicated option is: <pre> $ fossil branch new my-new-branch-name trunk $ fossil update my-new-branch-name $ fossil commit </pre> Not only is this three commands instead of one, with the first longer than the entire simpler command above, but you must also give the second command before creating any check-ins, because until you do, your local working directory remains on the same branch it was on at the time you issued the command, so that the commit would otherwise put the new material on
︙ | ︙ | |||
375 376 377 378 379 380 381 | <h2 id="fix">Fixing Forks</h2> If your local checkout is on a forked branch, you can usually fix a fork automatically with: <pre> | | | 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 | <h2 id="fix">Fixing Forks</h2> If your local checkout is on a forked branch, you can usually fix a fork automatically with: <pre> $ fossil merge </pre> Normally you need to pass arguments to <b>fossil merge</b> to tell it what you want to merge into the current basis view of the repository, but without arguments, the command seeks out and fixes forks. |
︙ | ︙ | |||
489 490 491 492 493 494 495 | <h2 id="bad-fork">How Can Forks Divide Development Effort?</h2> [#dist-clone|Above], we stated that forks carry a risk that development effort on a branch can be divided among the forks. It might not be immediately obvious why this is so. To see it, consider this swim lane diagram: | | | 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 | <h2 id="bad-fork">How Can Forks Divide Development Effort?</h2> [#dist-clone|Above], we stated that forks carry a risk that development effort on a branch can be divided among the forks. It might not be immediately obvious why this is so. To see it, consider this swim lane diagram: <verbatim type="pikchr center toggle"> $laneh = 0.75 ALL: [ # Draw the lanes down box width 3.5in height $laneh fill 0xacc9e3 box same fill 0xc5d8ef |
︙ | ︙ | |||
693 694 695 696 697 698 699 | bad, which is why [./concepts.wiki#workflow|Fossil tries so hard to avoid them], why it warns you about it when they do occur, and why it makes it relatively [#fix|quick and painless to fix them] when they do occur. <h2>Review Of Terminology</h2> | | | | 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 | bad, which is why [./concepts.wiki#workflow|Fossil tries so hard to avoid them], why it warns you about it when they do occur, and why it makes it relatively [#fix|quick and painless to fix them] when they do occur. <h2>Review Of Terminology</h2> <dl> <dt><b>Branch</b></dt> <dd><p>A branch is a set of check-ins with the same value for their "branch" property.</p></dd> <dt><b>Leaf</b></dt> <dd><p>A leaf is a check-in with no children in the same branch.</p></dd> <dt><b>Closed Leaf</b></dt> <dd><p>A closed leaf is any leaf with the <b>closed</b> tag. These leaves are intended to never be extended with descendants and hence are omitted from lists of leaves in the command-line and web interface.</p></dd> <dt><b>Open Leaf</b></dt> <dd><p>An open leaf is a leaf that is not closed.</p></dd> <dt><b>Fork</b></dt> <dd><p>A fork is when a check-in has two or more direct (non-merge) children in the same branch.</p></dd> <dt><b>Branch Point</b></dt> <dd><p>A branch point occurs when a check-in has two or more direct (non-merge) children in different branches. A branch point is similar to a fork, except that the children are in different branches.</p></dd> </dl> Check-in 4 of Figure 3 is not a leaf because it has a child (check-in 5) in the same branch. Check-in 9 of Figure 5 also has a child (check-in 10) but that child is in a different branch, so check-in 9 is a leaf. Because of the <b>closed</b> tag on check-in 9, it is a closed leaf. Check-in 2 of Figure 3 is considered a "fork"
︙ | ︙ |
Changes to www/build.wiki.
︙ | ︙ | |||
38 39 40 41 42 43 44 | <ol> <li>Point your web browser to [https://fossil-scm.org/]</li> <li>Click on the [/timeline|Timeline] link at the top of the page.</li> | | | | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | <ol> <li>Point your web browser to [https://fossil-scm.org/]</li> <li>Click on the [/timeline|Timeline] link at the top of the page.</li> <li>Select a version of Fossil you want to download. The latest version on the trunk branch is usually a good choice. Click on its link.</li> <li>Finally, click on one of the "Zip Archive" or "Tarball" links, according to your preference. These links will build a ZIP archive or a gzip-compressed tarball of the complete source code and download it to your computer.</li> </ol> <h2>Aside: Is it really safe to use an unreleased development version of the Fossil source code?</h2> Yes! Any check-in on the |
︙ | ︙ | |||
174 175 176 177 178 179 180 | Alternatively, running <b>./configure</b> under MSYS should give a suitable top-level Makefile. However, options passed to configure that are not applicable on Windows may cause the configuration or compilation to fail (e.g. fusefs, internal-sqlite, etc). <li><i>MSVC</i> → Use the MSVC makefile.</li> | > | > > < | | | | | | | | | | 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 | Alternatively, running <b>./configure</b> under MSYS should give a suitable top-level Makefile. However, options passed to configure that are not applicable on Windows may cause the configuration or compilation to fail (e.g. fusefs, internal-sqlite, etc). <li><i>MSVC</i> → Use the MSVC makefile.</li> <em>NB:</em> Run the following <code>nmake</code> commands from an "x64 Native Tools Command Prompt"; <code>buildmsvc.bat</code> is able to automatically load the build tools (x64 by default, pass "x86" as the first argument to use the x86 tools), so it can be called from a normal command prompt. 
First, change to the "win/" subdirectory ("<b>cd win</b>"), then run "<b>nmake /f Makefile.msc</b>".<br><br>Alternatively, the batch file "<b>win\buildmsvc.bat</b>" may be used and it will attempt to detect and use the latest installed version of MSVC.<br><br>To enable the optional <a href="https://www.openssl.org/">OpenSSL</a> support, first <a href="https://www.openssl.org/source/">download the official source code for OpenSSL</a> and extract it to an appropriately named "<b>openssl</b>" subdirectory within the local [/tree?ci=trunk&name=compat | compat] directory then make sure that some recent <a href="http://www.perl.org/">Perl</a> binaries are installed locally, and finally run one of the following commands: <pre> nmake /f Makefile.msc FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin </pre> <pre> buildmsvc.bat FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin </pre> To enable the optional native [./th1.md#tclEval | Tcl integration feature], run one of the following commands or add the "FOSSIL_ENABLE_TCL=1" argument to one of the other NMAKE command lines: <pre> nmake /f Makefile.msc FOSSIL_ENABLE_TCL=1 </pre> <pre> buildmsvc.bat FOSSIL_ENABLE_TCL=1 </pre> <li><i>Cygwin</i> → The same as other Unix-like systems. It is recommended to configure using: "<b>configure --disable-internal-sqlite</b>", making sure you have the "libsqlite3-devel" , "zlib-devel" and "openssl-devel" packages installed first.</li> </ol> </ol> |
︙ | ︙ | |||
249 250 251 252 253 254 255 | be installed on the local machine. You can get Tcl/Tk from [http://www.activestate.com/activetcl|ActiveState]. </li> <li> To build on older Macs (circa 2002, MacOS 10.2) edit the Makefile generated by configure to add the following lines: | | | 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | be installed on the local machine. You can get Tcl/Tk from [http://www.activestate.com/activetcl|ActiveState]. </li> <li> To build on older Macs (circa 2002, MacOS 10.2) edit the Makefile generated by configure to add the following lines: <pre> TCC += -DSQLITE_WITHOUT_ZONEMALLOC TCC += -D_BSD_SOURCE TCC += -DWITHOUT_ICONV TCC += -Dsocklen_t=int TCC += -DSQLITE_MAX_MMAP_SIZE=0 </pre> </li> </ul> <h2 id="docker" name="oci">5.0 Building a Docker Container</h2> The information on building Fossil inside an
︙ | ︙ | |||
436 437 438 439 440 441 442 | [https://emscripten.org/docs/getting_started/downloads.html] For instructions on keeping the SDK up to date, see: [https://emscripten.org/docs/tools_reference/emsdk.html] | | | | | | | | | | > | 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | [https://emscripten.org/docs/getting_started/downloads.html] For instructions on keeping the SDK up to date, see: [https://emscripten.org/docs/tools_reference/emsdk.html] <div class="sidebar">Getting Emscripten up and running is trivial and painless, at least on Linux systems, but the installer downloads many hundreds of megabytes of tools and dependencies, all of which will be installed under the single SDK directory (as opposed to being installed at the system level). It does, however, require that python3 be installed at the system level and it can optionally make use of a system-level cmake for certain tasks unrelated to how fossil uses the SDK.</div> After installing the SDK, configure the fossil tree with emsdk support: <pre><code>$ ./configure --with-emsdk=/path/to/emsdk \ --and-other-options... </code></pre> If the <tt>--with-emsdk</tt> flag is not provided, the configure script will check for the environment variable <tt>EMSDK</tt>, which is one of the standard variables the SDK environment uses. If that variable is found, its value will implicitly be used in place of the missing <tt>--with-emsdk</tt> flag. Thus, if the <tt>emsdk_env.sh</tt> |
︙ | ︙ | |||
478 479 480 481 482 483 484 485 486 487 488 489 490 491 | build cycle. They are instead explicitly built as described below. From the top of the source tree, all WASM-related components can be built with: <pre><code>$ make wasm</code></pre> As of this writing, those parts include: * <tt>extsrc/pikchr.wasm</tt> is a WASM-compiled form of <tt>extsrc/pikchr.c</tt>. * <tt>extsrc/pikchr.js</tt> is JS/WASM glue code generated by Emscripten to give JS code access to the API exported by the WASM file. | > > > > > > > > > > < < < < < < < < < < | < | < | | | | | | 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 | build cycle. They are instead explicitly built as described below. From the top of the source tree, all WASM-related components can be built with: <pre><code>$ make wasm</code></pre> <div class="sidebar">The file <tt>[/file/extsrc/pikchr-worker.js|extsrc/pikchr-worker.js]</tt> is hand-coded and intended to be loaded as a "Worker" in JavaScript. That file loads the main module and provides an interface via which a main JavaScript thread can communicate with pikchr running in a Worker thread. The file <tt>[/file/src/fossil.page.pikchrshowasm.js|src/fossil.page.pikchrshowasm.js]</tt> implements the [/pikchrshow] app and demonstrates how <tt>pikchr-worker.js</tt> is used.</div> As of this writing, those parts include: * <tt>extsrc/pikchr.wasm</tt> is a WASM-compiled form of <tt>extsrc/pikchr.c</tt>. * <tt>extsrc/pikchr.js</tt> is JS/WASM glue code generated by Emscripten to give JS code access to the API exported by the WASM file. When a new version of <tt>extsrc/pikchr.c</tt> is installed, the files <tt>pikchr.{js,wasm}</tt> will need to be recompiled to account for that. 
Running <tt>make wasm</tt> will, if the build is set up for the emsdk, recompile those: <pre><code>$ make wasm ./tools/emcc.sh -o extsrc/pikchr.js ... $ ls -la extsrc/pikchr.{js,wasm} -rw-rw-r-- 1 stephan stephan 17263 Jun 8 03:59 extsrc/pikchr.js -rw-rw-r-- 1 stephan stephan 97578 Jun 8 03:59 extsrc/pikchr.wasm </code></pre> <div class="sidebar">If that fails with a message along the lines of “<code>setting `EXPORTED_RUNTIME_METHODS` expects `<class 'list'>` but got `<class 'str'>`</code>” then the emcc being invoked is too old: emcc changed the format of list-type arguments at some point. The required minimum version is unknown, but any SDK version from May 2022 or later "should" (as of this writing) suffice. Any older version may or may not work.</div> After that succeeds, we need to run the normal build so that those generated files can be compiled in to the fossil binary, accessible via the [/help?cmd=/builtin|/builtin page]: <pre><code>$ make</code></pre> |
︙ | ︙ |
Changes to www/caps/login-groups.md.
︙ | ︙ | |||
103 104 105 106 107 108 109 | Login groups have names. A repo can be in only one of these named login groups at a time. Trust in login groups is transitive within a single server. Consider this sequence: | < | | | | < | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | Login groups have names. A repo can be in only one of these named login groups at a time. Trust in login groups is transitive within a single server. Consider this sequence: $ cd /path/to/A/checkout $ fossil login-group join --name G ~/museum/B.fossil $ cd /path/to/C/checkout $ fossil login-group join ~/museum/B.fossil That creates login group G joining repo A to B, then joins C to B. Although we didn’t explicitly tie C to A, a successful login on C gets you into both A and B, within the restrictions set out above. Changes are transitive in the same way, provided you check that “apply to all” box on the user edit screen. |
︙ | ︙ |
Changes to www/cgi.wiki.
︙ | ︙ | |||
21 22 23 24 25 26 27 | those options. <h1>CGI Script Options</h1> The CGI script used to launch a Fossil server will usually look something like this: | | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | those options. <h1>CGI Script Options</h1> The CGI script used to launch a Fossil server will usually look something like this: <verbatim> #!/usr/bin/fossil repository: /home/www/fossils/myproject.fossil </verbatim> Of course, pathnames will likely be different. The first line (the "[wikipedia:/wiki/Shebang_(Unix)|shebang]") always gives the name of the Fossil executable. Subsequent lines are of the form "<b>property: argument ...</b>". The remainder of this document describes the available properties and their arguments. |
︙ | ︙ |
Changes to www/changes.wiki.
1 2 3 4 5 6 7 8 | <title>Change Log</title> <h2 id='v2_23'>Changes for version 2.23 (2023-11-01)</h2> * Add ability to "close" forum threads, such that unprivileged users may no longer respond to them. Only administrators can close threads or respond to them by default, and the [/help?cmd=forum-close-policy|forum-close-policy setting] can be | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | <title>Change Log</title> <h2 id='v2_24'>Changes for version 2.24 (pending)</h2> * Improvements to the default skin * If an "ssh:" sync fails in a way that suggests that the fossil executable could not be found on the remote host, then retry after adding a PATH= prefix to the command. This helps "ssh:" to "just work" when the server is a Mac. * Enhancements to the [/help?cmd=/timeline|/timeline page]: <ul> <li> Add the x= query parameter <li> Add the shortcut tl= and rl= query parameters <li> Add support for from=,ft= and from=,bt= query parameter combinations <li> Automatically highlight the endpoints for from=,to= queries. <li> Add the to2=Z query parameter to augment from=X,to=Y so that the path from X to Z is shown if Y cannot be found. </ul> * Moved the /museum/repo.fossil file referenced from the Dockerfile from the ENTRYPOINT to the CMD part to allow use of --repolist mode. * The /uvlist page now shows the hash algorithm used so that outsiders don't have to guess it from the hash length when double-checking hashes of downloaded files on their end. * The hash itself is now shown in a fixed-width font on the /uvlist page, suiting this tabular display. * If the [/help?cmd=autosync|autosync setting] contains keyword "all", the automatic sync occurs against all defined remote repositories, not just the default. 
* Markdown formatter: improved handling of indented fenced code blocks that contain blank lines. * Fix problems with one-click unsubscribe on email notifications. * Import the latest [/doc/trunk/www/pikchr.md|Pikchr] containing support for "diamond" objects. * Reworked the default skin to make everything more readable: larger fonts, more whitespace, deeper indents to show hierarchy and to offset command examples, etc. Adjusted colors slightly to bring things into better accord with the WCAG accessibility guidelines. This constitutes a <strong>breaking change</strong> for those with custom skins; see [./customskin.md#version-2.24 | this section of the docs] for migration advice. * Add ability to render committed Pikchr files to SVG via <samp>/doc/…/foo.pikchr?popup</samp> URLs. <h2 id='v2_23'>Changes for version 2.23 (2023-11-01)</h2> * Add ability to "close" forum threads, such that unprivileged users may no longer respond to them. Only administrators can close threads or respond to them by default, and the [/help?cmd=forum-close-policy|forum-close-policy setting] can be |
︙ | ︙ |
Changes to www/chat.md.
︙ | ︙ | |||
77 78 79 80 81 82 83 84 85 86 87 88 89 90 | Send button is pressed, any pending text is submitted along with the selected file. Image files sent this way will, by default, appear inline in messages, but each user may toggle that via the settings popup menu, such that images instead appear as downloadable links. Non-image files always appear in messages as download links. ### Deletion of Messages Any user may *locally* delete a given message by clicking on the "tab" at the top of the message and clicking the button which appears. Such deletions are local-only, and the messages will reappear if the page is reloaded. The user who posted a given message, or any Admin users, may additionally choose to globally delete a message from the chat record, which deletes it not only from their own browser but also | > > > > > > > > > | 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | Send button is pressed, any pending text is submitted along with the selected file. Image files sent this way will, by default, appear inline in messages, but each user may toggle that via the settings popup menu, such that images instead appear as downloadable links. Non-image files always appear in messages as download links. ### Deletion of Messages <div class="sidebar">Message deletion is itself a type of message, which is why deletions count towards updates in the recent activity list. (It is counted for the person who performed the deletion, not the author of the deleted comment.) That can potentially lead to odd corner cases where a user shows up in the list but has no messages which are currently visible because they were deleted, or an admin user who has not posted anything but deleted a message. That is a known minor cosmetic-only bug with a resolution of "will not fix."</div> Any user may *locally* delete a given message by clicking on the "tab" at the top of the message and clicking the button which appears. 
Such deletions are local-only, and the messages will reappear if the page is reloaded. The user who posted a given message, or any Admin users, may additionally choose to globally delete a message from the chat record, which deletes it not only from their own browser but also |
︙ | ︙ | |||
110 111 112 113 114 115 116 | online, but it gives an overview of which users have been active most recently, noting that "lurkers" (people who post no messages) will not show up in that list, nor does the chat infrastructure have a way to track and present those. That list can be used to filter messages on a specific user by tapping on that user's name, tapping a second time to remove the filter. | < < < < < < < < < | 119 120 121 122 123 124 125 126 127 128 129 130 131 132 | online, but it gives an overview of which users have been active most recently, noting that "lurkers" (people who post no messages) will not show up in that list, nor does the chat infrastructure have a way to track and present those. That list can be used to filter messages on a specific user by tapping on that user's name, tapping a second time to remove the filter. ### <a id="cli"></a> The `fossil chat` Command Type [fossil chat](/help?cmd=chat) from within any open check-out to bring up a chatroom for the project that is in that checkout. The new chat window will attempt to connect to the default sync target for that check-out (the server whose URL is shown by the [fossil remote](/help?cmd=remote) command). |
︙ | ︙ | |||
144 145 146 147 148 149 150 | The recommended way to allow robots to send chat messages is to create a new user on the server for each robot. Give each such robot account the "C" privilege only. That means that the robot user account will be able to send chat messages, but not do anything else. Then, in the program or script that runs the robot, when it wants to send a chat message, have it run a command like this: | | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | The recommended way to allow robots to send chat messages is to create a new user on the server for each robot. Give each such robot account the "C" privilege only. That means that the robot user account will be able to send chat messages, but not do anything else. Then, in the program or script that runs the robot, when it wants to send a chat message, have it run a command like this: ~~~~ fossil chat send --remote https://robot:PASSWORD@project.org/fossil \ --message 'MESSAGE TEXT' --file file-to-attach.txt ~~~~ Substitute the appropriate project URL, robot account name and password, message text and file attachment, of course. |
︙ | ︙ | |||
210 211 212 213 214 215 216 | Fetches the file content associated with a post (one file per post, maximum). In the UI, this is accessed via links to uploaded files and via inlined image tags. Chat messages are stored on the server-side in the CHAT table of the repository. | | | 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 | Fetches the file content associated with a post (one file per post, maximum). In the UI, this is accessed via links to uploaded files and via inlined image tags. Chat messages are stored on the server-side in the CHAT table of the repository. ~~~ CREATE TABLE repository.chat( msgid INTEGER PRIMARY KEY AUTOINCREMENT, mtime JULIANDAY, -- Time for this entry - Julianday Zulu lmtime TEXT, -- Client YYYY-MM-DDZHH:MM:SS when message originally sent xfrom TEXT, -- Login of the sender xmsg TEXT, -- Raw, unformatted text of the message fname TEXT, -- Filename of the uploaded file, or NULL |
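Since the chat record is plain SQLite, the table can be inspected with any SQLite client. The following is an illustrative sketch against a scratch database, using only the columns shown above (the real table has a few additional columns elided here, lives inside repo.fossil, and is managed entirely by Fossil itself):

```shell
# Illustrative only: build a trimmed-down scratch copy of the CHAT table,
# insert a fabricated row, and query it back with the sqlite3 shell.
sqlite3 scratch-chat.db <<'EOF'
CREATE TABLE chat(
  msgid INTEGER PRIMARY KEY AUTOINCREMENT,
  mtime JULIANDAY,   -- Time for this entry - Julianday Zulu
  lmtime TEXT,       -- Client timestamp when message originally sent
  xfrom TEXT,        -- Login of the sender
  xmsg TEXT,         -- Raw, unformatted text of the message
  fname TEXT         -- Filename of the uploaded file, or NULL
);
INSERT INTO chat(mtime,xfrom,xmsg) VALUES(julianday('now'),'robot','build OK');
EOF
# Show the newest message:
sqlite3 scratch-chat.db \
  "SELECT xfrom, xmsg, datetime(mtime) FROM chat ORDER BY msgid DESC LIMIT 1"
```

The `robot` sender and `build OK` text are fabricated examples, in the spirit of the robot-posting setup described earlier on this page.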
︙ | ︙ |
Changes to www/checkin_names.wiki.
1 2 | <title>Check-in Names</title> | | < | | | | | | | | | | | > | | | | | | > | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | <title>Check-in Names</title> <div class="sidebar no-label"> <b>Quick Reference</b> <ul> <li> Hash prefix <li> Branch name <li> Tag name <li> Timestamp: <i>YYYY-MM-DD HH:MM:SS</i> <li> <i>tag-name</i> <big><b>:</b></big> <i>timestamp</i> <li> <b>root <big>:</big></b> <i>branchname</i> <li> <b>start <big>:</big></b> <i>branchname</i> <li> <b>merge-in <big>:</big></b> <i>branchname</i> <li> Special names: <ul> <li> <b>tip</b> <li> <b>current</b> <li> <b>next</b> <li> <b>previous</b> or <b>prev</b> <li> <b>ckout</b> (<a href='./embeddeddoc.wiki'>embedded docs</a> only) </ul> </ul> </div> Many Fossil [/help|commands] and [./webui.wiki | web interface] URLs accept check-in names as an argument. For example, the "[/help/info|info]" command accepts an optional check-in name to identify the specific check-in about which information is desired: <pre style="white-space: pre-wrap"> fossil info <i>checkin-name</i> </pre> You are perhaps reading this page from the following URL: <verbatim> https://fossil-scm.org/home/doc/trunk/www/checkin_names.wiki </verbatim> This is an example of an [./embeddeddoc.wiki | embedded documentation] page URL. The "trunk" element of the pathname is a [./glossary.md#check-in | check-in] name that determines which version of the documentation to display. Fossil provides a variety of ways to specify a check-in. This document describes the various methods. 
<h2 id="canonical">Canonical Check-in Name</h2> The canonical name of a check-in is the hash of its [./fileformat.wiki#manifest | manifest] expressed as a [./hashes.md | long lowercase hexadecimal number]. For example: <pre> fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9 </pre> The full 40 or 64 character hash is unwieldy to remember and type, though, so Fossil also accepts a unique prefix of the hash, using any combination of upper and lower case letters, as long as the prefix is at least 4 characters long. Hence the following commands all accomplish the same thing as the above: <pre> fossil info e5a734a19a9 fossil info E5a734A fossil info e5a7 </pre> Fossil uses this feature itself, identifying check-ins by 8 to 16-character prefixes of the canonical name in places where it doesn't want to chew up the screen real estate required to display the whole hash. <h2 id="tags">Tags And Branch Names</h2> Using a tag or branch name where a check-in name is expected causes Fossil to choose the most recent check-in with that tag or branch name. So for example, the most recent check-in that is tagged with "release" as of this writing is [b98ce23d4fc]. The command: <pre> fossil info release </pre> …results in the following output: <pre> hash: b98ce23d4fc3b734cdc058ee8a67e6dad675ca13 2020-08-20 13:27:04 UTC parent: 40feec329163103293d98dfcc2d119d1a16b227a 2020-08-20 13:01:51 UTC tags: release, branch-2.12, version-2.12.1 comment: Version 2.12.1 (user: drh) </pre> There are multiple check-ins that are tagged with "release" but (as of this writing) the [b98ce23d4fc] check-in is the most recent so it is the one that is selected. Note that unlike some other version control systems, a "branch" in Fossil is not anything special: it is simply a sequence of check-ins that share a common tag, so the same mechanism that resolves tag names also resolves branch names. 
<a id="tagpfx"></a> Note also that there can — in theory, if rarely in practice — be an ambiguity between tag names and canonical names. Suppose, for example, you had a check-in with the canonical name deed28aa99… and you also happened to have tagged a different check-in with "deed2". If you use the "deed2" name, does it choose the canonical name or the tag name? In such cases, you can prefix the tag name with "tag:". For example: <pre> fossil info tag:deed2 </pre> The "tag:deed2" name will refer to the most recent check-in tagged with "deed2" rather than the check-in whose canonical name begins with "deed2". <h2 id="whole-branches">Whole Branches</h2> |
︙ | ︙ | |||
178 179 180 181 182 183 184 | repo could have release tags like “2020-04-01”, the date the release was cut, but you could force Fossil to interpret that string as a date rather than as a tag by passing “date:2020-04-01”. For an example of how timestamps are useful, consider the homepage for the Fossil website itself: | | | | | | | | | | | | | | 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | repo could have release tags like “2020-04-01”, the date the release was cut, but you could force Fossil to interpret that string as a date rather than as a tag by passing “date:2020-04-01”. For an example of how timestamps are useful, consider the homepage for the Fossil website itself: <pre> https://fossil-scm.org/home/doc/<b>trunk</b>/www/index.wiki </pre> The bold component of that URL is a check-in name. To see the stored content of the Fossil website repository as of January 1, 2009, one has merely to change the URL to the following: <pre> https://fossil-scm.org/home/doc/<b>2009-01-01</b>/www/index.wiki </pre> (Note that this won't roll you back to the <i>skin</i> and other cosmetic configurations as of that date. It also won't change screens like the timeline, which has an independent date selector.) <h2 id="tag-ts">Tag And Timestamp</h2> A check-in name can also take the form of a tag or branch name followed by a colon and then a timestamp. The combination means to take the most recent check-in with the given tag or branch which is not more recent than the timestamp. 
So, for example: <pre> fossil update trunk:2010-07-01T14:30 </pre> Would cause Fossil to update the working check-out to be the most recent check-in on the trunk that is not more recent than 14:30 (UTC) on July 1, 2010. <h2 id="root">Root Of A Branch</h2> A branch name that begins with the "<tt>root:</tt>" prefix refers to the last check-in on the parent branch prior to the beginning of the branch. Such a label is useful, for example, in computing all diffs for a single branch. The following example will show all changes in the hypothetical branch "xyzzy": <pre> fossil diff --from root:xyzzy --to xyzzy </pre> <a id="merge-in"></a> That doesn't do what you might expect after you merge the parent branch's changes into the child branch: the above command will include changes made on the parent branch as well. You can solve this by using the prefix "<tt>merge-in:</tt>" instead of "<tt>root:</tt>" to tell Fossil to find the most recent merge-in point for that branch. The resulting diff will then show only the changes in the branch itself, omitting any changes that have already been merged in from the parent branch. <a id="start"></a> The prefix "<tt>start:</tt>" gives the first check-in of the named branch. The prefixes "<tt>root:</tt>", "<tt>start:</tt>", and "<tt>merge-in:</tt>" can be chained: one can say for example <pre> fossil info merge-in:xyzzy:2022-03-01 </pre> to get information about the most recent merge-in point on the branch "xyzzy" that happened on or before March 1, 2022. <h2 id="special">Special Tags</h2> The tag "tip" means the most recent check-in. The "tip" tag is practically equivalent to the timestamp "9999-12-31". 
This special name works anywhere you can pass a "NAME", such as with <tt>/info</tt> URLs: <pre> http://localhost:8080/info/tip </pre> There are several other special names, but they only work from within a check-out directory because they are relative to the current checked-out version: * "current": the current checked-out version * "next": the youngest child of the current checked-out version |
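The tag-and-timestamp rule described above (take the most recent check-in with the given tag that is not more recent than the timestamp) can be sketched in a few lines. This is an illustrative Python model with invented check-in data; it is not how Fossil itself stores or resolves names:

```python
# Illustrative model of the TAG:TIMESTAMP rule: pick the most recent
# check-in bearing TAG that is not more recent than TIMESTAMP.  The
# check-in data below is invented for the example.
from datetime import datetime

CHECKINS = [  # (UTC time, tags, hypothetical artifact-ID prefix)
    (datetime(2010, 6, 28, 9, 0),  {"trunk"},     "aaaa1111"),
    (datetime(2010, 7, 1, 13, 45), {"trunk"},     "bbbb2222"),
    (datetime(2010, 7, 1, 15, 10), {"trunk"},     "cccc3333"),
    (datetime(2010, 7, 2, 8, 30),  {"feature-x"}, "dddd4444"),
]

def resolve(tag, cutoff):
    """Most recent check-in tagged `tag` that is not newer than `cutoff`."""
    candidates = [(t, cid) for t, tags, cid in CHECKINS
                  if tag in tags and t <= cutoff]
    if not candidates:
        raise LookupError(f"no check-in matches {tag}:{cutoff}")
    return max(candidates)[1]

# "fossil update trunk:2010-07-01T14:30" picks the 13:45 check-in and
# skips the 15:10 one, which is more recent than the cutoff.
print(resolve("trunk", datetime(2010, 7, 1, 14, 30)))  # bbbb2222
```

Note that "tip" then falls out of the same rule: it is simply this lookup with the cutoff pushed to the far future, which is why the text calls it practically equivalent to the timestamp "9999-12-31".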
︙ | ︙ | |||
280 281 282 283 284 285 286 | <h2 id="examples">Additional Examples</h2> To view the changes in the most recent check-in prior to the version currently checked out: | | | | | | 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 | <h2 id="examples">Additional Examples</h2> To view the changes in the most recent check-in prior to the version currently checked out: <pre> fossil diff --from previous --to current </pre> Suppose you are in the habit of tagging each release with a "release" tag. Then to see everything that has changed on the trunk since the last release: <pre> fossil diff --from release --to trunk </pre> <h2 id="order">Resolution Order</h2> Fossil currently resolves name strings to artifact hashes in the following order:
︙ | ︙ |
Changes to www/childprojects.wiki.
︙ | ︙ | |||
26 27 28 29 30 31 32 | at the request of the child. <h2>Creating a Child Project</h2> To create a new child project, first clone the parent. Then make manual SQL changes to the child repository as follows: | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | at the request of the child. <h2>Creating a Child Project</h2> To create a new child project, first clone the parent. Then make manual SQL changes to the child repository as follows: <verbatim> UPDATE config SET name='parent-project-code' WHERE name='project-code'; UPDATE config SET name='parent-project-name' WHERE name='project-name'; INSERT INTO config(name,value) VALUES('project-code',lower(hex(randomblob(20)))); INSERT INTO config(name,value) VALUES('project-name','CHILD-PROJECT-NAME'); </verbatim> Modify the CHILD-PROJECT-NAME in the last statement to be the name of the child project, of course. The repository is now a separate project, independent from its parent. Clone the new project to the developers as needed. |
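If you want to see exactly what those four statements do before touching a real repository, you can dry-run them with Python's built-in sqlite3 module against a mock `config` table. The table layout and the "deadbeef" parent code here are simplified stand-ins; a real Fossil repository contains far more than this:

```python
# Dry run of the child-project SQL above against an in-memory database
# with a minimal mock `config` table (a real repo has much more).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config(name TEXT PRIMARY KEY, value)")
db.execute("INSERT INTO config VALUES"
           "('project-code','deadbeef'),('project-name','Parent Project')")

db.executescript("""
  UPDATE config SET name='parent-project-code' WHERE name='project-code';
  UPDATE config SET name='parent-project-name' WHERE name='project-name';
  INSERT INTO config(name,value)
    VALUES('project-code',lower(hex(randomblob(20))));
  INSERT INTO config(name,value)
    VALUES('project-name','CHILD-PROJECT-NAME');
""")

cfg = dict(db.execute("SELECT name, value FROM config"))
assert cfg["parent-project-code"] == "deadbeef"  # old code is preserved
assert len(cfg["project-code"]) == 40            # 20 random bytes, hex-encoded
print(cfg["project-name"])                       # CHILD-PROJECT-NAME
```

The key observation: the old project code is renamed, not discarded, so the child can still prove its ancestry, while the fresh 20-byte random `project-code` is what makes the child a distinct project that will no longer sync with the parent.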
︙ | ︙ |
Changes to www/ckout-workflows.md.
︙ | ︙ | |||
8 9 10 11 12 13 14 | ## <a id="mcw"></a> Multiple-Checkout Workflow With Fossil, it is routine to have multiple check-outs from the same repository: | | | | | | | | | | | | | | | | | | | 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ## <a id="mcw"></a> Multiple-Checkout Workflow With Fossil, it is routine to have multiple check-outs from the same repository: fossil clone https://example.com/repo /path/to/repo.fossil mkdir -p ~/src/my-project/trunk cd ~/src/my-project/trunk fossil open /path/to/repo.fossil # implicitly opens “trunk” mkdir ../release cd ../release fossil open /path/to/repo.fossil release mkdir ../my-other-branch cd ../my-other-branch fossil open /path/to/repo.fossil my-other-branch mkdir ../scratch cd ../scratch fossil open /path/to/repo.fossil abcd1234 mkdir ../test cd ../test fossil open /path/to/repo.fossil 2019-04-01 Now you have five separate check-out directories: one each for: * trunk * the latest tagged public release * an alternate branch you’re working on * a “scratch” directory for experiments you don’t want to do in the other check-out directories; and |
︙ | ︙ | |||
71 72 73 74 75 76 77 | Nevertheless, it is possible to work in a more typical Git sort of style, switching between versions in a single check-out directory. #### <a id="idiomatic"></a> The Idiomatic Fossil Way The most idiomatic way is as follows: | | | | | | | | | | | | | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | Nevertheless, it is possible to work in a more typical Git sort of style, switching between versions in a single check-out directory. #### <a id="idiomatic"></a> The Idiomatic Fossil Way The most idiomatic way is as follows: fossil clone https://example.com/repo /path/to/repo.fossil mkdir work-dir cd work-dir fossil open /path/to/repo.fossil ...work on trunk... fossil update my-other-branch ...work on your other branch in the same directory... Basically, you replace the `cd` commands in the multiple checkouts workflow above with `fossil up` commands. #### <a id="open"></a> Opening a Repository by URI In Fossil 2.12, we added a feature to simplify the single-worktree use case: mkdir work-dir cd work-dir fossil open https://example.com/repo Now you have “trunk” open in `work-dir`, with the repo file stored as `repo.fossil` in that same directory. Users of Git may be surprised that it doesn’t create a directory for you and that you `cd` into it *before* the clone-and-open step, not after. This is because we’re overloading the “open” command, which already had the behavior of opening into the current working directory. Changing it to behave like `git clone` would therefore make the behavior surprising to Fossil users. (See [our discussions][caod] if you want the full details.) 
#### <a id="clone"></a> Git-Like Clone-and-Open In Fossil 2.14, we added a more Git-like alternative: fossil clone https://fossil-scm.org/fossil cd fossil This results in a `fossil.fossil` repo DB file and a `fossil/` working directory. Note that our `clone URI` behavior does not commingle the repo and check-out, solving our major problem with the Git design. If you want the repo to be named something else, adjust the URL: fossil clone https://fossil-scm.org/fossil/fsl That gets you `fsl.fossil` checked out into `fsl/`. For sites where the repo isn’t served from a subdirectory like this, you might need another form of the URL. For example, you might have your repo served from `dev.example.com` and want it cloned as `my-project`: fossil clone https://dev.example.com/repo/my-project The `/repo` addition is the key: whatever comes after is used as the repository name. [See the docs][clone] for more details. [caod]: https://fossil-scm.org/forum/forumpost/3f143cec74 [clone]: /help?cmd=clone <div style="height:50em" id="this-space-intentionally-left-blank"></div> |
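The naming rule illustrated by the three examples above (the last element of the clone URL's path names both the repository file and the check-out directory) can be mimicked in a few lines of Python. This is only a sketch of the documented examples, not the logic `fossil clone` actually uses:

```python
# Sketch of the clone-URL naming rule described above: the URL path's
# last element becomes "<name>.fossil" and the "<name>/" check-out dir.
# Real `fossil clone` handles more cases than this toy function does.
from urllib.parse import urlparse

def clone_names(url: str) -> tuple[str, str]:
    tail = urlparse(url).path.rstrip("/").rsplit("/", 1)[-1]
    return f"{tail}.fossil", f"{tail}/"

print(clone_names("https://fossil-scm.org/fossil"))            # ('fossil.fossil', 'fossil/')
print(clone_names("https://fossil-scm.org/fossil/fsl"))        # ('fsl.fossil', 'fsl/')
print(clone_names("https://dev.example.com/repo/my-project"))  # ('my-project.fossil', 'my-project/')
```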
Changes to www/concepts.wiki.
1 | <title>Fossil Concepts</title> | < | 1 2 3 4 5 6 7 8 | <title>Fossil Concepts</title> <h2>1.0 Introduction</h2> [./index.wiki | Fossil] is a [http://en.wikipedia.org/wiki/Software_configuration_management | software configuration management] system. Fossil is software that is designed to control and track the development of a software project and to record the history |
︙ | ︙ | |||
113 114 115 116 117 118 119 | identifier for a blob of data, such as a file. Given any file, it is simple to find the artifact ID for that file. But given an artifact ID, it is computationally intractable to generate a file that will have that same artifact ID. Artifact IDs look something like this: | | | | | | | | 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 | identifier for a blob of data, such as a file. Given any file, it is simple to find the artifact ID for that file. But given an artifact ID, it is computationally intractable to generate a file that will have that same artifact ID. Artifact IDs look something like this: <pre> 6089f0b563a9db0a6d90682fe47fd7161ff867c8 59712614a1b3ccfd84078a37fa5b606e28434326 19dbf73078be9779edd6a0156195e610f81c94f9 b4104959a67175f02d6b415480be22a239f1f077 997c9d6ae03ad114b2b57f04e9eeef17dcb82788 </pre> When referring to an artifact using Fossil, you can use a unique prefix of the artifact ID that is four characters or longer. This saves a lot of typing. When displaying artifact IDs, Fossil will usually only show the first 10 digits since that is normally enough to uniquely identify a file. |
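The prefix-matching behavior just described is easy to model. The sketch below uses the artifact IDs listed in the text; the lookup function is an illustration in Python, not Fossil's actual code:

```python
# Illustration of unique-prefix lookup over the artifact IDs listed
# above: a prefix of four or more characters resolves iff it matches
# exactly one artifact.
ARTIFACTS = [
    "6089f0b563a9db0a6d90682fe47fd7161ff867c8",
    "59712614a1b3ccfd84078a37fa5b606e28434326",
    "19dbf73078be9779edd6a0156195e610f81c94f9",
    "b4104959a67175f02d6b415480be22a239f1f077",
    "997c9d6ae03ad114b2b57f04e9eeef17dcb82788",
]

def resolve_prefix(prefix: str) -> str:
    if len(prefix) < 4:
        raise ValueError("prefix must be at least four characters")
    hits = [a for a in ARTIFACTS if a.startswith(prefix.lower())]
    if len(hits) != 1:
        raise LookupError(f"{prefix!r} is ambiguous or unknown")
    return hits[0]

print(resolve_prefix("6089"))       # the full first ID above
print(resolve_prefix("6089")[:10])  # 6089f0b563, the 10-digit display form
```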
︙ | ︙ | |||
237 238 239 240 241 242 243 | an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary. To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example: | < | < | 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 | an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary. To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example: <pre>fossil help</pre> In the next section, when we say things like "use the <b>help</b> command" we mean to use the command name "help" as the first token after the name of the Fossil executable, as shown above. <h2 id="workflow">4.0 Workflow</h2> |
︙ | ︙ | |||
280 281 282 283 284 285 286 | An interesting feature of Fossil is that it supports both autosync and manual-merge work flows. The default setting for Fossil is to be in autosync mode. You can change the autosync setting or check the current autosync setting using commands like: | | | | | 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 | An interesting feature of Fossil is that it supports both autosync and manual-merge work flows. The default setting for Fossil is to be in autosync mode. You can change the autosync setting or check the current autosync setting using commands like: <pre> fossil setting autosync on fossil setting autosync off fossil settings </pre> By default, Fossil runs with autosync mode turned on. The author finds that projects run more smoothly in autosync mode since autosync helps to prevent pointless forking and merging and helps keep all collaborators working on exactly the same code rather than on their own personal forks of the code. In the author's view, manual-merge mode should be reserved for disconnected operation.
︙ | ︙ |
Changes to www/containers.md.
︙ | ︙ | |||
11 12 13 14 15 16 17 | ## 1. Quick Start Fossil ships a `Dockerfile` at the top of its source tree, [here][DF], which you can build like so: | < | < < | < | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ## 1. Quick Start Fossil ships a `Dockerfile` at the top of its source tree, [here][DF], which you can build like so: $ docker build -t fossil . If the image built successfully, you can create a container from it and test that it runs: $ docker run --name fossil -p 9999:8080/tcp fossil This shows us remapping the internal TCP listening port as 9999 on the host. This feature of OCI runtimes means there’s little point to using the “`fossil server --port`” feature inside the container. We can let Fossil default to 8080 internally, then remap it to wherever we want it on the host instead. |
︙ | ︙ | |||
42 43 44 45 46 47 48 | fresh container based on that image. You can pass extra arguments to the first command via the Makefile’s `DBFLAGS` variable and to the second with the `DCFLAGS` variable. (DB is short for “`docker build`”, and DC is short for “`docker create`”, a sub-step of the “run” target.) To get the custom port setting as in the second command above, say: | < | < | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | fresh container based on that image. You can pass extra arguments to the first command via the Makefile’s `DBFLAGS` variable and to the second with the `DCFLAGS` variable. (DB is short for “`docker build`”, and DC is short for “`docker create`”, a sub-step of the “run” target.) To get the custom port setting as in the second command above, say: $ make container-run DCFLAGS='-p 9999:8080/tcp' Contrast the raw “`docker`” commands above, which create an _unversioned_ image called `fossil:latest` and from that a container simply called `fossil`. The unversioned names are more convenient for interactive use, while the versioned ones are good for CI/CD type applications since they avoid a conflict with past versions; it lets you keep old containers around for quick roll-backs while replacing them
︙ | ︙ | |||
79 80 81 82 83 84 85 | ### <a id="repo-inside"></a> 2.1 Storing the Repo Inside the Container The simplest method is to stop the container if it was running, then say: | < | | | < | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 | ### <a id="repo-inside"></a> 2.1 Storing the Repo Inside the Container The simplest method is to stop the container if it was running, then say: $ docker cp /path/to/my-project.fossil fossil:/museum/repo.fossil $ docker start fossil $ docker exec fossil chown -R 499 /museum That copies the local Fossil repo into the container where the server expects to find it, so that the “start” command causes it to serve from that copied-in file instead. Since it lives atop the immutable base layers, it persists as part of the container proper, surviving restarts. Notice that the copy command changes the name of the repository |
︙ | ︙ | |||
118 119 120 121 122 123 124 | The simple storage method above has a problem: containers are designed to be killed off at the slightest cause, rebuilt, and redeployed. If you do that with the repo inside the container, it gets destroyed, too. The solution is to replace the “run” command above with the following: | < | | | | | < | 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | The simple storage method above has a problem: containers are designed to be killed off at the slightest cause, rebuilt, and redeployed. If you do that with the repo inside the container, it gets destroyed, too. The solution is to replace the “run” command above with the following: $ docker run \ --publish 9999:8080 \ --name fossil-bind-mount \ --volume ~/museum:/museum \ fossil Because this bind mount maps a host-side directory (`~/museum`) into the container, you don’t need to `docker cp` the repo into the container at all. It still expects to find the repository as `repo.fossil` under that directory, but now both the host and the container can see that repo DB. Instead of a bind mount, you could instead set up a separate |
︙ | ︙ | |||
149 150 151 152 153 154 155 | #### 2.2.1 <a id="wal-mode"></a>WAL Mode Interactions You might be aware that OCI containers allow mapping a single file into the repository rather than a whole directory. Since Fossil repositories are specially-formatted SQLite databases, you might be wondering why we don’t say things like: | < | < | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | #### 2.2.1 <a id="wal-mode"></a>WAL Mode Interactions You might be aware that OCI containers allow mapping a single file into the repository rather than a whole directory. Since Fossil repositories are specially-formatted SQLite databases, you might be wondering why we don’t say things like: --volume ~/museum/my-project.fossil:/museum/repo.fossil That lets us have a convenient file name for the project outside the container while letting the configuration inside the container refer to the generic “`/museum/repo.fossil`” name. Why should we have to name the repo generically on the outside merely to placate the container? The reason is, you might be serving that repo with [WAL mode][wal] |
︙ | ︙ | |||
290 291 292 293 294 295 296 | granularity beyond the classic Unix ones inside the container, so we drop root’s ability to change them. All together, we recommend adding the following options to your “`docker run`” commands, as well as to any “`docker create`” command that will be followed by “`docker start`”: | < | | | | | | | | | < | 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 | granularity beyond the classic Unix ones inside the container, so we drop root’s ability to change them. All together, we recommend adding the following options to your “`docker run`” commands, as well as to any “`docker create`” command that will be followed by “`docker start`”: --cap-drop AUDIT_WRITE \ --cap-drop CHOWN \ --cap-drop FSETID \ --cap-drop KILL \ --cap-drop MKNOD \ --cap-drop NET_BIND_SERVICE \ --cap-drop NET_RAW \ --cap-drop SETFCAP \ --cap-drop SETPCAP In the next section, we’ll show a case where you create a container without ever running it, making these options pointless. [backoffice]: ./backoffice.md [defcap]: https://docs.docker.com/engine/security/#linux-kernel-capabilities [capchg]: https://stackoverflow.com/a/45752205/142454 |
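If you want to sanity-check what those drops leave behind, you can decode the `CapEff` bitmask that the kernel reports in `/proc/<pid>/status`. The sketch below (Python, for illustration) uses the standard Linux capability bit numbers and the widely reported Docker default mask `0x00000000a80425fb`; your runtime's default may differ, so read the real value from a running container if it matters:

```python
# Decode a CapEff capability bitmask (as seen in /proc/<pid>/status)
# using the standard Linux capability bit numbers, then subtract the
# drops recommended above.  The example mask is the commonly reported
# Docker default set; treat it as an assumption, not a guarantee.
CAPS = {0: "CHOWN", 1: "DAC_OVERRIDE", 3: "FOWNER", 4: "FSETID",
        5: "KILL", 6: "SETGID", 7: "SETUID", 8: "SETPCAP",
        10: "NET_BIND_SERVICE", 13: "NET_RAW", 18: "SYS_CHROOT",
        27: "MKNOD", 29: "AUDIT_WRITE", 31: "SETFCAP"}

def decode(mask: int) -> set[str]:
    return {name for bit, name in CAPS.items() if mask >> bit & 1}

docker_default = decode(0x00000000A80425FB)
dropped = {"AUDIT_WRITE", "CHOWN", "FSETID", "KILL", "MKNOD",
           "NET_BIND_SERVICE", "NET_RAW", "SETFCAP", "SETPCAP"}

# What the recommended --cap-drop flags leave the container with:
print(sorted(docker_default - dropped))
# ['DAC_OVERRIDE', 'FOWNER', 'SETGID', 'SETUID', 'SYS_CHROOT']
```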
︙ | ︙ | |||
324 325 326 327 328 329 330 | A secondary benefit falls out of this process for free: it’s arguably the easiest way to build a purely static Fossil binary for Linux. Most modern Linux distros make this [surprisingly difficult][lsl], but Alpine’s back-to-basics nature makes static builds work the way they used to, back in the day. If that’s all you’re after, you can do so as easily as this: | < | | | | < < | < < | < | 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 | A secondary benefit falls out of this process for free: it’s arguably the easiest way to build a purely static Fossil binary for Linux. Most modern Linux distros make this [surprisingly difficult][lsl], but Alpine’s back-to-basics nature makes static builds work the way they used to, back in the day. If that’s all you’re after, you can do so as easily as this: $ docker build -t fossil . $ docker create --name fossil-static-tmp fossil $ docker cp fossil-static-tmp:/bin/fossil . $ docker container rm fossil-static-tmp The result is six or seven megs, depending on the CPU architecture you build for. It’s built stripped. [lsl]: https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead ## 5. <a id="custom" name="args"></a>Customization Points ### <a id="pkg-vers"></a> 5.1 Fossil Version The default version of Fossil fetched in the build is the version in the checkout directory at the time you run it. You could override it to get a release build like so: $ docker build -t fossil --build-arg FSLVER=version-2.20 . Or equivalently, using Fossil’s `Makefile` convenience target: $ make container-image DBFLAGS='--build-arg FSLVER=version-2.20' While you could instead use the generic “`release`” tag here, it’s better to use a specific version number since container builders cache downloaded files, hoping to reuse them across builds. 
If you ask for “`release`” before a new version is tagged and then immediately after, you might expect to get two different tarballs, but because the underlying source tarball URL |
︙ | ︙ | |||
382 383 384 385 386 387 388 | leaving those below it for system users like this Fossil daemon owner. Since it’s typical for these to start at 0 and go upward, we started at 500 and went *down* one instead to reduce the chance of a conflict to as close to zero as we can manage. To change it to something else, say: | < | < < < < | | < | | | | 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | leaving those below it for system users like this Fossil daemon owner. Since it’s typical for these to start at 0 and go upward, we started at 500 and went *down* one instead to reduce the chance of a conflict to as close to zero as we can manage. To change it to something else, say: $ make container-image DBFLAGS='--build-arg UID=501' This is particularly useful if you’re putting your repository on a separate volume since the IDs “leak” out into the host environment via file permissions. You may therefore wish them to mean something on both sides of the container barrier rather than have “499” appear on the host in “`ls -l`” output. ### 5.3 <a id="cengine"></a>Container Engine Although the Fossil container build system defaults to Docker, we allow for use of any OCI container system that implements the same interfaces. We go into more details about this [below](#light), but for now, it suffices to point out that you can switch to Podman while using our `Makefile` convenience targets unchanged by saying: $ make CENGINE=podman container-run ### 5.4 <a id="config"></a>Fossil Configuration Options You can use this same mechanism to enable non-default Fossil configuration options in your build. 
For instance, to turn on the JSON API and the TH1 docs extension: $ make container-image \ DBFLAGS='--build-arg FSLCFG="--json --with-th1-docs"' If you also wanted [the Tcl evaluation extension](./th1.md#tclEval), that brings us to [the next point](#run). ### 5.5 <a id="run"></a>Elaborating the Run Layer If you want a basic shell environment for temporary debugging of the running container, that’s easily added. Simply change this line in the `Dockerfile`… FROM scratch AS run …to this: FROM busybox AS run Rebuild and redeploy to give your Fossil container a [BusyBox]-based shell environment that you can get into via: $ docker exec -it -u fossil $(make container-version) sh That command assumes you built it via “`make container`” and are therefore using its versioning scheme. You will likely want to remove the `PATH` override in the “RUN” stage when doing this since it’s written for the case where everything is in `/bin`, and that will no longer be the case with a more full-featured |
︙ | ︙ | |||
461 462 463 464 465 466 467 | Let’s say the extension is written in Python. Because this is one of the most popular programming languages in the world, we have many options for achieving this. For instance, there is a whole class of “[distroless]” images that will do this efficiently by changing “`STAGE 2`” in the `Dockerfile` to this: | < < < < | 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 | Let’s say the extension is written in Python. Because this is one of the most popular programming languages in the world, we have many options for achieving this. For instance, there is a whole class of “[distroless]” images that will do this efficiently by changing “`STAGE 2`” in the `Dockerfile` to this: ## --------------------------------------------------------------------- ## STAGE 2: Pare that back to the bare essentials, plus Python. ## --------------------------------------------------------------------- FROM cgr.dev/chainguard/python:latest USER root ARG UID=499 ENV PATH "/sbin:/usr/sbin:/bin:/usr/bin" COPY --from=builder /tmp/fossil /bin/ COPY --from=builder /bin/busybox.static /bin/busybox RUN [ "/bin/busybox", "--install", "/bin" ] RUN set -x \ && echo "fossil:x:${UID}:${UID}:User:/museum:/false" >> /etc/passwd \ && echo "fossil:x:${UID}:fossil" >> /etc/group \ && install -d -m 700 -o fossil -g fossil log museum You will also have to add `busybox-static` to the APK package list in STAGE 1 for the `RUN` script at the end of that stage to work, since the [Chainguard Python image][cgimgs] lacks a shell, on purpose. The need to install root-level binaries is why we change `USER` temporarily here. 
Build it and test that it works like so: $ make container-run && docker exec -i $(make container-version) python --version 3.11.2 The compensation for the hassle of using Chainguard over something more general purpose like changing the `run` layer to Alpine and then adding a “`apk add python`” command to the `Dockerfile` is huge: we no longer leave a package manager sitting around inside the container, waiting for some malefactor to figure out how to abuse it. |
︙ | ︙ | |||
553 554 555 556 557 558 559 | default under the theory that you don’t want those services to run until you’ve logged into the GUI as that user. If you find yourself running into this, [enable linger mode](https://www.freedesktop.org/software/systemd/man/loginctl.html).) so I was able to create a unit file called `~/.local/share/systemd/user/alert-sender@.service` with these contents: | < < < < | 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 | default under the theory that you don’t want those services to run until you’ve logged into the GUI as that user. If you find yourself running into this, [enable linger mode](https://www.freedesktop.org/software/systemd/man/loginctl.html).) so I was able to create a unit file called `~/.local/share/systemd/user/alert-sender@.service` with these contents: [Unit] Description=Fossil email alert sender for %I [Service] WorkingDirectory=/home/fossil/museum ExecStart=/home/fossil/bin/alert-sender %I/mail.db Restart=always RestartSec=3 [Install] WantedBy=default.target I was then able to enable email alert forwarding for select repositories after configuring them per [the docs](./alerts.md) by saying: $ systemctl --user daemon-reload $ systemctl --user enable alert-sender@myproject $ systemctl --user start alert-sender@myproject Because this is a parameterized script and we’ve set our repository paths predictably, you can do this for as many repositories as you need to by passing their names after the “`@`” sign in the commands above. ## 6. <a id="light"></a>Lightweight Alternatives to Docker |
︙ | ︙ | |||
604 605 606 607 608 609 610 | leaving the benefits of containerization to those with bigger budgets. For the sake of simple examples in this section, we’ll assume you’re integrating Fossil into a larger web site, such as with our [Debian + nginx + TLS][DNT] plan. This is why all of the examples below create the container with this option: | < | < | 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 | leaving the benefits of containerization to those with bigger budgets. For the sake of simple examples in this section, we’ll assume you’re integrating Fossil into a larger web site, such as with our [Debian + nginx + TLS][DNT] plan. This is why all of the examples below create the container with this option: --publish 127.0.0.1:9999:8080 The assumption is that there’s a reverse proxy running somewhere that redirects public web hits to localhost port 9999, which in turn goes to port 8080 inside the container. This use of port publishing effectively replaces the use of the “`fossil server --localhost`” option. |
︙ | ︙ | |||
676 677 678 679 680 681 682 | On Ubuntu 22.04, the installation size is about 38 MiB, roughly a tenth the size of Docker Engine. For our purposes here, the only things that change relative to the examples at the top of this document are the initial commands: | < | | < < | | | | | | | | | | | < < | < < | < < | | | < < | | | | | | | | | | | | | | | | | | | | | | | | | | | | < | 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 | On Ubuntu 22.04, the installation size is about 38 MiB, roughly a tenth the size of Docker Engine. For our purposes here, the only things that change relative to the examples at the top of this document are the initial commands: $ podman build -t fossil . $ podman run --name fossil -p 9999:8080/tcp fossil Your Linux package repo may have a `podman-docker` package which provides a “`docker`” script that calls “`podman`” for you, eliminating even the command name difference. With that installed, the `make` commands above will work with Podman as-is. 
The only difference that matters here is that Podman doesn’t have the same [default Linux kernel capability set](#caps) as Docker, which reduces the `--cap-drop` list recommended above to: $ podman create \ --name fossil \ --cap-drop CHOWN \ --cap-drop FSETID \ --cap-drop KILL \ --cap-drop NET_BIND_SERVICE \ --cap-drop SETFCAP \ --cap-drop SETPCAP \ --publish 127.0.0.1:9999:8080 \ localhost/fossil $ podman start fossil [pmmac]: https://podman.io/getting-started/installation.html#macos [pmwin]: https://github.com/containers/podman/blob/main/docs/tutorials/podman-for-windows.md [Podman]: https://podman.io/ [rl]: https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md [whatis]: https://podman.io/whatis.html ### 6.3 <a id="nspawn"></a>`systemd-container` If even the Podman stack is too big for you, the next-best option I’m aware of is the `systemd-container` infrastructure on modern Linuxes, available since version 239 or so. Its runtime tooling requires only about 1.4 MiB of disk space: $ sudo apt install systemd-container btrfs-tools That command assumes the primary test environment for this guide, Ubuntu 22.04 LTS with `systemd` 249. For best results, `/var/lib/machines` should be a btrfs volume, because [`$REASONS`][mcfad]. For CentOS Stream 9 and other Red Hattish systems, you will have to make several adjustments, which we’ve collected [below](#nspawn-rhel) to keep these examples clear. We’ll assume your Fossil repository stores something called “`myproject`” within `~/museum/myproject/repo.fossil`, named according to the reasons given [above](#repo-inside). We’ll make consistent use of this naming scheme in the examples below so that you will be able to replace the “`myproject`” element of the various file and path names. 
If you use [the stock `Dockerfile`][DF] to generate your base image, `nspawn` won’t recognize it as containing an OS unless you change the “`FROM scratch AS os`” line at the top of the second stage to something like this: FROM gcr.io/distroless/static-debian11 AS os Using that as a base image provides all the files `nspawn` checks for to determine whether the container is sufficiently close to a Linux VM for the following step to proceed: $ make container $ docker container export $(make container-version) | machinectl import-tar - myproject Next, create `/etc/systemd/nspawn/myproject.nspawn`: ---- [Exec] WorkingDirectory=/ Parameters=bin/fossil server \ --baseurl https://example.com/myproject \ --create \ --jsmode bundled \ --localhost \ --port 9000 \ --scgi \ --user admin \ museum/repo.fossil DropCapability= \ CAP_AUDIT_WRITE \ CAP_CHOWN \ CAP_FSETID \ CAP_KILL \ CAP_MKNOD \ CAP_NET_BIND_SERVICE \ CAP_NET_RAW \ CAP_SETFCAP \ CAP_SETPCAP ProcessTwo=yes LinkJournal=no Timezone=no [Files] Bind=/home/fossil/museum/myproject:/museum [Network] VirtualEthernet=no ---- If you recognize most of that from the `Dockerfile` discussion above, congratulations, you’ve been paying attention. The rest should also be clear from context. |
︙ | ︙ | |||
817 818 819 820 821 822 823 | on the host for the reasons given [above](#bind-mount). That being done, we also need a generic `systemd` unit file called `/etc/systemd/system/fossil@.service`, containing: ---- | < | | | | | | | | < < | | < < | | | | | | | < < | < < | < < | | | < | 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 | on the host for the reasons given [above](#bind-mount). That being done, we also need a generic `systemd` unit file called `/etc/systemd/system/fossil@.service`, containing: ---- [Unit] Description=Fossil %i Repo Service Wants=modprobe@tun.service modprobe@loop.service After=network.target systemd-resolved.service modprobe@tun.service modprobe@loop.service [Service] ExecStart=systemd-nspawn --settings=override --read-only --machine=%i bin/fossil [Install] WantedBy=multi-user.target ---- You shouldn’t have to change any of this because we’ve given the `--settings=override` flag, meaning any setting in the nspawn file overrides the setting passed to `systemd-nspawn`. This arrangement not only keeps the unit file simple, it allows multiple services to share the base configuration, varying on a per-repo level through adjustments to their individual `*.nspawn` files. You may then start the service in the normal way: $ sudo systemctl enable fossil@myproject $ sudo systemctl start fossil@myproject You should then find it running on localhost port 9000 per the nspawn configuration file above, suitable for proxying Fossil out to the public using nginx via SCGI. 
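The `%i` specifier in the `fossil@.service` unit shown above is what lets one template serve many repositories. A much-simplified model of the substitution systemd performs follows; real systemd also applies escaping rules and distinguishes `%i` (escaped) from `%I` (unescaped), which this sketch ignores since it only matters for non-trivial instance names:

```python
# Simplified model of systemd template instantiation: starting
# "fossil@myproject" from the fossil@.service template replaces the
# %i specifier with the instance name.  Real systemd escaping (%i vs
# %I) is richer; plain names like "myproject" pass through unchanged.
def instantiate(template: str, unit_name: str) -> str:
    instance = unit_name.split("@", 1)[1].removesuffix(".service")
    return template.replace("%i", instance)

exec_start = ("systemd-nspawn --settings=override --read-only "
              "--machine=%i bin/fossil")
print(instantiate(exec_start, "fossil@myproject.service"))
```

This is why enabling `fossil@otherthing` needs no new unit file: only the `--machine=` argument changes, and the per-repo differences live in each machine's own `*.nspawn` file.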
If you aren’t using a front-end proxy and want Fossil exposed to the world via HTTPS, you might say this instead in the `*.nspawn` file:

    Parameters=bin/fossil server \
        --cert /path/to/cert.pem \
        --create \
        --jsmode bundled \
        --port 443 \
        --user admin \
        museum/repo.fossil

You would also need to un-drop the `CAP_NET_BIND_SERVICE` capability to allow Fossil to bind to this low-numbered port.

We use the `systemd` template file feature to allow multiple Fossil servers running on a single machine, each on a different TCP port, as when proxying them out as subdirectories of a larger site. To add another project, you must first clone the base “machine” layer:

    $ sudo machinectl clone myproject otherthing

That will not only create a clone of `/var/lib/machines/myproject` as `../otherthing`, it will also create a matching `otherthing.nspawn` file for you as a copy of the first one. Adjust its contents to suit, then enable and start it as above.

[mcfad]: https://www.freedesktop.org/software/systemd/man/machinectl.html#Files%20and%20Directories

### 6.3.1 <a id="nspawn-rhel"></a>Getting It Working on a RHEL Clone

The biggest difference between doing this on OSes like CentOS versus Ubuntu is that RHEL (thus also its clones) doesn’t ship btrfs in its kernel, so its package repositories offer no `mkfs.btrfs`, which [`machinectl`][mctl] depends on for achieving its various purposes. Fortunately, there are workarounds.

First, the `apt install` command above becomes:

    $ sudo dnf install systemd-container

Second, you have to hack around the lack of `machinectl import-tar`:

    $ rootfs=/var/lib/machines/fossil
    $ sudo mkdir -p $rootfs
    $ docker container export fossil | sudo tar -xf - -C $rootfs

The parent directory path in the `rootfs` variable is important, because although we aren’t able to use `machinectl` on such systems, the `systemd-nspawn` developers assume you’re using them together; when you give `--machine`, it assumes the `machinectl` directory scheme.
You could instead use `--directory`, allowing you to store the rootfs wherever you like, but why make things difficult? It’s a perfectly sensible |
︙
Changes to www/contribute.wiki.
︙
definition of that term is up to the project leader.

<h2>2.0 Submitting Patches</h2>

Suggested changes or bug fixes can be submitted by creating a patch against the current source tree:

<pre>fossil diff -i > my-change.patch</pre>

Alternatively, you can create a binary patch:

<pre>fossil patch create my-change.db</pre>

Post patches to [https://fossil-scm.org/forum | the forum] or email them to <a href="mailto:drh@sqlite.org">drh@sqlite.org</a>. Be sure to describe in detail what the patch does and which version of Fossil it is written against. It's best to make patches against tip-of-trunk rather than against past releases.
︙
Changes to www/custom_ticket.wiki.
<title>Customizing The Ticket System</title>

<h2>Introduction</h2>

This guide will explain how to add the "assigned_to" and "opened_by" fields to the ticket system in Fossil, as well as making the system more useful. You must have "admin" access to the repository to implement these instructions.

<h2>First modify the TICKET table</h2>

Click on the "Admin" menu, then "Tickets", then "Table". After the other fields and before the final ")", insert:

<pre>
, assigned_to TEXT, opened_by TEXT
</pre>

And "Apply Changes". You have just added two more fields to the ticket database!

NOTE: I won't tell you to "Apply Changes" after each step from here on out.

Now, how do you use these fields?

<h2>Next add assignees</h2>

Back to the "Tickets" admin page, and click "Common". Add something like this:

<pre>
set assigned_choices {
  unassigned
  tom
  dick
  harriet
}
</pre>

Obviously, choose names corresponding to the logins on your system. The 'unassigned' entry is important, as it prevents you from having a NULL in that field (which causes problems later when editing).

<h2>Now modify the 'new ticket' page</h2>

Back to the "Tickets" admin page, and click "New Ticket Page". This is a little more tricky.
Edit the top part:

<verbatim>
if {[info exists submit]} {
  set status Open
  set opened_by $login
  set assigned_to "unassigned"
  submit_ticket
}
</verbatim>

Note the "set opened_by" bit -- that will automatically set the "opened_by" field to the login name of the bug reporter.

Now, skip to the part with "EMail" and modify it like so:

<verbatim>
<th1>enable_output [expr {"$login" eq "anonymous"}]</th1>
<tr>
  <td align="right">
    EMail:
    <input type="text" name="private_contact" value="$<private_contact>" size="30">
  </td>
  <td>
    <u>Not publicly visible</u>. Used by developers to contact you with questions.
  </td>
</tr>
<th1>enable_output 1</th1>
</verbatim>

This bit of code will get rid of the "email" field entry for logged-in users. Since we know the user's information, we don't have to ask for it.

NOTE: it might be good to automatically scoop up the user's email and put it here.

You might also want to enable people to actually assign the ticket to a specific person during creation. For this to work, you need to add the code for "assigned_to" as shown below under the heading "Modify the 'edit ticket' page". This will give you an additional combobox where you can choose a person during ticket creation.

<h2>Modify the 'view ticket' page</h2>

Look for the text "Contact:" (about halfway through). Then insert these lines after the closing tr tag and before the "enable_output" line:

<verbatim>
<td align="right">Assigned to:</td><td bgcolor="#d0d0d0">
  $<assigned_to>
</td>
<td align="right">Opened by:</td><td bgcolor="#d0d0d0">
  $<opened_by>
</td>
</verbatim>

This will add a row which displays these two fields, in the event the user has <a href="./caps/ref.html#w">ticket "edit" capability</a>.

<h2>Modify the 'edit ticket' page</h2>

Before the "Severity:" line, add this:

<verbatim>
<tr>
  <td align="right">Assigned to:</td>
  <td>
    <th1>combobox assigned_to $assigned_choices 1</th1>
  </td>
</tr>
</verbatim>

That will give you a drop-down list of assignees.
The first argument to the TH1 command 'combobox' is the database field with which the combobox is associated. The next argument is the list of choices you want to show in the combobox (and that you specified in the second step above). The last argument should be 1 for a true combobox (see the <a href="th1.md#combobox">TH1 documentation</a> for details).

Now, similar to the previous section, look for "Contact:" and add this:

<verbatim>
<tr>
  <td align="right">Reported by:</td>
  <td>
    <input type="text" name="opened_by" size="40" value="$<opened_by>">
  </td>
</tr>
</verbatim>

<h2>What next?</h2>

Now you can add custom reports which select based on the person to whom the ticket is assigned. For example, an "Assigned to me" report could be:

<verbatim>
SELECT
  CASE WHEN status IN ('Open','Verified') THEN '#f2dcdc'
       WHEN status='Review' THEN '#e8e8e8'
       WHEN status='Fixed' THEN '#cfe8bd'
       WHEN status='Tested' THEN '#bde5d6'
       WHEN status='Deferred' THEN '#cacae5'
       ELSE '#c8c8c8' END AS 'bgcolor',
  substr(tkt_uuid,1,10) AS '#',
  datetime(tkt_mtime) AS 'mtime',
  type, status, subsystem, title
FROM ticket
WHERE assigned_to=user()
</verbatim>
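Building on that, here is a hypothetical companion report — it assumes only the same two added columns and the 'unassigned' placeholder described above, and the ORDER BY is an arbitrary choice — that also surfaces tickets nobody has claimed yet:

<verbatim>
SELECT
  substr(tkt_uuid,1,10) AS '#',
  datetime(tkt_mtime) AS 'mtime',
  type, status, subsystem, title
FROM ticket
WHERE assigned_to=user() OR assigned_to='unassigned'
ORDER BY tkt_mtime DESC
</verbatim>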
Changes to www/customgraph.md.
# Customizing the Timeline Graph

Beginning with version 1.33, Fossil gives users and skin authors significantly more control over the look and feel of the timeline graph.

## <a id="basic-style"></a>Basic Style Options

Fossil includes several options for changing the graph's style without having to delve into CSS. These can be found in the details.txt file of your skin or under Admin/Skins/Details in the web UI.

* **`timeline-arrowheads`**

  Set this to `0` to hide arrowheads on primary child lines.

* **`timeline-circle-nodes`**

  Set this to `1` to make check-in nodes circular instead of square.

* **`timeline-color-graph-lines`**

  Set this to `1` to colorize primary child lines.

* **`white-foreground`**

  Set this to `1` if your skin uses white (or any light color) text. This tells Fossil to generate darker background colors for branches.

## <a id="adv-style"></a>Advanced Styling
︙
latter, less obvious type.

## <a id="pos-elems"></a>Positioning Elements

These elements aren't intended to be seen. They're only used to help position the graph and its visible elements.

* <a id="tl-canvas"></a>**`.tl-canvas`**

  Set the left and right margins on this class to give the desired amount of space between the graph and its adjacent columns in the timeline.

  **Additional Classes**

  * `.sel`: See [`.tl-node`](#tl-node) for more information.

* <a id="tl-rail"></a>**`.tl-rail`**

  Think of rails as invisible vertical lines on which check-in nodes are placed. The more simultaneous branches in a graph, the more rails required to draw it. Setting the `width` property on this class determines the maximum spacing between rails. This spacing is automatically reduced as the number of rails increases. If you change the `width` of `.tl-node` elements, you'll probably need to change this value, too.

* <a id="tl-mergeoffset"></a>**`.tl-mergeoffset`**

  A merge line often runs vertically right beside a primary child line. This class's `width` property specifies the maximum spacing between the two. Setting this value to `0` will eliminate the vertical merge lines. Instead, the merge arrow will extend directly off the primary child line. As with rail spacing, this is also adjusted automatically as needed.
* <a id="tl-nodemark"></a>**`.tl-nodemark`**

  In the timeline table, the second cell in each check-in row contains an invisible div with this class. These divs are used to determine the vertical position of the nodes. By setting the `margin-top` property, you can adjust this position.

## <a id="vis-elems"></a>Visible Elements

These are the elements you can actually see on the timeline graph: the nodes, arrows, and lines. Each of these elements may also have additional classes attached to them, depending on their context.

* <a id="tl-node"></a>**`.tl-node`**

  A node exists for each check-in in the timeline.

  **Additional Classes**

  * `.leaf`: Specifies that the check-in is a leaf (i.e. that it has no children in the same branch).
  * `.merge`: Specifies that the check-in contains a merge.
  * `.sel`: When the user clicks a node to designate it as the beginning of a diff, this class is added to both the node itself and the [`.tl-canvas`](#tl-canvas) element. The class is removed from both elements when the node is clicked again.

* <a id="tl-arrow"></a>**`.tl-arrow`**

  Arrows point from parent nodes to their children. Technically, this class is just for the arrowhead. The rest of the arrow is composed of [`.tl-line`](#tl-line) elements.

  There are six additional classes that are used to distinguish the different types of arrows. However, only these combinations are valid:

  * `.u`: Up arrow that points to a child from its primary parent.
  * `.u.sm`: Smaller up arrow, used when there is limited space between parent and child nodes.
  * `.merge.l` or `.merge.r`: Merge arrow pointing either to the left or right.
  * `.warp`: A timewarped arrow (always points to the right), used when a misconfigured clock makes a check-in appear to have occurred before its parent ([example](https://www.sqlite.org/src/timeline?c=2010-09-29&nd)).

* <a id="tl-line"></a>**`.tl-line`**

  Along with arrows, lines connect parent and child nodes.
  Line thickness is determined by the `width` property, regardless of whether the line is horizontal or vertical. You can also use borders to create special line styles. Here's a CSS snippet for making dotted merge lines:

      .tl-line.merge {
        width: 0;
        background: transparent;
        border: 0 dotted #000;
      }
      .tl-line.merge.h {
        border-top-width: 1px;
      }
      .tl-line.merge.v {
        border-left-width: 1px;
      }

  **Additional Classes**

  * `.merge`: A merge line.
  * `.h` or `.v`: Horizontal or vertical.
  * `.warp`: A timewarped line.
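As a concrete sketch of the `.tl-node`/`.tl-rail` relationship described above — the pixel values here are arbitrary examples for illustration, not Fossil defaults — enlarging the nodes usually calls for widening the rails to match:

    /* Bigger check-in nodes... */
    .tl-node {
      width: 12px;
      height: 12px;
    }
    /* ...so give parallel branch lines more maximum spacing to match. */
    .tl-rail {
      width: 30px;
    }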
︙
Changes to www/customskin.md.
︙
When cloning a repository, the skin of the new repository is initialized to the skin of the repository from which it was cloned.

# Structure Of A Fossil Web Page

Every HTML page generated by Fossil has the same basic structure:

    |  Fossil-Generated HTML Header  |
    |  Skin Header                   |
    |  Fossil-Generated Content      |
    |  Skin Footer                   |
    |  Fossil-Generated HTML Footer  |

By default, Fossil starts every generated HTML page with this:

    <html>
    <head>
    <base href="...">
    <meta http-equiv="Content-Security-Policy" content="....">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>....</title>
    <link rel="stylesheet" href="..." type="text/css">
    </head>
    <body class="FEATURE">

Fossil used to require a static version of this in every skin’s Header area, but over time, we have found good cause to generate multiple elements at runtime. One such is the `FEATURE` class on the generated `<body>` tag, being either the top-level HTTP request routing element (e.g. `doc`) or an aggregate feature class that groups multiple routes under a single name. A prime example is `forum`, which groups the `/forummain`, `/forumpost`, and `/forume2` routes, allowing per-feature CSS.
For instance, to style `<blockquote>` tags specially for forum posts written in Markdown, leaving all other block quotes alone, you could say:

    body.forum div.markdown blockquote {
        margin-left: 10px;
    }

You can [override this generated HTML header](#override) by including a “`<body>`” tag somewhere in the Header area of the skin, but it is almost always best to limit a custom skin’s Header section to something like this:

<div class="sidebar" id="version-2.24">Prior to Fossil 2.24, we used generic `<div>` elements to mark up these sections of the header, but we switched to these semantic tag names to give browser accessibility features more freedom to do intelligent things with the page content. Those who made custom skins based on the old way of doing things will need to track this change when upgrading, else the corresponding CSS will mistarget the page header elements. Also, if you’re using Fossil’s chat feature, failing to track this change will cause it to miscalculate the message area size, resulting in double scrollbars. Simply diffing your custom header in the skin editor against the stock version should be sufficient to show what you need to change.</div>

    <header>
      ...
    </header>
    <nav class="mainmenu" title="Main Menu">
      ...
    </nav>
    <nav id="hbdrop" class="hbdrop" title="sitemap"></nav>

See the stock skins’ headers for ideas of what to put in place of the ellipses.

The Fossil-generated Content section immediately follows this Header. It will look like this:

    <div class="content">
      ... Fossil-generated content here ...
    </div>

After the Content is the custom Skin Footer section, which should follow this template:

    <footer>
      ... skin-specific stuff here ...
    </footer>

As with the `<header>` change called out above, this, too, is a breaking change in Fossil 2.24.
Finally, Fossil always adds its own footer (unless overridden) to close out the generated HTML:

    </body>
    </html>

## <a id="mainmenu"></a>Changing the Main Menu Contents

As of Fossil 2.15, the actual text content of the skin’s main menu is no longer part of the skin proper if you’re using one of the stock skins. If you look at the Header section of the skin, you’ll find a `<div class="mainmenu">` element whose contents are set by a short
︙
Notice that the `<html>`, `<head>`, and opening `<body>` elements at the beginning of the document, and the closing `</body>` and `</html>` elements at the end are automatically generated by Fossil. This is recommended. However, for maximum design flexibility, Fossil allows those elements to be supplied as part of the configurable Skin Header and Skin Footer. If the Skin Header contains the text "`<body`", then Fossil assumes that the Skin Header and Skin Footer will handle all of the `<html>`, `<head>`, and `<body>` text itself, and the Fossil-generated header and footer will be blank. When overriding the HTML Header in this way, you will probably want to use some of the [TH1 variables documented below](#vars) such as `$stylesheet_url` to avoid hand-writing code that Fossil can generate for you.

# Designing, Debugging, and Installing A Custom Skin

It is possible to develop a new skin from scratch. But a better and easier approach is to use one of the existing built-in skins as a baseline and make incremental modifications, testing after each step, to obtain the desired result.

The skin is controlled by five files:

<dl>
<dt><b>css.txt</b></dt>
<dd>The css.txt file is the text of the CSS for Fossil. Fossil might add additional CSS elements after the css.txt file, if it sees that the css.txt omits some CSS components that Fossil needs.
But for the most part, the content of the css.txt is the CSS for the page.</dd>

<dt><b>details.txt</b></dt>
<dd>The details.txt file is a short list of settings that control the look and feel, mostly of the timeline. The default details.txt file looks like this:

<pre>
pikchr-background: ""
pikchr-fontscale: ""
pikchr-foreground: ""
pikchr-scale: ""
timeline-arrowheads: 1
timeline-circle-nodes: 1
timeline-color-graph-lines: 1
white-foreground: 0
</pre>

The three "timeline-" settings in details.txt control the appearance of certain aspects of the timeline graph. The number on the right is a boolean - "1" to activate the feature and "0" to disable it. The "white-foreground:" setting should be set to "1" if the page color has light-color text on a darker background, and "0" if the page has dark text on a light-colored background.
︙
empty strings, then they should be floating point values (close to 1.0) that specify relative scaling of the fonts in pikchr diagrams and other elements of the diagrams, respectively.
</dd>

<dt><b>footer.txt</b> and <b>header.txt</b></dt>
<dd>The footer.txt and header.txt files contain the Skin Footer and Skin Header respectively. Of these, the Skin Header is the most important, as it contains the markup used to generate the banner and menu bar for each page. Both the footer.txt and header.txt files are [processed using TH1](#headfoot) prior to being output as part of the overall web page.</dd>

<dt><b>js.txt</b></dt>
<dd>The js.txt file is optional. It is intended to be javascript. The complete text of this javascript might be inserted into the Skin Footer, after being processed using TH1, using code like the following in the "footer.txt" file:

<pre>
<script nonce="$nonce">
<th1>styleScript</th1>
</script>
</pre>

The js.txt file was originally used to insert javascript that controls the hamburger menu in the default skin. More recently, the javascript for the hamburger menu was moved into a separate built-in file.
Skins that use the hamburger menu typically cause the javascript to be loaded by including the following TH1 code in the "header.txt" file:

<pre>
<th1>builtin_request_js hbmenu.js</th1>
</pre>

The difference between styleScript and builtin_request_js is that the styleScript command interprets the file using TH1 and injects the content directly into the output stream, whereas the builtin_request_js command inserts the javascript verbatim and does so at some unspecified future time down inside the Fossil-generated footer. The built-in skins of Fossil originally used the styleScript command to load the hamburger menu javascript, but as of version 2.15 switched to using the builtin_request_js method. You can use either approach in custom skins that you write yourself.

Note that the "js.txt" file is *not* automatically inserted into the generated HTML for a page. You, the skin designer, must cause the javascript to be inserted by issuing appropriate TH1 commands in the "header.txt" or "footer.txt" files.</dd>
</dl>

Developing a new skin is simply a matter of creating appropriate versions of these five control files.

### Skin Development Using The Web Interface

Users with admin privileges can use the Admin/Skin configuration page
︙
did not change. After you have finished work on your skin, the caches should synchronize with your new design and you can reactivate your web browser's cache and take it out of developer mode.

## <a id="headfoot"></a>Header and Footer Processing

The `header.txt` and `footer.txt` control files of a skin are the HTML text of the Skin Header and Skin Footer, except that before being inserted into the output stream, the text is run through a [TH1 interpreter](./th1.md) that might adjust the text as follows:

* All text within <th1>...</th1> is omitted from the output and is instead run as a TH1 script. That TH1 script has the opportunity to insert new text in place of itself, or to inhibit or enable the output of subsequent text.

* Text of the form "$NAME" or "$<NAME>" is replaced with the value of the TH1 variable NAME.
For example, the first few lines of a typical Skin Header will look like this:

    <div class="header">
      <div class="title"><h1>$<project_name></h1>$<title></div>

After variables are substituted by TH1, that will look more like this:

    <div class="header">
      <div class="title"><h1>Project Name</h1>Page Title</div>

As you can see, two TH1 variable substitutions were done. The same TH1 interpreter is used for both the header and the footer and for all scripts contained within them both. Hence, any global TH1 variables that are set by the header are available to the footer.

## <a id="menu"></a>Customizing the ≡ Hamburger Menu

The menu bar of the default skin has an entry to open a drop-down menu with additional navigation links, represented by the ≡ button (hence the name "hamburger menu"). The Javascript logic to open and close the hamburger menu when the button is clicked is usually handled by a script named "hbmenu.js" that is one of the [built-in resource files](/test-builtin-files) that are part of Fossil.

The ≡ button for the hamburger menu is added to the menu bar by the following TH1 commands in the `header.txt` file, right before the menu bar links:

    html "<a id='hbbtn' href='$home/sitemap'>☰</a>"
    builtin_request_js hbmenu.js

The hamburger button can be repositioned between the other menu links (but the drop-down menu is always left-aligned with the menu bar), or it can be removed by deleting the above statements. The "html" statement inserts the appropriate `<a>` element for the hamburger menu button (some skins require something slightly different - for example, the ardoise skin wants "`<li><a>`"). The "builtin_request_js hbmenu.js" command asks Fossil to include the "hbmenu.js" resource file in the Fossil-generated footer.

The hbmenu.js script requires the following `<div>` element somewhere in your header, in which to build the hamburger menu.
    <div id='hbdrop'></div>

Out of the box, the contents of the panel are populated with the [Site Map](/sitemap), but only if the panel does not already contain any HTML elements (that is, not just comments, plain text or non-presentational white space). So the hamburger menu can be customized by replacing the empty `<div id='hbdrop'></div>` element with a menu structure knitted according to the following template:

    <div id="hbdrop" data-anim-ms="400">
      <ul class="columns" style="column-width: 20em; column-count: auto">
        <!-- NEW GROUP WITH HEADING LINK -->
        <li>
          <a href="$home$index_page">Link: Home</a>
          <ul>
            <li><a href="$home/timeline">Link: Timeline</a></li>
            <li><a href="$home/dir?ci=tip">Link: File List</a></li>
          </ul>
        </li>
        <!-- NEW GROUP WITH HEADING TEXT -->
        <li>
          Heading Text
          <ul>
            <li><a href="$home/doc/trunk/www/customskin.md">Link: Theming</a></li>
            <li><a href="$home/doc/trunk/www/th1.md">Link: TH1 Scripts</a></li>
          </ul>
        </li>
        <!-- NEXT GROUP GOES HERE -->
      </ul>
    </div>

The custom `data-anim-ms` attribute can be added to the panel element to direct the Javascript logic to override the default menu animation duration of 400 ms. A faster animation duration of 80-200 ms may be preferred for smaller menus. The animation is disabled by setting the attribute to `"0"`.

## <a id="vars"></a>TH1 Variables

Before expanding the TH1 within the header and footer, Fossil first initializes a number of TH1 variables to values that depend on repository settings and the specific page being generated.

* **`project_name`** - The project_name variable is filled with the name of the project as configured under the Admin/Configuration menu.

* **`project_description`** - The project_description variable is filled with the description of the project as configured under the Admin/Configuration menu.

* **`title`** - The title variable holds the title of the page being generated. The title variable is special in that it is deleted after the header script runs and before the footer script.
  This is necessary to avoid a conflict with a variable by the same name used in my ticket-screen scripts.

* **`baseurl`** - The root of the URL namespace for this server.

* **`secureurl`** - The same as $baseurl except that if the scheme is "http:" it is changed to "https:"

* **`home`** - The $baseurl without the scheme and hostname. For example, if the $baseurl is "http://projectX.com/cgi-bin/fossil" then the $home will be just "/cgi-bin/fossil".

* **`index_page`** - The landing page URI as specified by the Admin/Configuration setup page.

* **`current_page`** - The name of the page currently being processed, without the leading "/" and without query parameters. Examples: "timeline", "doc/trunk/README.txt", "wiki".

* **`csrf_token`** - A token used to prevent cross-site request forgery.

* **`default_csp`** - [Fossil’s default CSP](./defcsp.md) unless [overridden by custom TH1 code](./defcsp.md#th1). Useful within the skin for inserting the CSP into a `<meta>` tag within [a custom `<head>` element](#headfoot).

* **`nonce`** - The value of the cryptographic nonce for the request being processed.

* **`release_version`** - The release version of Fossil. Ex: "1.31"

* **`manifest_version`** - A prefix on the check-in hash of the specific version of fossil that is running. Ex: "\[47bb6432a1\]"

* **`manifest_date`** - The date of the source-code check-in for the version of fossil that is running.

* **`compiler_name`** - The name and version of the compiler used to build the fossil executable.

* **`login`** - This variable only exists if the user has logged in. The value is the username of the user.

* **`stylesheet_url`** - A URL for the internal style-sheet maintained by Fossil.

* **`logo_image_url`** - A URL for the logo image for this project, as configured on the Admin/Logo page.

* **`background_image_url`** - A URL for a background image for this project, as configured on the Admin/Logo page.
All of the above are variables in the sense that either the header or the footer is free to change or erase them. But they should probably be treated as constants. New predefined values are likely to be added in future releases of Fossil. |
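To show how several of these variables might combine in practice, here is a sketch of a fragment one could put in a custom `header.txt`. The surrounding markup is illustrative only — it is not taken from any stock skin — but the `$<NAME>` substitutions and the TH1 `info exists` and `html` commands work as described above:

    <header>
      <h1>$<project_name>: $<title></h1>
      <th1>
        if {[info exists login]} {
          html "Signed in as $login"
        } else {
          html "<a href='$home/login'>Login</a>"
        }
      </th1>
    </header>

Because `login` is only defined for logged-in users, the `info exists` test is the safe way to branch on it.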
︙
Changes to www/defcsp.md.
︙
bugs that might lead to a vulnerability.

## The Default Restrictions

The default CSP used by Fossil is as follows:

<pre>
default-src 'self' data:;
script-src 'self' 'nonce-$nonce';
style-src 'self' 'unsafe-inline';
img-src * data:;
</pre>

The default is recommended for most installations. However, the site administrators can overwrite this default CSP using the [default-csp setting](/help?cmd=default-csp). For example, CSP restrictions can be completely disabled by setting the default-csp to:

    default-src *;

The following sections detail the meaning of the default CSP setting.

### <a id="base"></a> default-src 'self' data:

This policy means mixed-origin content isn’t allowed, so you can’t refer to resources on other web domains. Browsers will ignore a link like the one in the following Markdown under our default CSP:

    ![fancy 3D Fossil logotype](https://i.imgur.com/HalpMgt.png)

If you look in the browser’s developer console, you should see a CSP error when attempting to render such a page.

The default policy does allow inline `data:` URIs, which means you could [data-encode][de] your image content and put it inline within the document:

    ![small inline image](data:image/gif;base64,R0lGODlh...)

That method is best used for fairly small resources. Large `data:` URIs are hard to read and edit. There are secondary problems as well: if you put a large image into a Fossil forum post this way, anyone subscribed to email alerts will get a copy of the raw URI text, which can amount to pages and pages of [ugly Base64-encoded text][b64].
For inline images within [embedded documentation][ed], it suffices to store the referred-to files in the repo and then refer to them using repo-relative URLs: ![large inline image](./inlineimage.jpg) This avoids bloating the doc text with `data:` URI blobs: There are many other cases, [covered below](#serving). [b64]: https://en.wikipedia.org/wiki/Base64 [svr]: ./server/ |
︙ | ︙ | |||
97 98 99 100 101 102 103 | This policy allows CSS information to come from separate files hosted under the Fossil repo server’s Internet domain. It also allows inline CSS `<style>` tags within the document text. The `'unsafe-inline'` declaration allows CSS within individual HTML elements: | | | 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 | This policy allows CSS information to come from separate files hosted under the Fossil repo server’s Internet domain. It also allows inline CSS `<style>` tags within the document text. The `'unsafe-inline'` declaration allows CSS within individual HTML elements: <p style="margin-left: 4em">Indented text.</p> As the "`unsafe-`" prefix on the name implies, the `'unsafe-inline'` feature is suboptimal for security. However, there are a few places in the Fossil-generated HTML that benefit from this flexibility and the work-arounds are verbose and difficult to maintain. Furthermore, the harm that can be done with style injections is far less than the harm possible with injected javascript. And so the |
︙ | ︙ | |||
172 173 174 175 176 177 178 | offers free Fossil repository hosting to anyone on the Internet, all served under the same `http://chiselapp.com/user/$NAME/$REPO` URL scheme. Any one of those hundreds of repositories could trick you into visiting their repository home page, set to [an HTML-formatted embedded doc page][hfed] via Admin → Configuration → Index Page, with this content: | | | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | offers free Fossil repository hosting to anyone on the Internet, all served under the same `http://chiselapp.com/user/$NAME/$REPO` URL scheme. Any one of those hundreds of repositories could trick you into visiting their repository home page, set to [an HTML-formatted embedded doc page][hfed] via Admin → Configuration → Index Page, with this content: <script src="/doc/trunk/bad.js"></script> That script can then do anything allowed in JavaScript to *any other* Chisel repository your browser can access. The possibilities for mischief are *vast*. For just one example, if you have login cookies on four different Chisel repositories, your attacker could harvest the login cookies for all of them through this path if we allowed Fossil to serve JavaScript files under the same CSP policy as we do for CSS files. |
︙ | ︙ | |||
196 197 198 199 200 201 202 | path around this restriction. If you are serving a Fossil repository that has any user you do not implicitly trust to a level that you would willingly run any JavaScript code they’ve provided, blind, you **must not** give the `--with-th1-docs` option when configuring Fossil, because that allows substitution of the [pre-defined `$nonce` TH1 variable](./th1.md#nonce) into [HTML-formatted embedded docs][hfed]: | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | path around this restriction. If you are serving a Fossil repository that has any user you do not implicitly trust to a level that you would willingly run any JavaScript code they’ve provided, blind, you **must not** give the `--with-th1-docs` option when configuring Fossil, because that allows substitution of the [pre-defined `$nonce` TH1 variable](./th1.md#nonce) into [HTML-formatted embedded docs][hfed]: <script src="/doc/trunk/bad.js" nonce="$nonce"></script> Even with this feature enabled, you cannot put `<script>` tags into Fossil Wiki or Markdown-formatted content, because our HTML generators for those formats purposely strip or disable such tags in the output. Therefore, if you trust those users with check-in rights to provide JavaScript but not those allowed to file tickets, append to wiki articles, etc., you might justify enabling TH1 docs on your repository, |
︙ | ︙ | |||
329 330 331 332 333 334 335 | Changing this setting is the easiest way to set a nonstandard CSP on your site. Because a blank setting tells Fossil to use its hard-coded default CSP, you have to say something like the following to get a repository without content security policy restrictions: | | | 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 | Changing this setting is the easiest way to set a nonstandard CSP on your site. Because a blank setting tells Fossil to use its hard-coded default CSP, you have to say something like the following to get a repository without content security policy restrictions: $ fossil set -R /path/to/served/repo.fossil default-csp 'default-src *' We recommend that, instead of using the command line to change this setting, you do it via the repository’s web interface, in Admin → Settings. Write your CSP rules in the edit box marked "`default-csp`". Do not add hard newlines in that box: the setting needs to be on a single long line. Beware that changes take effect immediately, so be careful with your edits: you could end up locking
︙ | ︙ | |||
364 365 366 367 368 369 370 | `default-csp` setting and uses *that* to inject the value into generated HTML pages in its stock configuration. This means that another way you can override this value is to use the [`th1-setup` hook script](./th1-hooks.md), which runs before TH1 processing happens during skin processing: | | | 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 | `default-csp` setting and uses *that* to inject the value into generated HTML pages in its stock configuration. This means that another way you can override this value is to use the [`th1-setup` hook script](./th1-hooks.md), which runs before TH1 processing happens during skin processing: $ fossil set th1-setup "set default_csp {default-src 'self'}" After [the above](#admin-ui), this is the cleanest method. [thvar]: ./customskin.md#vars |
︙ | ︙ |
Changes to www/delta-manifests.md.
1 2 3 4 5 6 7 8 9 10 11 12 13 | # Delta Manifests This article describes "delta manifests," a special-case form of checkin manifest which is intended to take up far less space than a normal checkin manifest, in particular for repositories with many files. We'll see, however, that the space savings, if indeed there are any, come with some caveats. This article assumes that the reader is at least moderately familiar with Fossil's [artifact file format](./fileformat.wiki), in particular the structure of checkin manifests, and it won't make much sense to readers unfamiliar with that topic. | > > > > < < < < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 | # Delta Manifests <div class="sidebar">Do not confuse these with the core [Fossil delta format](./delta_format.wiki). This document describes an optional feature not enabled by default.</div> This article describes "delta manifests," a special-case form of checkin manifest which is intended to take up far less space than a normal checkin manifest, in particular for repositories with many files. We'll see, however, that the space savings, if indeed there are any, come with some caveats. This article assumes that the reader is at least moderately familiar with Fossil's [artifact file format](./fileformat.wiki), in particular the structure of checkin manifests, and it won't make much sense to readers unfamiliar with that topic. # Background and Motivation of Delta Manifests A checkin manifest includes a list of every file in that checkin. A moderately-sized project can easily have a thousand files, and every checkin manifest will include those thousand files. As of this writing Fossil's own checkins contain 989 files and the manifests are 80kb each. Thus a checkin which changes only 2 bytes of source code |
︙ | ︙ |
Changes to www/delta_encoder_algorithm.wiki.
1 | <title>Fossil Delta Encoding Algorithm</title> | | | 1 2 3 4 5 6 7 8 9 | <title>Fossil Delta Encoding Algorithm</title> <h2>Abstract</h2> <p>A key component for the efficient storage of multiple revisions of a file in fossil repositories is the use of delta-compression, i.e. to store only the changes between revisions instead of the whole file.</p> |
︙ | ︙ | |||
105 106 107 108 109 110 111 | to <a href="delta_format.wiki#copyrange">copy a range</a>, or </li> <li>move the window forward one byte. </li> </ul> </p> | < | > | 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | to <a href="delta_format.wiki#copyrange">copy a range</a>, or </li> <li>move the window forward one byte. </li> </ul> </p> <verbatim type="pikchr float-right"> TARGET: [ scale = 0.8 down "Target" bold box fill palegreen width 150% height 200% "Processed" GI: box same as first box fill yellow height 25% "Gap → Insert" CC: box same fill orange height 200% "Common → Copy" W: box same as GI fill lightgray width 125% height 200% "Window" bold box same as CC height 125% "" |
︙ | ︙ | |||
131 132 133 134 135 136 137 | B1: box fill white B2: box fill orange height 200% B3: box fill white height 200% ] with .nw at 0.75 right of TARGET.ne arrow from TARGET.W.e to ORIGIN.B2.w "Signature" aligned above </verbatim> | < | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 | B1: box fill white B2: box fill orange height 200% B3: box fill white height 200% ] with .nw at 0.75 right of TARGET.ne arrow from TARGET.W.e to ORIGIN.B2.w "Signature" aligned above </verbatim> <p>To make this decision the encoder first computes the hash value for the NHASH bytes in the window and then looks at all the locations in the "origin" which have the same signature. This part uses the hash table created by the pre-processing step to efficiently find these locations.</p> |
︙ | ︙ | |||
and a new byte is shifted in.<p> <h3 id="rhdef">4.1 Definition</h3> <p>Assuming an array Z of NHASH bytes (indexing starting at 0) the hash V is computed via</p> < | | < < | | < | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 | and a new byte is shifted in.<p> <h3 id="rhdef">4.1 Definition</h3> <p>Assuming an array Z of NHASH bytes (indexing starting at 0) the hash V is computed via</p> <div align="center"><img src="encode1.gif"></div> <div align="center"><img src="encode2.gif"></div> <div align="center"><img src="encode3.gif"></div> where A and B are unsigned 16-bit integers (hence the <u>mod</u>), and V is a 32-bit unsigned integer with B as MSB, A as LSB. <h3 id="rhincr">4.2 Incremental recalculation</h3> <p>Assuming an array Z of NHASH bytes (indexing starting at 0) with hash V (and components A and B), the dropped byte <img src="encode4.gif" align="center">, and the new byte <img src="encode5.gif" align="center">, the new hash can be computed incrementally via: </p> <div align="center"><img src="encode6.gif"></div> <div align="center"><img src="encode7.gif"></div> <div align="center"><img src="encode8.gif"></div> <p>For A, the regular sum, it can be seen easily that this is the correct way of recomputing that component.</p> <p>For B, the weighted sum, note first that <img src="encode4.gif" align="center"> has the weight NHASH in the sum, so that is what has to be removed. Then adding in <img src="encode9.gif" align="center"> adds one weight factor to all the other values of Z, and at last adds in <img src="encode5.gif" align="center"> with weight 1, also generating the correct new sum.</p>
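The definition and incremental update above can be sketched in Python as follows. Here A is the plain sum of the window bytes, B the weighted sum in which the oldest byte carries weight NHASH, both taken mod 2^16, and V packs B into the upper and A into the lower 16 bits. The value NHASH = 16 matches Fossil's implementation but is an assumption not stated in this excerpt; the function names are illustrative.

```python
NHASH = 16  # window size; assumed to match Fossil's encoder

def hash_init(z: bytes) -> tuple[int, int]:
    """Compute (A, B) for an NHASH-byte window Z from scratch."""
    a = sum(z[:NHASH]) & 0xFFFF                                  # plain sum
    b = sum((NHASH - i) * z[i] for i in range(NHASH)) & 0xFFFF   # weighted sum
    return a, b

def hash_once(a: int, b: int) -> int:
    """Combine the components: B is the MSB half of V, A the LSB half."""
    return (b << 16) | a

def hash_next(a: int, b: int, dropped: int, added: int) -> tuple[int, int]:
    """Slide the window one byte: drop `dropped`, shift in `added`."""
    a = (a - dropped + added) & 0xFFFF
    b = (b - NHASH * dropped + a) & 0xFFFF  # dropped byte carried weight NHASH
    return a, b

# The incremental result must agree with recomputation from scratch.
data = bytes(range(40))
a, b = hash_init(data[0:NHASH])
for i in range(1, len(data) - NHASH + 1):
    a, b = hash_next(a, b, data[i - 1], data[i - 1 + NHASH])
    assert (a, b) == hash_init(data[i:i + NHASH])
```

The check in the loop is exactly the argument made in the text: removing the oldest byte with weight NHASH and adding the new A restores the weighted sum.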
Changes to www/delta_format.wiki.
︙ | ︙ | |||
188 189 190 191 192 193 194 | The format currently handles only 32 bit integer numbers. They are written base-64 encoded, MSB first, and without leading "0"-characters, except if they are significant (i.e. 0 => "0"). The base-64 encoding uses one character for each 6 bits of the integer to be encoded. The encoding characters are: | | | 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | The format currently handles only 32 bit integer numbers. They are written base-64 encoded, MSB first, and without leading "0"-characters, except if they are significant (i.e. 0 => "0"). The base-64 encoding uses one character for each 6 bits of the integer to be encoded. The encoding characters are: <pre> 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~ </pre> The first character encodes the most significant non-zero 6 bits of the integer, the next character the following 6 bits, and so on down to the least significant 6 bits. The minimum number of encoding characters is used. Note that for integers less than 10, the base-64 coding is an ASCII decimal rendering of the number itself. <h1 id="examples">4.0 Examples</h1> <h2 id="examplesint">4.1 Integer encoding</h2> <table> <tr> <th>Value</th> <th>Encoding</th> </tr> <tr> <td>0</td> <td>0</td>
︙ | ︙ | |||
226 227 228 229 230 231 232 | </tr> </table> <h2 id="examplesdelta">4.2 Delta encoding</h2> An example of a delta using the specified encoding is: | | | | | | 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 | </tr> </table> <h2 id="examplesdelta">4.2 Delta encoding</h2> An example of a delta using the specified encoding is: <pre> 1Xb 4E@0,2:thFN@4C,6:scenda1B@Jd,6:scenda5x@Kt,6:pieces79@Qt,F: Example: eskil~E@Y0,2zMM3E;</pre> </pre> This can be taken apart into the following parts: <table> <tr><th>What </th> <th>Encoding </th><th>Meaning </th><th>Details</th></tr> <tr><td>Header</td> <td>1Xb </td><td>Size </td><td> 6246 </td></tr> <tr><td>S-List</td> <td>4E@0, </td><td>Copy </td><td> 270 @ 0 </td></tr> <tr><td> </td> <td>2:th </td><td>Literal </td><td> 2 'th' </td></tr> <tr><td> </td> <td>FN@4C, </td><td>Copy </td><td> 983 @ 268 </td></tr> <tr><td> </td> <td>6:scenda </td><td>Literal </td><td> 6 'scenda' </td></tr> <tr><td> </td> <td>1B@Jd, </td><td>Copy </td><td> 75 @ 1256 </td></tr> <tr><td> </td> <td>6:scenda </td><td>Literal </td><td> 6 'scenda' </td></tr> <tr><td> </td> <td>5x@Kt, </td><td>Copy </td><td> 380 @ 1336 </td></tr> <tr><td> </td> <td>6:pieces </td><td>Literal </td><td> 6 'pieces' </td></tr> <tr><td> </td> <td>79@Qt, </td><td>Copy </td><td> 457 @ 1720 </td></tr> <tr><td> </td> <td>F: Example: eskil</td><td>Literal </td><td> 15 ' Example: eskil'</td></tr> <tr><td> </td> <td>~E@Y0, </td><td>Copy </td><td> 4046 @ 2176 </td></tr> <tr><td>Trailer</td><td>2zMM3E </td><td>Checksum</td><td> -1101438770 </td></tr> </table> The unified diff behind the above delta is <verbatim> bluepeak:(761) ~/Projects/Tcl/Fossil/Devel/devel > diff -u ../DELTA/old ../DELTA/new --- ../DELTA/old 2007-08-23 21:14:40.000000000 -0700 +++ ../DELTA/new 2007-08-23 21:14:33.000000000 -0700 @@ -5,7 +5,7 @@ * If the server does not have write permission on the database 
file, or on the directory containing the database file (and |
︙ | ︙ | |||
293 294 295 296 297 298 299 | single file. Allow diffs against any two arbitrary versions, not just diffs against the current check-out. Allow configuration options to replace tkdiff with some other - visual differ of the users choice. + visual differ of the users choice. Example: eskil. * Ticketing interface (expand this bullet) | | < | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 | single file. Allow diffs against any two arbitrary versions, not just diffs against the current check-out. Allow configuration options to replace tkdiff with some other - visual differ of the users choice. + visual differ of the users choice. Example: eskil. * Ticketing interface (expand this bullet) </verbatim> <h1 id="notes">Notes</h1> <ul> <li>Pure text files generate a pure text delta. |
︙ | ︙ |
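The integer coding described above can be sketched compactly: split the value into 6-bit groups and emit the most significant non-zero group first, using the 64-character alphabet given earlier. The function names below are illustrative, not Fossil's; the printed values reproduce the worked delta example ("1Xb" = 6246 size header, "4E@0," = copy 270 @ 0).

```python
# The 64 encoding characters, in value order, as listed in the format spec.
ZDIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~"
ZVALUE = {c: i for i, c in enumerate(ZDIGITS)}

def put_int(v: int) -> str:
    """Render an unsigned 32-bit integer, MSB first, no redundant leading '0'."""
    if v == 0:
        return "0"
    out = []
    while v > 0:
        out.append(ZDIGITS[v & 0x3F])  # peel off the low 6 bits
        v >>= 6
    return "".join(reversed(out))      # emit most significant group first

def get_int(s: str) -> int:
    """Decode a base-64 integer back to its numeric value."""
    v = 0
    for c in s:
        v = (v << 6) | ZVALUE[c]
    return v

print(put_int(6246))                               # -> 1Xb
print(get_int("4E"), get_int("FN"), get_int("4C"))  # -> 270 983 268
```

Note that for values 0–9 this degenerates to the ASCII decimal digit, as the text points out.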
Changes to www/embeddeddoc.wiki.
1 | <title>Project Documentation</title> | < | 1 2 3 4 5 6 7 8 | <title>Project Documentation</title> Fossil provides a built-in <a href="wikitheory.wiki">wiki</a> that can be used to store the documentation for a project. This is sufficient for many projects. If your project is well-served by wiki documentation, then you need read no further. |
︙ | ︙ | |||
28 29 30 31 32 33 34 | <h1>1.0 Fossil Support For Embedded Documentation</h1> The fossil web interface supports embedded documentation using the "/doc" page. To access embedded documentation, one points a web browser to a fossil URL of the following form: | | | | 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | <h1>1.0 Fossil Support For Embedded Documentation</h1> The fossil web interface supports embedded documentation using the "/doc" page. To access embedded documentation, one points a web browser to a fossil URL of the following form: <pre> <i><baseurl></i><big><b>/doc/</b></big><i><version></i><big><b>/</b></big><i><filename></i> </pre> The <i><baseurl></i> is the main URL used to access the fossil web server. For example, the <i><baseurl></i> for the fossil project itself is [https://fossil-scm.org/home]. If you launch the web server using the "[/help?cmd=ui|fossil ui]" command line, then the <i><baseurl></i> is usually <b>http://localhost:8080/</b>. |
︙ | ︙ | |||
137 138 139 140 141 142 143 | Hyperlinks in Markdown and HTML embedded documents can reference the root of the Fossil repository using the special text "$ROOT" at the beginning of a URL. For example, a Markdown hyperlink to the Markdown formatting rules might be written in the embedded document like this: | | | | | | | | | | 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 | Hyperlinks in Markdown and HTML embedded documents can reference the root of the Fossil repository using the special text "$ROOT" at the beginning of a URL. For example, a Markdown hyperlink to the Markdown formatting rules might be written in the embedded document like this: <verbatim> [Markdown formatting rules]($ROOT/wiki_rules) </verbatim> Depending on how the Fossil server is configured, that hyperlink might be rendered like one of the following: <verbatim> <a href="/wiki_rules">Wiki formatting rules</a> <a href="/cgi-bin/fossil/wiki_rules">Wiki formatting rules</a> </verbatim> So, in other words, the "$ROOT" text is converted into whatever the "<baseurl>" is for the document. This substitution works for HTML and Markdown documents. It does not work for Wiki embedded documents, since with Wiki you can just begin a URL with "/" and it automatically knows to prepend the $ROOT. <h2>2.2 "$CURRENT" In "/doc/" Hyperlinks</h2> Similarly, URLs of the form "/doc/$CURRENT/..." have the check-in hash of the check-in currently being viewed substituted in place of the "$CURRENT" text. This feature, in combination with the "$ROOT" substitution above, allows an absolute path to be used for hyperlinks.
For example, if an embedded document wanted to reference some other document in a separate file named "www/otherdoc.md", it could use a URL like this: <verbatim> [Other Document]($ROOT/doc/$CURRENT/www/otherdoc.md) </verbatim> As with "$ROOT", this substitution only works for Markdown and HTML documents. For Wiki documents, you would need to use a relative URL. <h2 id="th1">2.3 TH1 Documents</h2> Fossil will substitute the value of [./th1.md | TH1 expressions] within
︙ | ︙ | |||
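The "$ROOT" and "$CURRENT" substitutions described above amount to simple text replacement at render time. The following toy Python sketch shows the idea; the function is purely illustrative and not Fossil's actual code (which, for instance, handles the Wiki-format exceptions noted above).

```python
def expand_doc_link(url: str, baseurl: str, checkin: str) -> str:
    """Illustrative only: mimic Fossil's $ROOT/$CURRENT expansion for a
    hyperlink found in an embedded Markdown/HTML document."""
    return url.replace("$ROOT", baseurl.rstrip("/")).replace("$CURRENT", checkin)

link = "$ROOT/doc/$CURRENT/www/otherdoc.md"
print(expand_doc_link(link, "/cgi-bin/fossil/", "9be1b00392"))
# -> /cgi-bin/fossil/doc/9be1b00392/www/otherdoc.md
```

The same link therefore resolves correctly whether the repository is served from the domain root or from a CGI sub-path.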
199 200 201 202 203 204 205 | This file that you are currently reading is an example of embedded documentation. The name of this file in the fossil source tree is "<b>www/embeddeddoc.wiki</b>". You are perhaps looking at this file using the URL: | | | | | | < < | | < | 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | This file that you are currently reading is an example of embedded documentation. The name of this file in the fossil source tree is "<b>www/embeddeddoc.wiki</b>". You are perhaps looking at this file using the URL: <pre>[https://fossil-scm.org/home/doc/trunk/www/embeddeddoc.wiki]</pre> The first part of this path, the "[https://fossil-scm.org/home]", is the base URL. You might have originally typed: [https://fossil-scm.org/]. The web server at the fossil-scm.org site automatically redirects such links by appending "home". The "home" file on fossil-scm.org is really a [./server/any/cgi.md|CGI script] which runs the fossil web service in CGI mode. The "home" CGI script looks like this: <pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre> This is one of the many ways to set up a <a href="./server/">Fossil server</a>. The "<b>/trunk/</b>" part of the URL tells fossil to use the documentation files from the most recent trunk check-in. If you wanted to see an historical version of this document, you could substitute the name of a check-in for "<b>/trunk/</b>". For example, to see the version of this document associated with check-in [9be1b00392], simply replace the "<b>/trunk/</b>" with "<b>/9be1b00392/</b>". You can also substitute the symbolic name for a particular version or branch. For example, you might replace "<b>/trunk/</b>" with "<b>/experimental/</b>" to get the latest version of this document in the "experimental" branch. 
The symbolic name can also be a date and time string in any of the following formats:</p> <ul> <li> <i>YYYY-MM-DD</i> <li> <i>YYYY-MM-DD<b>T</b>HH:MM</i> <li> <i>YYYY-MM-DD<b>T</b>HH:MM:SS</i> </ul> When the symbolic name is a date and time, fossil shows the version of the document that was most recently checked in as of the date and time specified. So, for example, to see what the fossil website looked like at the beginning of 2010, enter: <pre><a href="/doc/2010-01-01/www/index.wiki">https://fossil-scm.org/home/doc/<b>2010-01-01</b>/www/index.wiki </a></pre> The file that encodes this document is stored in the fossil source tree under the name "<b>www/embeddeddoc.wiki</b>" and so that name forms the last part of the URL for this document. As I sit writing this documentation file, I am testing my work by running the "<b>fossil ui</b>" command line and viewing <b>http://localhost:8080/doc/ckout/www/embeddeddoc.wiki</b> in Firefox. I am doing this even though I have not yet checked in the "<b>www/embeddeddoc.wiki</b>" file for the first time. Using the special "<b>ckout</b>" version identifier on the "<b>/doc</b>" page it is easy to make multiple changes to multiple files and see how they all look together before committing anything to the repository. |
Changes to www/encryptedrepos.wiki.
1 | <title>How To Use Encrypted Repositories</title> > | > | | > > | | > | | | | > | | > > | | > > | | > > | | > < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 | <title>How To Use Encrypted Repositories</title> <h2>Introduction</h2> Fossil can be compiled so that it works with encrypted repositories using the [https://www.sqlite.org/see/doc/trunk/www/readme.wiki|SQLite Encryption Extension]. This technical note explains the process. <h2>Building An Encryption-Enabled Fossil</h2> The SQLite Encryption Extension (SEE) is proprietary software and requires [https://sqlite.org/purchase/see|purchasing a license]. Assuming you have an SEE license, the first step of compiling Fossil to use SEE is to create an SEE-enabled version of the SQLite database source code. This alternative SQLite database source file should be called "sqlite3-see.c" and should be placed in the extsrc/ subfolder of the Fossil sources, right beside the public-domain "sqlite3.c" source file. Also make a copy of the SEE-enabled "shell.c" file, renamed as "shell-see.c", and place it in the extsrc/ subfolder beside the original "shell.c". Add the --with-see command-line option to the configuration script to enable the use of SEE on unix-like systems. <pre> ./configure --with-see; make </pre> To build for Windows using MSVC, add the "USE_SEE=1" argument to the "nmake" command line. <pre> nmake -f makefile.msc USE_SEE=1 </pre> <h2>Using Encrypted Repositories</h2> Any Fossil repository whose filename ends with ".efossil" is taken to be an encrypted repository. Fossil will prompt for the encryption password and attempt to open the repository database using that password. Every invocation of fossil on an encrypted repository requires retyping the encryption password.
To avoid excess password typing, consider using the "fossil shell" command, which prompts for the password just once, then reuses it for each subsequent Fossil command entered at the prompt. On Windows, the "fossil server", "fossil ui", and "fossil shell" commands do not (currently) work on an encrypted repository. <h2>Additional Security</h2> Use the FOSSIL_SECURITY_LEVEL environment variable for additional protection. <pre> export FOSSIL_SECURITY_LEVEL=1 </pre> A setting of 1 or greater prevents fossil from trying to remember the previous sync password. <pre> export FOSSIL_SECURITY_LEVEL=2 </pre> A setting of 2 or greater causes all password prompts to be preceded by a random translation matrix similar to the following: <pre> abcde fghij klmno pqrst uvwyz qresw gjymu dpcoa fhkzv inlbt </pre> When entering the password, the user must substitute the letter on the second line that corresponds to the letter on the first line. Uppercase substitutes for uppercase inputs, and lowercase substitutes for lowercase inputs. Letters that are not in the translation matrix (digits, punctuation, and "x") are not modified. For example, given the translation matrix above, if the password is "pilot-9crazy-xube", then the user must type "fmpav-9ekqtb-xirw". This simple substitution cypher helps prevent password capture by keyloggers.
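The translation-matrix substitution above can be expressed in a few lines. This Python sketch (the helper name is hypothetical, not part of Fossil) reproduces the worked example from the text, including the uppercase and pass-through rules.

```python
def apply_matrix(top: str, bottom: str, password: str) -> str:
    """Translate password characters through the two-row matrix.
    Characters absent from the top row (digits, punctuation, 'x')
    pass through unchanged; case is preserved."""
    table = {}
    for p, c in zip(top, bottom):
        if p.isalpha():
            table[p] = c
            table[p.upper()] = c.upper()  # uppercase maps to uppercase
    return "".join(table.get(ch, ch) for ch in password)

top    = "abcde fghij klmno pqrst uvwyz"
bottom = "qresw gjymu dpcoa fhkzv inlbt"
print(apply_matrix(top, bottom, "pilot-9crazy-xube"))  # -> fmpav-9ekqtb-xirw
```

A keylogger capturing "fmpav-9ekqtb-xirw" learns nothing useful without also capturing the one-time matrix shown on screen.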
Changes to www/event.wiki.
︙ | ︙ | |||
71 72 73 74 75 76 77 | There is a hyperlink under the /wikihelp menu that can be used to create new technotes. And there is a submenu hyperlink on technote displays for editing existing technotes. Technotes can also be created using the <b>wiki create</b> command: | < | | | | < < | | | | | < < | | | < | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 | There is a hyperlink under the /wikihelp menu that can be used to create new technotes. And there is a submenu hyperlink on technote displays for editing existing technotes. Technotes can also be created using the <b>wiki create</b> command: <verbatim> fossil wiki create TestTechnote -t now --technote-bgcolor lightgreen technote.md Created new tech note 2021-03-15 13:05:56 </verbatim> This command inserts a light green technote in the timeline at 2021-03-15 13:05:56, with the contents of file <b>technote.md</b> and comment "TestTechnote". Specifying a different time using <b>-t DATETIME</b> will insert the technote at the specified timestamp location in the timeline. Different technotes can have the same timestamp. The first argument to create, <b>TECHNOTE-COMMENT</b>, is the title text for the technote that appears in the timeline. To view all technotes, use the <b>wiki ls</b> command: <verbatim> fossil wiki ls --technote --show-technote-ids z739263a134bf0da1d28e939f4c4367f51ef4c51 2020-12-19 13:20:19 e15a918a8bed71c2ac091d74dc397b8d3340d5e1 2018-09-22 17:40:10 </verbatim> A technote ID is the UUID of the technote. To view an individual technote, use the <b>wiki export</b> command: <verbatim> fossil wiki export --technote version-2.16 Release Notes 2021-07-02 This note describes changes in the Fossil snapshot for ... </verbatim> The <b>-t|--technote</b> option to the <b>export</b> subcommand takes one of three identifiers: <b>DATETIME</b>; <b>TECHNOTE-ID</b>; and <b>TAG</b>. 
See the [/help?cmd=wiki | wiki help] for specifics. Users must have check-in privileges (permission "i") in order to create or edit technotes. In addition, users must have create-wiki |
︙ | ︙ |
Changes to www/faq.tcl.
︙ | ︙ | |||
10 11 12 13 14 15 16 | faq { What GUIs are available for fossil? } { The fossil executable comes with a [./webui.wiki | web-based GUI] built in. Just run: | | | | 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | faq { What GUIs are available for fossil? } { The fossil executable comes with a [./webui.wiki | web-based GUI] built in. Just run: <pre> <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i> </pre> And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the <i>REPOSITORY-FILENAME</i> if you are within an open check-out.) } faq { |
︙ | ︙ | |||
40 41 42 43 44 45 46 | When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add the option "--branch <i>BRANCH-NAME</i>" to make the new check-in be the first check-in for a new branch. If you want to create a new branch whose initial content is the same as an existing check-in, use this command: | | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add the option "--branch <i>BRANCH-NAME</i>" to make the new check-in be the first check-in for a new branch. If you want to create a new branch whose initial content is the same as an existing check-in, use this command: <pre> <b>fossil [/help/branch|branch] new</b> <i>BRANCH-NAME BASIS</i> </pre> The <i>BRANCH-NAME</i> argument is the name of the new branch and the <i>BASIS</i> argument is the name of the check-in that the branch splits off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. |
︙ | ︙ | |||
73 74 75 76 77 78 79 | "--tag <i>TAGNAME</i>" command-line option. You can repeat the --tag option to give a check-in multiple tags. Tags need not be unique. So, for example, it is common to give every released version a "release" tag. If you want to add a tag to an existing check-in, you can use the <b>[/help/tag|tag]</b> command. For example: | | | | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 | "--tag <i>TAGNAME</i>" command-line option. You can repeat the --tag option to give a check-in multiple tags. Tags need not be unique. So, for example, it is common to give every released version a "release" tag. If you want to add a tag to an existing check-in, you can use the <b>[/help/tag|tag]</b> command. For example: <pre> <b>fossil [/help/tag|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </pre> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the [./webui.wiki | web interface]. First locate the check-in that you want to tag on the timeline, then click on the link to go to the detailed
︙ | ︙ | |||
125 126 127 128 129 130 131 | See the article on [./shunning.wiki | "shunning"] for details. } faq { How do I make a clone of the fossil self-hosting repository? } { Any of the following commands should work: | > | | > > | | > > | | | < | | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 | See the article on [./shunning.wiki | "shunning"] for details. } faq { How do I make a clone of the fossil self-hosting repository? } { Any of the following commands should work: <pre> fossil [/help/clone|clone] https://fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] https://www2.fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] https://www3.fossil-scm.org/site.cgi fossil.fossil </pre> Once you have the repository cloned, you can open a local check-out as follows: <pre> mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil </pre> Thereafter you should be able to keep your local check-out up to date with the latest code in the public repository by typing: <pre> fossil [/help/update|update] </pre> } faq { How do I import or export content from and to other version control systems? } { Please see [./inout.wiki | Import And Export] } ############################################################################# # Code to actually generate the FAQ # puts "<title>Fossil FAQ</title>\n" puts "Note: See also <a href=\"qandc.wiki\">Questions and Criticisms</a>.\n" puts {<ol>} for {set i 1} {$i<$cnt} {incr i} { puts "<li><a href=\"#q$i\">[lindex $faq($i) 0]</a></li>" } puts {</ol>} puts {<hr>} for {set i 1} {$i<$cnt} {incr i} { puts "<p id=\"q$i\"><b>($i) [lindex $faq($i) 0]</b></p>\n" set body [lindex $faq($i) 1] regsub -all "\n *" [string trim $body] "\n" body puts "$body</li>\n" } puts {</ol>} |
Changes to www/faq.wiki.
1 | <title>Fossil FAQ</title> | < | | | | | | | | | | | | | | | | | | > | | > > | | > > | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 | <title>Fossil FAQ</title> Note: See also <a href="qandc.wiki">Questions and Criticisms</a>. <ol> <li><a href="#q1">What GUIs are available for fossil?</a></li> <li><a href="#q2">What is the difference between a "branch" and a "fork"?</a></li> <li><a href="#q3">How do I create a new branch?</a></li> <li><a href="#q4">How do I tag a check-in?</a></li> <li><a href="#q5">How do I create a private branch that won't get pushed back to the main repository.</a></li> <li><a href="#q6">How can I delete inappropriate content from my fossil repository?</a></li> <li><a href="#q7">How do I make a clone of the fossil self-hosting repository?</a></li> <li><a href="#q8">How do I import or export content from and to other version control systems?</a></li> </ol> <hr> <p id="q1"><b>(1) What GUIs are available for fossil?</b></p> The fossil executable comes with a [./webui.wiki | web-based GUI] built in. Just run: <pre> <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i> </pre> And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the <i>REPOSITORY-FILENAME</i> if you are within an open check-out.)</li> <p id="q2"><b>(2) What is the difference between a "branch" and a "fork"?</b></p> This is a big question - too big to answer in a FAQ. 
Please read the <a href="branching.wiki">Branching, Forking, Merging, and Tagging</a> document.</li> <p id="q3"><b>(3) How do I create a new branch?</b></p> There are lots of ways: When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add the option "--branch <i>BRANCH-NAME</i>" to make the new check-in be the first check-in for a new branch. If you want to create a new branch whose initial content is the same as an existing check-in, use this command: <pre> <b>fossil [/help/branch|branch] new</b> <i>BRANCH-NAME BASIS</i> </pre> The <i>BRANCH-NAME</i> argument is the name of the new branch and the <i>BASIS</i> argument is the name of the check-in that the branch splits off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. On the "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button.</li> <p id="q4"><b>(4) How do I tag a check-in?</b></p> There are several ways: When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add a tag to that check-in using the "--tag <i>TAGNAME</i>" command-line option. You can repeat the --tag option to give a check-in multiple tags. Tags need not be unique. So, for example, it is common to give every released version a "release" tag. If you want to add a tag to an existing check-in, you can use the <b>[/help/tag|tag]</b> command. For example: <pre> <b>fossil [/help/tag|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </pre> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format].
You can also add (and remove) tags from a check-in using the [./webui.wiki | web interface]. First locate the check-in that you want to tag on the timeline, then click on the link to go to the detailed information page for that check-in. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. There are controls on the edit page that allow new tags to be added and existing tags to be removed.</li> <p id="q5"><b>(5) How do I create a private branch that won't get pushed back to the main repository.</b></p> Use the <b>--private</b> command-line option on the <b>commit</b> command. The result will be a check-in which exists on your local repository only and is never pushed to other repositories. All descendants of a private check-in are also private. Unless you specify something different using the <b>--branch</b> and/or <b>--bgcolor</b> options, the new private check-in will be put on a branch named "private" with an orange background color. You can merge from the trunk into your private branch in order to keep your private branch in sync with the latest changes on the trunk. Once you have everything in your private branch the way you want it, you can then merge your private branch back into the trunk and push. Only the final merge operation will appear in other repositories. It will seem as if all the changes that occurred on your private branch occurred in a single check-in. Of course, you can also keep your branch private forever simply by not merging the changes in the private branch back into the trunk.
[./private.wiki | Additional information]</li> <p id="q6"><b>(6) How can I delete inappropriate content from my fossil repository?</b></p> See the article on [./shunning.wiki | "shunning"] for details.</li> <p id="q7"><b>(7) How do I make a clone of the fossil self-hosting repository?</b></p> Any of the following commands should work: <pre> fossil [/help/clone|clone] https://fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] https://www2.fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] https://www3.fossil-scm.org/site.cgi fossil.fossil </pre> Once you have the repository cloned, you can open a local check-out as follows: <pre> mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil </pre> Thereafter you should be able to keep your local check-out up to date with the latest code in the public repository by typing: <pre> fossil [/help/update|update] </pre></li> <p id="q8"><b>(8) How do I import or export content from and to other version control systems?</b></p> Please see [./inout.wiki | Import And Export]</li> </ol> |
Changes to www/fileformat.wiki.
1 | <title>Fossil File Formats</title> | < < < | 1 2 3 4 5 6 7 8 | <title>Fossil File Formats</title> The global state of a fossil repository is kept simple so that it can endure in useful form for decades or centuries. A fossil repository is intended to be readable, searchable, and extensible by people not yet born. The global state of a fossil repository is an unordered |
︙ | ︙ | |||
108 109 110 111 112 113 114 | well as information such as parent check-ins, the username of the programmer who created the check-in, the date and time when the check-in was created, and any check-in comments associated with the check-in. Allowed cards in the manifest are as follows: | | | | 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 | well as information such as parent check-ins, the username of the programmer who created the check-in, the date and time when the check-in was created, and any check-in comments associated with the check-in. Allowed cards in the manifest are as follows: <div class="indent"> <b>B</b> <i>baseline-manifest</i><br> <b>C</b> <i>checkin-comment</i><br> <b>D</b> <i>time-and-date-stamp</i><br> <b>F</b> <i>filename</i> ?<i>hash</i>? ?<i>permissions</i>? ?<i>old-name</i>?<br> <b>N</b> <i>mimetype</i><br> <b>P</b> <i>artifact-hash</i>+<br> <b>Q</b> (<b>+</b>|<b>-</b>)<i>artifact-hash</i> ?<i>artifact-hash</i>?<br> <b>R</b> <i>repository-checksum</i><br> <b>T</b> (<b>+</b>|<b>-</b>|<b>*</b>)<i>tag-name</i> <b>*</b> ?<i>value</i>?<br> <b>U</b> <i>user-login</i><br> <b>Z</b> <i>manifest-checksum</i> </div> A manifest may optionally have a single <b>B</b> card. The <b>B</b> card specifies another manifest that serves as the "baseline" for this manifest. A manifest that has a <b>B</b> card is called a delta-manifest and a manifest that omits the <b>B</b> card is a baseline-manifest. The other manifest identified by the argument of the <b>B</b> card must be a baseline-manifest. A baseline-manifest records the complete contents of a check-in. |
︙ | ︙ | |||
146 147 148 149 150 151 152 | in the comment. A manifest must have exactly one <b>D</b> card. The sole argument to the <b>D</b> card is a date-time stamp in the ISO8601 format. The date and time should be in coordinated universal time (UTC). The format is one of: | | | < < | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | in the comment. A manifest must have exactly one <b>D</b> card. The sole argument to the <b>D</b> card is a date-time stamp in the ISO8601 format. The date and time should be in coordinated universal time (UTC). The format is one of: <pre class="indent"><i>YYYY-MM-DD<b>T</b>HH:MM:SS YYYY-MM-DD<b>T</b>HH:MM:SS.SSS</i></pre> A manifest has zero or more <b>F</b> cards. Each <b>F</b> card identifies a file that is part of the check-in. There are one, two, three, or four arguments. The first argument is the pathname of the file in the check-in relative to the root of the project file hierarchy. No ".." or "." directories are allowed within the filename. Space characters are escaped as in <b>C</b> card comment text. Backslash characters and
︙ | ︙ | |||
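The D-card timestamp format shown above lends itself to a quick mechanical check. The following is an illustrative sketch only — the helper names are invented here, and this is not Fossil source code:

```python
import re
from datetime import datetime, timezone

# D-card timestamps are ISO8601 in UTC: "YYYY-MM-DDTHH:MM:SS" with an
# optional three-digit fractional-seconds part (".SSS").
D_CARD_RE = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d{3})?$")

def d_card_stamp(dt: datetime) -> str:
    """Render an aware datetime as a D-card argument (seconds precision)."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")

def is_valid_d_card(stamp: str) -> bool:
    """True if the string matches either permitted D-card format."""
    return D_CARD_RE.match(stamp) is not None
```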
261 262 263 264 265 266 267 | Clusters are used during repository synchronization to help reduce network traffic. As such, clusters are an optimization and may be removed from a repository without loss or damage to the underlying project code. Allowed cards in the cluster are as follows: | | | | | | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 | Clusters are used during repository synchronization to help reduce network traffic. As such, clusters are an optimization and may be removed from a repository without loss or damage to the underlying project code. Allowed cards in the cluster are as follows: <div class="indent"> <b>M</b> <i>artifact-id</i><br /> <b>Z</b> <i>checksum</i> </div> A cluster contains one or more <b>M</b> cards followed by a single <b>Z</b> card. Each <b>M</b> card has a single argument which is the artifact ID of another artifact in the repository. The <b>Z</b> card works exactly like the <b>Z</b> card of a manifest. The argument to the <b>Z</b> card is the lower-case hexadecimal representation of the MD5 checksum of all prior cards in the cluster. The <b>Z</b> card is required. An example cluster from Fossil can be seen [/artifact/d03dbdd73a2a8 | here]. <h3 id="ctrl">2.3 Control Artifacts</h3> Control artifacts are used to assign properties to other artifacts within the repository. Allowed cards in a control artifact are as follows: <div class="indent"> <b>D</b> <i>time-and-date-stamp</i><br /> <b>T</b> (<b>+</b>|<b>-</b>|<b>*</b>)<i>tag-name</i> <i>artifact-id</i> ?<i>value</i>?<br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i><br /> </div> A control artifact must have one <b>D</b> card, one <b>U</b> card, one <b>Z</b> card and one or more <b>T</b> cards. No other cards or other text is allowed in a control artifact. Control artifacts might be PGP clearsigned. 
The <b>D</b> card and the <b>Z</b> card of a control artifact are the same |
︙ | ︙ | |||
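The Z-card rule described in the cluster section above — the lower-case hexadecimal MD5 checksum of all prior cards — can be sketched as follows. This is an illustrative approximation, not Fossil's actual implementation; in particular, it assumes each card is a single newline-terminated line:

```python
import hashlib

def z_card(prior_cards: list[str]) -> str:
    """Build a Z card over the given card lines (sketch, not Fossil code)."""
    # The checksum covers every card line before the Z card, each with
    # its terminating newline (an assumption of this sketch).
    body = "".join(card + "\n" for card in prior_cards)
    return "Z " + hashlib.md5(body.encode("utf-8")).hexdigest()
```

The result is deterministic for a given card sequence, which is what lets a receiver verify an artifact was transferred intact.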
336 337 338 339 340 341 342 | <h3 id="wikichng">2.4 Wiki Pages</h3> A wiki artifact defines a single version of a single wiki page. Wiki artifacts accept the following card types: | | | | 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 | <h3 id="wikichng">2.4 Wiki Pages</h3> A wiki artifact defines a single version of a single wiki page. Wiki artifacts accept the following card types: <div class="indent"> <b>C</b> <i>change-comment</i><br> <b>D</b> <i>time-and-date-stamp</i><br /> <b>L</b> <i>wiki-title</i><br /> <b>N</b> <i>mimetype</i><br /> <b>P</b> <i>parent-artifact-id</i>+<br /> <b>U</b> <i>user-name</i><br /> <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </div> The <b>D</b> card is the date and time when the wiki page was edited. The <b>P</b> card specifies the parent wiki pages, if any. The <b>L</b> card gives the name of the wiki page. The optional <b>N</b> card specifies the mimetype of the wiki text. If the <b>N</b> card is omitted, the mimetype is assumed to be text/x-fossil-wiki. The <b>U</b> card specifies the login |
︙ | ︙ | |||
377 378 379 380 381 382 383 | [/artifact?name=7b2f5fd0e0&txt=1 | here]. <h3 id="tktchng">2.5 Ticket Changes</h3> A ticket-change artifact represents a change to a trouble ticket. The following cards are allowed on a ticket change artifact: | | | | 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 | [/artifact?name=7b2f5fd0e0&txt=1 | here]. <h3 id="tktchng">2.5 Ticket Changes</h3> A ticket-change artifact represents a change to a trouble ticket. The following cards are allowed on a ticket change artifact: <div class="indent"> <b>D</b> <i>time-and-date-stamp</i><br /> <b>J</b> ?<b>+</b>?<i>name</i> ?<i>value</i>?<br /> <b>K</b> <i>ticket-id</i><br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i> </div> The <b>D</b> card is the usual date and time stamp and represents the point in time when the change was entered. The <b>U</b> card is the login of the programmer who entered this change. The <b>Z</b> card is the required checksum over the entire artifact. Every ticket has a distinct ticket-id: |
︙ | ︙ | |||
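All of the artifact types in this document share the same surface syntax: one card per line, a single-letter card name, then space-separated arguments. A deliberately naive reader can therefore be sketched in a few lines. This is illustrative only — it ignores the escaping rules and the W card's embedded text that a real parser must handle, and the sample values below are invented:

```python
def parse_cards(artifact_text: str) -> list[tuple[str, list[str]]]:
    """Split each non-empty line into its card letter and argument list."""
    cards = []
    for line in artifact_text.splitlines():
        if not line.strip():
            continue
        letter, _, rest = line.partition(" ")
        cards.append((letter, rest.split(" ") if rest else []))
    return cards
```

For a ticket-change artifact, such a reader would yield the D, J, K, U, and Z cards in order, with the J cards carrying the name/value pairs.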
425 426 427 428 429 430 431 | An attachment artifact associates some other artifact that is the attachment (the source artifact) with a ticket or wiki page or technical note to which the attachment is connected (the target artifact). The following cards are allowed on an attachment artifact: | | | | 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 | An attachment artifact associates some other artifact that is the attachment (the source artifact) with a ticket or wiki page or technical note to which the attachment is connected (the target artifact). The following cards are allowed on an attachment artifact: <div class="indent"> <b>A</b> <i>filename target</i> ?<i>source</i>?<br /> <b>C</b> <i>comment</i><br /> <b>D</b> <i>time-and-date-stamp</i><br /> <b>N</b> <i>mimetype</i><br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i> </div> The <b>A</b> card specifies a filename for the attachment in its first argument. The second argument to the <b>A</b> card is the name of the wiki page or ticket or technical note to which the attachment is connected. The third argument is either missing or else it is the lower-case artifact ID of the attachment itself. A missing third argument means that the attachment should be deleted. |
︙ | ︙ | |||
467 468 469 470 471 472 473 | A technical note or "technote" artifact (formerly known as an "event" artifact) associates a timeline comment and a page of text (similar to a wiki page) with a point in time. Technotes can be used to record project milestones, release notes, blog entries, process checkpoints, or news articles. The following cards are allowed on a technote artifact: | | | | 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 | A technical note or "technote" artifact (formerly known as an "event" artifact) associates a timeline comment and a page of text (similar to a wiki page) with a point in time. Technotes can be used to record project milestones, release notes, blog entries, process checkpoints, or news articles. The following cards are allowed on a technote artifact: <div class="indent"> <b>C</b> <i>comment</i><br> <b>D</b> <i>time-and-date-stamp</i><br /> <b>E</b> <i>technote-time</i> <i>technote-id</i><br /> <b>N</b> <i>mimetype</i><br /> <b>P</b> <i>parent-artifact-id</i>+<br /> <b>T</b> <b>+</b><i>tag-name</i> <b>*</b> ?<i>value</i>?<br /> <b>U</b> <i>user-name</i><br /> <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </div> The <b>C</b> card contains text that is displayed on the timeline for the technote. The <b>C</b> card is optional, but there can only be one. A single <b>D</b> card is required to give the date and time when the technote artifact was created. This is different from the time at which the technote appears on the timeline.
︙ | ︙ | |||
532 533 534 535 536 537 538 | <h3 id="forum">2.8 Forum Posts</h3> Forum posts are intended as a mechanism for users and developers to discuss a project. Forum posts are like messages on a mailing list. The following cards are allowed on a forum post artifact: | | | | 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 | <h3 id="forum">2.8 Forum Posts</h3> Forum posts are intended as a mechanism for users and developers to discuss a project. Forum posts are like messages on a mailing list. The following cards are allowed on a forum post artifact: <div class="indent"> <b>D</b> <i>time-and-date-stamp</i><br /> <b>G</b> <i>thread-root</i><br /> <b>H</b> <i>thread-title</i><br /> <b>I</b> <i>in-reply-to</i><br /> <b>N</b> <i>mimetype</i><br /> <b>P</b> <i>parent-artifact-id</i><br /> <b>U</b> <i>user-name</i><br /> <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </div> Every forum post must have either one <b>I</b> card and one <b>G</b> card or one <b>H</b> card. Forum posts are organized into topic threads. The initial post for a thread (the root post) has an <b>H</b> card giving the title or subject for that thread. The argument to the <b>H</b> card is a string in the same format as a comment string in a <b>C</b> card.
︙ | ︙ | |||
608 609 610 611 612 613 614 | The following table summarizes the various kinds of cards that appear on Fossil artifacts. A blank entry means that combination of card and artifact is not legal. A number or range of numbers indicates the number of times a card may (or must) appear in the corresponding artifact type. e.g. a value of 1 indicates a required unique card and 1+ indicates that one or more such cards are required. | | | < < < | 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 | The following table summarizes the various kinds of cards that appear on Fossil artifacts. A blank entry means that combination of card and artifact is not legal. A number or range of numbers indicates the number of times a card may (or must) appear in the corresponding artifact type. e.g. a value of 1 indicates a required unique card and 1+ indicates that one or more such cards are required. <table> <tr> <th>⇩ Card Format / Used By ⇨</th> <th>Manifest</th> <th>Cluster</th> <th>Control</th> <th>Wiki</th> <th>Ticket</th> <th>Attachment</th> <th>Technote</th> |
︙ | ︙ | |||
907 908 909 910 911 912 913 | wrong order. Both bugs have now been fixed. However, to prevent historical Technical Note artifacts that were inserted by users in good faith from being rejected by newer Fossil builds, the card ordering requirement is relaxed slightly. The actual implementation is this: | | | | 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 | wrong order. Both bugs have now been fixed. However, to prevent historical Technical Note artifacts that were inserted by users in good faith from being rejected by newer Fossil builds, the card ordering requirement is relaxed slightly. The actual implementation is this: <p class=blockquote> "All cards must be in strict lexicographic order, except that the N and P cards of a Technical Note artifact are allowed to be interchanged." </p> Future versions of Fossil might strengthen this slightly to only allow the out of order N and P cards for Technical Notes entered before a certain date. <h3>4.2 R-Card Hash Calculation</h3> |
︙ | ︙ |
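One last note on the file-format material: the relaxed card-ordering rule quoted above can be expressed as a short check. This sketch is a simplification assumed here, not Fossil's code — it compares only the one-letter card types and normalizes a single interchanged N/P pair (the technote exception) before testing strict lexicographic order:

```python
def cards_in_order(card_letters: list[str], is_technote: bool = False) -> bool:
    """True if card letters are in strict lexicographic order,
    allowing an interchanged N/P pair when is_technote is set."""
    letters = list(card_letters)
    if is_technote:
        # Normalize an interchanged N/P pair before checking order.
        for i in range(len(letters) - 1):
            if letters[i] == "P" and letters[i + 1] == "N":
                letters[i], letters[i + 1] = "N", "P"
    return all(a < b for a, b in zip(letters, letters[1:]))
```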
Changes to www/forum.wiki.
︙ | ︙ | |||
132 133 134 135 136 137 138 | The remainder of this section summarizes the differences you're expected to see when taking option #2. The first thing is that you'll need to add something like the following to the Header part of the skin to create the navbar link: <verbatim> | | | | | | | | | | | | | | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 | The remainder of this section summarizes the differences you're expected to see when taking option #2. The first thing is that you'll need to add something like the following to the Header part of the skin to create the navbar link: <verbatim> if {[anycap 23456] || [anoncap 2] || [anoncap 3]} { menulink /forum Forum } </verbatim> These rules say that any logged-in user with any [./caps/ref.html#2 | forum-related capability] or an anonymous user <b>RdForum</b> or <b>WrForum</b> capability will see the "Forum" navbar link, which just takes you to <tt>/forum</tt>. The exact code you need here varies depending on which skin you're using. Follow the style you see for the other navbar links. The new forum feature also brings many new CSS styles to the table. If you're using the stock skin or something sufficiently close, the changes may work with your existing skin as-is. Otherwise, you might need to adjust some things, such as the background color used for the selected forum post: <verbatim> div.forumSel { background-color: rgba(0, 0, 0, 0.05); } </verbatim> That overrides the default — a hard-coded light cyan — with a 95% transparent black overlay instead, which simply darkens your skin's normal background color underneath the selected post. That should work with almost any background color except for very dark background colors. 
For dark skins, an inverse of the above trick will work better: <verbatim> div.forumSel { background-color: rgba(255, 255, 255, 0.05); } </verbatim> That overlays the background with 5% white to lighten it slightly. Another new forum-related CSS style you might want to reflect into your existing skin is: <verbatim> div.forumPosts a:visited { color: #6A7F94; } </verbatim> This changes the clicked-hyperlink color for the forum post links on the main <tt>/forum</tt> page only, which allows your browser's history mechanism to show which threads a user has read and which not. The link color will change back to the normal link color — indicating "unread" — when a reply is added to an existing thread because that changes where |
︙ | ︙ |
Changes to www/fossil-v-git.wiki.
︙ | ︙ | |||
32 33 34 35 36 37 38 | <h2>2.0 Differences Between Fossil And Git</h2> Differences between Fossil and Git are summarized by the following table, with further description in the text that follows. | | | > | | > | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | <h2>2.0 Differences Between Fossil And Git</h2> Differences between Fossil and Git are summarized by the following table, with further description in the text that follows. <table style="width: fit-content"> <tr><th>GIT</th><th>FOSSIL</th><th>more</th></tr> <tr> <td>File versioning only</td> <td> VCS, tickets, wiki, docs, notes, forum, chat, UI, [https://en.wikipedia.org/wiki/Role-based_access_control|RBAC] </td> <td><a href="#features">2.1 ↓</a></td> </tr> <tr> <td>A federation of many small programs</td> <td>One self-contained, stand-alone executable</td> <td><a href="#selfcontained">2.2 ↓</a></td> </tr> |
︙ | ︙ | |||
95 96 97 98 99 100 101 | <td><a href="#testing">2.8 ↓</a></td> </tr> <tr> <td>SHA-1 or SHA-2</td> <td>SHA-1 and/or SHA-3, in the same repository</td> <td><a href="#hash">2.9 ↓</a></td> </tr> | | | 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | <td><a href="#testing">2.8 ↓</a></td> </tr> <tr> <td>SHA-1 or SHA-2</td> <td>SHA-1 and/or SHA-3, in the same repository</td> <td><a href="#hash">2.9 ↓</a></td> </tr> </table> <h3 id="features">2.1 Featureful</h3> Git provides file versioning services only, whereas Fossil adds an integrated [./wikitheory.wiki | wiki], [./bugtheory.wiki | ticketing & bug tracking], [./embeddeddoc.wiki | embedded documentation], |
︙ | ︙ | |||
795 796 797 798 799 800 801 | which every commit is tested first. It encourages thinking before acting. We believe this is an inherently good thing. Incidentally, this is a good example of Git's messy command design. These three commands: <pre> | | | | | | | | 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 | which every commit is tested first. It encourages thinking before acting. We believe this is an inherently good thing. Incidentally, this is a good example of Git's messy command design. These three commands: <pre> $ git merge HASH $ git cherry-pick HASH $ git revert HASH </pre> ...are all the same command in Fossil: <pre> $ fossil merge HASH $ fossil merge --cherrypick HASH $ fossil merge --backout HASH </pre> If you think about it, they're all the same function: apply work done on one branch to another. All that changes between these commands is how much work gets applied — just one check-in or a whole branch — and the merge direction. This is the sort of thing we mean when we point out that Fossil's command interface is simpler than Git's: there are fewer |
︙ | ︙ |
Changes to www/fossil_prompt.wiki.
1 | <title>Fossilized Bash Prompt</title> | < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | <title>Fossilized Bash Prompt</title> Dan Kennedy has contributed a [./fossil_prompt.sh?mimetype=text/plain | bash script] that manipulates the bash prompt to show the status of the Fossil repository that the user is currently visiting. The prompt shows the branch, version, and time stamp for the current checkout, and the prompt changes colors from blue to red when there are uncommitted changes. To try out this script, simply download it from the link above, then type: <pre> . fossil_prompt.sh </pre> For a permanent installation, you can graft the code into your <tt>.bashrc</tt> file in your home directory. The code is very simple (only 32 non-comment lines, as of this writing) and hence easy to customize.
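In the same spirit as the contributed script — but not a copy of it; the function name and the exact `fossil status` fields it matches are assumptions of this sketch — the core idea is to parse `fossil status`-style output for the branch tag and for edited files, then pick a prompt color accordingly:

```shell
# Illustrative sketch: derive "<color> <branch>" from text shaped like
# "fossil status" output. Red signals uncommitted edits, blue a clean tree.
fossil_prompt_info() {
  local status_text="$1" branch state
  # Assume the first tag on the "tags:" line is the branch name.
  branch=$(printf '%s\n' "$status_text" | sed -n 's/^tags: *\([^,]*\).*/\1/p' | head -n 1)
  if printf '%s\n' "$status_text" | grep -Eq '^(EDITED|ADDED|DELETED)'; then
    state=red
  else
    state=blue
  fi
  printf '%s %s\n' "$state" "$branch"
}
```

A real prompt would feed this result into `PS1` with the matching ANSI color escapes, as the script linked above does.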
Changes to www/gitusers.md.
︙ | ︙ | |||
71 72 73 74 75 76 77 | advocate a switch-in-place working mode instead, so that is how most users end up working with Git. Contrast [Fossil’s check-out workflow document][ckwf] to see the practical differences. There is one Git-specific detail we wish to add beyond what that document already covers. This command: | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 | advocate a switch-in-place working mode instead, so that is how most users end up working with Git. Contrast [Fossil’s check-out workflow document][ckwf] to see the practical differences. There is one Git-specific detail we wish to add beyond what that document already covers. This command: git checkout some-branch …is best given as: fossil update some-branch …in Fossil. There is a [`fossil checkout`][co] command, but it has [several differences](./co-vs-up.md) that make it less broadly useful than [`fossil update`][up] in everyday operation, so we recommend that Git users moving to Fossil develop a habit of typing `fossil up` rather than `fossil checkout`. That said, one of those differences does match up with Git users’ expectations: `fossil checkout` doesn’t pull changes |
︙ | ︙ | |||
107 108 109 110 111 112 113 | choice also tends to make Fossil feel comfortable to Subversion expatriates.) The `fossil pull` command is simply the reverse of `fossil push`, so that `fossil sync` [is functionally equivalent to](./sync.wiki#sync): | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | choice also tends to make Fossil feel comfortable to Subversion expatriates.) The `fossil pull` command is simply the reverse of `fossil push`, so that `fossil sync` [is functionally equivalent to](./sync.wiki#sync): fossil push ; fossil pull There is no implicit “and update the local working directory” step in Fossil’s push, pull, or sync commands, as there is with `git pull`. Someone coming from the Git perspective may perceive that `fossil up` has two purposes: |
︙ | ︙ | |||
178 179 180 181 182 183 184 | There are at least three different ways to get [Fossil-style multiple check-out directories][mcw] with Git. The old way is to simply symlink the `.git` directory between working trees: | | | | | | | | | | | | | | | | | | | | | 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | There are at least three different ways to get [Fossil-style multiple check-out directories][mcw] with Git. The old way is to simply symlink the `.git` directory between working trees: mkdir ../foo-branch ln -s ../actual-clone-dir/.git . git checkout foo-branch The symlink trick has a number of problems, the largest being that symlinks weren’t available on Windows until Vista, and until the Windows 10 Creators Update was released in spring of 2017, you had to be an Administrator to use the feature besides. ([Source][wsyml]) Git 2.5 solved this problem back when Windows XP was Microsoft’s current offering by adding the `git-worktree` command: git worktree add ../foo-branch foo-branch cd ../foo-branch That is approximately equivalent to this in Fossil: mkdir ../foo-branch cd ../foo-branch fossil open /path/to/repo.fossil foo-branch The Fossil alternative is wordier, but since this tends to be one-time setup, not something you do everyday, the overhead is insignificant. This author keeps a “scratch” check-out for cases where it’s inappropriate to reuse the “trunk” check-out, isolating all of my expedient switch-in-place actions to that one working directory. Since the other peer check-outs track long-lived branches, and that set rarely changes once a development machine is set up, I rarely pay the cost of these wordier commands. 
That then leads us to the closest equivalent in Git to [closing a Fossil check-out](#close): git worktree remove . Note, however, that unlike `fossil close`, once the Git command determines that there are no uncommitted changes, it blows away all of the checked-out files! Fossil’s alternative is shorter, easier to remember, and safer. There’s another way to get Fossil-like separate worktrees in Git: git clone --separate-git-dir repo.git https://example.com/repo This allows you to have your Git repository directory entirely separate from your working tree, with `.git` in the check-out directory being a file that points to `../repo.git`, in this example. [mcw]: ./ckout-workflows.md#mcw [wsyml]: https://blogs.windows.com/windowsdeveloper/2016/12/02/symlinks-windows-10/ #### <a id="iip"></a> Init in Place To illustrate the differences that Fossil’s separation of repository from working directory creates in practice, consider this common Git “init in place” method for creating a new repository from an existing tree of files, perhaps because you are placing that project under version control for the first time: cd long-established-project git init git add * git commit -m "Initial commit of project." The closest equivalent in Fossil is: cd long-established-project fossil init .fsl fossil open --force .fsl fossil add * fossil ci -m "Initial commit of project." Note that unlike in Git, you can abbreviate the “`commit`” command in Fossil as “`ci`” for compatibility with CVS, Subversion, etc. This creates a `.fsl` repo DB at the root of the project check-out to emulate the `.git` repo dir. We have to use the `--force` flag on opening the new repo because Fossil expects you to open a repo into an |
︙ | ︙ | |||
314 315 316 317 318 319 320 | #### <a id="emu-log"></a> Emulating `git log` If you truly need a backwards-in-time-only view of history in Fossil to emulate `git log`, this is as close as you can currently come: | | | | | | | | | | | | | | | | | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | #### <a id="emu-log"></a> Emulating `git log` If you truly need a backwards-in-time-only view of history in Fossil to emulate `git log`, this is as close as you can currently come: fossil timeline parents current Again, though, this isn’t restricted to a single branch, as `git log` is. Another useful rough equivalent is: git log --raw fossil time -v This shows what changed in each version, though Fossil’s view is more a summary than a list of raw changes. To dig deeper into single commits, you can use Fossil’s [`info` command][infoc] or its [`/info` view][infow]. Inversely, you may more exactly emulate the default `fossil timeline` output with `git log --name-status`. #### <a id="whatchanged"></a> What Changed? A related — though deprecated — command is `git whatchanged`, which gives results similar to `git log --raw`, so we cover it here. Though there is no `fossil whatchanged` command, the same sort of information is available. 
For example, to pull the current changes from the remote repository and then inspect them before updating the local working directory, you might say this in Git: git fetch git whatchanged ..@{u} …which you can approximate in Fossil as: fossil pull fossil up -n fossil diff --from tip To invert the `diff` to show a more natural patch, the command needs to be a bit more complicated, since you can’t currently give `--to` without `--from`. fossil diff --from current --to tip Rather than use the “dry run” form of [the `update` command][up], you can say: fossil timeline after current …or if you want to restrict the output to the current branch: fossil timeline descendants current #### <a id="ckin-names"></a> Symbolic Check-In Names Note the use of [human-readable symbolic version names][scin] in Fossil rather than [Git’s cryptic notations][gcn]. For a more dramatic example of this, let us ask Git, “What changed since the beginning of last month?” being October 2020 as I write this: git log master@{2020-10-01}..HEAD That’s rather obscure! Fossil answers the same question with a simpler command: fossil timeline after 2020-10-01 You may need to add `-n 0` to bypass the default output limit of `fossil timeline`, 20 entries. Without that, this command reads almost like English. Some Git users like to write commands like the above so: git log @{2020-10-01}..@ Is that better? “@” now means two different things: an at-time reference and a shortcut for `HEAD`! If you are one of those that like short commands, Fossil’s method is less cryptic: it lets you shorten words in most cases up to the point that they become ambiguous. For example, you may abbreviate the last `fossil` command in the prior section: fossil tim d c …beyond which the `timeline` command becomes ambiguous with `ticket`. 
Some Fossil users employ shell aliases, symlinks, or scripts to shorten the command still further:

    alias f=fossil
    f tim d c

Granted, that’s rather obscure, but you can also choose something intermediate like “`f time desc curr`”, which is reasonably clear.

[35pct]: https://www.sqlite.org/fasterthanfs.html
[btree]: https://sqlite.org/btreemodule.html
[gcn]: https://git-scm.com/docs/gitrevisions
 |
︙ | ︙ | |||
466 467 468 469 470 471 472 | Fossil omits the "Git index" or "staging area" concept. When you type "`fossil commit`" _all_ changes in your check-out are committed, automatically. There is no need for the "-a" option as with Git. If you only want to commit _some_ of the changes, list the names of the files or directories you want to commit as arguments, like this: | | | | | 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 | Fossil omits the "Git index" or "staging area" concept. When you type "`fossil commit`" _all_ changes in your check-out are committed, automatically. There is no need for the "-a" option as with Git. If you only want to commit _some_ of the changes, list the names of the files or directories you want to commit as arguments, like this: fossil commit src/feature.c doc/feature.md examples/feature Note that the last element is a directory name, meaning “any changed file under the `examples/feature` directory.” Although there are currently no <a id="csplit"></a>[commit splitting][gcspl] features in Fossil like `git add -p`, `git commit -p`, or `git rebase -i`, you can get the same effect by converting an uncommitted change set to a patch and then running it through [Patchouli]. Rather than use `fossil diff -i` to produce such a patch, a safer and more idiomatic method would be: fossil stash save -m 'my big ball-o-hackage' fossil stash diff > my-changes.patch That stores your changes in the stash, then lets you operate on a copy of that patch. Each time you re-run the second command, it will take the current state of the working directory into account to produce a potentially different patch, likely smaller because it leaves out patch hunks already applied. |
︙ | ︙ | |||
524 525 526 527 528 529 530 | <a id="bneed"></a> ## Create Branches at Point of Need, Rather Than Ahead of Need Fossil prefers that you create new branches as part of the first commit on that branch: | | | | | | | | | 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 | <a id="bneed"></a> ## Create Branches at Point of Need, Rather Than Ahead of Need Fossil prefers that you create new branches as part of the first commit on that branch: fossil commit --branch my-branch If that commit is successful, your local check-out directory is then switched to the tip of that branch, so subsequent commits don’t need the “`--branch`” option. You simply say `fossil commit` again to continue adding commits to the tip of that branch. To switch back to the parent branch, say something like: fossil update trunk (This is approximately equivalent to `git checkout master`.) Fossil does also support the Git style, creating the branch ahead of need: fossil branch new my-branch fossil up my-branch ...work on first commit... fossil commit This is more verbose, giving the same overall effect though the initial actions are inverted: create a new branch for the first commit, switch the check-out directory to that branch, and make that first commit. As above, subsequent commits are descendants of that initial branch commit. We think you’ll agree that creating a branch as part of the initial commit is simpler. Fossil also allows you to move a check-in to a different branch *after* you commit it, using the "`fossil amend`" command. For example: fossil amend current --branch my-branch This works by inserting a tag into the repository that causes the web UI to relabel commits from that point forward with the new name. 
Like Git, Fossil’s fundamental data structure is the interlinked DAG of commit hashes; branch names are supplemental data for making it easier for the humans to understand this DAG, so this command does not change the core history of the project, only annotate it for better display to the |
︙ | ︙ | |||
589 590 591 592 593 594 595 | [Fossil is an AP-mode system][capt], which in this case means it works *very hard* to ensure that all repos are as close to identical as it can make them under this eventually-consistent design philosophy. Branch *names* sync automatically in Fossil, not just the content of those branches. That means this common Git command: | | | | | 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 | [Fossil is an AP-mode system][capt], which in this case means it works *very hard* to ensure that all repos are as close to identical as it can make them under this eventually-consistent design philosophy. Branch *names* sync automatically in Fossil, not just the content of those branches. That means this common Git command: git push origin master …is simply this in Fossil: fossil push Fossil doesn’t need to be told what to push or where to push it: it just keeps using the same remote server URL you gave it last until you [tell it to do something different][rem]. It pushes all branches, not just one named local branch. [capt]: ./cap-theorem.md [rem]: /help?cmd=remote <a id="autosync"></a> ## Autosync Fossil’s [autosync][wflow] feature, normally enabled, has no equivalent in Git. If you want Fossil to behave like Git, you can turn it off: fossil set autosync 0 Let’s say that you have a typical server-and-workstations model with two working clones on different machines, that you have disabled autosync, and that this common sequence then occurs: 1. Alice commits to her local clone and *separately* pushes the change up to Condor — their central server — in typical Git fashion. |
︙ | ︙ | |||
690 691 692 693 694 695 696 | We make no guarantee that there will always be a line beginning with “`repo`” and that it will be separated from the repository’s file name by a colon. The simplified example above is also liable to become confused by whitespace in file names.) ``` | | | | | | | | | | | 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 | We make no guarantee that there will always be a line beginning with “`repo`” and that it will be separated from the repository’s file name by a colon. The simplified example above is also liable to become confused by whitespace in file names.) ``` $ repo=$(fossil status | grep ^repo | cut -f2 -d:) $ url=$(fossil remote) $ fossil close # Stop here and think if it warns you! $ mv $repo ${repo}.old $ fossil clone $url $repo $ fossil open --force $repo ``` What, then, should you as a Git transplant do instead when you find yourself reaching for “`git reset`”? Since the correct answer to that depends on why you think it’s a good solution to your immediate problem, we’ll take our motivating scenario from the problem setup above, where we discussed Fossil’s [autosync] feature. Let us further say Alice’s pique results from a belief that Bob’s commit is objectively wrong-headed and should be expunged henceforth. Since Fossil goes out of its way to ensure that [commits are durable][wdm], it should be no further surprise that there is no easier method to reset Bob’s clone in favor of Alice’s than the above sequence in Fossil’s command set. Except in extreme situations, we believe that sort of thing is unnecessary. Instead, Bob can say something like this: ``` fossil amend --branch MISTAKE --hide --close -m "mea culpa" tip fossil up trunk fossil push ``` Unlike in Git, the “`amend`” command doesn’t modify prior committed artifacts. 
Bob’s first command doesn’t delete anything, merely tells Fossil to hide his mistake from timeline views by inserting a few new records into the local repository to change how the client interprets the data it finds there henceforth.(^One to change the tag marking this |
︙ | ︙ | |||
748 749 750 751 752 753 754 | to return her check-out’s parent commit to the previous version lest her next attempted commit land atop this mistake branch. The fact that Bob marked the branch as closed will prevent that from going thru, cluing Alice into what she needs to do to remedy the situation, but that merely shows why it’s a better workflow if Alice makes the amendment herself: ``` | | | | | | 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 | to return her check-out’s parent commit to the previous version lest her next attempted commit land atop this mistake branch. The fact that Bob marked the branch as closed will prevent that from going thru, cluing Alice into what she needs to do to remedy the situation, but that merely shows why it’s a better workflow if Alice makes the amendment herself: ``` fossil amend --branch MISTAKE --hide --close \ -m "shunt Bob’s erroneous commit off" tip fossil up trunk fossil push ``` Then she can fire off an email listing Bob’s assorted failings and go about her work. This asynchronous workflow solves the problem without requiring explicit coordination with Bob. When he gets his email, he can then say “`fossil up trunk`” himself, which by default will trigger an autosync, pulling down Alice’s amendments and getting him back onto her |
︙ | ︙ | |||
832 833 834 835 836 837 838 | format][udiff] output, suitable for producing a [patch file][pfile]. Nevertheless, there are multiple ways to get colorized diff output from Fossil: * The most direct method is to delegate diff behavior back to Git: | | | | 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 | format][udiff] output, suitable for producing a [patch file][pfile]. Nevertheless, there are multiple ways to get colorized diff output from Fossil: * The most direct method is to delegate diff behavior back to Git: fossil set --global diff-command 'git diff --no-index' The flag permits it to diff files that aren’t inside a Git repository. * Another method is to install [`colordiff`][cdiff] — included in [many package systems][cdpkg] — then say: fossil set --global diff-command 'colordiff -wu' Because this is unconditional, unlike `git diff --color=auto`, you will then have to remember to add the `-i` option to `fossil diff` commands when you want color disabled, such as when producing `patch(1)` files or piping diff output to another command that doesn’t understand ANSI escape sequences. There’s an example of this [below](#dstat). |
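To see what that delegated command does on its own, you can try it outside any repository. A small sketch, assuming only that `git` is installed; the temp-file names are invented:

```shell
# `git diff --no-index` compares two arbitrary files; no repository needed.
dir=$(mktemp -d)
printf 'alpha\n' > "$dir/old.txt"
printf 'beta\n'  > "$dir/new.txt"

# Like POSIX diff, it exits nonzero when the files differ, hence `|| true`.
git diff --no-index "$dir/old.txt" "$dir/new.txt" || true
```

When stdout is a terminal and Git’s `color.ui` setting is at its default of `auto`, the hunks come out colorized, which is exactly what the `diff-command` delegation above relies on.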
︙ | ︙ | |||
874 875 876 877 878 879 880 | While there is no direct equivalent to Git’s “`show`” command, similar functionality is present in Fossil under other commands: #### <a id="patch"></a> Show a Patch for a Commit | | | | | | | | | | | 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 | While there is no direct equivalent to Git’s “`show`” command, similar functionality is present in Fossil under other commands: #### <a id="patch"></a> Show a Patch for a Commit git show -p COMMIT_ID …gives much the same output as fossil diff --checkin COMMIT_ID …only without the patch email header. Git comes out of the [LKML] world, where emailing a patch is a normal thing to do. Fossil is [designed for cohesive teams][devorg] where drive-by patches are rarer. You can use any of [Fossil’s special check-in names][scin] in place of the `COMMIT_ID` in this and later examples. Fossil docs usually say “`VERSION`” or “`NAME`” where this is allowed, since the version string or name might not refer to a commit ID, but instead to a forum post, a wiki document, etc. For instance, the following command answers the question “What did I just commit?” fossil diff --checkin tip …or equivalently using a different symbolic commit name: fossil diff --from prev [devorg]: ./fossil-v-git.wiki#devorg [LKML]: https://lkml.org/ #### <a id="cmsg"></a> Show a Specific Commit Message git show -s COMMIT_ID …is fossil time -n 1 COMMIT_ID …or with a shorter, more obvious command, though with more verbose output: fossil info COMMIT_ID The `fossil info` command isn’t otherwise a good equivalent to `git show`; it just overlaps its functionality in some areas. Much of what’s missing is present in the corresponding [`/info` web view][infow], though. 
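The Git side of that last equivalence is easy to try in a scratch repository. A hedged sketch assuming `git` is installed; the repository and identity are throwaways invented for the demo:

```shell
# `git show -s` prints just the commit header and message, no patch,
# which is the part `fossil info COMMIT_ID` also reports.
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"
git show -s HEAD
```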
#### <a id="dstat"></a> Diff Statistics Fossil’s closest internal equivalent to commands like `git show --stat` is: fossil diff -i --from 2020-04-01 --numstat The `--numstat` output is a bit cryptic, so we recommend delegating this task to [the widely-available `diffstat` tool][dst], which gives a histogram in its default output mode rather than bare integers: fossil diff -i -v --from 2020-04-01 | diffstat We gave the `-i` flag in both cases to force Fossil to use its internal diff implementation, bypassing [your local `diff-command` setting][dcset]. The `--numstat` option has no effect when you have an external diff command set, and some diff command alternatives like [`colordiff`][cdiff] (covered [above](#cdiff)) produce output that confuses `diffstat`. |
︙ | ︙ | |||
997 998 999 1000 1001 1002 1003 | The "[`fossil mv`][mv]" and "[`fossil rm`][rm]" commands work like they do in CVS in that they schedule the changes for the next commit by default: they do not actually rename or delete the files in your check-out. If you don’t like that default, you can change it globally: | | | | 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 | The "[`fossil mv`][mv]" and "[`fossil rm`][rm]" commands work like they do in CVS in that they schedule the changes for the next commit by default: they do not actually rename or delete the files in your check-out. If you don’t like that default, you can change it globally: fossil setting --global mv-rm-files 1 Now these commands behave like in Git in any Fossil repository where this setting hasn’t been overridden locally. If you want to keep Fossil’s soft `mv/rm` behavior most of the time, you can cast it away on a per-command basis: fossil mv --hard old-name new-name [mv]: /help?cmd=mv [rm]: /help?cmd=rm ---- |
︙ | ︙ | |||
1030 1031 1032 1033 1034 1035 1036 | history to find a “good” version to anchor the start point of a [`fossil bisect`][fbis] operation. My search engine’s first result for “git checkout by date” is [this highly-upvoted accepted Stack Overflow answer][gcod]. The first command it gives is based on Git’s [`rev-parse` feature][grp]: | | | 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 | history to find a “good” version to anchor the start point of a [`fossil bisect`][fbis] operation. My search engine’s first result for “git checkout by date” is [this highly-upvoted accepted Stack Overflow answer][gcod]. The first command it gives is based on Git’s [`rev-parse` feature][grp]: git checkout master@{2020-03-17} There are a number of weaknesses in this command. From least to most critical: 1. It’s a bit cryptic. Leave off the refname or punctuation, and it means something else. You cannot simplify the cryptic incantation in the typical use case. |
︙ | ︙ | |||
1070 1071 1072 1073 1074 1075 1076 | Consequently, we cannot recommend this command at all. It’s unreliable even in the best case. That same Stack Overflow answer therefore goes on to recommend an entirely different command: | | | | | | | | | 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 | Consequently, we cannot recommend this command at all. It’s unreliable even in the best case. That same Stack Overflow answer therefore goes on to recommend an entirely different command: git checkout $(git rev-list -n 1 --first-parent --before="2020-03-17" master) We believe you get such answers to Git help requests in part because of its lack of an always-up-to-date [index into its log](#log) and in part because of its “small tools loosely joined” design philosophy. This sort of command is therefore composed piece by piece: <p style="text-align:center">◆ ◆ ◆</p> “Oh, I know, I’ll search the rev-list, which outputs commit IDs by parsing the log backwards from `HEAD`! Easy!” git rev-list --before=2020-03-17 “Blast! Forgot the commit ID!” git rev-list --before=2020-03-17 master “Double blast! It just spammed my terminal with revision IDs! I need to limit it to the single closest match: git rev-list -n 1 --before=2020-03-17 master “Okay, it gives me a single revision ID now, but is it what I’m after? Let’s take a look…” git show $(git rev-list -n 1 --before=2020-03-17 master) “Oops, that’s giving me a merge commit, not what I want. Off to search the web… Okay, it says I need to give either the `--first-parent` or `--no-merges` flag to show only regular commits, not merge-commits. Let’s try the first one:” git show $(git rev-list -n 1 --first-parent --before=2020-03-17 master) “Better. 
Let’s check it out:” git checkout $(git rev-list -n 1 --first-parent --before=2020-03-17 master) “Success, I guess?” <p style="text-align:center">◆ ◆ ◆</p> This vignette is meant to explain some of Git’s popularity: it rewards the sort of people who enjoy puzzles, many of whom are software |
︙ | ︙ | |||
1130 1131 1132 1133 1134 1135 1136 | second `git show` command above on [Git’s own repository][gitgh], your results may vary because there were four non-merge commits to Git on the 17th of March, 2020. You may be asking with an exasperated huff, “What is your *point*, man?” The point is that the equivalent in Fossil is simply: | | | 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 | second `git show` command above on [Git’s own repository][gitgh], your results may vary because there were four non-merge commits to Git on the 17th of March, 2020. You may be asking with an exasperated huff, “What is your *point*, man?” The point is that the equivalent in Fossil is simply: fossil up 2020-03-17 …which will *always* give the commit closest to midnight UTC on the 17th of March, 2020, no matter whether you do it on a fresh clone or a stale one. The answer won’t shift about from one clone to the next or from one local time of day to the next. We owe this reliability and stability to three Fossil design choices: |
︙ | ︙ | |||
1179 1180 1181 1182 1183 1184 1185 | and your family’s home NAS.


#### Git Method

We first need to clone the work repo down to our laptop, so we can work on it at home: | | | | | | | | | | | | | | | | | 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 | and your family’s home NAS.


#### Git Method

We first need to clone the work repo down to our laptop, so we can work on it at home:

    git clone https://dev-server.example.com/repo
    cd repo
    git remote rename origin work

The last command is optional, strictly speaking. We could continue to use Git’s default name for the work repo’s origin — sensibly enough called “`origin`” — but it makes later commands harder to understand, so we rename it here. This will also make the parallel with Fossil easier to draw.

The first time we go home after this, we have to reverse-clone the work repo up to the NAS:

    ssh my-nas.local 'git init --bare /SHARES/dayjob/repo.git'
    git push --all ssh://my-nas.local//SHARES/dayjob/repo.git

Realize that this is carefully optimized down to these two long commands. In practice, we’d expect a user typing these commands by hand from memory to need to give four or more commands here instead. Packing the “`git init`” call into the “`ssh`” call is something more often done in scripts and documentation examples than done interactively, which then necessitates a third command before the push, “`exit`”. There’s also a good chance that you’ll forget the need for the `--bare` option here to avoid a fatal complaint from Git that the laptop can’t push into a non-empty repo. If you fall into this trap, among the many that Git lays for newbies, you have to nuke the incorrectly initted repo, search the web or Git man pages to find out about `--bare`, and try again.

Having navigated that little minefield, we can tell Git that there is a second origin, a “home” repo in addition to the named “work” repo we set up earlier:

    git remote add home ssh://my-nas.local//SHARES/dayjob/repo.git
    git config master.remote home

We don’t have to push or pull because the remote repo is a complete clone of the repo on the laptop at this point, so we can just get to work now, committing along the way to get our work safely off-machine and onto our home NAS, like so:

    git add
    git commit
    git push

We didn’t need to give a remote name on the push because we told it the new upstream is the home NAS earlier.

Now Friday comes along, and one of your office-mates needs a feature you’re working on. You agree to come into the office later that afternoon to sync up via the dev server:

    git push work master     # send your changes from home up
    git pull work master     # get your coworkers’ changes

Alternately, we could add “`--set-upstream/-u work`” to the first command if we were coming into work long enough to do several Git-based things, not just pop in and sync. That would allow the second to be just “`git pull`”, but the cost is that when returning home, you’d have to manually reset the upstream again.

This example also shows a consequence of the fact that [Git doesn’t sync branch names](#syncall): you have to keep repeating yourself like an obsequious supplicant: “Master, master.” Didn’t we invent computers to serve humans, rather than the other way around?


#### Fossil Method

Now we’re going to do the same thing using Fossil, with the commands arranged in blocks corresponding to those above for comparison.
If you fall into this trap, among the many that Git lays for newbies, you have to nuke the incorrectly initted repo, search the web or Git man pages to find out about `--bare`, and try again. Having navigated that little minefield, we can tell Git that there is a second origin, a “home” repo in addition to the named “work” repo we set up earlier: git remote add home ssh://my-nas.local//SHARES/dayjob/repo.git git config master.remote home We don’t have to push or pull because the remote repo is a complete clone of the repo on the laptop at this point, so we can just get to work now, committing along the way to get our work safely off-machine and onto our home NAS, like so: git add git commit git push We didn’t need to give a remote name on the push because we told it the new upstream is the home NAS earlier. Now Friday comes along, and one of your office-mates needs a feature you’re working on. You agree to come into the office later that afternoon to sync up via the dev server: git push work master # send your changes from home up git pull work master # get your coworkers’ changes Alternately, we could add “`--set-upstream/-u work`” to the first command if we were coming into work long enough to do several Git-based things, not just pop in and sync. That would allow the second to be just “`git pull`”, but the cost is that when returning home, you’d have to manually reset the upstream again. This example also shows a consequence of that fact that [Git doesn’t sync branch names](#syncall): you have to keep repeating yourself like an obsequious supplicant: “Master, master.” Didn’t we invent computers to serve humans, rather than the other way around? #### Fossil Method Now we’re going to do the same thing using Fossil, with the commands arranged in blocks corresponding to those above for comparison. 
We start the same way, cloning the work repo down to the laptop: fossil clone https://dev-server.example.com/repo cd repo fossil remote add work https://dev-server.example.com/repo We’ve chosen the new “`fossil clone URI`” syntax added in Fossil 2.14 rather than separate `clone` and `open` commands to make the parallel with Git clearer. [See above](#mwd) for more on that topic. Our [`remote` command][rem] is longer than the Git equivalent because Fossil currently has no short command |
︙ | ︙ | |||
1276 1277 1278 1279 1280 1281 1282 | they’re one-time setup costs, easily amortized to insignificance by the shorter day-to-day commands below. On first beginning to work from home, we reverse-clone the Fossil repo up to the NAS: | | | | | | | | 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 | they’re one-time setup costs, easily amortized to insignificance by the shorter day-to-day commands below. On first beginning to work from home, we reverse-clone the Fossil repo up to the NAS: rsync repo.fossil my-nas.local:/SHARES/dayjob/ Now we’re beginning to see the advantage of Fossil’s simpler model, relative to the tricky “`git init && git push`” sequence above. Fossil’s alternative is almost impossible to get wrong: copy this to that. *Done.* We’re relying on the `rsync` feature that creates up to one level of missing directory (here, `dayjob/`) on the remote. If you know in advance that the remote directory already exists, you could use a slightly shorter `scp` command instead. Even with the extra 2 characters in the `rsync` form, it’s much shorter because a Fossil repository is a single SQLite database file, not a tree containing a pile of assorted files. Because of this, it works reliably without any of [the caveats inherent in using `rsync` to clone a Git repo][grsync]. Now we set up the second remote, which is again simpler in the Fossil case: fossil remote add home ssh://my-nas.local//SHARES/dayjob/repo.fossil fossil remote home The first command is nearly identical to the Git version, but the second is considerably simpler. And to be fair, you won’t find the “`git config`” command above in all Git tutorials. The more common alternative we found with web searches is even longer: “`git push --set-upstream home master`”. 
Where Fossil really wins is in the next step, making the initial commit from home: fossil ci It’s one short command for Fossil instead of three for Git — or two if you abbreviate it as “`git commit -a && git push`” — because of Fossil’s [autosync] feature and deliberate omission of a [staging feature](#staging). The “Friday afternoon sync-up” case is simpler, too: fossil remote work fossil sync Back at home, it’s simpler still: we may be able to do away with the second command, saying just “`fossil remote home`” because the sync will happen as part of the next commit, thanks once again to Fossil’s autosync feature. If the working branch now has commits from other developers after syncing with the central repository, though, you’ll want to say “`fossil up`” to avoid creating an inadvertent fork in the branch. |
︙ | ︙ |
Changes to www/globs.md.
︙ | ︙ | |||
40 41 42 43 44 45 46 | The parser allows whitespace and commas in a pattern by quoting _the entire pattern_ with either single or double quotation marks. Internal quotation marks are treated literally. Moreover, a pattern that begins with a quote mark ends when the first instance of the same mark occurs, _not_ at a whitespace or comma. Thus, this: | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | The parser allows whitespace and commas in a pattern by quoting _the entire pattern_ with either single or double quotation marks. Internal quotation marks are treated literally. Moreover, a pattern that begins with a quote mark ends when the first instance of the same mark occurs, _not_ at a whitespace or comma. Thus, this: "foo bar"qux …constitutes _two_ patterns rather than one with an embedded space, in contravention of normal shell quoting rules. A list matches a file when any pattern in that list matches. A pattern must consume and |
︙ | ︙ |
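As a concrete illustration of the quoting rule above: in a glob list — for example a hypothetical versioned `.fossil-settings/ignore-glob` file, with invented patterns — a pattern containing a space must be quoted in its entirety:

```
*.o
*.tmp
"build output/*"
```

Here the quotation marks are consumed by Fossil’s glob parser, not by a shell, so the third pattern matches files under a directory literally named `build output`; writing something like `"build output"/*.o` would instead parse as two patterns, per the rule above.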
Changes to www/glossary.md.
︙ | ︙ | |||
165 166 167 168 169 170 171 | recommend keeping them all in a single subdirectory such as "`~/fossils`" or "`%USERPROFILE%\Fossils`". A flat set of files suffices for simple purposes, but you may have use for something more complicated. This author uses a scheme like the following on mobile machines that shuttle between home and the office: ``` pikchr toggle indent | < | 165 166 167 168 169 170 171 172 173 174 175 176 177 178 | recommend keeping them all in a single subdirectory such as "`~/fossils`" or "`%USERPROFILE%\Fossils`". A flat set of files suffices for simple purposes, but you may have use for something more complicated. This author uses a scheme like the following on mobile machines that shuttle between home and the office: ``` pikchr toggle indent box "~/museum/" fit move right 0.1 line right dotted move right 0.05 box invis "where one stores valuable fossils" ljust arrow down 50% from first box.s then right 50% |
︙ | ︙ |
Changes to www/grep.md.
︙ | ︙ | |||
43 44 45 46 47 48 49 | Fossil `grep` doesn’t support any of the GNU and BSD `grep` extensions. For instance, it doesn’t support the common `-R` extension to POSIX, which would presumably search a subtree of managed files. If Fossil does one day get this feature, it would have a different option letter, since `-R` in Fossil has a different meaning, by convention. Until then, you can get the same effect on systems with a POSIX shell like so: | | | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 | Fossil `grep` doesn’t support any of the GNU and BSD `grep` extensions. For instance, it doesn’t support the common `-R` extension to POSIX, which would presumably search a subtree of managed files. If Fossil does one day get this feature, it would have a different option letter, since `-R` in Fossil has a different meaning, by convention. Until then, you can get the same effect on systems with a POSIX shell like so:

    $ fossil grep COMMAND: $(fossil ls src)

If you run that in a check-out of the [Fossil self-hosting source repository][fshsr], that returns the first line of the built-in documentation for each Fossil command, across all historical versions.

Fossil `grep` has extensions relative to these other `grep` standards, such as `--verbose` to print each checkin ID considered, regardless of
︙ | ︙ |
Changes to www/hashes.md.
# Hashes: Fossil Artifact Identification

All artifacts in Fossil are identified by a unique hash, currently
using [the SHA3 algorithm by default][hpol], but historically using
the SHA1 algorithm:

| Algorithm | Raw Bits | Hexadecimal digits |
|-----------|----------|--------------------|
| SHA3-256  | 256      | 64                 |
| SHA1      | 160      | 40                 |

There are many types of artifacts in Fossil: commits (a.k.a.
check-ins), tickets, ticket comments, wiki articles, forum postings,
file data belonging to check-ins, etc.
([More info...](./concepts.wiki#artifacts))

There is a loose hierarchy of terms used instead of “hash” in various
parts of the Fossil UI, which we cover in the sections below.
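The digest widths in the table above are easy to confirm with any SHA implementation. A quick Python sketch (illustrative only, not part of Fossil; the sample content is made up):

```python
import hashlib

# Hash some stand-in artifact content with both algorithms and
# compare the hexadecimal digest lengths from the table.
data = b"example artifact content\n"
sha3 = hashlib.sha3_256(data).hexdigest()
sha1 = hashlib.sha1(data).hexdigest()

print(len(sha3), len(sha1))  # 64 40: 256 and 160 raw bits as hex digits
```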
︙
Changes to www/hashpolicy.wiki.
︙

repositories can be overridden using the "--sha1" option to the
"fossil new" command.

If you are still on Fossil 2.1 through 2.9 but you want Fossil to go
ahead and start using SHA3 hashes, change the hash policy to "sha3"
using a command like this:

<verbatim>
fossil hash-policy sha3
</verbatim>

The next check-in will use a SHA3 hash, so that when that check-in is
pushed to colleagues, their clones will include the new SHA3-named
artifact, so their local Fossil instances will automatically convert
their clones to "sha3" mode as well.

Of course, if some members of your team stubbornly refuse to upgrade past
︙
Changes to www/hints.wiki.
︙

on in the Fossil repository on 2008-01-01, visit
[/timeline?c=2008-01-01].

7. Further to the previous two hints, there are lots of query
parameters that you can add to timeline pages. The available query
parameters are tersely documented [/help?cmd=/timeline | here].

8. You can run "[/help?cmd=xdiff | fossil xdiff --tk $file1 $file2]"
to get a Tk pop-up window with side-by-side diffs of two files, even if
neither of the two files is part of any Fossil repository. Note that
this command is "xdiff", not "diff". Change <nobr>--tk</nobr> to
<nobr>--by</nobr> to see the diff in your web browser.

9. On web pages showing the content of a file (for example
[/artifact/c7dd1de9f]) you can manually add a query parameter of the
form "ln=FROM,TO" to the URL that will cause the range of lines
indicated to be highlighted. This is useful in pointing out a few lines
of code using a hyperlink in an email or text message. Example:
︙
Changes to www/image-format-vs-repo-size.md.
︙

Since programs that produce and consume binary-compressed data files
often make it either difficult or impossible to work with the
uncompressed form, we want an automated method for producing the
uncompressed form to make Fossil happy while still having the
compressed form to keep our content creation applications happy. This
`Makefile` should[^makefile] do that for BMP, PNG, SVG, and XLSX files:

    .SUFFIXES: .bmp .png .svg .svgz

    .svgz.svg:
    	gzip -dc < $< > $@

    .svg.svgz:
    	gzip -9c < $< > $@

    .bmp.png:
    	convert -quality 95 $< $@

    .png.bmp:
    	convert $< $@

    SS_FILES := $(wildcard spreadsheet/*)

    all: $(SS_FILES) illus.svg image.bmp doc-big.pdf

    reconstitute: illus.svgz image.png spreadsheet.xlsx
    	qpdf doc-big.pdf doc-small.pdf

    spreadsheet.xlsx: $(SS_FILES)
    	( cd spreadsheet ; zip -9 ../spreadsheet.xlsx * )

    $(SS_FILES): spreadsheet.xlsx
    	unzip $< -d spreadsheet

    doc-big.pdf: doc-small.pdf
    	qpdf --stream-data=uncompress $< $@

This `Makefile` allows you to treat the compressed version as the
process input, but to actually check in only the changes against the
uncompressed version by typing “`make`” before “`fossil ci`”. This is
not actually an extra step in practice, since if you’ve got a
`Makefile`-based project, you should be building — and testing — it
before checking each change in anyway!
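The `.svg.svgz` and `.svgz.svg` rules above work only because gzip is a lossless transform, so either form can always be regenerated from the other. A small Python sketch of that round trip (illustrative only; the sample SVG is made up, and real `gzip -9c`/`gzip -dc` invocations do the same job):

```python
import gzip

svg = b'<svg xmlns="http://www.w3.org/2000/svg"></svg>'

# Like the .svg.svgz rule: compress at the maximum level ("gzip -9c")
svgz = gzip.compress(svg, compresslevel=9)

# Like the .svgz.svg rule: decompress back ("gzip -dc")
restored = gzip.decompress(svgz)

print(restored == svg)  # True: the conversion loses nothing
```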
︙
Changes to www/index.wiki.
<title>A Coherent Software Configuration Management System</title>
<h3>What Is It?</h3>

<div class="nomargins" style='float:right;border:2px solid #446979;padding:0 15px 10px 0;margin:0 50px 0 10px'>
<ul>
<li> [/uv/download.html | Download]
<li> [./quickstart.wiki | Quick Start]
<li> [./build.wiki | Install]
<li> [https://fossil-scm.org/forum | Support/Forum ]
<li> [./hints.wiki | Tips & Hints]
<li> [./changes.wiki | Change Log]
<li> [../COPYRIGHT-BSD2.txt | License]
<li> [./userlinks.wiki | User Links]
<li> [./hacker-howto.wiki | Hacker How-To]
<li> [./fossil-v-git.wiki | Fossil vs. Git]
<li> [./permutedindex.html | Doc Index]
</ul>
<p style="text-align:center"><img src="fossil3.gif" alt="Fossil logo"></p>
</div>

Fossil is a simple, high-reliability, distributed
[https://en.wikipedia.org/wiki/Software_configuration_management | SCM]
system with these advanced features:

  1.  <b>Project Management</b> —
      In addition to doing [./concepts.wiki | distributed version control]
      like Git and Mercurial, Fossil also supports
      [./bugtheory.wiki | bug tracking], [./wikitheory.wiki | wiki],
      [./forum.wiki | forum], [./alerts.md|email alerts],
      [./chat.md | chat], and [./event.wiki | technotes].

  2.  <b>Built-in Web Interface</b> —
      Fossil has a built-in, [/skins | themeable],
      [./serverext.wiki | extensible], and intuitive
      [./webui.wiki | web interface] with a rich variety of information
      pages ([./webpage-ex.md|examples]) promoting situational awareness.
      <br><br>
      This entire website is just a running instance of Fossil.
      The pages you see here are all [./wikitheory.wiki | wiki] or
      [./embeddeddoc.wiki | embedded documentation] or (in the case of
      the [/uv/download.html|download] page)
      [./unvers.wiki | unversioned files]. When you clone Fossil from
      one of its [./selfhost.wiki | self-hosting repositories], you get
      more than just source code — you get this entire website.

  3.  <b>All-in-one</b> —
      Fossil is a single self-contained, stand-alone executable. To
      install, simply download a [/uv/download.html | precompiled binary]
      for Linux, Mac, or Windows and put it on your $PATH.
      [./build.wiki | Easy-to-compile source code] is also available.

  4.  <b>Self-host Friendly</b> —
      Stand up a project website in minutes using
      [./server/ | a variety of techniques]. Fossil is CPU and memory
      efficient. Most projects can be hosted comfortably on a $5/month
      VPS or a Raspberry Pi. You can also set up an automatic
      [./mirrortogithub.md | GitHub mirror].

  5.  <b>Simple Networking</b> —
      Fossil uses ordinary HTTPS (or SSH if you prefer) for network
      communications, so it works fine from behind firewalls and
      [./quickstart.wiki#proxy|proxies]. The protocol is
      [./stats.wiki | bandwidth efficient] to the point that Fossil can
      be used comfortably over dial-up, weak 3G, or airliner Wifi.

  6.  <b>Autosync</b> —
      Fossil supports [./concepts.wiki#workflow | "autosync" mode]
      which helps to keep projects moving forward by reducing the
      amount of needless [./branching.wiki | forking and merging] often
      associated with distributed projects.

  7.  <b>Robust & Reliable</b> —
      Fossil stores content using an
      [./fileformat.wiki | enduring file format] in an SQLite database
      so that transactions are atomic even if interrupted by a power
      loss or system crash. Automatic [./selfcheck.wiki | self-checks]
      verify that all aspects of the repository are consistent prior to
      each commit.

  8.  <b>Free and Open-Source</b> —
      [../COPYRIGHT-BSD2.txt|2-clause BSD license].

<hr>
<h3>Latest Release: 2.23 ([/timeline?c=version-2.23|2023-11-01])</h3>

  *  [/uv/download.html|Download]
  *  [./changes.wiki#v2_23|Change Summary]
  *  [/timeline?p=version-2.23&bt=version-2.22&y=ci|Check-ins in version 2.23]
︙
Changes to www/inout.wiki.
<title>Import And Export</title>

Fossil has the ability to import and export repositories from and to
[http://git-scm.com/ | Git]. And since most other version control
systems will also import/export from Git, that means that you can
import/export a Fossil repository to most version control systems using
Git as an intermediary.

<h2>Git → Fossil</h2>

To import a Git repository into Fossil, say something like:

<pre>
cd git-repo
git fast-export --all | fossil import --git new-repo.fossil
</pre>

The 3rd argument to the "fossil import" command is the name of a new
Fossil repository that is created to hold the Git content.

The --git option is not actually required. The git-fast-export file
format is currently the only VCS interchange format that Fossil
understands. But
︙
any dependency on the amount of data involved.

<h2>Fossil → Git</h2>

To convert a Fossil repository into a Git repository, run commands like
this:

<pre>
git init new-repo
cd new-repo
fossil export --git ../repo.fossil | git fast-import
</pre>

In other words, create a new Git repository, then pipe the output from
the "fossil export --git" command into the "git fast-import" command.

Note that the "fossil export --git" command only exports the versioned
files. Tickets and wiki and events are not exported, since Git does not
understand those concepts.
︙
artifacts which are known by both Git and Fossil to exist at a given
point in time.

To illustrate, consider the example of a remote Fossil repository that
a user wants to import into a local Git repository. First, the user
would clone the remote repository and import it into a new Git
repository:

<pre>
fossil clone /path/to/remote/repo.fossil repo.fossil
mkdir repo
cd repo
fossil open ../repo.fossil
mkdir ../repo.git
cd ../repo.git
git init .
fossil export --git --export-marks ../repo/fossil.marks \
    ../repo.fossil | git fast-import \
    --export-marks=../repo/git.marks
</pre>

Once the import has completed, the user would need to <tt>git checkout
trunk</tt>. At any point after this, new changes can be imported from
the remote Fossil repository:

<pre>
cd ../repo
fossil pull
cd ../repo.git
fossil export --git --import-marks ../repo/fossil.marks \
    --export-marks ../repo/fossil.marks \
    ../repo.fossil | git fast-import \
    --import-marks=../repo/git.marks \
    --export-marks=../repo/git.marks
</pre>

Changes in the Git repository can be exported to the Fossil repository
and then pushed to the remote:

<pre>
git fast-export --import-marks=../repo/git.marks \
    --export-marks=../repo/git.marks --all | fossil import --git \
    --incremental --import-marks ../repo/fossil.marks \
    --export-marks ../repo/fossil.marks ../repo.fossil
cd ../repo
fossil push
</pre>
Changes to www/javascript.md.
︙

_Workaround:_ You don’t have to use the browser-based wiki editor to
maintain your repository’s wiki at all. Fossil’s [`wiki` command][fwc]
lets you manipulate wiki documents from the command line. For example,
consider this Vi based workflow:

```shell
$ vi 'My Article.wiki'                 # begin work on new article
  ...write, write, write...
:w                                     # save changes to disk copy
:!fossil wiki create 'My Article' '%'  # current file (%) to new article
  ...write, write, write some more...
:w                                     # save again
:!fossil wiki commit 'My Article' '%'  # update article from disk
:q                                     # done writing for today

  ....days later...

$ vi                                   # work sans named file today
:r !fossil wiki export 'My Article' -  # pull article text into vi buffer
  ...write, write, write yet more...
:w !fossil wiki commit -               # vi buffer updates article
```

Extending this concept to other text editors is an exercise left to
the reader.

[fwc]: /help?cmd=wiki
[fwt]: ./wikitheory.wiki
︙
_Potential Workaround:_ It would not be especially difficult for
someone sufficiently motivated to build a Fossil chat gateway,
connecting to IRC, Jabber, etc. The messages are stored in the
repository’s `chat` table with monotonically increasing IDs, so a
poller that did something like

        SELECT xfrom, xmsg FROM chat WHERE msgid > 1234;

…would pull the messages submitted since the last poll. Making the
gateway bidirectional should be possible as well, as long as it
properly uses SQLite transactions.

### <a id="brlist"></a>List of branches
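The polling idea above can be sketched in a few lines of Python with the standard `sqlite3` module. This is illustrative only: it runs against a stand-in in-memory `chat` table (the real repository table has more columns); the only assumptions used are the `msgid`/`xfrom`/`xmsg` columns and the monotonically increasing `msgid`:

```python
import sqlite3

# Stand-in for the repository's chat table, with some sample rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chat(msgid INTEGER PRIMARY KEY, xfrom TEXT, xmsg TEXT)")
db.executemany("INSERT INTO chat(xfrom, xmsg) VALUES(?, ?)",
               [("alice", "hello"), ("bob", "hi"), ("alice", "news?")])

def poll(since):
    """Return (msgid, xfrom, xmsg) rows newer than `since`."""
    return db.execute(
        "SELECT msgid, xfrom, xmsg FROM chat WHERE msgid > ?", (since,)
    ).fetchall()

rows = poll(1)           # everything after message 1
last_seen = rows[-1][0]  # high-water mark to pass to the next poll
print(rows)
```

A gateway would simply repeat `poll(last_seen)` on a timer and forward any rows it gets.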
︙
Changes to www/loadmgmt.md.
︙

due to excessive requests to expensive pages:

1. An optional cache is available that remembers the 10 most recently
   requested `/zip` or `/tarball` pages and returns the precomputed
   answer if the same page is requested again.

2. Page requests can be configured to fail with a
   “[503 Server Overload][503]” HTTP error if any request is received
   while the host load average is too high.

Both of these load-control mechanisms are turned off by default, but
they are recommended for high-traffic sites. Users with
[admin permissions](caps/index.md) are exempt from these restrictions,
provided they are logged in before the load gets too high (login is
disabled under high load).

The webpage cache is activated using the
[`fossil cache init`](/help/cache) command-line on the server. Add a
`-R` option to specify the specific repository for which to enable
caching. If running this command as root, be sure to “`chown`” the
cache database to give the Fossil server write permission for the user
ID of the web server; this is a separate file in the same directory and
with the same name as the repository but with the “`.fossil`” suffix
changed to “`.cache`”.

To activate the server load control feature visit the Admin → Access
setup page in the administrative web interface; in the “**Server Load
Average Limit**” box enter the load average threshold above which
“503 Server Overload” replies will be issued for expensive requests.
On the self-hosting Fossil server, that value is set to 1.5, but you
could easily set it higher on a multi-core server. The maximum load
average can also be set on the command line using commands like this:

    fossil set max-loadavg 1.5
    fossil all set max-loadavg 1.5

The second form is especially useful for changing the maximum load
average simultaneously on a large number of repositories.

Note that this load-average limiting feature is only available on
operating systems that support the [`getloadavg()`][gla] API. Most
modern Unix systems have this interface, but Windows does not, so the
feature will not work on Windows.

Because Linux implements `getloadavg()` by accessing the
`/proc/loadavg` virtual file, you will need to make sure `/proc` is
available to the Fossil server. The most common reason for it to not be
available is that you are running a Fossil instance
[inside a `chroot(2)` jail](./chroot.md) and you have not mounted the
`/proc` virtual file system inside that jail. On the
[self-hosting Fossil repositories][sh], this was accomplished by adding
a line to the `/etc/fstab` file:

    chroot_jail_proc /home/www/proc proc ro 0 0

The `/home/www/proc` pathname should be adjusted so that the `/proc`
component is at the root of the chroot jail, of course.

To see if the load-average limiter is functional, visit the
[`/test_env`][hte] page of the server to view the current load average.
If the value for the load average is greater than zero, that means that
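The load-average check described above can be sketched in Python, which exposes the same `getloadavg()` API the document refers to. This is only an illustration of the decision logic (Fossil itself does this in C), including the caveat that platforms without `getloadavg()` effectively disable the limiter:

```python
import os

def overloaded(limit=1.5):
    """Report whether the 1-minute load average exceeds `limit`.

    On platforms lacking getloadavg() (e.g. Windows), report False:
    with no load information, the limiter never refuses requests.
    """
    try:
        one_minute = os.getloadavg()[0]
    except (OSError, AttributeError):
        return False
    return one_minute > limit

print(overloaded(1e9))  # False: no real load average exceeds 1e9
```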
︙
Changes to www/makefile.wiki.
︙

The VERSION.h header file is generated by a C program:
tools/mkversion.c. To run the VERSION.h generator, first compile the
tools/mkversion.c source file into a command-line program (named
"mkversion.exe") then run:

<pre>
mkversion.exe manifest.uuid manifest VERSION >VERSION.h
</pre>

The pathnames in the above command might need to be adjusted to get the
directories right. The point is that the manifest.uuid, manifest, and
VERSION files in the root of the source tree are the three arguments
and the generated VERSION.h file appears on standard output.

The builtin_data.h header file is generated by a C program:
tools/mkbuiltin.c. The builtin_data.h file contains C-language
byte-array definitions for the content of resource files used by
Fossil. To generate the builtin_data.h file, first compile the
mkbuiltin.c program, then run:

<pre>
mkbuiltin.exe diff.tcl <i>OtherFiles...</i> >builtin_data.h
</pre>

At the time of this writing, the "diff.tcl" script (a Tcl/Tk script
used to implement the --tk option on the diff command) is the only
resource file processed using mkbuiltin.exe. However, new resources
will likely be added using this facility in future versions of Fossil.

<h1 id="preprocessing">4.0 Preprocessing</h1>
︙
The mkindex program scans the "src.c" source files looking for special
comments that identify routines that implement various Fossil commands,
web interface methods, and help text comments. The mkindex program
generates some C code that Fossil uses in order to dispatch commands
and HTTP requests and to show on-line help. Compile the mkindex program
from the mkindex.c source file. Then run:

<pre>
./mkindex src.c >page_index.h
</pre>

Note that "src.c" in the above is a stand-in for the (79) regular
source files of Fossil - all source files except for the exceptions
described in section 2.0 above. The output of the mkindex program is a
header file that is #include-ed by the main.c source file during the
final compilation step.

<h2>4.2 The translate preprocessor</h2>

The translate preprocessor looks for lines of source code that begin
with "@" and converts those lines into string constants or (depending
on context) into special "printf" operations for generating the output
of an HTTP request. The translate preprocessor is a simple C program
whose sources are in the translate.c source file. The translate
preprocessor is run on each of the other ordinary source files
separately, like this:

<pre>
./translate src.c >src_.c
</pre>

In this case, the "src.c" file represents any single source file from
the set of ordinary source files as described in section 2.0 above.
Note that each source file is translated separately. By convention, the
names of the translated source files are the names of the input sources
with a single "_" character at the end. But a new makefile can use any
naming convention it wants - the "_" is not critical to the build
process.
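To make the "@"-line transformation concrete, here is a toy model of the idea in Python. It is emphatically not the real translate.c (which handles multi-line runs, %-substitutions, and more careful quoting); the `cgi_printf` output form and the escaping shown are simplifying assumptions for illustration:

```python
def translate(lines):
    """Toy model of Fossil's translate step: lines beginning with "@"
    become printf-style output calls; all other lines pass through."""
    out = []
    for line in lines:
        if line.startswith("@"):
            text = line[1:].lstrip()
            # Escape backslashes and quotes so the C literal stays valid.
            text = text.replace("\\", "\\\\").replace('"', '\\"')
            out.append('cgi_printf("%s\\n");' % text)
        else:
            out.append(line)
    return out

print(translate(['int f(void){', '@ <h1>Hello</h1>', '}']))
```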
︙

The makeheaders program is run once. It scans all input source files
and generates header files for each one. Note that the sqlite3.c and
shell.c source files are not scanned by makeheaders. Makeheaders only
runs over "ordinary" source files, not the exceptional source files.
However, makeheaders also uses some extra header files as input. The
general format is like this:

<pre>
makeheaders src_.c:src.h sqlite3.h th.h VERSION.h
</pre>

In the example above the "src_.c" and "src.h" names represent all of
the (79) ordinary C source files, each as a separate argument.

<h1>5.0 Compilation</h1>

After all generated files have been created and all ordinary source files
︙
However, in practice it is instead recommended to add a respective
configure option for the target platform and then perform a clean
build. This way the Debug flags are consistently applied across the
whole build process. For example, use these Debug flags in addition to
other flags passed to the configure scripts:

On Linux, *NIX and similar platforms:

<pre>
./configure --fossil-debug
</pre>

On Windows:

<pre>
win\buildmsvc.bat FOSSIL_DEBUG=1
</pre>

The resulting fossil binary could then be loaded into a
platform-specific debugger. Source files displayed in the debugger
correspond to the ones generated from the translation stage of the
build process, that is what was actually compiled into the object
files.

<h1>8.0 See Also</h1>

  *  [./tech_overview.wiki | A Technical Overview Of Fossil]
  *  [./adding_code.wiki | How To Add Features To Fossil]
Changes to www/mirrortogithub.md.
︙

2. Create a new project. GitHub will ask you if you want to
   prepopulate your project with various things like a README file.
   Answer "no" to everything. You want a completely blank project.
   GitHub will then supply you with a URL for your project that will
   look something like this:

        https://github.com/username/project.git

3. Back on your workstation, move to a checkout for your Fossil
   project and type:

   <blockquote>
   <pre>
   $ fossil git export /path/to/git/repo --autopush \
     https://<font color="orange">username</font>:<font color="red">password</font>@github.com/username/project.git
   </pre>
   </blockquote>

   In place of the <code>/path/to...</code> argument above, put in some
   directory name that is <i>outside</i> of your Fossil checkout. If
   you keep multiple Fossil checkouts in a directory of their own,
   consider using <code>../git-mirror</code> to place the Git export
︙
︙

5. And you are done! Assuming everything worked, your project is now
   mirrored on GitHub.

6. Whenever you update your project, simply run this command to update
   the mirror:

        $ fossil git export

   Unlike with the first time you ran that command, you don’t need the
   remaining arguments, because Fossil remembers those things.
   Subsequent mirror updates should usually happen in a fraction of a
   second.

7. To see the status of your mirror, run:

        $ fossil git status

## Notes:

  *  Unless you specify --force, the mirroring only happens if the
     Fossil repo has changed, with Fossil reporting "no changes",
     because Fossil does not care about the success or failure of the
     mirror run. If a mirror run failed (for example, due to an
     incorrect password, or a transient
︙
︙

## <a id='ex1'></a>Example GitHub Mirrors

As of this writing (2019-03-16) Fossil’s own repository is mirrored
on GitHub at:

> <https://github.com/drhsqlite/fossil-mirror>

In addition, an official Git mirror of SQLite is available:

> <https://github.com/sqlite/sqlite>

The Fossil source repositories for these mirrors are at
<https://www2.fossil-scm.org/fossil> and <https://www2.sqlite.org/src>,
respectively. Both repositories are hosted on the same VM at
[Linode](https://www.linode.com). On that machine, there is a
[cron job](https://linux.die.net/man/8/cron) that runs at 17 minutes
after the hour, every hour, that does:

    /usr/bin/fossil sync -u -R /home/www/fossil/fossil.fossil
    /usr/bin/fossil sync -R /home/www/fossil/sqlite.fossil
    /usr/bin/fossil git export -R /home/www/fossil/fossil.fossil
    /usr/bin/fossil git export -R /home/www/fossil/sqlite.fossil

The initial two "sync" commands pull in changes from the primary Fossil
repositories for Fossil and SQLite. The last two lines export the
changes to Git and push the results up to GitHub.
Changes to www/mkindex.tcl.
︙

<li> <a href='quickstart.wiki'>Quick-start Guide</a>
<li> <a href='$ROOT/help'>Built-in help for commands and webpages</a>
<li> <a href='history.md'>Purpose and History of Fossil</a>
<li> <a href='build.wiki'>Compiling and installing Fossil</a>
<li> <a href='../COPYRIGHT-BSD2.txt'>License</a>
<li> <a href='userlinks.wiki'>Miscellaneous Docs for Fossil Users</a>
<li> <a href='hacker-howto.wiki'>Fossil Developer's Guide</a>
<li> <a href='$ROOT/wiki?name=Release Build How-To'>Release Build
     How-To</a>, a.k.a. how deliverables are built</li>
<li> <a href='$ROOT/wiki?name=To+Do+List'>To Do List (Wiki)</a>
<li> <a href='https://fossil-scm.org/fossil-book/'>Fossil book</a>
</ul>

<h2 id="pindex">Other Documents:</h2>
<ul>}
foreach entry $permindex {
︙
Changes to www/newrepo.wiki.
<title>How To Create A New Fossil Repository</title>

The [/doc/tip/www/quickstart.wiki|quickstart guide] explains how to get
up and running with fossil. But once you're running, what can you do
with it? This document will walk you through the process of creating a
fossil repository, populating it with files, and then sharing it over
the web.

The first thing we need to do is create a fossil repository file:

<verbatim>
$ fossil new demo.fossil
project-id: 9d8ccff5671796ee04e60af6932aa7788f0a990a
server-id:  145fe7d71e3b513ac37ac283979d73e12ca04bfe
admin-user: stephan (initial password is ******)
</verbatim>

The numbers it spits out are unimportant (they are version numbers).
Now we have an empty repository file named <tt>demo.fossil</tt>. There
is nothing magical about the extension <tt>.fossil</tt> - it's just a
convention. You may name your files anything you like.

The first thing we normally want to do is to run fossil as a local
server so that you can configure the access rights to the repo:

<verbatim>
$ fossil ui demo.fossil
</verbatim>

The <tt>ui</tt> command starts up a server (with an optional
<tt>-port NUMBER</tt> argument) and launches a web browser pointing at
the fossil server. From there it takes just a few moments to configure
the repo. Most importantly, go to the Admin menu, then the Users link,
and set your account name and password, and grant your account all
access privileges. (I also like to grant Clone access to the anonymous
user, but that's personal preference.) Once you are done, kill the
fossil server (with Ctrl-C or equivalent) and close the browser window.

<div class="sidebar">
It is not strictly required to configure a repository this way, but if
you are going to share a repo over the net then it is highly
recommended. If you are only going to work with the repo locally, you
can skip the configuration step and do it later if you decide you want
to share your repo.
</div>

The next thing we need to do is <em>open</em> the repository. To do so
we create a working directory and then <tt>cd</tt> to it:

<verbatim>
$ mkdir demo
$ cd demo
$ fossil open ../demo.fossil
</verbatim>

That creates a file called <tt>_FOSSIL_</tt> in the current directory,
and this file contains all kinds of fossil-related information about
your local repository. You can ignore it for all purposes, but be sure
not to accidentally remove it or otherwise damage it - it belongs to
fossil, not you.

The next thing we need to do is add files to our repository. As it
happens, we have a few C source files lying around, which we'll simply
copy into our working directory.

<verbatim>
$ cp ../csnip/*.{c,h} .
$ ls
clob.c           clob.h           clobz.c        mkdep.c
test-clob.c      tokenize_path.c  tokenize_path.h
vappendf.c       vappendf.h
</verbatim>

Fossil doesn't know about those files yet. Telling fossil about a new
file is a two-step process. First we <em>add</em> the file to the
repository, then we <em>commit</em> the file. This is a familiar
process for anyone who's worked with SCM systems before:

<verbatim>
$ fossil add *.{c,h}
$ fossil commit -m "egg"
New_Version: d1296b4a08b9f8b943bb6c73698e51eed23f8f91
</verbatim>

We now have a working repository! The file <tt>demo.fossil</tt> is the
central storage, and we can share it amongst an arbitrary number of
trees. As a silly example:

<verbatim>
$ cd ~/fossil
$ mkdir demo2
$ cd demo2
$ fossil open ../demo.fossil
ADD clob.c
ADD clob.h
ADD clobz.c
ADD mkdep.c
ADD test-clob.c
ADD tokenize_path.c
ADD tokenize_path.h
ADD vappendf.c
</verbatim>

You may modify the repository (e.g. add, remove, or commit files) from
both working directories, and doing so might be useful when working on
a branch or experimental code.

Making your repository available over the web is trivial to do. We
assume you have some web space where you can store your fossil file and
run a CGI script. If not, then this option is not for you. If you do,
then here's how...

Copy the fossil repository file to your web server (it doesn't matter
where, really, but it "should" be unreachable by web browser traffic).
In your <tt>cgi-bin</tt> (or equivalent) directory, create a file which
looks like this:

<verbatim>
#!/path/to/fossil
repository: /path/to/my_repo.fossil
</verbatim>

Make that script executable, and you're all ready to go:

<verbatim>
$ chmod +x ~/www/cgi-bin/myrepo.cgi
</verbatim>

Now simply point your browser to
<tt>https://my.domain/cgi-bin/myrepo.cgi</tt> and you should be able to
manage the repository from there. To check out a copy of your remote
repository, use the <em>clone</em> command:

<verbatim>
$ fossil clone \
  https://MyAccountName:MyAccountPassword@my.domain/cgi-bin/myrepo.cgi
</verbatim>

If you do not provide your password in the URL, fossil will
interactively prompt you for it.
A clone is a local copy of a remote repository, and can be opened just like a local one (as shown above). It is treated identically to your local repository, with one very important difference. When you commit changes to a cloned remote repository, they will be pushed back to the remote repository. If you have <tt>autosync</tt> on then this sync happens automatically, otherwise you will need to use the |
︙ | ︙ |
Changes to www/password.wiki.
<title>Fossil Password Management</title>

Fossil handles user authentication using passwords. Passwords are
unique to each repository. Passwords are not part of the persistent
state of a project. Passwords are not versioned and are not transmitted
from one repository to another during a sync. Passwords are local
configuration information that can (and usually does) vary from one
repository to the next within the same project.
︙ | ︙ | |||
The SHA1 hash in the USER.PW field is a hash of a string composed of
the project-code, the user login, and the user cleartext password.
Suppose user "alice" with password "asdfg" had an account on the
Fossil self-hosting repository. Then the value of USER.PW for alice
would be the SHA1 hash of

<pre>
CE59BB9F186226D80E49D1FA2DB29F935CCA0333/alice/asdfg
</pre>

Note that by including the project-code and the login as part of the
hash, a different USER.PW value results even if two or more users on
the repository select the same "asdfg" password, or if user alice
reuses the same password on multiple projects.

Whenever a password is changed using the web interface or using the
"user" command, the new password is stored using the SHA1 encoding.
Thus, cleartext passwords will gradually migrate to become SHA1
passwords. All remaining cleartext passwords can be converted to SHA1
passwords using the following command:

<pre>
fossil test-hash-passwords <i>REPOSITORY-NAME</i>
</pre>

Remember that converting from cleartext to SHA1 passwords is an
irreversible operation.

The only way to insert a new cleartext password into the USER table is
to do so manually using SQL commands. For example:

<pre>
UPDATE user SET pw='asdfg' WHERE login='alice';
</pre>

Note that a password that is an empty string or NULL will disable all
login for that user. Thus, to lock a user out of the system, one has
only to set their password to an empty string, using either the web
interface or direct SQL manipulation of the USER table.
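The USER.PW derivation described above can be sketched in a few lines of Python. This is an illustrative sketch, not Fossil's actual implementation (which is C); the function name `fossil_pw_hash` is hypothetical, and only the hash construction (SHA1 over "project-code/login/password") comes from the text above.

```python
import hashlib

def fossil_pw_hash(project_code: str, login: str, password: str) -> str:
    # USER.PW is the SHA1 hex digest of "project-code/login/cleartext-password",
    # as described in the text above.  (Hypothetical helper name.)
    material = f"{project_code}/{login}/{password}"
    return hashlib.sha1(material.encode("utf-8")).hexdigest()

# The example from the text: user "alice", password "asdfg", on the
# Fossil self-hosting repository's project code.
h = fossil_pw_hash("CE59BB9F186226D80E49D1FA2DB29F935CCA0333", "alice", "asdfg")
print(h)  # 40 lowercase hex characters
```

Because the project code is mixed into the hash, the same cleartext password yields a different USER.PW on every project, which is the property the paragraph above emphasizes.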
Note also that the password field is essentially ignored for the special users named "anonymous", "developer", |
︙ | ︙ | |||
This means that when USER.PW holds a cleartext password, the login
card will work for both older and newer clients. If the USER.PW on
the server only holds the SHA1 hash of the password, then only newer
clients will be able to authenticate to the server.

The client normally gets the login and password from the "remote URL".

<pre>
http://<span style="color:blue">login:password</span>@servername.org/path
</pre>

For older clients, the password is used for the shared secret as
stated in the URL and with no encoding. For newer clients, the shared
secret is derived from the password by transforming the password using
the SHA1 hash encoding described above. However, if the first
character of the password is "*" (ASCII 0x2a), then the "*" is skipped
and the rest of the password is used directly as the shared secret,
without the SHA1 encoding.

<pre>
http://<span style="color:blue">login:*password</span>@servername.org/path
</pre>

This *-before-the-password trick can be used by newer clients to
sync against a legacy server that does not understand the new SHA1
password encoding.
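The client-side rule just described (leading "*" means "use the rest of the password verbatim as the shared secret"; otherwise apply the SHA1 encoding) can be sketched as follows. This is a sketch for illustration only; the function name `shared_secret` is hypothetical and the logic is paraphrased from the preceding paragraphs, not taken from Fossil's C source.

```python
import hashlib

def shared_secret(project_code: str, login: str, password: str) -> str:
    # Leading "*" bypasses the SHA1 encoding: the remainder of the
    # password is the shared secret, verbatim.
    if password.startswith("*"):
        return password[1:]
    # Otherwise derive the secret the same way USER.PW is computed:
    # SHA1("project-code/login/password"), per the description above.
    material = f"{project_code}/{login}/{password}"
    return hashlib.sha1(material.encode("utf-8")).hexdigest()
```

A client syncing against a legacy server would therefore put `*password` in the URL so the cleartext (minus the `*`) travels as the shared secret instead of its SHA1-encoded form.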
Changes to www/patchcmd.md.
# The "fossil patch" command

The "[fossil patch](/help?cmd=patch)" command is designed to transfer
uncommitted changes from one check-out to another, including transferring
those changes to other machines. For example, if you are working
on a Windows desktop and you want to test your changes on a Linux
server before you commit, you can use the "fossil patch push" command
to make a copy of all your changes on the remote Linux server:

    fossil patch push linuxserver:/path/to/checkout

In the command above, "linuxserver" is the name of the remote machine and
"/path/to/checkout" is an existing checkout directory for the same project
on the remote machine. The "fossil patch push" command works by first
creating a patch file, then transferring that patch file to the remote
machine using "ssh", then
︙ | ︙ | |||
33 34 35 36 37 38 39 | The "fossil patch push" and "fossil patch pull" commands will only work if you have "ssh" available on the local machine and if "fossil" is on the default PATH on the remote machine. To check if Fossil is installed correctly on the remote, try a command like this: | | | | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | The "fossil patch push" and "fossil patch pull" commands will only work if you have "ssh" available on the local machine and if "fossil" is on the default PATH on the remote machine. To check if Fossil is installed correctly on the remote, try a command like this: ssh -T remote "fossil version" If the command above shows a recent version of Fossil, then you should be set to go. If you get "fossil not found", or if the version shown is too old, put a newer fossil executable on the default PATH. The default PATH can be shown using: ssh -T remote 'echo $PATH' ### Custom PATH Caveat On Unix-like systems, the init script for the user's login shell (e.g. `~/.profile` or `~/.bash_profile`) may be configured to *not do anything* when running under a non-interactive shell. Thus a fossil binary installed to a custom directory might not be found. To allow |
︙ | ︙ | |||
The "fossil patch apply" command reads the database that is the patch
file and applies it to the local check-out. If a filename is given as
an argument, then the database is read from that file. If the argument
is "-", then the database is read from standard input. Hence, the command:

    fossil patch push remote:projectA

is equivalent to:

    fossil patch create - | ssh -T remote 'cd projectA;fossil patch apply -'

Likewise, a command like this:

    fossil patch pull remote:projB

could be entered like this:

    ssh -T remote 'cd projB;fossil patch create -' | fossil patch apply -

The "fossil patch view" command just opens the database file and prints
a summary of its contents on standard output.
Changes to www/permutedindex.html.
︙ | ︙ | |||
<li> <a href='quickstart.wiki'>Quick-start Guide</a>
<li> <a href='$ROOT/help'>Built-in help for commands and webpages</a>
<li> <a href='history.md'>Purpose and History of Fossil</a>
<li> <a href='build.wiki'>Compiling and installing Fossil</a>
<li> <a href='../COPYRIGHT-BSD2.txt'>License</a>
<li> <a href='userlinks.wiki'>Miscellaneous Docs for Fossil Users</a>
<li> <a href='hacker-howto.wiki'>Fossil Developer's Guide</a>
<li> <a href='$ROOT/wiki?name=Release+Build+How-To'>Release Build How-To</a>, a.k.a. how deliverables are built
<li> <a href='$ROOT/wiki?name=To+Do+List'>To Do List (Wiki)</a>
<li> <a href='https://fossil-scm.org/fossil-book/'>Fossil book</a>
</ul>
<h2 id="pindex">Other Documents:</h2>
<ul>
<li><a href="tech_overview.wiki">A Technical Overview Of The Design And Implementation Of Fossil</a></li>
︙ | ︙ |
Changes to www/pikchr.md.
︙ | ︙ | |||
20 21 22 23 24 25 26 | arrow <-> down 70% from last box.s box same "Pikchr" "Formatter" "(pikchr.c)" fit ``` The diagram above was generated by the following lines of Markdown: ~~~~~ | | | | | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | arrow <-> down 70% from last box.s box same "Pikchr" "Formatter" "(pikchr.c)" fit ``` The diagram above was generated by the following lines of Markdown: ~~~~~ ``` pikchr arrow right 200% "Markdown" "Source" box rad 10px "Markdown" "Formatter" "(markdown.c)" fit arrow right 200% "HTML+SVG" "Output" arrow <-> down 70% from last box.s box same "Pikchr" "Formatter" "(pikchr.c)" fit ``` ~~~~~ See the [original Markdown source text of this document][4] for an example of Pikchr in operation. [4]: ./pikchr.md?mimetype=text/plain |
︙ | ︙ | |||
89 90 91 92 93 94 95 | content is interpreted as Pikchr script and is replaced by the equivalent SVG. So either of these work: [fcb]: https://spec.commonmark.org/0.29/#fenced-code-blocks ~~~~~~ | | | | | | | | | | | 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | content is interpreted as Pikchr script and is replaced by the equivalent SVG. So either of these work: [fcb]: https://spec.commonmark.org/0.29/#fenced-code-blocks ~~~~~~ ~~~ pikchr arrow; box "Hello" "World!" fit; arrow ~~~ ``` pikchr arrow; box "Hello" "World!" fit; arrow ``` ~~~~~~ For Fossil Wiki, the Pikchr code goes within `<verbatim type="pikchr"> ... </verbatim>`. Normally `<verbatim>` content is displayed verbatim. The extra `type="pikchr"` attribute causes the content to be interpreted as Pikchr and replaced by SVG. ~~~~~~ <verbatim type="pikchr"> arrow; box "Hello" "World!" fit; arrow </verbatim> ~~~~~~ ## Extra Arguments In "Pikchr" Code Blocks Extra formatting arguments can be included in the fenced code block start tag, or in the "`type=`" attribute of `<verbatim>`, to change the formatting of the diagram. |
︙ | ︙ |
Changes to www/pop.wiki.
<title>Principles Of Operation</title>

This page attempts to define the foundational principles upon which
Fossil is built.

  *  A project consists of source files, wiki pages, trouble tickets,
     and control files (collectively "artifacts"). All historical
     copies of all artifacts
︙ | ︙ |
Changes to www/private.wiki.
<title>Private Branches</title>

By default, everything you check into a Fossil repository is shared
to all clones of that repository. In Fossil, you don't push and pull
individual branches; you push and pull everything all at once. But
sometimes users want to keep some private work that is not shared with
others. This might be a preliminary or experimental change that needs
further refinement before it is shared and which might never be shared
at all. To do this in Fossil, simply commit the change with the
--private command-line option:

<pre>
fossil commit --private
</pre>

The --private option causes Fossil to put the check-in in a new branch
named "private". That branch will not participate in subsequent clone,
sync, push, or pull operations. The branch will remain on the one
local repository where it was created. Note that you only use the
--private option for the first check-in that creates the private
branch. Additional check-ins into the private branch remain private
automatically.
<h2>Publishing Private Changes</h2> After additional work, one might desire to publish the changes associated with a private branch. The usual way to do this is to merge those changes into a public branch. For example: <pre> fossil update trunk fossil merge private fossil commit </pre> The private branch remains private and is not recorded as a parent in the merge manifest's P-card, but all of the changes associated with the private branch are now folded into the public branch and are hence visible to other users of the project. A private branch created with Fossil version 1.30 or newer can also be converted into a public branch using the <code>fossil publish</code> command. However, there is no way to convert a private branch created with older versions of Fossil into a public branch. <div class="sidebar"> To avoid generating a missing artifact reference on peer repositories without the private branch, the merge parent is not recorded when merging the private branch into a public branch. As a consequence, the web UI timeline does not draw a merge line from the private merge parent to the public merge child. Moreover, repeat private-to-public merge operations (without the [/help?cmd=merge | --force option]) with files added on the private branch may only work once, but later abort with "WARNING: no common ancestor for FILE", as the parent-child relationship is not recorded. (See the [/doc/trunk/www/branching.wiki | Branching, Forking, Merging, and Tagging] document for more information.) </div> The <code>--integrate</code> option of <code>fossil merge</code> (to close the merged branch when committing) is ignored for a private branch -- or the check-in manifest of the resulting merge child would include a <code>+close</code> tag referring to the leaf check-in on the private branch, and generate a missing artifact reference on repository clones without that private branch. 
It's still possible to close the leaf of the private branch (after committing the merge child) with the <code>fossil amend --close</code> command. <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform application and have separate repositories on your Windows laptop, your Linux desktop, and your iMac. You can transfer private branches between these machines by using the --private option on the "sync", "push", "pull", and "clone" commands. For example, if you are running "fossil server" on your Linux box and you want to clone that repository to your Mac, including all private branches, use: <verbatim> fossil clone --private http://user@linux.localnetwork:8080/ mac-clone.fossil </verbatim> You'll have to supply a username and password in order for this to work. Fossil will not clone (or sync) private branches anonymously. By default, there are no users that can do private branch syncing. You will have to give a user the "Private" capability ("x") if you want them to be able to do this. |
︙ | ︙ | |||
99 100 101 102 103 104 105 | again, this restriction is designed to make it hard to accidentally push private branches beyond their intended audience. <h2>Purging Private Branches</h2> You can remove all private branches from a repository using this command: | | | | 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | again, this restriction is designed to make it hard to accidentally push private branches beyond their intended audience. <h2>Purging Private Branches</h2> You can remove all private branches from a repository using this command: <pre> fossil scrub --private </pre> Note that the above is a permanent and irreversible change. You will be asked to confirm before continuing. Once the private branches are removed, they cannot be retrieved (unless you have synced them to another repository.) So be careful with the command. <h2>Additional Notes</h2> All of the features above apply to <u>all</u> private branches in a single repository at once. There is no mechanism in Fossil (currently) that allows you to push, pull, clone, sync, or scrub an individual private branch within a repository that contains multiple private branches. |
Changes to www/qandc.wiki.
<title>Questions And Criticisms</title>

This page is a collection of real questions and criticisms that were
raised against Fossil early in its history (circa 2008). This page is
old and has not been kept up-to-date. See the
[/finfo?name=www/qandc.wiki|change history of this page].

<b>Fossil sounds like a lot of reinvention of the wheel. Why create
your own DVCS when you could have reused mercurial?</b>

<div class="indent">
I wrote fossil because none of the other available DVCSes met my
needs. If the other DVCSes do meet your needs, then you might not need
fossil. But they don't meet mine, and so fossil is necessary for me.

Features provided by fossil that one does not get with other DVCSes
include:

<ol>
<li> Integrated <a href="wikitheory.wiki">wiki</a>. </li>
<li> Integrated <a href="bugtheory.wiki">bug tracking</a> </li>
<li> Immutable artifacts </li>
<li> Self-contained, stand-alone executable that can be run in a
     <a href="http://en.wikipedia.org/wiki/Chroot">chroot jail</a> </li>
<li> Simple, well-defined, <a href="fileformat.wiki">enduring file format</a> </li>
<li> Integrated <a href="webui.wiki">web interface</a> </li>
</ol>
</div>

<b>Why should I use this rather than Trac?</b>

<div class="indent">
<ol>
<li> Fossil is distributed. You can view and/or edit tickets, wiki,
and code while off network, then sync your changes later. With Trac,
you can only view and edit tickets and wiki while you are connected to
the server. </li>
<li> Fossil is lightweight and fully self-contained. It is very easy
to set up on a low-resource machine.
Fossil does not require an administrator.</li>
<li> Fossil integrates code versioning into the same repository with
wiki and tickets. There is nothing extra to add or install. Fossil is
an all-in-one turnkey solution. </li>
</ol>
</div>

<b>Love the concept here. Anyone using this for real work yet?</b>

<div class="indent">
Fossil is <a href="https://fossil-scm.org/">self-hosting</a>. In fact,
this page was probably delivered to your web-browser via a working
fossil instance. The same virtual machine that hosts
https://fossil-scm.org/ (a <a href="http://www.linode.com/">Linode
720</a>) also hosts 24 other fossil repositories for various small
projects. The documentation files for
<a href="http://www.sqlite.org/">SQLite</a> are hosted in a fossil
repository <a href="http://www.sqlite.org/docsrc/">here</a>, for
example. Other projects are also adopting fossil. But fossil does not
yet have the massive user base of git or mercurial.
</div>

<b>Fossil looks like the bug tracker that would be in your Linksys
Router's administration screen.</b>

<div class="indent">
I take a pragmatic approach to software: form follows function. To me,
it is more important to have a reliable, fast, efficient, enduring,
and simple DVCS than one that looks pretty.

On the other hand, if you have patches that improve the appearance of
Fossil without seriously compromising its reliability, performance,
and/or maintainability, I will be happy to accept them. Fossil is
self-hosting. Send email to request a password that will let you push
to the main fossil repository.
</div>

<b>It would be useful to have a separate application that keeps the
bug-tracking database in a versioned file. That file can then be
pushed and pulled along with the rest of the repository.</b>

<div class="indent">
Fossil already <u>does</u> push and pull bugs along with the files in
your repository. But fossil does <u>not</u> track bugs as files in the
source tree.
That approach to bug tracking was rejected for three reasons: <ol> <li> Check-ins in fossil are immutable. So if |
︙ | ︙ | |||
105 106 107 108 109 110 111 | of tickets to developers with check-in privileges and an installed copy of the fossil executable. Casual passers-by on the internet should be permitted to create tickets. </ol> These points are reiterated in the opening paragraphs of the <a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document. | | | | | | | < | < < | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 | of tickets to developers with check-in privileges and an installed copy of the fossil executable. Casual passers-by on the internet should be permitted to create tickets. </ol> These points are reiterated in the opening paragraphs of the <a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document. </div> <b>Fossil is already the name of a plan9 versioned append-only filesystem.</b> <div class="indent"> I did not know that. Perhaps they selected the name for the same reason that I did: because a repository with immutable artifacts preserves an excellent fossil record of a long-running project. </div> <b>The idea of storing a repository in a binary blob like an SQLite database terrifies me.</b> <div class="indent"> The use of SQLite to store the database is likely more stable and secure than any other approach, due to the fact that SQLite is transactional. Fossil also implements several internal <a href="selfcheck.wiki">self-checks</a> to insure that no information is ever lost. </div> <b>I am dubious of the benefits of including wikis and bug trackers directly in the VCS - either they are under-featured compared to full software like Trac, or the VCS is massively bloated compared to Subversion or Bazaar.</b> <div class="indent"> I have no doubt that Trac has many features that fossil lacks. But that is not the point. 
Fossil has several key features that Trac lacks and that I need: most notably the fact that fossil supports disconnected operation. As for bloat: Fossil is a single self-contained executable. You do not need any other packages (diff, patch, merge, cvs, svn, rcs, git, python, perl, tcl, apache, sqlite, and so forth) in order to run fossil. Fossil runs just fine in a chroot jail all by itself. And the self-contained fossil executable is much less than 1MB in size. (Update 2015-01-12: Fossil has grown in the years since the previous sentence was written but is still much less than 2MB according to "size" when compiled using -Os on x64 Linux.) Fossil is the very opposite of bloat. </div> |
Changes to www/quickstart.wiki.
<title>Fossil Quick Start Guide</title>

This is a guide to help you get started using the Fossil
[https://en.wikipedia.org/wiki/Distributed_version_control|Distributed Version Control System]
quickly and painlessly.

<h2 id="install">Installing</h2>

Fossil is a single self-contained C program. You need to either
download a [https://fossil-scm.org/home/uv/download.html|precompiled binary]
or <a href="build.wiki">compile it yourself</a> from sources. Install
Fossil by putting the fossil binary someplace on your $PATH.

You can test that Fossil is present and working like this:

<pre><b>fossil version
This is fossil version 2.13 [309af345ab] 2020-09-28 04:02:55 UTC
</b></pre>

<h2 id="workflow" name="fslclone">General Work Flow</h2>

Fossil works with repository files (a database in a single file with
the project's complete history) and with checked-out local trees (the
working directory you use to do your work). (See [./glossary.md | the
glossary] for more background.)
︙ | ︙ | |||
operations.

<h2 id="new">Starting A New Project</h2>

To start a new project with fossil, create a new empty repository this
way: ([/help/init | more info])

<pre><b>fossil init</b> <i>repository-filename</i></pre>

You can name the database anything you like, and you can place it
anywhere in the filesystem. The <tt>.fossil</tt> extension is
traditional but only required if you are going to use the
<tt>[/help/server | fossil server DIRECTORY]</tt> feature.

<h2 id="clone">Cloning An Existing Repository</h2>

Most fossil operations interact with a repository that is on the local
disk drive, not on a remote system. Hence, before accessing a remote
repository it is necessary to make a local copy of that repository.
Making a local copy of a remote repository is called "cloning".

Clone a remote repository as follows: ([/help/clone | more info])

<pre><b>fossil clone</b> <i>URL repository-filename</i></pre>

The <i>URL</i> specifies the fossil repository you want to clone. The
<i>repository-filename</i> is the new local filename into which the
cloned repository will be written. For example, to clone the source
code of Fossil itself:

<pre><b>fossil clone https://fossil-scm.org/ myclone.fossil</b></pre>

If your logged-in username is 'exampleuser', you should see output
something like this:

<pre><b>Round-trips: 8   Artifacts sent: 0  received: 39421
Clone done, sent: 2424  received: 42965725  ip: 10.10.10.0
Rebuilding repository meta-data...
  100% complete...
Extra delta compression...
Vacuuming the database...
project-id: 94259BB9F186226D80E49D1FA2DB29F935CCA0333
server-id:  016595e9043054038a9ea9bc526d7f33f7ac0e42
admin-user: exampleuser (password is "yoWgDR42iv")
</b></pre>

If the remote repository requires a login, include a userid in the URL
like this:

<pre><b>fossil clone https://</b><i>remoteuserid</i><b>@www.example.org/ myclone.fossil</b></pre>

You will be prompted separately for the password. Use
[https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters|"%HH"]
escapes for special characters in the userid. For example, "/" would
be replaced by "%2F", meaning that a userid of "Projects/Budget" would
become "Projects%2FBudget".

If you are behind a restrictive firewall, you might need to
<a href="#proxy">specify an HTTP proxy</a>.
︙ | ︙ | |||
141 142 143 144 145 146 147 | <h2 id="checkout">Checking Out A Local Tree</h2> To work on a project in fossil, you need to check out a local copy of the source tree. Create the directory you want to be the root of your tree and cd into that directory. Then do this: ([/help/open | more info]) | < | < < < | | | | | | < | | | | | | | | | | | | | | | | | | < < | < < < | | | < < < | | | | | | | | | < | < | | | | | | | | | | | | | | | | < | < | | < > | | < > | | | | < | 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | <h2 id="checkout">Checking Out A Local Tree</h2> To work on a project in fossil, you need to check out a local copy of the source tree. Create the directory you want to be the root of your tree and cd into that directory. Then do this: ([/help/open | more info]) <pre><b>fossil open</b> <i>repository-filename</i></pre> for example: <pre><b>fossil open ../myclone.fossil BUILD.txt COPYRIGHT-BSD2.txt README.md ︙ </tt></b></pre> (or "fossil open ..\myclone.fossil" on Windows). This leaves you with the newest version of the tree checked out. 
From anywhere underneath the root of your local tree, you can type commands like the following to find out the status of your local tree:

<pre>
<b>[/help/info | fossil info]</b>
<b>[/help/status | fossil status]</b>
<b>[/help/changes | fossil changes]</b>
<b>[/help/diff | fossil diff]</b>
<b>[/help/timeline | fossil timeline]</b>
<b>[/help/ls | fossil ls]</b>
<b>[/help/branch | fossil branch]</b>
</pre>

If you created a new repository using "fossil init", some commands will not produce much output.

Note that Fossil allows you to make multiple check-outs in separate directories from the same repository. This enables you, for example, to do builds from multiple branches or versions at the same time without having to generate extra clones.

To switch a checkout between different versions and branches, use:

<pre>
<b>[/help/update | fossil update]</b>
<b>[/help/checkout | fossil checkout]</b>
</pre>

[/help/update | update] honors the "autosync" option and does a "soft" switch, merging any local changes into the target version, whereas [/help/checkout | checkout] does not automatically sync and does a "hard" switch, overwriting local changes if told to do so.

<h2 id="changes">Making and Committing Changes</h2>

To add new files to your project or remove existing ones, use these commands:

<pre>
<b>[/help/add | fossil add]</b> <i>file...</i>
<b>[/help/rm | fossil rm]</b> <i>file...</i>
<b>[/help/addremove | fossil addremove]</b> <i>file...</i>
</pre>

The command:

<pre><b>[/help/changes | fossil changes]</b></pre>

lists files that have changed since the last commit to the repository.
For example, if you edit the file "README.md":

<pre><b>fossil changes
EDITED README.md
</b></pre>

To see exactly what change was made, you can use the command <b>[/help/diff | fossil diff]</b>:

<pre><b>fossil diff
Index: README.md
============================================================
--- README.md
+++ README.md
@@ -1,5 +1,6 @@
+Made some changes to the project
# Original text
</b></pre>

"fossil diff" shows the difference between your tree on disk now and the tree as it was when you last committed changes. If you haven't committed yet, then it shows the difference relative to the tip-of-trunk commit in the repository, which is what populates the working directory when you "fossil open" a repository without specifying a version.

To see the most recent changes made to the repository by other users, use "fossil timeline" to find out the most recent commit, and then "fossil diff" between that commit and the current tree:

<pre><b>fossil timeline
=== 2021-03-28 ===
03:18:54 [ad75dfa4a0] *CURRENT* Added details to frobnicate command (user: user-one tags: trunk)
=== 2021-03-27 ===
23:58:05 [ab975c6632] Update README.md. (user: user-two tags: trunk)
⋮
fossil diff --from current --to ab975c6632
Index: frobnicate.c
============================================================
--- frobnicate.c
+++ frobnicate.c
@@ -1,10 +1,11 @@
+/* made a change to the source file */
# Original text
</b></pre>

"current" is an alias for the checkout version, so the command "fossil diff --from ad75dfa4a0 --to ab975c6632" gives identical results.

To commit your changes to a local-only repository:

<pre><b>fossil commit</b> <i>(... Fossil will start your editor, if defined)</i><b>
# Enter a commit message for this check-in. Lines beginning with # are ignored.
# 
# user: exampleuser
# tags: trunk
#
# EDITED README.md
Edited file to add description of code changes
New_Version: 7b9a416ced4a69a60589dde1aedd1a30fde8eec3528d265dbeed5135530440ab
</b></pre>

You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable. If none is specified, Fossil uses line-editing in the terminal.

To commit your changes to a repository that was cloned from a remote repository, you give the same command, but the results are different.
︙ | ︙ | |||
331 332 333 334 335 336 337 | When you create a new repository, either by cloning an existing project or creating a new project of your own, you usually want to do some local configuration. This is easily accomplished using the web-server that is built into fossil. Start the fossil web server like this: ([/help/ui | more info]) | | | | | | | | | | | | | | | | | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 | When you create a new repository, either by cloning an existing project or creating a new project of your own, you usually want to do some local configuration. This is easily accomplished using the web-server that is built into fossil. Start the fossil web server like this: ([/help/ui | more info])

<pre>
<b>fossil ui</b> <i>repository-filename</i>
</pre>

You can omit the <i>repository-filename</i> from the command above if you are inside a checked-out local tree. This starts a web server, then automatically launches your web browser and points it at that web server. If your system has an unusual configuration, fossil might not be able to figure out how to start your web browser. In that case, first tell fossil where to find your web browser using a command like this:

<pre>
<b>fossil setting web-browser</b> <i>path-to-web-browser</i>
</pre>

By default, fossil does not require a login for HTTP connections coming in from the IP loopback address 127.0.0.1. You can, and perhaps should, change this after you create a few users.

When you are finished configuring, just press Control-C or use the <b>kill</b> command to shut down the mini-server.
<h2 id="sharing">Sharing Changes</h2>

When [./concepts.wiki#workflow|autosync] is turned off, the changes you [/help/commit | commit] are only on your local repository. To share those changes with other repositories, do:

<pre>
<b>[/help/push | fossil push]</b> <i>URL</i>
</pre>

Where <i>URL</i> is the http: URL of the server repository you want to share your changes with. If you omit the <i>URL</i> argument, fossil will use whatever server you most recently synced with.

The [/help/push | push] command only sends your changes to others. To receive changes from others, use [/help/pull | pull]. Or go both ways at once using [/help/sync | sync]:

<pre>
<b>[/help/pull | fossil pull]</b> <i>URL</i>
<b>[/help/sync | fossil sync]</b> <i>URL</i>
</pre>

When you pull in changes from others, they go into your repository, not into your checked-out local tree. To get the changes into your local tree, use [/help/update | update]:

<pre>
<b>[/help/update | fossil update]</b> <i>VERSION</i>
</pre>

The <i>VERSION</i> can be the name of a branch or tag or any abbreviation to the 40-character artifact identifier for a particular check-in, or it can be a date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on.

The default behavior is for [./concepts.wiki#workflow|autosync] to be turned on. That means that a [/help/pull|pull] automatically occurs when you run [/help/update|update] and a [/help/push|push] happens automatically after you [/help/commit|commit]. So in normal practice, the push, pull, and sync commands are rarely used. But it is important to know about them, all the same.

<pre>
<b>[/help/checkout | fossil checkout]</b> <i>VERSION</i>
</pre>

Is similar to update except that it does not honor the autosync setting, nor does it merge in local changes - it prefers to overwrite them and fails if local changes exist unless the <tt>--force</tt> flag is used.
<h2 id="branch" name="merge">Branching And Merging</h2> |
︙ | ︙ | |||
426 427 428 429 430 431 432 | To merge two branches back together, first [/help/update | update] to the branch you want to merge into. Then do a [/help/merge|merge] of the other branch that you want to incorporate the changes from. For example, to merge "featureX" changes into "trunk" do this: | | | | | | | 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 | To merge two branches back together, first [/help/update | update] to the branch you want to merge into. Then do a [/help/merge|merge] of the other branch that you want to incorporate the changes from. For example, to merge "featureX" changes into "trunk" do this:

<pre>
<b>fossil [/help/update|update] trunk</b>
<b>fossil [/help/merge|merge] featureX</b>
<i># make sure the merge didn't break anything...</i>
<b>fossil [/help/commit|commit]</b>
</pre>

The argument to the [/help/merge|merge] command can be any of the version identifier forms that work for [/help/update|update]. ([./checkin_names.wiki|more info].) The merge command has options to cherry-pick individual changes, or to back out individual changes, if you don't want to do a full merge.
︙ | ︙ | |||
458 459 460 461 462 463 464 | into trunk previously, you can do so again and Fossil will automatically know to pull in only those changes that have occurred since the previous merge. If a merge or update doesn't work out (perhaps something breaks or there are many merge conflicts) then you back up using: | | | | | | | 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 | into trunk previously, you can do so again and Fossil will automatically know to pull in only those changes that have occurred since the previous merge. If a merge or update doesn't work out (perhaps something breaks or there are many merge conflicts) then you back up using: <pre> <b>[/help/undo | fossil undo]</b> </pre> This will back out the changes that the merge or update made to the working checkout. There is also a [/help/redo|redo] command if you undo by mistake. Undo and redo only work for changes that have not yet been checked in using commit and there is only a single level of undo/redo. <h2 id="server">Setting Up A Server</h2> Fossil can act as a stand-alone web server using one of these commands: <pre> <b>[/help/server | fossil server]</b> <i>repository-filename</i> <b>[/help/ui | fossil ui]</b> <i>repository-filename</i> </pre> The <i>repository-filename</i> can be omitted when these commands are run from within an open check-out, which is a particularly useful shortcut with the <b>fossil ui</b> command. The <b>ui</b> command is intended for accessing the web user interface from a local desktop. (We sometimes call this mode "Fossil UI.") |
︙ | ︙ | |||
525 526 527 528 529 530 531 | If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using a command-line option on commands that use the network, <b>sync</b>, <b>clone</b>, <b>push</b>, and <b>pull</b>. | | | | | | | | | | | | | 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 | If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using a command-line option on commands that use the network, <b>sync</b>, <b>clone</b>, <b>push</b>, and <b>pull</b>. <pre> <b>fossil clone </b><i>URL</i> <b>--proxy</b> <i>Proxy-URL</i> </pre> It is annoying to have to type in the proxy URL every time you sync your project, though, so you can make the proxy configuration persistent using the [/help/setting | setting] command: <pre> <b>fossil setting proxy </b><i>Proxy-URL</i> </pre> Or, you can set the "<b>http_proxy</b>" environment variable: <pre> <b>export http_proxy=</b><i>Proxy-URL</i> </pre> To stop using the proxy, do: <pre> <b>fossil setting proxy off</b> </pre> Or unset the environment variable. The fossil setting for the HTTP proxy takes precedence over the environment variable and the command-line option overrides both. If you have a persistent proxy setting that you want to override for a one-time sync, that is easily done on the command-line. 
For example, to sync with a co-worker's repository on your LAN, you might type: <pre> <b>fossil sync http://192.168.1.36:8080/ --proxy off</b> </pre> <h2 id="links">Other Resources</h2> <ul> <li> <a href="./gitusers.md">Hints For Users With Prior Git Experience</a> <li> <a href="./whyusefossil.wiki">Why You Should Use Fossil</a> <li> <a href="./history.md">The History and Purpose of Fossil</a> <li> <a href="./branching.wiki">Branching, Forking, and Tagging</a> <li> <a href="./hints.wiki">Fossil Tips and Usage Hints</a> <li> <a href="./permutedindex.html">Comprehensive Fossil Doc Index</a> </ul> |
Changes to www/quotes.wiki.
1 2 3 4 5 6 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git</h2> <ol> <li>Git approaches the usability of iptables, which is to say, utterly unusable unless you have the manpage tattooed on you arm. <p class="local-indent"> <i>by mml at [http://news.ycombinator.com/item?id=1433387]</i> </p> <li><nowiki>It's simplest to think of the state of your [git] repository as a point in a high-dimensional "code-space", in which branches are represented as n-dimensional membranes, mapping the spatial loci of successive commits onto the projected manifold of each cloned repository.</nowiki> <p class="local-indent"> <i>by Jonathan Hartley at [https://www.tartley.com/posts/a-guide-to-git-using-spatial-analogies]; <br>Quoted here: [https://lwn.net/Articles/420152/].</i> </p> <li>Git is not a Prius. Git is a Model T. Its plumbing and wiring sticks out all over the place. 
You have to be a mechanic to operate it successfully or you'll be stuck on the side of the road when it breaks down. And it <b>will</b> break down. <p class="local-indent"> <i>Nick Farina at [http://nfarina.com/post/9868516270/git-is-simpler]</i> </p> <li>Initial revision of "git", The information manager from hell <p class="local-indent"> <i>Linus Torvalds - 2005-04-07 22:13:13<br> Commit comment on the very first source-code check-in for git </p> <li>I've been experimenting a lot with git at work. Damn, it's complicated. It has things to trip you up with that sane people just wouldn't ever both with including the ability to allow you to commit stuff in such a way that you can't find it again afterwards (!!!) Demented workflow complexity on acid? <p>* dkf really wishes he could use fossil instead</p> <p class="local-indent"> <i>by Donal K. Fellow (dkf) on the Tcl/Tk chatroom, 2013-04-09.</i> </p> <li>[G]it is <i>designed</i> to forget things. <p class="local-indent"> <i>[http://www.cs.cmu.edu/~davide/howto/git_lose.html] </p> <li>[I]n nearly 31 years of using a computer i have, in total, lost more data to git (while following the instructions!!!) than any other single piece of software. <p class="local-indent"> <i>Stephan Beal on the [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg17181.html|Fossil mailing list] 2014-09-01.</i> </p> <li>If programmers _really_ wanted to help scientists, they'd build a version control system that was more usable than Git. <p class="local-indent"> <i>Tweet by Greg Wilson @gvwilson on 2015-02-22 17:47</i> </p> <li><img src='xkcd-git.gif' align='top'> <p class="local-indent"><i>Randall Munroe. [http://xkcd.com/1597/]</i><p> </ol> <h2>On The Usability Of Fossil</h2> <ol> <li value=11> Fossil mesmerizes me with simplicity especially after I struggled to get a bug-tracking system to work with mercurial. 
<p class="local-indent"> <i>rawjeev at [https://stackoverflow.com/a/2100469/142454]</i> </p> <li>Fossil is the best thing to happen to my development workflow this year, as I am pretty sure that using Git has resulted in the premature death of too many of my brain cells. I'm glad to be able to replace Git in every place that I possibly can with Fossil. <p class="local-indent"> <i>Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] </p> <li>This is my favourite VCS. I can carry it on a USB. And it's a complete system, with it's own server, ticketing system, Wiki pages, and a very, very helpful timeline visualization. And the entire program in a single file! <p class="local-indent"> <i>thunderbong commenting on hacker news: [https://news.ycombinator.com/item?id=9131619]</i> </p> </ol> <h2>On Git Versus Fossil</h2> <ol> <li value=14> After prolonged exposure to fossil, i tend to get the jitters when I work with git... <p class="local-indent"> <i>sriku - at [https://news.ycombinator.com/item?id=16104427]</i> </p> <li> Just want to say thanks for fossil making my life easier.... Also <nowiki>[for]</nowiki> not having a misanthropic command line interface. <p class="local-indent"> <i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i> </p> <li>We use it at a large university to manage code that small teams write. The runs everywhere, ease of installation and portability is something that seems to be a good fit with the environment we have (highly ditrobuted, sometimes very restrictive firewalls, OSX/Win/Linux). We are happy with it and teaching a Msc/Phd student (read complete novice) fossil has just been a smoother ride than Git was. 
<p class="local-indent"> <i>viablepanic at [https://www.reddit.com/r/programming/comments/bxcto/why_not_fossil_scm/c0p30b4?utm_source=share&utm_medium=web2x&context=3]</i> </p> <li>In the fossil community - and hence in fossil itself - development history is pretty much sacrosanct. The very name "fossil" was to chosen to reflect the unchanging nature of things in that history. <br><br> In git (or rather, the git community), the development history is part of the published aspect of the project, so it provides tools for rearranging that history so you can present what you "should" have done rather than what you actually did. <p class="local-indent"> <i>Mike Meyer on the Fossil mailing list, 2011-10-04</i> </p> <li>github is such a pale shadow of what fossil does. <p class="local-indent"> <i>dkf on the Tcl chatroom, 2013-12-06</i> </p> <li>[With fossil] I actually enjoy keeping track of source files again. <p class="local-indent"> <a href="https://wholesomedonut.prose.sh/using-fossil-not-git">https://wholesomedonut.prose.sh/using-fossil-not-git</a> </p> </ol> |
Changes to www/rebaseharm.md.
︙ | ︙ | |||
28 29 30 31 32 33 34 | A rebase is really nothing more than a merge (or a series of merges) that deliberately forgets one of the parents of each merge step. To help illustrate this fact, consider the first rebase example from the [Git documentation][gitrebase]. The merge looks like this: | | | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | A rebase is really nothing more than a merge (or a series of merges) that deliberately forgets one of the parents of each merge step. To help illustrate this fact, consider the first rebase example from the [Git documentation][gitrebase]. The merge looks like this: ~~~ pikchr toggle center scale = 0.8 circle "C0" fit arrow right 50% circle same "C1" arrow same circle same "C2" arrow same circle same "C3" arrow same circle same "C5" circle same "C4" at 1cm above C3 arrow from C2 to C4 chop arrow from C4 to C5 chop ~~~ And the rebase looks like this: ~~~ pikchr toggle center scale = 0.8 circle "C0" fit arrow right 50% circle same "C1" arrow same circle same "C2" arrow same |
︙ | ︙ | |||
93 94 95 96 97 98 99 | ### <a id="clean-diffs"></a>2.2 Rebase does not actually provide better feature-branch diffs Another argument, often cited, is that rebasing a feature branch allows one to see just the changes in the feature branch without the concurrent changes in the main line of development. Consider a hypothetical case: | | | 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 | ### <a id="clean-diffs"></a>2.2 Rebase does not actually provide better feature-branch diffs Another argument, often cited, is that rebasing a feature branch allows one to see just the changes in the feature branch without the concurrent changes in the main line of development. Consider a hypothetical case: ~~~ pikchr toggle center scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" arrow same |
︙ | ︙ | |||
121 122 123 124 125 126 127 | In the above, a feature branch consisting of check-ins C3 and C5 is run concurrently with the main line in check-ins C4 and C6. Advocates for rebase say that you should rebase the feature branch to the tip of main in order to remove main-line development differences from the feature branch's history: | | | 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 | In the above, a feature branch consisting of check-ins C3 and C5 is run concurrently with the main line in check-ins C4 and C6. Advocates for rebase say that you should rebase the feature branch to the tip of main in order to remove main-line development differences from the feature branch's history: ~~~ pikchr toggle center # Duplicated below in section 5.0 scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" |
︙ | ︙ | |||
156 157 158 159 160 161 162 | You could choose to collapse C3\' and C5\' into a single check-in as part of this rebase, but that's a side issue we'll deal with [separately](#collapsing). Because Fossil purposefully lacks rebase, the closest you can get to this same check-in history is the following merge: | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | You could choose to collapse C3\' and C5\' into a single check-in as part of this rebase, but that's a side issue we'll deal with [separately](#collapsing). Because Fossil purposefully lacks rebase, the closest you can get to this same check-in history is the following merge: ~~~ pikchr toggle center scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" arrow same |
︙ | ︙ | |||
196 197 198 199 200 201 202 | branch and from the mainline, whereas in the rebase case diff(C6,C5\') shows only the feature branch changes. But that argument is comparing apples to oranges, since the two diffs do not have the same baseline. The correct way to see only the feature branch changes in the merge case is not diff(C2,C7) but rather diff(C6,C7). | | | | > | | | > | 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | branch and from the mainline, whereas in the rebase case diff(C6,C5\') shows only the feature branch changes. But that argument is comparing apples to oranges, since the two diffs do not have the same baseline. The correct way to see only the feature branch changes in the merge case is not diff(C2,C7) but rather diff(C6,C7). <div align=center> | Rebase | Merge | What You See | |---------------|-------------|----------------------------------------| | diff(C2,C5\') | diff(C2,C7) | Commingled branch and mainline changes | | diff(C6,C5\') | diff(C6,C7) | Branch changes only | </div> Remember: C7 and C5\' are bit-for-bit identical, so the output of the diff is not determined by whether you select C7 or C5\' as the target of the diff, but rather by your choice of the diff source, C2 or C6. So, to help with the problem of viewing changes associated with a feature branch, perhaps what is needed is not rebase but rather better tools to |
︙ | ︙ | |||
254 255 256 257 258 259 260 | branch to the parent repo? Will the many eyeballs even see those errors when they’re intermingled with code implementing some compelling new feature? ## <a id="timestamps"></a>4.0 Rebase causes timestamp confusion Consider the earlier example of rebasing a feature branch: | | | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 | branch to the parent repo? Will the many eyeballs even see those errors when they’re intermingled with code implementing some compelling new feature? ## <a id="timestamps"></a>4.0 Rebase causes timestamp confusion Consider the earlier example of rebasing a feature branch: ~~~ pikchr toggle center # Copy of second diagram in section 2.2 above scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" |
︙ | ︙ |
Changes to www/reviews.wiki.
1 2 3 4 5 6 7 8 9 10 11 12 | <title>Reviews</title> <b>External links:</b> * [https://www.nixtu.info/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] <b>See Also:</b> * [./quotes.wiki | Short Quotes on Fossil, Git, And DVCSes] <b>Daniel writes on 2009-01-06:</b> | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | <title>Reviews</title> <b>External links:</b> * [https://www.nixtu.info/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] <b>See Also:</b> * [./quotes.wiki | Short Quotes on Fossil, Git, And DVCSes] <b>Daniel writes on 2009-01-06:</b> <div class="indent"> The reasons I use fossil are that it's the only version control I have found that I can get working through the VERY annoying MS firewalls at work.. (albeit through an ntlm proxy) and I just love single .exe applications! </div> <b>Joshua Paine on 2010-10-22:</b> <div class="indent"> With one of my several hats on, I'm in a small team using git. Another team member just checked some stuff into trunk that should have been on a branch. Nothing else had happened since, so in fossil I would have just edited that commit and put it on a new branch. In git that can't actually be done without danger once other people have pulled, so I had to create a new commit rolling back the changes, then branch and cherry pick the earlier changes, then figure out how to make my new branch shared instead of private. Just want to say thanks for fossil making my life easier on most of my projects, and being able to move commits to another branch after the fact and shared-by-default branches are good features. Also not having a misanthropic command line interface. </div> <b>Stephan Beal writes on 2009-01-11:</b> <div class="indent"> Sometime in late 2007 I came across a link to fossil on <a href="http://www.sqlite.org/">sqlite.org</a>. 
It was a good thing I bookmarked it, because I was never able to find the link again (it might have been in a bug report or something). The reasons I first took a close look at it were (A) it stemmed from the sqlite project, which I've held in high regards for years (e.g. I wrote JavaScript bindings for it: |
︙ | ︙ | |||
133 134 135 136 137 138 139 | I remember my first reaction to fossil being, "this will be an excellent solution for small projects (like the dozens we've all got sitting on our hard drives but which don't justify the hassle of version control)." A year of daily use in over 15 source trees has confirmed that, and I continue to heartily recommend fossil to other developers I know who also have their own collection of "unhosted" pet projects. | | | 133 134 135 136 137 138 139 140 | I remember my first reaction to fossil being, "this will be an excellent solution for small projects (like the dozens we've all got sitting on our hard drives but which don't justify the hassle of version control)." A year of daily use in over 15 source trees has confirmed that, and I continue to heartily recommend fossil to other developers I know who also have their own collection of "unhosted" pet projects. </div> |
Changes to www/scgi.wiki.
1 2 3 4 5 6 | <title>Fossil SCGI</title> To run Fossil using SCGI, start the [/help/server|fossil server] command with the --scgi command-line option. You will probably also want to specify an alternative TCP/IP port using --port. For example: | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | <title>Fossil SCGI</title>

To run Fossil using SCGI, start the [/help/server|fossil server] command with the --scgi command-line option. You will probably also want to specify an alternative TCP/IP port using --port. For example:

<pre>
fossil server $REPOSITORY --port 9000 --scgi
</pre>

Then configure your SCGI-aware web-server to send SCGI requests to port 9000 on the machine where Fossil is running. A typical configuration for this in Nginx is:

<pre>
location ~ ^/demo_project/ {
    include scgi_params;
    scgi_pass localhost:9000;
    scgi_param SCRIPT_NAME "/demo_project";
    scgi_param HTTPS "on";
}
</pre>

Note that Nginx does not normally send either the PATH_INFO or SCRIPT_NAME variables via SCGI, but Fossil needs one or the other. So the configuration above needs to add SCRIPT_NAME. If you do not do this, Fossil returns an error.
Changes to www/selfcheck.wiki.
1 2 | <title>Fossil Repository Integrity Self-Checks</title> | < < | 1 2 3 4 5 6 7 8 9 | <title>Fossil Repository Integrity Self-Checks</title> Fossil is designed with features to give it a high level of integrity so that users can have confidence that content will never be mangled or lost by Fossil. This note describes the defensive measures that Fossil uses to help prevent information loss due to bugs. Fossil has been hosting itself and many other projects for |
︙ | ︙ |
Changes to www/selfhost.wiki.
︙ | ︙ | |||
28 29 30 31 32 33 34 | dozen other smaller projects. This demonstrates that Fossil can run on a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on a single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: | | | | | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | dozen other smaller projects. This demonstrates that Fossil can run on a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on a single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: <pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre> Server (3) ran for 10 years as a CGI script on a shared hosting account at <a href="http://www.he.net/">Hurricane Electric</a> in Fremont, CA. This server demonstrated the ability of Fossil to run on an economical shared-host web account with no privileges beyond port 80 HTTP access and CGI. It is not necessary to have a dedicated computer with administrator privileges to run Fossil. As far as we are aware, Fossil is the only full-featured configuration management system that can run in such a restricted environment. The CGI script that ran on the Hurricane Electric server was the same as the CGI script shown above, except that the pathnames were modified to suit the environment: <pre> #!/home/hwaci/bin/fossil repository: /home/hwaci/fossil/fossil.fossil </pre> In recent years, virtual private servers have become a more flexible and less expensive hosting option compared to shared hosting accounts. 
So on 2017-07-25, server (3) was moved onto a $5/month "droplet" [https://en.wikipedia.org/wiki/Virtual_private_server|VPS] from [https://www.digitalocean.com|Digital Ocean] located in San Francisco. Server (3) is synchronized with the canonical server (1) by running a command similar to the following via cron: <pre> /usr/local/bin/fossil all sync -u </pre> Server (2) is a <a href="http://www.linode.com/">Linode 4096</a> located in Newark, NJ and set up just like the canonical server (1) with the addition of a cron job for synchronization. The same cron job also runs the [/help?cmd=git|fossil git export] command after each sync in order to [./mirrortogithub.md#ex1|mirror all changes to GitHub]. |
Changes to www/server/any/cgi.md.
1 2 3 4 5 6 7 8 9 10 11 12 | # Serving via CGI A Fossil server can be run from most ordinary web servers as a CGI program. This feature allows Fossil to seamlessly integrate into a larger website. The [self-hosting Fossil repository web site](../../selfhost.wiki) is implemented using CGI. See the [How CGI Works](../../aboutcgi.wiki) page for background information on the CGI protocol. To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory of your web server with content like this: | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 | # Serving via CGI A Fossil server can be run from most ordinary web servers as a CGI program. This feature allows Fossil to seamlessly integrate into a larger website. The [self-hosting Fossil repository web site](../../selfhost.wiki) is implemented using CGI. See the [How CGI Works](../../aboutcgi.wiki) page for background information on the CGI protocol. To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory of your web server with content like this: #!/usr/bin/fossil repository: /home/fossil/repo.fossil Adjust the paths appropriately. It may be necessary to set certain permissions on this file or to modify an `.htaccess` file or make other server-specific changes. Consult the documentation for your particular web server. The following permissions are *normally* required, but, again, may be different for a particular configuration: |
︙ | ︙ | |||
55 56 57 58 59 60 61 | for scripts like our “`repo`” example. To serve multiple repositories from a directory using CGI, use the "directory:" tag in the CGI script rather than "repository:". You might also want to add a "notfound:" tag to tell where to redirect if the particular repository requested by the URL is not found: | | | | | | | | 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | for scripts like our “`repo`” example. To serve multiple repositories from a directory using CGI, use the "directory:" tag in the CGI script rather than "repository:". You might also want to add a "notfound:" tag to tell where to redirect if the particular repository requested by the URL is not found: #!/usr/bin/fossil directory: /home/fossil/repos notfound: http://url-to-go-to-if-repo-not-found/ Once deployed, a URL like: <b>http://mydomain.org/cgi-bin/repo/XYZ</b> will serve up the repository `/home/fossil/repos/XYZ.fossil` if it exists. Additional options available to the CGI script are [documented separately](../../cgi.wiki). #### CGI with Apache behind an Nginx proxy When the Fossil repositories live on a computer that is itself behind an Internet-facing machine which employs Nginx to reverse-proxy HTTP(S) requests and handle the TLS part of the connections transparently for the downstream web servers, the CGI parameter `HTTPS=on` might not be set. However, Fossil in CGI mode needs it in order to generate the correct links. Apache can be instructed to pass this parameter on to the CGI scripts for TLS connections with a stanza like SetEnvIf X-Forwarded-Proto "https" HTTPS=on in its config file section for CGI, provided that proxy_set_header X-Forwarded-Proto $scheme; has been added in the relevant proxying section of the Nginx config file. *[Return to the top-level Fossil server article.](../)*
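The `SetEnvIf` stanza above boils down to a one-line mapping from a forwarded header to a CGI environment variable. A toy Python sketch of that logic (a hypothetical helper for illustration, not Apache's implementation — the real `SetEnvIf` does regex matching on the header value):

```python
def cgi_tls_env(headers):
    # Toy equivalent of: SetEnvIf X-Forwarded-Proto "https" HTTPS=on
    # 'headers' is a dict of request headers as the proxy delivered them.
    env = {}
    if headers.get("X-Forwarded-Proto", "").lower() == "https":
        env["HTTPS"] = "on"    # the variable Fossil checks to emit https:// links
    return env

print(cgi_tls_env({"X-Forwarded-Proto": "https"}))   # {'HTTPS': 'on'}
print(cgi_tls_env({"X-Forwarded-Proto": "http"}))    # {}
```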
Changes to www/server/any/http-over-ssh.md.
︙ | ︙ | |||
13 14 15 16 17 18 19 | ## 1. Force remote Fossil access through a wrapper script <a id="sshd"></a> Put something like the following into the `sshd_config` file on the Fossil repository server: ``` ssh-config | | | | | | | | | 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | ## 1. Force remote Fossil access through a wrapper script <a id="sshd"></a> Put something like the following into the `sshd_config` file on the Fossil repository server: ``` ssh-config Match Group fossil X11Forwarding no AllowTcpForwarding no AllowAgentForwarding no ForceCommand /home/fossil/bin/wrapper ``` This file is usually found in `/etc/ssh`, but some OSes put it elsewhere. The first line presumes that we will put all users who need to use our Fossil repositories into the `fossil` group, as we will do [below](#perms). You could instead say something like: ``` ssh-config Match User alice,bob,carol,dave ``` You have to list the users allowed to use Fossil in this case because your system likely has a system administrator that uses SSH for remote shell access, so you want to *exclude* that user from the list. For the same reason, you don’t want to put the `ForceCommand` directive outside a `Match` block of some sort. You could instead list the exceptions: ``` ssh-config Match User !evi ``` This would permit only [Evi the System Administrator][evi] to bypass this mechanism. [evi]: https://en.wikipedia.org/wiki/Evi_Nemeth |
︙ | ︙ | |||
66 67 68 69 70 71 72 | instance with certain parameters in order to set up the HTTP-based sync protocol over that SSH tunnel. We need to preserve some of this command and rewrite other parts to make this work. Here is a simpler variant of Andy’s original wrapper script: ``` sh | | | | | | | | 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 | instance with certain parameters in order to set up the HTTP-based sync protocol over that SSH tunnel. We need to preserve some of this command and rewrite other parts to make this work. Here is a simpler variant of Andy’s original wrapper script: ``` sh #!/bin/bash set -- $SSH_ORIGINAL_COMMAND while [ $# -gt 1 ] ; do shift ; done export REMOTE_USER="$USER" ROOT=/home/fossil exec "$ROOT/bin/fossil" http "$ROOT/museum/$(/bin/basename "$1")" ``` The substantive changes are: 1. Move the command rewriting bits to the start. 2. Be explicit about executable paths. You might extend this idea by |
︙ | ︙ | |||
102 103 104 105 106 107 108 | is not the case everywhere. If the script fails to run on your system, try changing this line to point at `bash`, `dash`, `ksh`, or `zsh`. Also check the absolute paths for local correctness: is `/bin/basename` installed on your system, for example? Under this scheme, you clone with a command like: | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | is not the case everywhere. If the script fails to run on your system, try changing this line to point at `bash`, `dash`, `ksh`, or `zsh`. Also check the absolute paths for local correctness: is `/bin/basename` installed on your system, for example? Under this scheme, you clone with a command like: $ fossil clone ssh://HOST/repo.fossil This will clone the remote `/home/fossil/museum/repo.fossil` repository to your local machine under the same name and open it into a “`repo/`” subdirectory. Notice that we didn’t have to give the `museum/` part of the path: it’s implicit per point #3 above. This presumes your local user name matches the remote user name. Unlike |
︙ | ︙ | |||
127 128 129 130 131 132 133 | the wrapper script from where you placed it and execute it, and that they have read/write access on the directory where the Fossil repositories are stored. You can achieve all of this on a Linux box with: ``` shell | | | | | | | | | | | | | | 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | the wrapper script from where you placed it and execute it, and that they have read/write access on the directory where the Fossil repositories are stored. You can achieve all of this on a Linux box with: ``` shell sudo adduser fossil for u in alice bob carol dave ; do sudo adduser $u sudo gpasswd -a fossil $u done sudo -i -u fossil chmod 710 . mkdir -m 750 bin mkdir -m 770 museum ln -s /usr/local/bin/fossil bin ``` You then need to copy the Fossil repositories into `~fossil/museum` and make them readable and writable by group `fossil`. These repositories presumably already have Fossil users configured, with the necessary [user capabilities](../../caps/), the point of this article being to show you how to make Fossil-over-SSH pay attention to those caps. You must also permit use of `REMOTE_USER` on each shared repository. Fossil only pays attention to this environment variable in certain contexts, of which “`fossil http`” is not one. Run this command against each repo to allow that: ``` shell echo "INSERT OR REPLACE INTO config VALUES ('remote_user_ok',1,strftime('%s','now'));" | fossil sql -R museum/repo.fossil ``` Now you can configure SSH authentication for each user. Since Fossil’s password-saving feature doesn’t work in this case, I suggest setting up SSH keys via `~USER/.ssh/authorized_keys` since the SSH authentication occurs on each sync, which Fossil’s default-enabled autosync setting makes frequent. |
︙ | ︙ |
Changes to www/server/any/inetd.md.
1 2 3 4 5 6 | # Serving via inetd A Fossil server can be launched on-demand by `inetd` by using the [`fossil http`](/help/http) command. To do so, add a line like the following to its configuration file, typically `/etc/inetd.conf`: | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 | # Serving via inetd A Fossil server can be launched on-demand by `inetd` by using the [`fossil http`](/help/http) command. To do so, add a line like the following to its configuration file, typically `/etc/inetd.conf`: 80 stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repo.fossil In this example, you are telling `inetd` that when an incoming connection arrives on TCP port 80, it should launch the program `/usr/bin/fossil` with the arguments shown. Obviously you will need to modify the pathnames for your particular setup. The final argument is either the name of the fossil repository to be served or a directory containing multiple repositories. If you use a non-standard TCP port on systems where the port specification must be a symbolic name and cannot be numeric, add the desired name and port to `/etc/services`. For example, if you want your Fossil server running on TCP port 12345 instead of 80, you will need to add: fossil 12345/tcp # fossil server and use the symbolic name “`fossil`” instead of the numeric TCP port number (“12345” in the above example) in `inetd.conf`. Notice that we configured `inetd` to launch Fossil as root. See the top-level section on “[The Fossil Chroot Jail](../../chroot.md)” for the consequences of this and
︙ | ︙ |
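The essence of the `inetd` arrangement above — accept a connection, then hand the socket to a freshly launched program as its stdin and stdout — can be demonstrated without `inetd` or Fossil at all. This POSIX-only Python sketch stands a trivial child process in for `fossil http`, and uses an ephemeral port rather than port 80:

```python
import socket
import subprocess
import sys
import threading

def inetd_like(port_box, ready):
    # Listen the way inetd would, then hand the accepted connection to a
    # freshly launched child process as its stdin and stdout.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))        # inetd would use the configured port 80
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    # Stand-in for "/usr/bin/fossil http ...": read one line, answer, exit.
    child = subprocess.Popen(
        [sys.executable, "-c",
         "import sys; sys.stdout.write('served: ' + sys.stdin.readline())"],
        stdin=conn.fileno(), stdout=conn.fileno())
    child.wait()
    conn.close()
    srv.close()

port_box, ready = [], threading.Event()
t = threading.Thread(target=inetd_like, args=(port_box, ready))
t.start()
ready.wait()

cli = socket.create_connection(("127.0.0.1", port_box[0]))
cli.sendall(b"GET /timeline\n")
reply = cli.makefile().readline()
cli.close()
t.join()
print(reply.strip())   # -> served: GET /timeline
```

One child per connection is exactly why the `nowait` mode appears in the `inetd.conf` line: `inetd` keeps listening while each spawned handler serves its single client.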
Changes to www/server/any/none.md.
︙ | ︙ | |||
26 27 28 29 30 31 32 | * “`ui`” launches a local web browser pointed at this URL. You can omit the _REPOSITORY_ argument if you run one of the above commands from within a Fossil checkout directory to serve that repository: | | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 | * “`ui`” launches a local web browser pointed at this URL. You can omit the _REPOSITORY_ argument if you run one of the above commands from within a Fossil checkout directory to serve that repository: $ fossil ui # or... $ fossil server You can abbreviate Fossil sub-commands as long as they are unambiguous. “`server`” can currently be as short as “`ser`”. You can serve a directory containing multiple `*.fossil` files like so: $ fossil server --port 9000 --repolist /path/to/repo/dir There is an [example script](/file/tools/fslsrv) in the Fossil distribution that wraps `fossil server` to produce more complicated effects. Feel free to take it, study it, and modify it to suit your local needs. See the [online documentation](/help/server) for more information on the |
︙ | ︙ |
Changes to www/server/any/scgi.md.
1 2 3 4 5 6 7 8 9 10 11 12 13 | # Serving via SCGI There is an alternative to running Fossil as a [standalone HTTP server](./none.md), which is to run it in SimpleCGI (a.k.a. SCGI) mode, which uses the same [`fossil server`](/help/server) command as for HTTP service. Simply add the `--scgi` command-line option and the stand-alone server will speak the SCGI protocol rather than raw HTTP. This can be used with a web server such as [nginx](http://nginx.org) which does not support [Fossil’s CGI mode](./cgi.md). A basic nginx configuration to support SCGI with Fossil looks like this: | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | # Serving via SCGI There is an alternative to running Fossil as a [standalone HTTP server](./none.md), which is to run it in SimpleCGI (a.k.a. SCGI) mode, which uses the same [`fossil server`](/help/server) command as for HTTP service. Simply add the `--scgi` command-line option and the stand-alone server will speak the SCGI protocol rather than raw HTTP. This can be used with a web server such as [nginx](http://nginx.org) which does not support [Fossil’s CGI mode](./cgi.md). A basic nginx configuration to support SCGI with Fossil looks like this: location /code/ { include scgi_params; scgi_param SCRIPT_NAME "/code"; scgi_pass localhost:9000; } The `scgi_params` file comes with nginx, and it simply translates nginx internal variables to `scgi_param` directives to create SCGI environment variables for the proxied program; in this case, Fossil. Our explicit `scgi_param` call to define `SCRIPT_NAME` adds one more variable to this set, which is necessary for this configuration to work properly, because our repo isn’t at the root of the URL hierarchy. Without it, when Fossil generates absolute URLs, they’ll be missing the `/code` part at the start, which will typically cause [404 errors][404]. 
The final directive simply tells nginx to proxy all calls to URLs under `/code` down to an SCGI program on TCP port 9000. We can temporarily set Fossil up as a server on that port like so: $ fossil server /path/to/repo.fossil --scgi --localhost --port 9000 & The `--scgi` option switches Fossil into SCGI mode from its default, which is [stand-alone HTTP server mode](./none.md). All of the other options discussed in that linked document — such as the ability to serve a directory full of Fossil repositories rather than just a single repository — work the same way in SCGI mode. |
︙ | ︙ |
Changes to www/server/any/xinetd.md.
1 2 3 4 5 6 7 8 9 10 | # Serving via xinetd Some operating systems have replaced the old Unix `inetd` daemon with `xinetd`, which has a similar mission but with a very different configuration file format. The typical configuration file is either `/etc/xinetd.conf` or a subfile in the `/etc/xinetd.d` directory. You need a configuration something like this for Fossil: | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | # Serving via xinetd Some operating systems have replaced the old Unix `inetd` daemon with `xinetd`, which has a similar mission but with a very different configuration file format. The typical configuration file is either `/etc/xinetd.conf` or a subfile in the `/etc/xinetd.d` directory. You need a configuration something like this for Fossil: service http { port = 80 socket_type = stream wait = no user = root server = /usr/bin/fossil server_args = http /home/fossil/repos/ } This example configures Fossil to serve multiple repositories under the `/home/fossil/repos/` directory. Beyond this, see the general commentary in our article on [the `inetd` method](./inetd.md) as they also apply to service via `xinetd`. |
︙ | ︙ |
Changes to www/server/debian/nginx.md.
︙ | ︙ | |||
99 100 101 102 103 104 105 | ## <a id="deps"></a>Installing the Dependencies The first step is to install some non-default packages we’ll need. SSH into your server, then say: | | | 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | ## <a id="deps"></a>Installing the Dependencies The first step is to install some non-default packages we’ll need. SSH into your server, then say: $ sudo apt install fossil nginx You can leave “`fossil`” out of that if you’re building Fossil from source to get a more up-to-date version than is shipped with the host OS. ## <a id="scgi"></a>Running Fossil in SCGI Mode |
︙ | ︙ | |||
129 130 131 132 133 134 135 | ## <a id="config"></a>Configuration On Debian and Ubuntu systems the primary user-level configuration file for nginx is `/etc/nginx/sites-enabled/default`. I recommend that this file contain only a list of include statements, one for each site that the server hosts: | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 | ## <a id="config"></a>Configuration On Debian and Ubuntu systems the primary user-level configuration file for nginx is `/etc/nginx/sites-enabled/default`. I recommend that this file contain only a list of include statements, one for each site that the server hosts: include local/example.com include local/foo.net Those files then each define one domain’s configuration. Here, `/etc/nginx/local/example.com` contains the configuration for `*.example.com` and its alias `*.example.net`; and `local/foo.net` contains the configuration for `*.foo.net`. The configuration for our `example.com` web site, stored in
︙ | ︙ | |||
195 196 197 198 199 200 201 | As you can see, this is a pure extension of [the basic nginx service configuration for SCGI][scgii], showing off a few ideas you might want to try on your own site, such as static asset proxying. You also need a `local/code` file containing: | | | | | | | 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 | As you can see, this is a pure extension of [the basic nginx service configuration for SCGI][scgii], showing off a few ideas you might want to try on your own site, such as static asset proxying. You also need a `local/code` file containing: include scgi_params; scgi_pass 127.0.0.1:12345; scgi_param SCRIPT_NAME "/code"; We separate that out because nginx refuses to inherit certain settings between nested location blocks, so rather than repeat them, we extract them to this separate file and include it from both locations where it’s needed. You see this above where we set far-future expiration dates on files served by Fossil via URLs that contain hashes that change when the content changes. It tells your browser that the content of these URLs can never change without the URL itself changing, which makes your Fossil-based site considerably faster. Similarly, the `local/generic` file referenced above helps us reduce unnecessary repetition among the multiple sites this configuration hosts: root /var/www/$host; listen 80; listen [::]:80; charset utf-8; There are some configuration directives that nginx refuses to substitute variables into, citing performance considerations, so there is a limit to how much repetition you can squeeze out this way. One such example is the pair of `access_log` and `error_log` directives, which follow an obvious pattern from one host to the next. Sadly, you must tolerate some repetition across `server { }` blocks when setting up multiple domains
︙ | ︙ | |||
244 245 246 247 248 249 250 | encryption for Fossil](#tls), proxying HTTP instead of SCGI provides no benefit. However, it is still worth showing the proper method of proxying Fossil’s HTTP server through nginx if only to make reading nginx documentation on other sites easier: | | | | | | | | | | | | | | 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 | encryption for Fossil](#tls), proxying HTTP instead of SCGI provides no benefit. However, it is still worth showing the proper method of proxying Fossil’s HTTP server through nginx if only to make reading nginx documentation on other sites easier: location /code { rewrite ^/code(/.*) $1 break; proxy_pass http://127.0.0.1:12345; } The most common thing people get wrong when hand-rolling a configuration like this is to get the slashes wrong. Fossil is sensitive to this. For instance, Fossil will not collapse double slashes down to a single slash, as some other HTTP servers will. ## <a id="large-uv"></a> Allowing Large Unversioned Files By default, nginx only accepts HTTP messages [up to a meg](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) in size. Fossil chunks its sync protocol such that this is not normally a problem, but when sending [unversioned content][uv], it uses a single message for the entire file. Therefore, if you will be storing files larger than this limit as unversioned content, you need to raise the limit. 
Within the `location` block: # Allow large unversioned file uploads, such as PDFs client_max_body_size 20M; [uv]: ../../unvers.wiki ## <a id="fail2ban"></a> Integrating `fail2ban` One of the nice things that falls out of proxying Fossil behind nginx is that it makes it easier to configure `fail2ban` to recognize attacks on Fossil and automatically block them. Fossil logs the sorts of errors we want to detect, but it does so in places like the repository’s admin log, a SQL table, which `fail2ban` doesn’t know how to query. By putting Fossil behind an nginx proxy, we convert these failures to log file form, which `fail2ban` is designed to handle. First, install `fail2ban`, if you haven’t already: sudo apt install fail2ban We’d like `fail2ban` to react to Fossil `/login` failures. The stock configuration of `fail2ban` only detects a few common sorts of SSH attacks by default, and its included (but disabled) nginx attack detectors don’t include one that knows how to detect an attack on Fossil. We have to teach it by putting the following into `/etc/fail2ban/filter.d/nginx-fossil-login.conf`: [Definition] failregex = ^<HOST> - .*POST .*/login HTTP/..." 401 That teaches `fail2ban` how to recognize the errors logged by Fossil [as of 2.14](/info/39d7eb0e22). (Earlier versions of Fossil returned HTTP status code 200 for this, so you couldn’t distinguish a successful login from a failure.) Then in `/etc/fail2ban/jail.local`, add this section: [nginx-fossil-login] enabled = true logpath = /var/log/nginx/*-https-access.log The last line is the key: it tells `fail2ban` where we’ve put all of our per-repo access logs in the nginx config above. There’s a [lot more you can do][dof2b], but that gets us out of scope of this guide. |
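You can check a `failregex` like the one above offline before touching `fail2ban`. In this sketch, `<HOST>` is hand-replaced with a simple capture group (fail2ban's real expansion is more elaborate), and the log line is a made-up sample in nginx's default combined format:

```python
import re

failregex = r'^<HOST> - .*POST .*/login HTTP/..." 401'
# Crude stand-in for fail2ban's <HOST> expansion.
pattern = re.compile(failregex.replace("<HOST>", r"(?P<host>\S+)"))

sample = ('203.0.113.5 - - [21/Mar/2024:10:00:00 +0000] '
          '"POST /code/login HTTP/1.1" 401 320 "-" "Mozilla/5.0"')
m = pattern.match(sample)
print(m.group("host"))   # -> 203.0.113.5
```

A successful login (status 200 or a redirect) would not match, which is the point: only the 401 responses that post-2.14 Fossil emits for failed `/login` attempts should count toward a ban.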
︙ | ︙ | |||
336 337 338 339 340 341 342 | has gotten smarter or our nginx configurations have gotten simpler, so we have removed the manual instructions we used to have here. You may wish to include something like this from each `server { }` block in your configuration to enable TLS in a common, secure way: ``` | | | | | | | | | | | | | | | | | | | | < < | 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 | has gotten smarter or our nginx configurations have gotten simpler, so we have removed the manual instructions we used to have here. You may wish to include something like this from each `server { }` block in your configuration to enable TLS in a common, secure way: ``` # Tell nginx to accept TLS-encrypted HTTPS on the standard TCP port. listen 443 ssl; listen [::]:443 ssl; # Reference the TLS cert files produced by Certbot. ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # Load the Let's Encrypt Diffie-Hellman parameters generated for # this server. Without this, the server is vulnerable to Logjam. ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # Tighten things down further, per Qualys’ and Certbot’s advice. ssl_session_cache shared:le_nginx_SSL:1m; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_session_timeout 1440m; # Offer OCSP certificate stapling. ssl_stapling on; ssl_stapling_verify on; # Enable HSTS. include local/enable-hsts; ``` The [HSTS] step is optional and should be applied only after due consideration, since it has the potential to lock users out of your site if you later change your mind on the TLS configuration. 
The `local/enable-hsts` file it references is simply: add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; It’s a separate file because nginx requires that headers like this be applied separately for each `location { }` block. We’ve therefore factored this out so you can `include` it everywhere you need it. The [OCSP] step is optional, but recommended. |
︙ | ︙ |
Changes to www/server/debian/service.md.
︙ | ︙ | |||
51 52 53 54 55 56 57 | create a listener socket on a high-numbered (≥ 1024) TCP port, suitable for sharing a Fossil repo to a workgroup on a private LAN. To do this, write the following in `~/.local/share/systemd/user/fossil.service`: ```dosini | | | | | | | | | | | | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | create a listener socket on a high-numbered (≥ 1024) TCP port, suitable for sharing a Fossil repo to a workgroup on a private LAN. To do this, write the following in `~/.local/share/systemd/user/fossil.service`: ```dosini [Unit] Description=Fossil user server After=network-online.target [Service] WorkingDirectory=/home/fossil/museum ExecStart=/home/fossil/bin/fossil server --port 9000 repo.fossil Restart=always RestartSec=3 [Install] WantedBy=multi-user.target ``` Unlike with `inetd` and `xinetd`, we don’t need to tell `systemd` which user and group to run this service as, because we’ve installed it under the account we’re logged into, which `systemd` will use as the service’s owner. |
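Unit files like the one above are INI-style, so you can sanity-check one with Python's `configparser` before handing it to `systemd`. This is only a rough lint — `systemd` has its own, stricter parser — and the unit text is the example from above:

```python
import configparser

UNIT = """\
[Unit]
Description=Fossil user server
After=network-online.target

[Service]
WorkingDirectory=/home/fossil/museum
ExecStart=/home/fossil/bin/fossil server --port 9000 repo.fossil
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
"""

parser = configparser.ConfigParser()
parser.optionxform = str          # systemd keys are case-sensitive
parser.read_string(UNIT)

print(parser["Service"]["ExecStart"])
# -> /home/fossil/bin/fossil server --port 9000 repo.fossil
```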
︙ | ︙ | |||
88 89 90 91 92 93 94 | follows that it doesn’t need to run as a system service. A user service works perfectly well for this. Because we’ve set this up as a user service, the commands you give to manipulate the service vary somewhat from the sort you’re more likely to find online: | | | | | | | | 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | follows that it doesn’t need to run as a system service. A user service works perfectly well for this. Because we’ve set this up as a user service, the commands you give to manipulate the service vary somewhat from the sort you’re more likely to find online: $ systemctl --user daemon-reload $ systemctl --user enable fossil $ systemctl --user start fossil $ systemctl --user status fossil -l $ systemctl --user stop fossil That is, we don’t need to talk to `systemd` with `sudo` privileges, but we do need to tell it to look at the user configuration rather than the system-level configuration. This scheme isolates the permissions needed by the Fossil server, which reduces the amount of damage it can do if there is ever a remotely-triggerable security flaw found in Fossil. On some `systemd` based OSes, user services only run while that user is logged in interactively. This is common on systems aiming to provide desktop environments, where this is the behavior you often want. To allow background services to continue to run after logout, say: $ sudo loginctl enable-linger $USER You can paste the command just like that into your terminal, since `$USER` will expand to your login name. [scgi]: ../any/scgi.md |
︙ | ︙ | |||
163 164 165 166 167 168 169 | roughly equivalent to [the ancient `inetd` method](../any/inetd.md). It’s more complicated, but it has some nice properties. We first need to define the privileged socket listener by writing `/etc/systemd/system/fossil.socket`: ```dosini | | | | | | | | | | | | | | | | | | | | | | | | | | 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 | roughly equivalent to [the ancient `inetd` method](../any/inetd.md). It’s more complicated, but it has some nice properties. We first need to define the privileged socket listener by writing `/etc/systemd/system/fossil.socket`: ```dosini [Unit] Description=Fossil socket [Socket] Accept=yes ListenStream=80 NoDelay=true [Install] WantedBy=sockets.target ``` Note the change of configuration directory from the `~/.local` directory to the system level. We need to start this socket listener at the root level because of the low-numbered TCP port restriction we brought up above. This configuration says more or less the same thing as the socket part of an `inetd` entry [exemplified elsewhere in this documentation](../any/inetd.md). Next, create the service definition file in that same directory as `fossil@.service`: ```dosini [Unit] Description=Fossil socket server After=network-online.target [Service] WorkingDirectory=/home/fossil/museum ExecStart=/home/fossil/bin/fossil http repo.fossil StandardInput=socket [Install] WantedBy=multi-user.target ``` Notice that we haven’t told `systemd` which user and group to run Fossil under. 
Because this is a system-level service definition, it will run as root, which causes Fossil to [automatically drop into a `chroot(2)` jail](../../chroot.md) rooted at the `WorkingDirectory` we’ve configured above, shortly after each `fossil http` call starts. The `Restart*` directives we had in the user service configuration above are unnecessary for this method, since Fossil isn’t supposed to remain running under it. Each HTTP hit starts one Fossil instance, which handles that single client’s request and then immediately shuts down. Next, you need to tell `systemd` to reload its system-level configuration files and enable the listening socket: $ sudo systemctl daemon-reload $ sudo systemctl enable fossil.socket And now you can manipulate the socket listener: $ sudo systemctl start fossil.socket $ sudo systemctl status -l fossil.socket $ sudo systemctl stop fossil.socket Notice that we’re working with the *socket*, not the *service*. The fact that we’ve given them the same base name and marked the service as an instantiated service with the “`@`” notation allows `systemd` to automatically start an instance of the service each time a hit comes in on the socket that `systemd` is monitoring on Fossil’s behalf. To see this service instantiation at work, visit a long-running Fossil page (e.g. `/tarball`) and then give a command like this: $ sudo systemctl --full | grep fossil This will show information about the `fossil` socket and service instances, which should show your `/tarball` hit handler, if it’s still running: fossil@20-127.0.0.1:80-127.0.0.1:38304.service You can feed that service instance description to a `systemctl kill` command to stop that single instance without restarting the whole `fossil` service, for example. In all of this, realize that we’re able to manipulate a single socket listener or single service instance at a time, rather than reload the
︙ | ︙ |
Changes to www/server/index.html.
1 2 3 | <div class='fossil-doc' data-title="How To Configure A Fossil Server"> <style type="text/css"> | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < | < | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | <div class='fossil-doc' data-title="How To Configure A Fossil Server"> <style type="text/css"> .doc > .content th.fep { font-family: "Helvetica Neue", "Arial Narrow", "Myriad Pro", "Avenir Next Condensed"; font-stretch: condensed; min-width: 3em; padding: 0.4em; white-space: nowrap; } .doc > .content th.host { font-family: "Helvetica Neue", "Arial Narrow", "Myriad Pro", "Avenir Next Condensed"; font-stretch: condensed; padding: 0.4em; text-align: right; } .doc > .content td.doc { text-align: center; } </style> <h2>No Server Required</h2> |
︙ | ︙ | |||
198 199 200 201 202 203 204 | <h2 id="matrix">Activation Tutorials</h2> <p>We've broken the configuration for each method out into a series of sub-articles. Some of these are generic, while others depend on particular operating systems or front-end software:</p> | < | < | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 | <h2 id="matrix">Activation Tutorials</h2> <p>We've broken the configuration for each method out into a series of sub-articles. Some of these are generic, while others depend on particular operating systems or front-end software:</p> <div class="indent"><table> <tr> <th class="host">⇩ OS / Method ⇨</th> <th class="fep">direct</th> <th class="fep">inetd</th> <th class="fep">stunnel</th> <th class="fep">CGI</th> <th class="fep">SCGI</th> |
︙ | ︙ | |||
278 279 280 281 282 283 284 | <td class="doc"><a href="windows/cgi.md">✅</a></td> <td class="doc">❌</td> <td class="doc">❌</td> <td class="doc">❌</td> <td class="doc"><a href="windows/iis.md">✅</a></td> <td class="doc"><a href="windows/service.md">✅</a></td> </tr> | | | 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | <td class="doc"><a href="windows/cgi.md">✅</a></td> <td class="doc">❌</td> <td class="doc">❌</td> <td class="doc">❌</td> <td class="doc"><a href="windows/iis.md">✅</a></td> <td class="doc"><a href="windows/service.md">✅</a></td> </tr> </table></div> <p>Where there is a check mark in the "<b>Any</b>" row, the method for that column is generic enough that it works across OSes that Fossil is known to work on. The check marks below that usually just link to this generic documentation.</p> <p>The method in the "<b>proxy</b>" column is for the platform's default
︙ | ︙ |
Changes to www/server/macos/service.md.
︙ | ︙ | |||
16 17 18 19 20 21 22 | However, we will still give two different configurations, just as in the `systemd` article: one for a standalone HTTP server, and one using socket activation. For more information on `launchd`, the single best resource we’ve found is [launchd.info](https://launchd.info). The next best is: | | | | > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | However, we will still give two different configurations, just as in the `systemd` article: one for a standalone HTTP server, and one using socket activation. For more information on `launchd`, the single best resource we’ve found is [launchd.info](https://launchd.info). The next best is: $ man launchd.plist [la]: http://www.grivet-tools.com/blog/2014/launchdaemons-vs-launchagents/ [ldhome]: https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html [wpa]: https://en.wikipedia.org/wiki/Launchd ## Standalone HTTP Server To configure `launchd` to start Fossil as a standalone HTTP server, write the following as `com.example.dev.FossilHTTP.plist`: ```xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.example.dev.FossilHTTP</string> <key>ProgramArguments</key> <array> <string>/usr/local/bin/fossil</string> <string>server</string> <string>--port</string> <string>9000</string> <string>repo.fossil</string> </array> <key>WorkingDirectory</key> <string>/Users/you/museum</string> <key>KeepAlive</key> <true/> <key>RunAtLoad</key> <true/> <key>StandardErrorPath</key> 
<string>/tmp/fossil-error.log</string> <key>StandardOutPath</key> <string>/tmp/fossil-info.log</string> <key>UserName</key> <string>you</string> <key>GroupName</key> <string>staff</string> <key>InitGroups</key> <true/> </dict> </plist> ``` In this example, we’re assuming your development organization uses the domain name “`dev.example.org`”, that your short macOS login name is “`you`”, and that you store your Fossils in “`~/museum`”. Adjust these elements of the plist file to suit your local situation. You might be wondering about the use of `UserName`: isn’t Fossil supposed to drop privileges and enter [a `chroot(2)` jail](../../chroot.md) when it’s started as root like this? Why do we need to give it a user name? Won’t Fossil use the owner of the repository file to set that? All I can tell you is that in testing here, if you leave the user and group configuration at the tail end of that plist file out, Fossil will remain running as root! Install that file and set it to start with: $ sudo install -o root -g wheel -m 644 com.example.dev.FossilHTTP.plist \ /Library/LaunchDaemons/ $ sudo launchctl load -w /Library/LaunchDaemons/com.example.dev.FossilHTTP.plist Because we set the `RunAtLoad` key, this will also launch the daemon. Stop the daemon with: $ sudo launchctl unload -w /Library/LaunchDaemons/com.example.dev.FossilHTTP.plist ## Socket Listener Another useful method to serve a Fossil repo via `launchd` is by setting up a socket listener: ```xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.example.dev.FossilSocket</string> <key>ProgramArguments</key> <array> <string>/usr/local/bin/fossil</string> |
︙ | ︙ |
Changes to www/server/openbsd/fastcgi.md.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ## <a id="fslinstall"></a>Install Fossil Use the OpenBSD package manager `pkg_add` to install Fossil, making sure to select the statically linked binary. ```console | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | ## <a id="fslinstall"></a>Install Fossil Use the OpenBSD package manager `pkg_add` to install Fossil, making sure to select the statically linked binary. ```console $ doas pkg_add fossil quirks-3.325 signed on 2020-06-12T06:24:53Z Ambiguous: choose package for fossil 0: <None> 1: fossil-2.10v0 2: fossil-2.10v0-static Your choice: 2 fossil-2.10v0-static: ok ``` This installs Fossil into the chroot. To facilitate local use, create a symbolic link of the fossil executable into `/usr/local/bin`. ```console $ doas ln -s /var/www/bin/fossil /usr/local/bin/fossil ``` As a privileged user, create the file `/var/www/cgi-bin/scm` with the following contents to make the CGI script that `httpd` will execute in response to `fsl.domain.tld` requests; all paths are relative to the `/var/www` chroot. 
```sh #!/bin/fossil directory: /htdocs/fsl.domain.tld notfound: https://domain.tld repolist errorlog: /logs/fossil.log ``` The `directory` directive instructs Fossil to serve all repositories found in `/var/www/htdocs/fsl.domain.tld`, while `errorlog` sets logging to be saved to `/var/www/logs/fossil.log`; create the repository directory and log file—making the latter owned by the `www` user, and the script executable. ```console $ doas mkdir /var/www/htdocs/fsl.domain.tld $ doas touch /var/www/logs/fossil.log $ doas chown www /var/www/logs/fossil.log $ doas chmod 660 /var/www/logs/fossil.log $ doas chmod 755 /var/www/cgi-bin/scm ``` ## <a id="chroot"></a>Setup chroot Fossil needs both `/dev/random` and `/dev/null`, which aren't accessible from within the chroot, so they need to be constructed; `/var`, however, is mounted with the `nodev` option. Rather than removing this default setting, create a small memory filesystem and then mount it onto `/var/www/dev` with [`mount_mfs(8)`][mfs] so that the `random` and `null` device files can be created. To avoid needing a startup script to recreate the device files at boot, create a template of the needed ``/dev`` tree to automatically populate the memory filesystem. 
```console $ doas mkdir /var/www/dev $ doas install -d -g daemon /template/dev $ cd /template/dev $ doas /dev/MAKEDEV urandom $ doas mknod -m 666 null c 2 2 $ doas mount_mfs -s 1M -P /template/dev /dev/sd0b /var/www/dev $ ls -l total 0 crw-rw-rw- 1 root daemon 2, 2 Jun 20 08:56 null lrwxr-xr-x 1 root daemon 7 Jun 18 06:30 random@ -> urandom crw-r--r-- 1 root wheel 45, 0 Jun 18 06:30 urandom ``` [mfs]: https://man.openbsd.org/mount_mfs.8 To make the mountable memory filesystem permanent, open `/etc/fstab` as a privileged user and add the following line to automate creation of the filesystem at startup: ```console swap /var/www/dev mfs rw,-s=1048576,-P=/template/dev 0 0 ``` The same user that executes the fossil binary must have writable access to the repository directory that resides within the chroot; on OpenBSD this is `www`. In addition, grant repository directory ownership to the user who will push to, pull from, and create repositories. ```console $ doas chown -R user:www /var/www/htdocs/fsl.domain.tld $ doas chmod 770 /var/www/htdocs/fsl.domain.tld ``` ## <a id="httpdconfig"></a>Configure httpd On OpenBSD, [httpd.conf(5)][httpd] is the configuration file for `httpd`. To set up the server to serve all Fossil repositories within the directory specified in the CGI script, and automatically redirect standard HTTP requests to HTTPS—apart from [Let's Encrypt][LE] challenges issued in response to [acme-client(1)][acme] certificate requests—create `/etc/httpd.conf` as a privileged user with the following contents. 
[LE]: https://letsencrypt.org [acme]: https://man.openbsd.org/acme-client.1 [httpd]: https://man.openbsd.org/httpd.conf.5 ```apache server "fsl.domain.tld" { listen on * port http root "/htdocs/fsl.domain.tld" location "/.well-known/acme-challenge/*" { root "/acme" request strip 2 } location * { block return 301 "https://$HTTP_HOST$REQUEST_URI" } location "/*" { fastcgi { param SCRIPT_FILENAME "/cgi-bin/scm" } } } server "fsl.domain.tld" { listen on * tls port https root "/htdocs/fsl.domain.tld" tls { certificate "/etc/ssl/domain.tld.fullchain.pem" key "/etc/ssl/private/domain.tld.key" } hsts { max-age 15768000 preload subdomains } connection max request body 104857600 location "/*" { fastcgi { param SCRIPT_FILENAME "/cgi-bin/scm" } } location "/.well-known/acme-challenge/*" { root "/acme" request strip 2 } } ``` [The default limit][dlim] for HTTP messages in OpenBSD’s `httpd` server is 1 MiB. Fossil chunks its sync protocol such that this is not normally a problem, but when sending [unversioned content][uv], it uses a single message for the entire file. Therefore, if you will be storing files larger than this limit as unversioned content, you need to raise
︙ | ︙ | |||
185 186 187 188 189 190 191 | In order for `httpd` to serve HTTPS, secure a free certificate from Let's Encrypt using `acme-client`. Before issuing the request, however, ensure you have a zone record for the subdomain with your registrar or nameserver. Then open `/etc/acme-client.conf` as a privileged user to configure the request. ```dosini | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 | In order for `httpd` to serve HTTPS, secure a free certificate from Let's Encrypt using `acme-client`. Before issuing the request, however, ensure you have a zone record for the subdomain with your registrar or nameserver. Then open `/etc/acme-client.conf` as a privileged user to configure the request. ```dosini authority letsencrypt { api url "https://acme-v02.api.letsencrypt.org/directory" account key "/etc/acme/letsencrypt-privkey.pem" } authority letsencrypt-staging { api url "https://acme-staging.api.letsencrypt.org/directory" account key "/etc/acme/letsencrypt-staging-privkey.pem" } domain domain.tld { alternative names { www.domain.tld fsl.domain.tld } domain key "/etc/ssl/private/domain.tld.key" domain certificate "/etc/ssl/domain.tld.crt" domain full chain certificate "/etc/ssl/domain.tld.fullchain.pem" sign with letsencrypt } ``` Start `httpd` with the new configuration file, and issue the certificate request. 
```console $ doas rcctl start httpd $ doas acme-client -vv domain.tld acme-client: /etc/acme/letsencrypt-privkey.pem: account key exists (not creating) acme-client: /etc/acme/letsencrypt-privkey.pem: loaded RSA account key acme-client: /etc/ssl/private/domain.tld.key: generated RSA domain key acme-client: https://acme-v01.api.letsencrypt.org/directory: directories acme-client: acme-v01.api.letsencrypt.org: DNS: 172.65.32.248 ... N(Q????Z???j?j?>W#????b???? H????eb??T??*? DNosz(???n{L}???D???4[?B] (1174 bytes) acme-client: /etc/ssl/domain.tld.crt: created acme-client: /etc/ssl/domain.tld.fullchain.pem: created ``` A successful result will output the public certificate, full chain of trust, and private key into the `/etc/ssl` directory as specified in `acme-client.conf`. ```console $ doas ls -lR /etc/ssl -r--r--r-- 1 root wheel 2.3K Mar 2 01:31:03 2018 domain.tld.crt -r--r--r-- 1 root wheel 3.9K Mar 2 01:31:03 2018 domain.tld.fullchain.pem /etc/ssl/private: -r-------- 1 root wheel 3.2K Mar 2 01:31:03 2018 domain.tld.key ``` Make sure to reopen `/etc/httpd.conf` to uncomment the second server block responsible for serving HTTPS requests before proceeding. ## <a id="starthttpd"></a>Start `httpd` With `httpd` configured to serve Fossil repositories out of `/var/www/htdocs/fsl.domain.tld`, and the certificates and key in place, enable and start `slowcgi`—OpenBSD's FastCGI wrapper server that will execute the above Fossil CGI script—before checking that the syntax of the `httpd.conf` configuration file is correct, and (re)starting the server (if still running from requesting a Let's Encrypt certificate). 
```console $ doas rcctl enable slowcgi $ doas rcctl start slowcgi slowcgi(ok) $ doas httpd -vnf /etc/httpd.conf configuration OK $ doas rcctl start httpd httpd(ok) ``` ## <a id="clientconfig"></a>Configure Client To facilitate creating new repositories and pushing them to the server, add the following function to your `~/.cshrc` or `~/.zprofile` or the config file for whichever shell you are using on your development box. ```sh finit() { fossil init $1.fossil && \ chmod 664 $1.fossil && \ fossil open $1.fossil && \ fossil user password $USER $PASSWD && \ fossil remote-url https://$USER:$PASSWD@fsl.domain.tld/$1 && \ rsync --perms $1.fossil $USER@fsl.domain.tld:/var/www/htdocs/fsl.domain.tld/ >/dev/null && \ chmod 644 $1.fossil && \ fossil ui } ``` This enables a new repository to be made with `finit repo`, which will create the fossil repository file `repo.fossil` in the current working directory; by default, the repository user is set to the environment variable `$USER`. It then opens the repository and sets the user password to the `$PASSWD` environment variable (which you can either set |
︙ | ︙ |
Changes to www/server/openbsd/service.wiki.
1 2 3 4 5 6 7 8 | <title>Serving via rc on OpenBSD</title> OpenBSD provides [https://man.openbsd.org/rc.subr.8|rc.subr(8)], a framework for writing [https://man.openbsd.org/rc.8|rc(8)] scripts. <h2>Creating the daemon</h2> Create the file /etc/rc.d/fossil with contents like the following. | > | | > | | | | | | | | > | | > > | | > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | <title>Serving via rc on OpenBSD</title> OpenBSD provides [https://man.openbsd.org/rc.subr.8|rc.subr(8)], a framework for writing [https://man.openbsd.org/rc.8|rc(8)] scripts. <h2>Creating the daemon</h2> Create the file /etc/rc.d/fossil with contents like the following. <pre> #!/bin/ksh daemon="/usr/local/bin/fossil" # fossil executable daemon_user="_fossil" # user to run fossil as daemon_flags="server /home/_fossil/example --repolist --port 8888" # fossil command . /etc/rc.d/rc.subr # pexp="$daemon server .*" # See below. rc_reload=NO # Unsupported by Fossil; 'rcctl reload fossil' kills the process. rc_bg=YES # Run in the background, since fossil serve does not daemonize itself rc_cmd $1 </pre> <h3>pexp</h3> You may need to uncomment the "pexp=". rc.subr typically finds the daemon process by matching the process name and argument list. Without the "pexp=" line, rc.subr would look for this exact command: <pre> /usr/local/bin/fossil server /home/_fossil/example --repolist --port 8888 </pre> Depending on the arguments and their order, fossil may rewrite the arguments for display in the process listing ([https://man.openbsd.org/ps.1|ps(1)]), so rc.subr may fail to find the process through the default match. 
The example above does not get rewritten, but the same commands in a different order can be rewritten. For example, when I switch the order of the arguments in "daemon_flags", <pre> /usr/local/bin/fossil server --repolist --port 8888 /home/_fossil/example </pre> the process command is changed to this. <pre> /usr/local/bin/fossil server /home/_fossil/example /home/_fossil/example 8888 /home/_fossil/example </pre> The commented "pexp=" line instructs rc.subr to choose the process whose command and arguments text starts with this: <pre> /usr/local/bin/fossil server </pre> <h2>Enabling the daemon</h2> Once you have created /etc/rc.d/fossil, run these commands. <pre> rcctl enable fossil # add fossil to pkg_scripts in /etc/rc.conf.local rcctl start fossil # start the daemon now </pre> The daemon should now be running and set to start at boot. <h2>Multiple daemons</h2> You may want to serve multiple fossil instances with different options. For example, * If different users own different repositories, you may want different users to serve different repositories. * You may want to serve different repositories on different ports so you can control them differently with, for example, HTTP reverse proxies or [https://man.openbsd.org/pf.4|pf(4)]. To run multiple fossil daemons, create multiple files in /etc/rc.d, and enable each of them. Here are two approaches for creating the files in /etc/rc.d: Symbolic links and copies. <h3>Symbolic links</h3> Suppose you want to run one fossil daemon as user "user1" on port 8881 and another as user "user2" on port 8882. Create the files with [https://man.openbsd.org/ln.1|ln(1)], and configure them to run different fossil commands. 
<pre> cd /etc/rc.d ln -s fossil fossil1 ln -s fossil fossil2 rcctl enable fossil1 fossil2 rcctl set fossil1 user user1 rcctl set fossil2 user user2 rcctl set fossil1 flags 'server /home/user1/repo1.fossil --port 8881' rcctl set fossil2 flags 'server /home/user2/repo2.fossil --port 8882' rcctl start fossil1 fossil2 </pre> <h3>Copies</h3> You may want to run fossil daemons that are too different to configure just with [https://man.openbsd.org/rcctl.8|rcctl(8)]. In particular, you can't change the "pexp" with rcctl. If you want to run fossil commands that are more different, you may prefer to create separate files in /etc/rc.d. Replace "ln -s" above with "cp" to accomplish this. <pre> cp /etc/rc.d/fossil /etc/rc.d/fossil-user1 cp /etc/rc.d/fossil /etc/rc.d/fossil-user2 </pre> You can still use commands like "rcctl set fossil-user1 flags", but you can also edit the "/etc/rc.d/fossil-user1" file. |
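The per-daemon setup above also lends itself to scripting. The helper below is a hypothetical dry-run sketch, not part of Fossil or OpenBSD: it only prints the rcctl commands implied by each user/port/repository triple so you can review them before running anything as root.

```shell
#!/bin/sh
# Dry-run sketch: print the rcctl commands for one fossil daemon per
# user/port/repository triple.  All names and paths are examples.
emit_fossil_rc() {
  user=$1 port=$2 repo=$3
  svc="fossil-$user"   # matches the "cp" naming scheme above
  echo "rcctl enable $svc"
  echo "rcctl set $svc user $user"
  echo "rcctl set $svc flags 'server $repo --port $port'"
}

emit_fossil_rc user1 8881 /home/user1/repo1.fossil
emit_fossil_rc user2 8882 /home/user2/repo2.fossil
```

Review the output, then run the printed commands as root (or pipe the output to sh) after creating the matching /etc/rc.d/fossil-user1 and /etc/rc.d/fossil-user2 copies.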
Changes to www/server/windows/iis.md.
︙ | ︙ | |||
30 31 32 33 34 35 36 | ## Background Fossil Service Setup You will need to have the Fossil HTTP server running in the background, serving some local repository, bound to localhost on a fixed high-numbered TCP port. For the purposes of testing, simply start it by hand in your command shell of choice: | | | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | ## Background Fossil Service Setup You will need to have the Fossil HTTP server running in the background, serving some local repository, bound to localhost on a fixed high-numbered TCP port. For the purposes of testing, simply start it by hand in your command shell of choice: fossil serve --port 9000 --localhost repo.fossil That command assumes you’ve got `fossil.exe` in your `%PATH%` and you’re in a directory holding `repo.fossil`. See [the platform-independent instructions](../any/none.md) for further details. For a more robust setup, we recommend that you [install Fossil as a Windows service](./service.md), which will allow Fossil to start at |
︙ | ︙ |
Changes to www/serverext.wiki.
︙ | ︙ | |||
29 30 31 32 33 34 35 | An administrator activates the CGI extension mechanism by specifying an "Extension Root Directory" or "extroot" as part of the [./server/index.html|server setup]. If the Fossil server is itself run as [./server/any/cgi.md|CGI], then add a line to the [./cgi.wiki#extroot|CGI script file] that says: | | | | | | | | 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | An administrator activates the CGI extension mechanism by specifying an "Extension Root Directory" or "extroot" as part of the [./server/index.html|server setup]. If the Fossil server is itself run as [./server/any/cgi.md|CGI], then add a line to the [./cgi.wiki#extroot|CGI script file] that says: <pre> extroot: <i>DIRECTORY</i> </pre> Or, if the Fossil server is being run using the "[./server/any/none.md|fossil server]" or "[./server/any/none.md|fossil ui]" or "[./server/any/inetd.md|fossil http]" commands, then add an extra "--extroot <i>DIRECTORY</i>" option to that command. The <i>DIRECTORY</i> is the DOCUMENT_ROOT for the CGI. Files in the DOCUMENT_ROOT are accessed via URLs like this: <pre> https://example-project.org/ext/<i>FILENAME</i> </pre> In other words, access files in DOCUMENT_ROOT by appending the filename relative to DOCUMENT_ROOT to the [/help?cmd=/ext|/ext] page of the Fossil server. Files that are readable but not executable are returned as static content. Files that are executable are run as CGI. <h3>2.1 Example #1</h3> The source code repository for SQLite is a Fossil server that is run as CGI. The URL for the source code repository is [https://sqlite.org/src]. The CGI script looks like this: <verbatim> #!/usr/bin/fossil repository: /fossil/sqlite.fossil errorlog: /logs/errors.txt extroot: /sqlite-src-ext </verbatim> The "extroot: /sqlite-src-ext" line tells Fossil that it should look for extension CGIs in the /sqlite-src-ext directory. 
(All of this is happening inside of a chroot jail, so putting the document root in a top-level directory is a reasonable thing to do.) When a URL like "https://sqlite.org/src/ext/checklist" is received by the |
︙ | ︙ | |||
99 100 101 102 103 104 105 | main web server which in turn relays the result back to the original client. <h3>2.2 Example #2</h3> The [https://fossil-scm.org/home|Fossil self-hosting repository] is also a CGI that looks like this: | | | | 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | main web server which in turn relays the result back to the original client. <h3>2.2 Example #2</h3> The [https://fossil-scm.org/home|Fossil self-hosting repository] is also a CGI that looks like this: <verbatim> #!/usr/bin/fossil repository: /fossil/fossil.fossil errorlog: /logs/errors.txt extroot: /fossil-extroot </verbatim> The extroot for this Fossil server is /fossil-extroot and in that directory is an executable file named "fileup1" - another [https://wapp.tcl.tk|Wapp] script. (The extension mechanism is not required to use Wapp. You can use any kind of program you like. But the creator of SQLite and Fossil is fond of [https://www.tcl.tk|Tcl/Tk] and so he tends to gravitate toward Tcl-based technologies like Wapp.) The fileup1 script is a demo program that lets |
︙ | ︙ | |||
199 200 201 202 203 204 205 | header and footer, then the inserted header will include a Content Security Policy (CSP) restriction on the use of javascript within the webpage. Any <script>...</script> elements within the CGI output must include a nonce or else they will be suppressed by the web browser. The FOSSIL_NONCE variable contains the value of that nonce. So, in other words, to get javascript to work, it must be enclosed in: | | | | | | | | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | header and footer, then the inserted header will include a Content Security Policy (CSP) restriction on the use of javascript within the webpage. Any <script>...</script> elements within the CGI output must include a nonce or else they will be suppressed by the web browser. The FOSSIL_NONCE variable contains the value of that nonce. So, in other words, to get javascript to work, it must be enclosed in: <verbatim> <script nonce='$FOSSIL_NONCE'>...</script> </verbatim> Except, of course, the $FOSSIL_NONCE is replaced by the value of the FOSSIL_NONCE environment variable. <h3>3.1 Input Content</h3> If the HTTP request includes content (for example if this is a POST request) then the CONTENT_LENGTH value will be positive and the data for the content will be readable on standard input. <h2>4.0 CGI Outputs</h2> CGI programs construct a reply by writing to standard output. The first few lines of output are parameters intended for the web server that invoked the CGI. These are followed by a blank line and then the content. Typical parameter output looks like this: <verbatim> Status: 200 OK Content-Type: text/html </verbatim> CGI programs can return any content type they want - they are not restricted to text replies. It is OK for a CGI program to return (for example) image/png. 
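As a concrete (hypothetical) illustration of the reply format just described, here is a minimal extension CGI sketch written as a shell script. The script name and output fields are invented, and the response logic is wrapped in a small function only so it is easy to reuse; installed under the extroot and marked executable, it answers with a plain-text reply.

```shell
#!/bin/sh
# Hypothetical /ext script: emit the parameter lines, then a blank
# line, then the content, exactly as described above.
respond() {
  printf 'Status: 200 OK\r\n'
  printf 'Content-Type: text/plain\r\n'
  printf '\r\n'
  printf 'method=%s\n' "${REQUEST_METHOD:-GET}"
  printf 'path=%s\n'   "${PATH_INFO:-/}"
}
respond
```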
The fields of the CGI response header can be any valid HTTP header fields. Those that Fossil does not understand are simply relayed back up the line to the requester. Fossil takes special action with some content types. If the Content-Type is "text/x-fossil-wiki" or "text/x-markdown" then Fossil converts the content from [/wiki_rules|Fossil-Wiki] or [/md_rules|Markdown] into HTML, adding its own header and footer text according to the repository skin. Content of type "text/html" is normally passed straight through unchanged. However, if the text/html content is of the form: <verbatim> <div class='fossil-doc' data-title='DOCUMENT TITLE'> ... HTML content here ... </div> </verbatim> In other words, if the outer-most markup of the HTML is a <div> element with a single class of "fossil-doc", then Fossil will add its own header and footer to the HTML. The page title contained in the added header will be extracted from the "data-title" attribute. 
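To see the "fossil-doc" wrapping from the extension side, a script can emit text/html in exactly that shape. This is an illustrative sketch, not code from the Fossil sources; the title and body text are invented.

```shell
#!/bin/sh
# Hypothetical /ext script whose text/html body follows the
# fossil-doc convention, so Fossil adds the skin's header and footer.
body="<div class='fossil-doc' data-title='Extension Demo'>
<p>This body receives the repository skin's header and footer.</p>
</div>"
printf 'Status: 200 OK\r\n'
printf 'Content-Type: text/html\r\n'
printf '\r\n'
printf '%s\n' "$body"
```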
︙ | ︙ |
Changes to www/ssl-server.md.
︙ | ︙ | |||
28 29 30 31 32 33 34 | obtaining a CA-signed certificate. ## Usage To put any of the Fossil server commands into SSL/TLS mode, simply add the "--cert" command-line option. | < | < | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | obtaining a CA-signed certificate. ## Usage To put any of the Fossil server commands into SSL/TLS mode, simply add the "--cert" command-line option. fossil ui --cert unsafe-builtin The --cert option is what tells Fossil to use TLS encryption. Normally, the argument to --cert is the name of a file containing the certificate (the "fullchain.pem" file) for the website. In this example, the magic name "unsafe-builtin" is used, which causes Fossil to use a self-signed cert rather than a real cert obtained from a [Certificate Authority](https://en.wikipedia.org/wiki/Certificate_authority) |
︙ | ︙ | |||
86 87 88 89 90 91 92 | key and cert. Fossil wants to read certs and private keys in the [PEM format](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail). PEM is a pure ASCII text format. The private key consists of text like this: | < | | | < | | | < | < < | < | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 | key and cert. Fossil wants to read certs and private keys in the [PEM format](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail). PEM is a pure ASCII text format. The private key consists of text like this: -----BEGIN PRIVATE KEY----- *base-64 encoding of the private key* -----END PRIVATE KEY----- Similarly, a PEM-encoded cert will look like this: -----BEGIN CERTIFICATE----- *base-64 encoding of the certificate* -----END CERTIFICATE----- In both formats, text outside of the delimiters is ignored. That means that if you have a PEM-formatted private key and a separate PEM-formatted certificate, you can concatenate the two into a single file and the individual components will still be easily accessible. If you have a single file that holds both your private key and your cert, you can hand it off to the "[fossil server](/help?cmd=server)" command using the --cert option. Like this: fossil server --port 443 --cert mycert.pem /home/www/myproject.fossil The command above is sufficient to run a fully-encrypted web site for the "myproject.fossil" Fossil repository. This command must be run as root, since it wants to listen on TCP port 443, and only root processes are allowed to do that. This is safe, however, since before reading any information off of the wire, Fossil will put itself inside a chroot jail at /home/www and drop all root privileges. 
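The single-file arrangement described above is plain file concatenation, nothing more. Here is a minimal sketch; the helper name and the file names are illustrative.

```shell
#!/bin/sh
# Join a PEM private key and a PEM cert chain into the single file
# that "fossil server --cert" expects.  File names are examples.
make_combined_pem() {
  key=$1 chain=$2 out=$3
  cat "$key" "$chain" > "$out"
  chmod 600 "$out"   # the result contains the private key
}
```

For example, make_combined_pem privkey.pem fullchain.pem mycert.pem, then pass mycert.pem to --cert.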
### Keeping The Cert And Private Key In Separate Files If you do not want to combine your cert and private key into a single big PEM file, you can keep them separate using the --pkey option to Fossil. fossil server --port 443 --cert fullchain.pem --pkey privkey.pem /home/www/myproject.fossil ## The ACME Protocol The [ACME Protocol][2] is used to prove to a CA that you control a website. CAs require proof that you control a domain before they will issue a cert for that domain. The usual means of dealing with ACME is to run the separate [certbot](https://certbot.eff.org) tool. |
︙ | ︙ | |||
171 172 173 174 175 176 177 | the repository file. If the "server" or "http" commands are run against a directory full of Fossil repositories, then the ".well-known" sub-directory should be in that top-level directory. Thus, to set up a project website, you should first run Fossil in ordinary unencrypted HTTP mode like this: | < | < | 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 | the repository file. If the "server" or "http" commands are run against a directory full of Fossil repositories, then the ".well-known" sub-directory should be in that top-level directory. Thus, to set up a project website, you should first run Fossil in ordinary unencrypted HTTP mode like this: fossil server --port 80 --acme /home/www/myproject.fossil Then you create your public/private key pair and run certbot, giving it a --webroot of /home/www. Certbot will create the sub-directory named "/home/www/.well-known" and put token files there, which the CA will verify. Then certbot will store your new cert in a particular file. Once certbot has obtained your cert, then you can concatenate that cert with your private key and run Fossil in SSL/TLS mode as shown above. [2]: https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment
Changes to www/ssl.wiki.
︙ | ︙ | |||
80 81 82 83 84 85 86 | passing the <tt>--with-openssl</tt> option to the <tt>configure</tt> script. Type <tt>./configure --help</tt> for details. Another option is to download the source code to OpenSSL and build Fossil against that private version of OpenSSL: <pre> | | | | | | | | | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 | passing the <tt>--with-openssl</tt> option to the <tt>configure</tt> script. Type <tt>./configure --help</tt> for details. Another option is to download the source code to OpenSSL and build Fossil against that private version of OpenSSL: <pre> cd compat # relative to the Fossil source tree root tar xf /path/to/openssl-*.tar.gz ln -fs openssl-x.y.z openssl cd openssl ./config # or, e.g. ./Configure darwin64-x86_64-cc make -j11 cd ../.. ./configure --with-openssl=tree make -j11 </pre> That will get you a Fossil binary statically linked to this in-tree version of OpenSSL. Beware, taking this path typically opens you up to new problems, which are conveniently covered in the next section! |
︙ | ︙ | |||
122 123 124 125 126 127 128 | If you are cloning from or syncing to Fossil servers that use a certificate signed by a well-known CA or one of its delegates, Fossil still has to know which CA roots to trust. When this fails, you get an error message that looks like this in Fossil 2.11 and newer: <pre> | | | | | | | | | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 | If you are cloning from or syncing to Fossil servers that use a certificate signed by a well-known CA or one of its delegates, Fossil still has to know which CA roots to trust. When this fails, you get an error message that looks like this in Fossil 2.11 and newer: <pre> Unable to verify SSL cert from fossil-scm.org subject: CN = sqlite.org issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 sha256: bf26092dd97df6e4f7bf1926072e7e8d200129e1ffb8ef5276c1e5dd9bc95d52 accept this cert and continue (y/N)? </pre> In older versions, the message was much longer and began with this line: <pre> SSL verification failed: unable to get local issuer certificate </pre> Fossil relies on the OpenSSL library to have some way to check a trusted list of CA signing keys. There are two common ways this fails: # The OpenSSL library Fossil is linked to doesn't have a CA signing key set at all, so that it initially trusts no certificates at all. # The OpenSSL library does have a CA cert set, but your Fossil server's TLS certificate was signed by a CA that isn't in that set. A common reason to fall into the second trap is that you're using certificates signed by a local private CA, as often happens in large enterprises. 
You can solve this sort of problem by getting your local CA's signing certificate in PEM format and pointing OpenSSL at it: <pre> fossil set --global ssl-ca-location /path/to/local-ca.pem </pre> The use of <tt>--global</tt> with this option is common, since you may have multiple repositories served under certificates signed by that same CA. However, if you have a mix of publicly-signed and locally-signed certificates, you might want to drop the <tt>--global</tt> flag and set this option on a per-repository basis instead. |
︙ | ︙ | |||
180 181 182 183 184 185 186 | may find it acceptable to use the same Mozilla NSS cert set. I do not know of a way to easily get this from Mozilla themselves, but I did find a [https://curl.se/docs/caextract.html | third party source] for the <tt>cacert.pem</tt> file. I suggest placing the file into your Windows user home directory so that you can then point Fossil at it like so: <pre> | | | 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 | may find it acceptable to use the same Mozilla NSS cert set. I do not know of a way to easily get this from Mozilla themselves, but I did find a [https://curl.se/docs/caextract.html | third party source] for the <tt>cacert.pem</tt> file. I suggest placing the file into your Windows user home directory so that you can then point Fossil at it like so: <pre> fossil set --global ssl-ca-location %userprofile%\cacert.pem </pre> This can also happen if you've linked Fossil to a version of OpenSSL [#openssl-src|built from source]. That same <tt>cacert.pem</tt> fix can work in that case, too. When you build Fossil on Linux platforms against the binary OpenSSL |
︙ | ︙ |
Changes to www/stats.wiki.
1 | <title>Fossil Performance</title> | < | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 | <title>Fossil Performance</title> The questions will inevitably arise: How does Fossil perform? Does it use a lot of disk space or bandwidth? Is it scalable? In an attempt to answer these questions, this report looks at several projects that use fossil for configuration management and examines how well they are working. The following table is a summary of the results. (Last updated on 2018-06-04.) Explanation and analysis follow the table. <table> <tr> <th>Project</th> <th>Number Of Artifacts</th> <th>Number Of Check-ins</th> <th>Project Duration<br>(as of 2018-06-04)</th> <th>Uncompressed Size</th> <th>Repository Size</th>
︙ | ︙ |
Changes to www/sync.wiki.
︙ | ︙ | |||
46 47 48 49 50 51 52 | peer-to-peer communication and without any kind of central authority. If you are already familiar with CRDTs and were wondering if Fossil used them, the answer is "yes". We just don't call them by that name. | | | 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | peer-to-peer communication and without any kind of central authority. If you are already familiar with CRDTs and were wondering if Fossil used them, the answer is "yes". We just don't call them by that name. <h2 id="transport">2.0 Transport</h2> All communication between client and server is via HTTP requests. The server is listening for incoming HTTP requests. The client issues one or more HTTP requests and receives replies for each request. The server might be running as an independent server |
︙ | ︙ | |||
80 81 82 83 84 85 86 | to represent the listener and initiator of the interaction, respectively. Nothing in this protocol requires that the server actually be a back-room processor housed in a datacenter, nor does the client need to be a desktop or handheld device. For the purposes of this article "client" simply means the repository that initiates the conversation and "server" is the repository that responds. Nothing more. | | | | > > > > > > > > > > > > > > > > > > > > | | < | < < | < | | > | < | > | < | | | < | < | < | | | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | to represent the listener and initiator of the interaction, respectively. Nothing in this protocol requires that the server actually be a back-room processor housed in a datacenter, nor does the client need to be a desktop or handheld device. For the purposes of this article "client" simply means the repository that initiates the conversation and "server" is the repository that responds. Nothing more. <h4 id="https">2.0.1 HTTPS Transport</h4> HTTPS differs from HTTP only in that the HTTPS protocol is encrypted as it travels over the wire. The underlying protocol is the same. This document describes only the underlying, unencrypted messages that go client to server and back to client. 
Whether or not those messages are encrypted does not come into play in this document. Fossil includes built-in [./ssl-server.md|support for HTTPS encryption] in both client and server. <h4 id="ssh">2.0.2 SSH Transport</h4> When doing a sync using an "<code>ssh:…</code>" URL, the same HTTP transport protocol is used. Fossil simply uses [https://en.wikipedia.org/wiki/Secure_Shell|ssh] to start an instance of the [/help?cmd=test-http|fossil test-http] command running on the remote machine. It then sends HTTP requests and gets back HTTP replies over the SSH connection, rather than sending and receiving over an internet socket. To see the specific "ssh" command that the Fossil client runs in order to set up a connection, add either of the "--httptrace" or "--sshtrace" options to the "fossil sync" command line. This method is dependent on the remote <var>PATH</var> set by the SSH daemon, which may not be the same as your interactive shell's <var>PATH</var> on that same server. It is common to find <var>$HOME/bin</var> in the latter but not the former, for instance, leading to failures to sync over <code>ssh:…</code> URLs when you install the <code>fossil</code> binary in a nonstandard location, as with <verbatim>./configure --prefix=$HOME && make install</verbatim> The simpler of the two solutions to this problem is to install Fossil where sshd expects to find it, but when that isn't an option, you can instead give a URL like this: <verbatim>fossil clone ssh://myserver.example.com/path/to/repo.fossil?fossil=/home/me/bin/fossil</verbatim> That gives the local Fossil instance the absolute path to the binary on the remote machine for use when calling that Fossil instance through the SSH tunnel. <h4 id="file">2.0.3 FILE Transport</h4> When doing a sync using a "file:..." URL, the same HTTP protocol is still used. But instead of sending each HTTP request over a socket or via SSH, the HTTP request is written into a temporary file. 
The client then invokes the [/help?cmd=http|fossil http] command in a subprocess to process the request and generate a reply. The client then reads the HTTP reply out of a temporary file on disk, and deletes the two temporary files. To see the specific "fossil http" command that is run in order to implement the "file:" transport, add the "--httptrace" option to the "fossil sync" command. <h3 id="srv-id">2.1 Server Identification</h3> The server is identified by a URL argument that accompanies the push, pull, or sync command on the client. (As a convenience to users, the URL can be omitted on the client command and the same URL from the most recent push, pull, or sync will be reused. This saves typing in the common case where the client does multiple syncs to the same server.) The client modifies the URL by appending the method name "<b>/xfer</b>" to the end. For example, if the URL specified on the client command line is <pre>https://fossil-scm.org/fossil</pre> Then the URL that is really used to do the synchronization will be: <pre>https://fossil-scm.org/fossil/xfer</pre> <h3 id="req-format">2.2 HTTP Request Format</h3> The client always sends a POST request to the server. The general format of the POST request is as follows: <pre> POST /fossil/xfer HTTP/1.0 Host: fossil-scm.hwaci.com:80 Content-Type: application/x-fossil Content-Length: 4216 </pre> <i><pre>content...</pre></i> In the example above, the pathname given after the POST keyword on the first line is a copy of the URL pathname. The Host: parameter is also taken from the URL. The content type is always either "application/x-fossil" or "application/x-fossil-debug". The "x-fossil" content type is the default. The only difference is that "x-fossil" content is compressed using zlib whereas "x-fossil-debug" is sent uncompressed. 
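The compressed-versus-uncompressed distinction above can be sketched in a few lines. This assumes a plain zlib stream over the whole card text, which is all the document states; it is an illustration, not Fossil's own code:

```python
# Sketch of "application/x-fossil" body handling: the card text is
# zlib-compressed on the wire, while "application/x-fossil-debug" would
# carry the same text uncompressed. The card codes below are made up.
import zlib

cards = "pull servercode projectcode\n"   # example card text, codes invented

def encode_xfossil(text):
    return zlib.compress(text.encode("utf-8"))

def decode_xfossil(body):
    return zlib.decompress(body).decode("utf-8")

body = encode_xfossil(cards)
assert decode_xfossil(body) == cards
print(len(body), "compressed bytes")
```

The Content-Length header in the POST then reflects the compressed byte count, not the length of the card text.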
A typical reply from the server might look something like this: <pre> HTTP/1.0 200 OK Date: Mon, 10 Sep 2007 12:21:01 GMT Connection: close Cache-control: private Content-Type: application/x-fossil; charset=US-ASCII Content-Length: 265 </pre> <i><pre>content...</pre></i> The content type of the reply is always the same as the content type of the request. <h2 id="content">3.0 Fossil Synchronization Content</h2> A synchronization request between a client and server consists of one or more HTTP requests as described in the previous section. This section details the "x-fossil" content type. <h3 id="lines">3.1 Line-oriented Format</h3> The x-fossil content type consists of zero or more "cards". Cards are separated by the newline character ("\n"). Leading and trailing whitespace on a card is ignored. Blank cards are ignored. Each card is divided into zero or more space-separated tokens. The first token on each card is the operator. Subsequent tokens are arguments. The set of operators understood by servers is slightly different from the operators understood by clients, though the two are very similar. <h3 id="login">3.2 Login Cards</h3> Every message from client to server begins with one or more login cards. Each login card has the following format: <pre><b>login</b> <i>userid nonce signature</i></pre> The userid is the name of the user that is requesting service from the server. The nonce is the SHA1 hash of the remainder of the message - all text that follows the newline character that terminates the login card. The signature is the SHA1 hash of the concatenation of the nonce and the user's password. For each login card, the server looks up the user and verifies that the nonce matches the SHA1 hash of the remainder of the message. It then checks the signature hash to make sure the signature matches. If everything checks out, then the client is granted all privileges of the specified user. Privileges are cumulative. There can be multiple successful login cards. 
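The login-card check described above can be sketched as follows. Two details are assumptions not stated in the text: the hashes are rendered as lowercase hex, and the nonce's hex string is what gets concatenated with the password before the second SHA1. Treat this as an illustration of the scheme, not Fossil's implementation:

```python
# Sketch of the login card's nonce/signature scheme. Assumed (not stated
# in the document): lowercase-hex hash rendering, and hex-nonce + password
# as the input to the signature hash.
import hashlib

def sha1_hex(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def make_login_card(userid, password, rest_of_message: bytes):
    nonce = sha1_hex(rest_of_message)                      # hash of message body
    signature = sha1_hex((nonce + password).encode("utf-8"))
    return f"login {userid} {nonce} {signature}"

def check_login_card(card, password, rest_of_message: bytes):
    _, userid, nonce, signature = card.split()
    return (nonce == sha1_hex(rest_of_message)
            and signature == sha1_hex((nonce + password).encode("utf-8")))

body = b"pull servercode projectcode\n"   # example cards following the login card
card = make_login_card("alice", "secret", body)
print(check_login_card(card, "secret", body))   # True
print(check_login_card(card, "wrong", body))    # False
```

Note how tying the nonce to the rest of the message means the signature also authenticates the message content, not just the user.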
The session privilege is the union of all privileges from all login cards. <h3 id="file">3.3 File Cards</h3> Artifacts are transferred using either "file" cards, or "cfile" or "uvfile" cards. The name "file" card comes from the fact that most artifacts correspond to files that are under version control. The "cfile" name is an abbreviation for "compressed file". The "uvfile" name is an abbreviation for "unversioned file". <h4 id="ordinary-fc">3.3.1 Ordinary File Cards</h4> For sync protocols, artifacts are transferred using "file" cards. File cards come in two different formats depending on whether the artifact is sent directly or as a [./delta_format.wiki|delta] from some other artifact. <pre> <b>file</b> <i>artifact-id size</i> <b>\n</b> <i>content</i> <b>file</b> <i>artifact-id delta-artifact-id size</i> <b>\n</b> <i>content</i> </pre> File cards are followed by in-line "payload" data. The content of the artifact or the artifact delta is the first <i>size</i> bytes of the x-fossil content that immediately follow the newline that terminates the file card. |
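The "payload follows the terminating newline" rule above is easy to get wrong, so here is a toy reader for the two file-card forms. It is a sketch under simplifying assumptions (real artifact IDs are hashes, and a real stream interleaves other card types):

```python
# Toy reader for "file <id> <size>\n<payload>" and
# "file <id> <delta-src> <size>\n<payload>" cards: exactly `size` bytes of
# payload follow the card's terminating newline. Not Fossil's parser.
def read_file_cards(data: bytes):
    out = []
    pos = 0
    while pos < len(data):
        nl = data.index(b"\n", pos)            # end of the card line
        tokens = data[pos:nl].split()
        assert tokens[0] == b"file"
        size = int(tokens[-1])                 # size is always the last token
        delta_src = tokens[2] if len(tokens) == 4 else None
        payload = data[nl + 1:nl + 1 + size]   # in-line payload bytes
        out.append((tokens[1].decode(), delta_src, payload))
        pos = nl + 1 + size                    # next card starts after payload
    return out

msg = b"file aaaa 5\nhellofile bbbb aaaa 3\nxyz"
cards = read_file_cards(msg)
print(cards[0])  # ('aaaa', None, b'hello')
print(cards[1])  # ('bbbb', b'aaaa', b'xyz')
```

Because the payload length is declared on the card, payload bytes never need escaping, even if they contain newlines.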
︙ | ︙ | |||
267 268 269 270 271 272 273 | the ID of another artifact that is the source of the delta. File cards are sent in both directions: client to server and server to client. A delta might be sent before the source of the delta, so both client and server should remember deltas and be able to apply them when their source arrives. | | | | | | | < < | < | < | < | 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 | the ID of another artifact that is the source of the delta. File cards are sent in both directions: client to server and server to client. A delta might be sent before the source of the delta, so both client and server should remember deltas and be able to apply them when their source arrives. <h4 id="compressed-fc">3.3.2 Compressed File Cards</h4> A client that sends a clone protocol version "3" or greater will receive artifacts as "cfile" cards while cloning. This card was introduced to improve the speed of the transfer of content by sending the compressed artifact directly from the server database to the client. Compressed File cards are similar to File cards, sharing the same in-line "payload" data characteristics and also the same treatment of direct content or delta content. Cfile cards come in two different formats depending on whether the artifact is sent directly or as a delta from some other artifact. <pre> <b>cfile</b> <i>artifact-id usize csize</i> <b>\n</b> <i>content</i> <b>cfile</b> <i>artifact-id delta-artifact-id usize csize</i> <b>\n</b> <i>content</i> </pre> The first argument of the cfile card is the ID of the artifact that is being transferred. The artifact ID is the lower-case hexadecimal representation of the name hash for the artifact. The second argument of the cfile card is the original size in bytes of the artifact. 
The last argument of the cfile card is the number of compressed bytes of payload that immediately follow the cfile card. If the cfile card has only three arguments, that means the payload is the complete content of the artifact. If the cfile card has four arguments, then the payload is a delta and the second argument is the ID of another artifact that is the source of the delta and the third argument is the original size of the delta artifact. Unlike file cards, cfile cards are only sent in one direction during a clone from server to client for clone protocol version "3" or greater. <h4 id="private">3.3.3 Private artifacts</h4> "Private" content consists of artifacts that are not normally synced. However, private content will be synced when the [/help?cmd=sync|fossil sync] command includes the "--private" option. Private content is marked by a "private" card: <pre><b>private</b></pre> The private card has no arguments and must directly precede a file card that contains the private content. <h4 id="uv-fc">3.3.4 Unversioned File Cards</h4> Unversioned content is sent in both directions (client to server and server to client) using "uvfile" cards in the following format: <pre><b>uvfile</b> <i>name mtime hash size flags</i> <b>\n</b> <i>content</i></pre> The <i>name</i> field is the name of the unversioned file. The <i>mtime</i> is the last modification time of the file in seconds since 1970. The <i>hash</i> field is the hash of the content for the unversioned file, or "<b>-</b>" for deleted content. The <i>size</i> field is the (uncompressed) size of the content in bytes. The <i>flags</i> field is an integer which is interpreted
︙ | ︙ | |||
350 351 352 353 354 355 356 | A server will only accept uvfile cards if the login user has the "y" write-unversioned permission. Servers send uvfile cards in response to uvgimme cards received from the client. Clients send uvfile cards when they determine that the server needs the content based on uvigot cards previously received from the server. | | | | | | | | | | | | 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 | A server will only accept uvfile cards if the login user has the "y" write-unversioned permission. Servers send uvfile cards in response to uvgimme cards received from the client. Clients send uvfile cards when they determine that the server needs the content based on uvigot cards previously received from the server. <h3 id="push" name="pull">3.4 Push and Pull Cards</h3> Among the first cards in a client-to-server message are the push and pull cards. The push card tells the server that the client is pushing content. The pull card tells the server that the client wants to pull content. In the event of a sync, both cards are sent. The format is as follows: <pre> <b>push</b> <i>servercode projectcode</i> <b>pull</b> <i>servercode projectcode</i> </pre> The <i>servercode</i> argument is the repository ID for the client. The <i>projectcode</i> is the identifier of the software project that the client repository contains. The projectcode for the client and server must match in order for the transaction to proceed. The server will also send a push card back to the client during a clone. This is how the client determines what project code to put in the new repository it is constructing. The <i>servercode</i> argument is currently unused. 
<h3 id="clones">3.5 Clone Cards</h3> A clone card works like a pull card in that it is sent from client to server in order to tell the server that the client wants to pull content. The clone card comes in two formats. Older clients use the no-argument format and newer clients use the two-argument format. <pre> <b>clone</b> <b>clone</b> <i>protocol-version sequence-number</i> </pre> <h4>3.5.1 Protocol 3</h4> The latest clients send a two-argument clone message with a protocol version of "3". (Future versions of Fossil might use larger protocol version numbers.) Version "3" of the protocol enhanced version "2" by introducing the "cfile" card, which is intended to speed up clone operations. Instead of sending "file" cards, the server will send "cfile" cards. <h4>3.5.2 Protocol 2</h4> The sequence-number sent is the number of artifacts received so far. For the first clone message, the sequence number is 0. The server will respond by sending file cards for some number of artifacts up to the maximum message size. The server will also send a single "clone_seqno" card to the client so that the client can know where the server left off. <pre> <b>clone_seqno</b> <i>sequence-number</i> </pre> The clone message in subsequent HTTP requests for the same clone operation will use the sequence-number from the clone_seqno of the previous reply. In response to an initial clone message, the server also sends the client a push message so that the client can discover the projectcode for
︙ | ︙ | |||
432 433 434 435 436 437 438 | The legacy protocol works well for smaller repositories (50MB with 50,000 artifacts) but is too slow and unwieldy for larger repositories. The version 2 protocol is an effort to improve performance. Further performance improvements with higher-numbered clone protocols are possible in future versions of Fossil. | | | | | | | 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 | The legacy protocol works well for smaller repositories (50MB with 50,000 artifacts) but is too slow and unwieldy for larger repositories. The version 2 protocol is an effort to improve performance. Further performance improvements with higher-numbered clone protocols are possible in future versions of Fossil. <h3 id="igot">3.6 Igot Cards</h3> An igot card can be sent from either client to server or from server to client in order to indicate that the sender holds a copy of a particular artifact. The format is: <pre> <b>igot</b> <i>artifact-id</i> ?<i>flag</i>? </pre> The first argument of the igot card is the ID of the artifact that the sender possesses. The receiver of an igot card will typically check to see if it also holds the same artifact and if not it will request the artifact using a gimme card in either the reply or in the next message. If the second argument exists and is "1", then the artifact identified by the first argument is private on the sender and should be ignored unless a "--private" [/help?cmd=sync|sync] is occurring. The name "igot" comes from the English slang expression "I got" meaning "I have". <h4>3.6.1 Unversioned Igot Cards</h4> Zero or more "uvigot" cards are sent from server to client when synchronizing unversioned content. The format of a uvigot card is as follows: <pre> <b>uvigot</b> <i>name mtime hash size</i> </pre> The <i>name</i> argument is the name of an unversioned file. 
The <i>mtime</i> is the last modification time of the unversioned file in seconds since 1970. The <i>hash</i> is the SHA1 or SHA3-256 hash of the unversioned file content, or "<b>-</b>" if the file has been deleted. The <i>size</i> is the uncompressed size of the file in bytes. |
︙ | ︙ | |||
487 488 489 490 491 492 493 | When a client receives a "uvigot" card, it checks to see if the file needs to be transferred from client to server or from server to client. If a client-to-server transmission is needed, the client schedules that transfer to occur on a subsequent HTTP request. If a server-to-client transfer is needed, then the client sends a "uvgimme" card back to the server to request the file content. | | | | | | | | | | | 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 | When a client receives a "uvigot" card, it checks to see if the file needs to be transferred from client to server or from server to client. If a client-to-server transmission is needed, the client schedules that transfer to occur on a subsequent HTTP request. If a server-to-client transfer is needed, then the client sends a "uvgimme" card back to the server to request the file content. <h3 id="gimme">3.7 Gimme Cards</h3> A gimme card is sent from either client to server or from server to client. The gimme card asks the receiver to send a particular artifact back to the sender. The format of a gimme card is this: <pre> <b>gimme</b> <i>artifact-id</i> </pre> The argument to the gimme card is the ID of the artifact that the sender wants. The receiver will typically respond to a gimme card by sending a file card in its reply or in the next message. The "gimme" name means "give me". The imperative "give me" is pronounced as if it were a single word "gimme" in some dialects of English (including the dialect spoken by the original author of Fossil). <h4>3.7.1 Unversioned Gimme Cards</h4> When synchronizing unversioned content, the client may send "uvgimme" cards to the server. 
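The per-file decision the client makes after receiving a "uvigot" card can be sketched as below. The tie-breaking rule used here (same hash means no transfer; otherwise the newer mtime wins) is an assumption for illustration only; the document does not spell out Fossil's exact comparison:

```python
# Sketch of the client's decision after a "uvigot" card arrives.
# ASSUMED rule (not stated in the document): identical hashes need no
# transfer; otherwise the side with the newer mtime wins.
def uv_sync_action(local, remote):
    """local/remote: dicts with 'mtime' and 'hash', or None if absent."""
    if remote is None:
        return "send-uvfile"       # schedule client-to-server transfer
    if local is None:
        return "send-uvgimme"      # ask the server for its copy
    if local["hash"] == remote["hash"]:
        return "nothing"           # both sides already agree
    return "send-uvfile" if local["mtime"] > remote["mtime"] else "send-uvgimme"

print(uv_sync_action({"mtime": 10, "hash": "a"}, None))    # send-uvfile
print(uv_sync_action(None, {"mtime": 10, "hash": "a"}))    # send-uvgimme
print(uv_sync_action({"mtime": 20, "hash": "b"},
                     {"mtime": 10, "hash": "a"}))          # send-uvfile
```

Either outcome is deferred: uvfile transfers are queued for a later HTTP request, while uvgimme cards go out in the next message.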
A uvgimme card requests that the server send unversioned content to the client. The format of a uvgimme card is as follows: <pre> <b>uvgimme</b> <i>name</i> </pre> The <i>name</i> is the name of the unversioned file found on the server that the client would like to have. When a server sees a uvgimme card, it normally responds with a uvfile card, though it might also send another uvigot card if the HTTP reply is already oversized. <h3 id="cookie">3.8 Cookie Cards</h3> A cookie card can be used by a server to record a small amount of state information on a client. The server sends a cookie to the client. The client sends the same cookie back to the server on its next request. The cookie card has a single argument which is its payload. <pre> <b>cookie</b> <i>payload</i> </pre> The client is not required to return the cookie to the server on its next request. Or the client might send a cookie from a different server on the next request. So the server must not depend on the cookie and the server must structure the cookie payload in such a way that it can tell if the cookie it sees is its own cookie or a cookie from another server. (Typically the server will embed its servercode as part of the cookie.) <h3 id="reqconfig">3.9 Request-Configuration Cards</h3> A request-configuration or "reqconfig" card is sent from client to server in order to request that the server send back "configuration" data. "Configuration" data is information about users or website appearance or other administrative details which are not part of the persistent and versioned state of the project. For example, the "name" of the project, the default Cascading Style Sheet (CSS) for the web-interface, and the project logo displayed on the web-interface are all configuration data elements. The reqconfig card is normally sent in response to the "fossil configuration pull" command. 
The format is as follows: <pre> <b>reqconfig</b> <i>configuration-name</i> </pre> As of 2018-06-04, the configuration-name must be one of the following values: <table border=0 align="center"> <tr><td valign="top"> <ul> |
︙ | ︙ | |||
601 602 603 604 605 606 607 | <li> ignore-glob <li> keep-glob <li> crlf-glob </ul></td><td valign="top"><ul> <li> crnl-glob <li> encoding-glob <li> empty-dirs | | | 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 | <li> ignore-glob <li> keep-glob <li> crlf-glob </ul></td><td valign="top"><ul> <li> crnl-glob <li> encoding-glob <li> empty-dirs <li> <s title="removed 2020-08, version 2.12.1">allow-symlinks</s> <li> dotfiles <li> parent-project-code <li> parent-project-name <li> hash-policy <li> mv-rm-files <li> ticket-table <li> ticket-common
︙ | ︙ | |||
651 652 653 654 655 656 657 | values instead of a single value. The content of these configuration items is returned in a "config" card that contains pure SQL text that is intended to be evaluated by the client. The @user and @concealed configuration items contain sensitive information and are ignored for clients without sufficient privilege. | | | | | | 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 | values instead of a single value. The content of these configuration items is returned in a "config" card that contains pure SQL text that is intended to be evaluated by the client. The @user and @concealed configuration items contain sensitive information and are ignored for clients without sufficient privilege. <h3 id="config">3.10 Configuration Cards</h3> A "config" card is used to send configuration information from client to server (in response to a "fossil configuration push" command) or from server to client (in response to a "fossil configuration pull" or "fossil clone" command). The format is as follows: <pre> <b>config</b> <i>configuration-name size</i> <b>\n</b> <i>content</i> </pre> The server will only accept a config card if the user has "Admin" privilege. A client will only accept a config card if it had sent a corresponding reqconfig card in its request. The content of the configuration item is used to overwrite the corresponding configuration data in the receiver. <h3 id="pragma">3.11 Pragma Cards</h3> The client may try to influence the behavior of the server by issuing a pragma card: <pre> <b>pragma</b> <i>name value...</i> </pre> The "pragma" card has at least one argument which is the pragma name. The pragma name defines what the pragma does. A pragma might have zero or more "value" arguments depending on the pragma name. New pragma names may be added to the protocol from time to time
︙ | ︙ | |||
761 762 763 764 765 766 767 | a successful commit. This instructs the server to release any lock on any check-in previously held by that client. The ci-unlock pragma helps to avoid false-positive lock warnings that might arise if a check-in is aborted and then restarted on a branch. </ol> | | | | | | | | 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 | a successful commit. This instructs the server to release any lock on any check-in previously held by that client. The ci-unlock pragma helps to avoid false-positive lock warnings that might arise if a check-in is aborted and then restarted on a branch. </ol> <h3 id="comment">3.12 Comment Cards</h3> Any card that begins with "#" (ASCII 0x23) is a comment card and is silently ignored. <h3 id="error">3.13 Message and Error Cards</h3> If the server discovers anything wrong with a request, it generates an error card in its reply. When the client sees the error card, it displays an error message to the user and aborts the sync operation. An error card looks like this: <pre> <b>error</b> <i>error-message</i> </pre> The error message is English text that is encoded in order to be a single token. A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73). A newline (ASCII 0x0a) is "\n" (ASCII 0x5C, 0x6E). A backslash (ASCII 0x5C) is represented as two backslashes "\\". Apart from space and newline, no other whitespace characters nor any unprintable characters are allowed in the error message. The server can also send a message card, which prints a message on the client console but is not an error: <pre> <b>message</b> <i>message-text</i> </pre> The message-text uses the same format as an error message. 
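The single-token encoding rules above (backslash doubled, space as "\s", newline as "\n") round-trip cleanly as long as the backslash is escaped first. A sketch of the stated rules, not Fossil's own code:

```python
# Encoder/decoder for the single-token error/message text described above:
# backslash -> "\\", space -> "\s", newline -> "\n". Escaping the
# backslash first keeps the encoding unambiguous.
def encode_msg(text):
    return (text.replace("\\", "\\\\")
                .replace(" ", "\\s")
                .replace("\n", "\\n"))

def decode_msg(token):
    out, i = [], 0
    while i < len(token):
        if token[i] == "\\" and i + 1 < len(token):
            out.append({"s": " ", "n": "\n", "\\": "\\"}[token[i + 1]])
            i += 2
        else:
            out.append(token[i])
            i += 1
    return "".join(out)

msg = "login failed:\nbad password"
token = encode_msg(msg)
print(token)                     # login\sfailed:\nbad\spassword
print(decode_msg(token) == msg)  # True
```

Since the encoded text contains no whitespace, the whole message survives the card tokenizer as a single argument.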
<h3 id="unknown">3.14 Unknown Cards</h3> If either the client or the server sees a card that is not described above, then it generates an error and aborts. <h2 id="phantoms" name="clusters">4.0 Phantoms And Clusters</h2> When a repository knows that an artifact exists and knows the ID of that artifact, but it does not know the artifact content, then it stores that artifact as a "phantom". A repository will typically create a phantom when it receives an igot card for an artifact that it does not hold or when it receives a file card that references a delta source that it does not hold. When a server is generating its reply or when a client is |
︙ | ︙ | |||
835 836 837 838 839 840 841 | Any artifact that does not match the specifications of a cluster exactly is not a cluster. There must be no extra whitespace in the artifact. There must be one or more M cards. There must be a single Z card with a correct MD5 checksum. And all cards must be in strict lexicographical order. | | | | | | | 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 | Any artifact that does not match the specifications of a cluster exactly is not a cluster. There must be no extra whitespace in the artifact. There must be one or more M cards. There must be a single Z card with a correct MD5 checksum. And all cards must be in strict lexicographical order. <h3 id="unclustered">4.1 The Unclustered Table</h3> Every repository maintains a table named "<b>unclustered</b>" which records the identity of every artifact and phantom it holds that is not mentioned in a cluster. The entries in the unclustered table can be thought of as leaves on a tree of artifacts. Some of the unclustered artifacts will be other clusters. Those clusters may contain other clusters, which might contain still more clusters, and so forth. Beginning with the artifacts in the unclustered table, one can follow the chain of clusters to find every artifact in the repository. <h2 id="strategies">5.0 Synchronization Strategies</h2> <h3 id="pull-strategy">5.1 Pull</h3> A typical pull operation proceeds as shown below. Details of the actual implementation may vary slightly but the gist of a pull is captured in the following steps: <ol> <li>The client sends login and pull cards.
︙ | ︙ | |||
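The traversal described in section 4.1 - start from the unclustered entries and follow cluster membership until every artifact has been reached - can be sketched like this, modeling a cluster as a mapping from its hash to the hashes on its M cards. This is an editorial sketch with hypothetical names, not Fossil's implementation:

```python
def reachable_artifacts(unclustered, cluster_members):
    # cluster_members maps a cluster artifact's hash to the hashes listed
    # on its M cards; non-cluster artifacts simply have no entry.
    seen, todo = set(), list(unclustered)
    while todo:
        h = todo.pop()
        if h in seen:
            continue
        seen.add(h)
        todo.extend(cluster_members.get(h, ()))
    return seen
```

Starting only from the unclustered "leaves", the walk reaches artifacts named in nested clusters as well.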
900 901 902 903 904 905 906 | amount of overlap between clusters in the common configuration where there is a single server and many clients. The same synchronization protocol will continue to work even if there are multiple servers or if servers and clients sometimes change roles. The only negative effect of these unusual arrangements is that more than the minimum number of clusters might be generated. | | | | | | | 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 | amount of overlap between clusters in the common configuration where there is a single server and many clients. The same synchronization protocol will continue to work even if there are multiple servers or if servers and clients sometimes change roles. The only negative effect of these unusual arrangements is that more than the minimum number of clusters might be generated. <h3 id="push-stragegy">5.2 Push</h3> A typical push operation proceeds roughly as shown below. As with a pull, the actual implementation may vary slightly. <ol> <li>The client sends login and push cards. <li>The client sends file cards for any artifacts that it holds that have
︙ | ︙ | |||
934 935 936 937 938 939 940 | As with a pull, the steps of a push operation repeat until the server knows all artifacts that exist on the client. Also, as with pull, the client attempts to keep the size of the request from growing too large by suppressing file cards once the size of the request reaches 1MB. | | | | 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 | As with a pull, the steps of a push operation repeat until the server knows all artifacts that exist on the client. Also, as with pull, the client attempts to keep the size of the request from growing too large by suppressing file cards once the size of the request reaches 1MB. <h3 id="sync-strategy">5.3 Sync</h3> A sync is just a pull and a push that happen at the same time. The first three steps of a pull are combined with the first five steps of a push. Steps (4) through (7) of a pull are combined with steps (5) through (8) of a push. And steps (8) through (10) of a pull are combined with step (9) of a push. <h3 id="uv-strategy">5.4 Unversioned File Sync</h3> "Unversioned files" are files held in the repository where only the most recent version of the file is kept rather than the entire change history. Unversioned files are intended to be used to store ephemeral content, such as compiled binaries of the most recent release. |
︙ | ︙ | |||
984 985 986 987 988 989 990 | cards and answers "uvgimme" cards with "uvfile" cards in its reply. </ol> The last two steps might be repeated multiple times if there is more unversioned content to be transferred than will fit comfortably in a single HTTP request. | | | 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 | cards and answers "uvgimme" cards with "uvfile" cards in its reply. </ol> The last two steps might be repeated multiple times if there is more unversioned content to be transferred than will fit comfortably in a single HTTP request. <h2 id="summary">6.0 Summary</h2> Here are the key points of the synchronization protocol: <ol> <li>The client sends one or more PUSH HTTP requests to the server. The request and reply content type is "application/x-fossil". <li>HTTP request content is compressed using zlib. |
︙ | ︙ | |||
1028 1029 1030 1031 1032 1033 1034 | <li>Clusters are created automatically on the server during a pull. <li>Repositories keep track of all artifacts that are not named in any cluster and send igot messages for those artifacts. <li>Repositories keep track of all the phantoms they hold and send gimme messages for those artifacts. </ol> | | < | 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 | <li>Clusters are created automatically on the server during a pull. <li>Repositories keep track of all artifacts that are not named in any cluster and send igot messages for those artifacts. <li>Repositories keep track of all the phantoms they hold and send gimme messages for those artifacts. </ol> <h2 id="troubleshooting">7.0 Troubleshooting And Debugging Hints</h2> If you run the [/help?cmd=sync|fossil sync] command (or [/help?cmd=pull|pull] or [/help?cmd=push|push] or [/help?cmd=clone|clone]) with the --httptrace option, Fossil will keep a copy of each HTTP request and reply in files named: <ul> |
︙ | ︙ |
Changes to www/tech_overview.wiki.
|
| | < < < | 1 2 3 4 5 6 7 8 | <title>A Technical Overview of Fossil's Design & Implementation</title> <h2>1.0 Introduction</h2> At its lowest level, a Fossil repository consists of an unordered set of immutable "artifacts". You might think of these artifacts as "files", since in many cases the artifacts are exactly that. But other "structural artifacts" are also included in the mix. |
︙ | ︙ | |||
51 52 53 54 55 56 57 | file that people are normally referring to when they say "a Fossil repository". The checkout database is found in the working checkout for a project and contains state information that is unique to that working checkout. Fossil does not always use all three database files. The web interface, for example, typically only uses the repository database. And the | | | < | | | > > > | | < < < | | < < < | | < | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 | file that people are normally referring to when they say "a Fossil repository". The checkout database is found in the working checkout for a project and contains state information that is unique to that working checkout. Fossil does not always use all three database files. The web interface, for example, typically only uses the repository database. And the [/help/settings | fossil settings] command only opens the configuration database when the --global option is used. But other commands use all three databases at once. For example, the [/help/status | fossil status] command will first locate the checkout database, then use the checkout database to find the repository database, then open the configuration database. Whenever multiple databases are used at the same time, they are all opened on the same SQLite database connection using SQLite's [http://www.sqlite.org/lang_attach.html | ATTACH] command. The chart below provides a quick summary of how each of these database files are used by Fossil, with detailed discussion following. 
<table align="center"> <tr valign="bottom"> <th style="text-align:center">Configuration Database<br>"~/.fossil" or<br> "~/.config/fossil.db" <th style="text-align:center">Repository Database<br>"<i>project</i>.fossil" <th style="text-align:center">Checkout Database<br>"_FOSSIL_" or ".fslckout" <tr valign="top"> <td><ul> <li>Global [/help/settings |settings] <li>List of active repositories used by the [/help/all | all] command </ul></td> <td><ul> <li>[./fileformat.wiki | Global state of the project] encoded using delta-compression <li>Local [/help/settings|settings] <li>Web interface display preferences <li>User credentials and permissions <li>Metadata about the global state to facilitate rapid queries </ul></td> <td><ul> <li>The repository database used by this checkout <li>The version currently checked out <li>Other versions [/help/merge | merged] in but not yet [/help/commit | committed] <li>Changes from the [/help/add | add], [/help/delete | delete], and [/help/rename | rename] commands that have not yet been committed <li>"mtime" values and other information used to efficiently detect local edits <li>The "[/help/stash | stash]" <li>Information needed to "[/help/undo|undo]" or "[/help/redo|redo]" </ul></td> </tr> </table> <h3 id="configdb">2.1 The Configuration Database</h3> The configuration database holds cross-repository preferences and a list of all repositories for a single user. |
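The multi-database arrangement described above - one SQLite connection with the other database files ATTACH-ed to it - can be demonstrated with a small sketch. In-memory databases stand in for the three files, and the sample tables and values are only illustrative:

```python
import sqlite3

# One connection, three databases, mirroring Fossil's use of ATTACH.
con = sqlite3.connect(":memory:")          # stands in for the checkout db
con.execute("ATTACH ':memory:' AS repo")   # stands in for project.fossil
con.execute("ATTACH ':memory:' AS cfg")    # stands in for the config db

con.execute("CREATE TABLE main.vvar(name TEXT PRIMARY KEY, value TEXT)")
con.execute("CREATE TABLE repo.config(name TEXT PRIMARY KEY, value TEXT)")
con.execute("INSERT INTO repo.config VALUES('project-name','demo')")

# Once attached, a single statement can name any of the databases by
# its schema prefix, just as Fossil's internal queries do.
row = con.execute(
    "SELECT value FROM repo.config WHERE name='project-name'").fetchone()
```

The schema prefixes (`main.`, `repo.`, `cfg.`) let one query span all three files at once.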
︙ | ︙ | |||
127 128 129 130 131 132 133 | operations such as "sync" or "rebuild" on all repositories managed by a user. <h4 id="configloc">2.1.1 Location Of The Configuration Database</h4> On Unix systems, the configuration database is named by the following algorithm: | | | | > | | 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | operations such as "sync" or "rebuild" on all repositories managed by a user. <h4 id="configloc">2.1.1 Location Of The Configuration Database</h4> On Unix systems, the configuration database is named by the following algorithm: <table> <tr><td>1. if environment variable FOSSIL_HOME exists <td> → <td>$FOSSIL_HOME/.fossil <tr><td>2. if file ~/.fossil exists <td> →<td>~/.fossil <tr><td>3. if environment variable XDG_CONFIG_HOME exists <td> →<td>$XDG_CONFIG_HOME/fossil.db <tr><td>4. if the directory ~/.config exists <td> →<td>~/.config/fossil.db <tr><td>5. Otherwise<td> →<td>~/.fossil </table> Another way of thinking of this algorithm is the following: * Use "$FOSSIL_HOME/.fossil" if the FOSSIL_HOME variable is defined * Use the XDG-compatible name (usually ~/.config/fossil.db) on XDG systems if the ~/.fossil file does not already exist * Otherwise, use the traditional unix name of "~/.fossil" |
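The five-step Unix lookup above can be expressed directly in code. This is an editorial sketch for clarity, not Fossil's implementation; the filesystem probe is passed in as a callable so the fallback order is easy to follow and test:

```python
import os.path

def config_db_path(env, home, path_exists):
    # env: mapping of environment variables; path_exists: probe callable.
    if "FOSSIL_HOME" in env:                               # step 1
        return os.path.join(env["FOSSIL_HOME"], ".fossil")
    if path_exists(os.path.join(home, ".fossil")):         # step 2
        return os.path.join(home, ".fossil")
    if "XDG_CONFIG_HOME" in env:                           # step 3
        return os.path.join(env["XDG_CONFIG_HOME"], "fossil.db")
    if path_exists(os.path.join(home, ".config")):         # step 4
        return os.path.join(home, ".config", "fossil.db")
    return os.path.join(home, ".fossil")                   # step 5
```

Note that steps 2 and 4 depend on what already exists on disk, which is why a pre-existing ~/.fossil keeps winning on older installations.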
︙ | ︙ | |||
161 162 163 164 165 166 167 | * %FOSSIL_HOME%/_fossil * %LOCALAPPDATA%/_fossil * %APPDATA%/_fossil * %USERPROFILES%/_fossil * %HOMEDRIVE%%HOMEPATH%/_fossil | | | 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 | * %FOSSIL_HOME%/_fossil * %LOCALAPPDATA%/_fossil * %APPDATA%/_fossil * %USERPROFILES%/_fossil * %HOMEDRIVE%%HOMEPATH%/_fossil The second case is the one that usually determines the name. Note that the FOSSIL_HOME environment variable can always be set to determine the location of the configuration database. Note also that the configuration database file itself is called ".fossil" or "fossil.db" on unix but "_fossil" on windows. The [/help?cmd=info|fossil info] command will show the location of the configuration database on a line that starts with "config-db:". |
︙ | ︙ |
Changes to www/th1.md.
︙ | ︙ | |||
52 53 54 55 56 57 58 | that a TH1 script is really just a list of text commands, not a context-free language with a grammar like C/C++. This can be confusing to long-time C/C++ programmers because TH1 does look a lot like C/C++, but the semantics of TH1 are closer to FORTH or Lisp than they are to C. Consider the `if` command in TH1. | | | | | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 | that a TH1 script is really just a list of text commands, not a context-free language with a grammar like C/C++. This can be confusing to long-time C/C++ programmers because TH1 does look a lot like C/C++, but the semantics of TH1 are closer to FORTH or Lisp than they are to C. Consider the `if` command in TH1. if {$current eq "dev"} { puts "hello" } else { puts "world" } The example above is a single command. The first token, and the name of the command, is `if`. The second token is `$current eq "dev"` - an expression. (The outer {...} are removed from each token by the command parser.) The third token is the `puts "hello"`, with its whitespace and newlines. The fourth token is `else` and the fifth and last token is `puts "world"`. |
︙ | ︙ | |||
81 82 83 84 85 86 87 | All of this also explains the emphasis on *unescaped* characters above: the curly braces `{ }` are string quoting characters in Tcl/TH1, not block delimiters as in C. This is how we can have a command that extends over multiple lines. It is also why the `else` keyword must be cuddled up with the closing brace for the `if` clause's scriptlet. The following is invalid Tcl/TH1: | | | | | | | | | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | All of this also explains the emphasis on *unescaped* characters above: the curly braces `{ }` are string quoting characters in Tcl/TH1, not block delimiters as in C. This is how we can have a command that extends over multiple lines. It is also why the `else` keyword must be cuddled up with the closing brace for the `if` clause's scriptlet. The following is invalid Tcl/TH1: if {$current eq "dev"} { puts "hello" } else { puts "world" } If you try to run this under either Tcl or TH1, the interpreter will tell you that there is no `else` command, because with the newline on the third line, you terminated the `if` command. Occasionally in Tcl/TH1 scripts, you may need to use a backslash at the end of a line to allow a command to extend over multiple lines without being considered two separate commands. Here's an example from one of Fossil's test scripts: return [lindex [regexp -line -inline -nocase -- \ {^uuid:\s+([0-9A-F]{40}) } [eval [getFossilCommand \ $repository "" info trunk]]] end] Those backslashes allow the command to wrap nicely within a standard terminal width while telling the interpreter to consider those three lines as a single command. Summary of Core TH1 Commands |
︙ | ︙ | |||
296 297 298 299 300 301 302 | the term is true if all of the capability letters in that term are available to the "anonymous" user. Or, if the term is "*" then it is always true. Examples: ``` | | | | | | | | 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 | the term is true if all of the capability letters in that term are available to the "anonymous" user. Or, if the term is "*" then it is always true. Examples: ``` capexpr {j o r} True if any one of j, o, or r are available capexpr {oh} True if both o and h are available capexpr {@2 @3 4 5 6} 2 or 3 available for anonymous or one of 4, 5 or 6 is available for the user capexpr L True if the user is logged in capexpr !L True if the user is not logged in ``` The `L` pseudo-capability is intended only to be used on its own or with the `!` prefix for implementing login/logout menus via the `mainmenu` site configuration option: ``` |
︙ | ︙ | |||
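A sketch of how `capexpr` terms combine - OR across terms, AND across the capability letters within a term, with `@` switching to the anonymous user's capabilities and `!` negating - might look like the following. This is an illustrative model of the semantics described above, not the real TH1 implementation, and it ignores details such as capability expansion:

```python
def capexpr(terms, user_caps, anon_caps, logged_in=True):
    # The expression is true if any single term is true.
    for term in terms:
        if term == "*":                      # "*" is always true
            return True
        caps = user_caps
        if term.startswith("@"):             # "@" checks the anonymous user
            caps, term = anon_caps, term[1:]
        negate = term.startswith("!")
        if negate:
            term = term[1:]
        # "L" is the logged-in pseudo-capability; otherwise every letter
        # in the term must be among the available capabilities.
        ok = logged_in if term == "L" else all(c in caps for c in term)
        if ok != negate:
            return True
    return False
```

For example, `capexpr {oh}` requires both "o" and "h", while `capexpr {o h}` would be satisfied by either.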
681 682 683 684 685 686 687 | 1. **w** -- _Wiki_ To be clear, only one of the document classes identified by each STRING needs to be searchable in order for that argument to be true. But all arguments must be true for this routine to return true. Hence, to see if ALL document classes are searchable: | | | | 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 | 1. **w** -- _Wiki_ To be clear, only one of the document classes identified by each STRING needs to be searchable in order for that argument to be true. But all arguments must be true for this routine to return true. Hence, to see if ALL document classes are searchable: if {[searchable c d t w]} {...} But to see if ANY document class is searchable: if {[searchable cdtw]} {...} This command is useful for enabling or disabling a "Search" entry on the menu bar. <a id="setParameter"></a>TH1 setParameter Command --------------------------------------------------- |
︙ | ︙ |
Changes to www/theory1.wiki.
1 | <title>Thoughts On The Design Of The Fossil DVCS</title> | < | 1 2 3 4 5 6 7 8 | <title>Thoughts On The Design Of The Fossil DVCS</title> Two questions (or criticisms) that arise frequently regarding Fossil can be summarized as follows: 1. Why is Fossil based on SQLite instead of a distributed NoSQL database? 2. Why is Fossil written in C instead of a modern high-level language? |
︙ | ︙ |
Changes to www/tickets.wiki.
︙ | ︙ | |||
45 46 47 48 49 50 51 | <h3>2.1 Ticket Table Schema</h3> The two ticket tables are called TICKET and TICKETCHNG. The default schema (as of this writing) for these two tables is shown below: | | | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | <h3>2.1 Ticket Table Schema</h3> The two ticket tables are called TICKET and TICKETCHNG. The default schema (as of this writing) for these two tables is shown below: <verbatim> CREATE TABLE ticket( -- Do not change any column that begins with tkt_ tkt_id INTEGER PRIMARY KEY, tkt_uuid TEXT UNIQUE, tkt_mtime DATE, tkt_ctime DATE, -- Add as many fields as required below this line |
︙ | ︙ | |||
76 77 78 79 80 81 82 | -- Add as many fields as required below this line login TEXT, username TEXT, mimetype TEXT, icomment TEXT ); CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime); | | | 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | -- Add as many fields as required below this line login TEXT, username TEXT, mimetype TEXT, icomment TEXT ); CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime); </verbatim> Generally speaking, there is one row in the TICKETCHNG table for each change to each ticket. In other words, there is one row in the TICKETCHNG table for each low-level ticket change artifact. The TICKET table, on the other hand, contains a summary of the current status of each ticket. |
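To illustrate how the two tables divide the work - one TICKETCHNG row per low-level change artifact, one TICKET row summarizing each ticket - here is a small sketch using an abbreviated version of the schema above. The sample data and the history query are hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ticket(           -- abbreviated: one row per ticket
  tkt_id INTEGER PRIMARY KEY, tkt_uuid TEXT UNIQUE,
  tkt_mtime DATE, status TEXT, title TEXT);
CREATE TABLE ticketchng(       -- abbreviated: one row per change artifact
  tkt_id INTEGER REFERENCES ticket,
  tkt_mtime DATE, login TEXT, icomment TEXT);
""")

db.execute("INSERT INTO ticket VALUES(1,'deadbeef','2024-01-02','Open','crash on start')")
db.execute("INSERT INTO ticketchng VALUES(1,'2024-01-01','alice','filed the ticket')")
db.execute("INSERT INTO ticketchng VALUES(1,'2024-01-02','bob','confirmed on trunk')")

# The shape of a ticket-history query: all changes for one ticket,
# oldest first, joined back to the summary row via tkt_id.
history = db.execute("""
  SELECT c.tkt_mtime, c.login, c.icomment
    FROM ticketchng AS c JOIN ticket AS t USING(tkt_id)
   WHERE t.tkt_uuid = 'deadbeef'
   ORDER BY c.tkt_mtime
""").fetchall()
```

The index on (tkt_id, tkt_mtime) in the real schema exists precisely to make this kind of ordered history scan cheap.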
︙ | ︙ |
Changes to www/unvers.wiki.
1 | <title>Unversioned Content</title> | < | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 | <title>Unversioned Content</title> "Unversioned content" or "unversioned files" are files stored in a Fossil repository without history, meaning it retains the newest version of each such file, and that alone. Though it omits history, Fossil does sync unversioned content between repositories. In the event of a conflict during a sync, it retains the most recent version of each unversioned file, discarding older versions. Unversioned files are useful for storing ephemeral content such as builds or frequently changing web pages. We store the [https://fossil-scm.org/home/uv/download.html|download] page of the self-hosting Fossil repository as unversioned content, for example. |
︙ | ︙ | |||
32 33 34 35 36 37 38 | the [/help?cmd=/uvlist|/uvlist] URL. ([/uvlist|example]). <h2>Syncing Unversioned Files</h2> Unversioned content does not sync between repositories by default. One must request it via commands such as: | | | | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | the [/help?cmd=/uvlist|/uvlist] URL. ([/uvlist|example]). <h2>Syncing Unversioned Files</h2> Unversioned content does not sync between repositories by default. One must request it via commands such as: <pre> fossil sync <b>-u</b> fossil clone <b>-u</b> <i>URL local-repo-name</i> fossil unversioned sync </pre> The [/help?cmd=sync|fossil sync] and [/help?cmd=clone|fossil clone] commands will synchronize unversioned content if and only if they're given the "-u" (or "--unversioned") command-line option. The [/help?cmd=unversioned|fossil unversioned sync] command synchronizes the unversioned content without synchronizing anything else. |
︙ | ︙ | |||
70 71 72 73 74 75 76 | <i>(This section outlines the current implementation of unversioned files. This is not an interface spec and hence subject to change.)</i> Unversioned content is stored in the repository in the "unversioned" table: | | | | | | | | | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | <i>(This section outlines the current implementation of unversioned files. This is not an interface spec and hence subject to change.)</i> Unversioned content is stored in the repository in the "unversioned" table: <pre> CREATE TABLE unversioned( uvid INTEGER PRIMARY KEY AUTOINCREMENT, -- unique ID for this file name TEXT UNIQUE, -- Name of the file rcvid INTEGER, -- From whence this file was received mtime DATETIME, -- Last change (seconds since 1970) hash TEXT, -- SHA1 or SHA3-256 hash of uncompressed content sz INTEGER, -- Size of uncompressed content encoding INT, -- 0: plaintext 1: zlib compressed content BLOB -- File content ); </pre> Fossil does not create the table ahead of need. If there are no unversioned files in the repository, the "unversioned" table will not exist. Consequently, one simple way to purge all unversioned content from a repository is to run: <pre> fossil sql "DROP TABLE unversioned; VACUUM;" </pre> Lacking history for unversioned files, Fossil does not attempt delta compression on them. Fossil servers exchange unversioned content whole; it does not attempt to "diff" your local version against the remote and send only the changes. We point this out because one use-case for unversioned content is to send large, frequently-changing files. Appreciate the consequences before making each change. There are two bandwidth-saving measures in "<tt>fossil uv sync</tt>". The first is the regular HTTP payload compression step, done on all syncs. 
The second is that Fossil sends hash exchanges to determine when it can avoid sending duplicate content over the wire unnecessarily. See the [./sync.wiki|synchronization protocol documentation] for further information. |
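The newest-version-wins conflict rule can be sketched against the table above. The `uv_store` helper is hypothetical (Fossil's real logic lives in C and also tracks rcvid bookkeeping); it keeps whichever copy has the larger mtime and stores content zlib-compressed only when that actually saves space:

```python
import hashlib
import sqlite3
import zlib

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE unversioned(
  uvid INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE,
  rcvid INTEGER, mtime DATETIME, hash TEXT, sz INTEGER,
  encoding INT, content BLOB)""")

def uv_store(db, name, data, mtime):
    # Newest-mtime-wins: an incoming copy older than what we hold is ignored.
    row = db.execute("SELECT mtime FROM unversioned WHERE name=?", (name,)).fetchone()
    if row is not None and row[0] >= mtime:
        return False                       # keep the existing, newer copy
    blob, enc = zlib.compress(data), 1
    if len(blob) >= len(data):             # incompressible: store as plaintext
        blob, enc = data, 0
    db.execute(
        "REPLACE INTO unversioned(name,rcvid,mtime,hash,sz,encoding,content)"
        " VALUES(?,0,?,?,?,?,?)",
        (name, mtime, hashlib.sha3_256(data).hexdigest(), len(data), enc, blob))
    return True
```

The `sz` column records the uncompressed size, so readers know how to interpret `content` together with the `encoding` flag.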
Changes to www/webui.wiki.
︙ | ︙ | |||
30 31 32 33 34 35 36 | As an example of how useful this web interface can be, the entire [./index.wiki | Fossil website], including the document you are now reading, is rendered using the Fossil web interface, with no enhancements, and little customization. | | | | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 | As an example of how useful this web interface can be, the entire [./index.wiki | Fossil website], including the document you are now reading, is rendered using the Fossil web interface, with no enhancements, and little customization. <div class="indent"> <b>Key point:</b> <i>The Fossil website is just a running instance of Fossil!</i> </div> Note also that because Fossil is a distributed system, you can run the web interface on your local machine while off network (for example, while on an airplane) including making changes to wiki pages and/or trouble tickets, then synchronize with your co-workers after you reconnect. When you clone a Fossil repository, you don't just get the project source code, you get the entire project management website. <h2>Very Simple Startup</h2> To start using the built-in Fossil web interface on an existing Fossil repository, simply type this: <pre>fossil ui existing-repository.fossil</pre> Substitute the name of your repository, of course. The "ui" command will start a web server running (it figures out an available TCP port to use on its own) and then automatically launches your web browser to point at that server. If you run the "ui" command from within an open check-out, you can omit the repository name: <pre>fossil ui</pre> The latter case is a very useful short-cut when you are working on a Fossil project and you want to quickly do some work with the web interface. Notice that Fossil automatically finds an unused TCP port to run the server on and automatically points your web browser to the correct URL.
So there is never any fumbling around trying to find an open port or to type arcane strings into your browser URL entry box. |
︙ | ︙ | |||
151 152 153 154 155 156 157 | available to a distributed team by simply copying the single repository file up to a web server that supports CGI or SCGI. To run Fossil as CGI, just put the <b>sample-project.fossil</b> file in a directory where CGI scripts have both read and write permission on the file and the directory that contains the file, then add a CGI script that looks something like this: | < | | < < | | < | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 | available to a distributed team by simply copying the single repository file up to a web server that supports CGI or SCGI. To run Fossil as CGI, just put the <b>sample-project.fossil</b> file in a directory where CGI scripts have both read and write permission on the file and the directory that contains the file, then add a CGI script that looks something like this: <verbatim>#!/usr/local/bin/fossil repository: /home/www/sample-project.fossil</verbatim> Adjust the script above so that the paths are correct for your system, of course, and also make sure the Fossil binary is installed on the server. But that is <u>all</u> you have to do. You now have everything you need to host a distributed software development project in less than five minutes using a two-line CGI script. Instructions for setting up an SCGI server are [./scgi.wiki | available separately]. You don't have a CGI- or SCGI-capable web server running on your server machine? Not a problem. The Fossil interface can also be launched via inetd or xinetd. An inetd configuration line sufficient to launch the Fossil web interface looks like this: <verbatim>80 stream tcp nowait.1000 root /usr/local/bin/fossil \ /usr/local/bin/fossil http /home/www/sample-project.fossil</verbatim> As always, you'll want to adjust the pathnames to whatever is appropriate for your system. The xinetd setup uses a different syntax but follows the same idea. |
Changes to www/whyusefossil.wiki.
|
| | | | | < | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 | <title>Why You Should Use Fossil</title> <h4>(Or if not Fossil, at least some kind of modern version control such as Git, Mercurial, or Subversion.)</h4> <h5>I. Benefits of Version Control</h5> <ol type='A'> <li><p><b>Immutable file and version identification</b> <ol type='i'> <li>Simplified and unambiguous communication between developers <li>Detect accidental or surreptitious changes <li>Locate the origin of discovered files </ol> |
︙ | ︙ | |||
35 36 37 38 39 40 41 | <li>Everyone always has the latest code <li>Failed disk-drives cause no loss of work <li>Avoid wasting time doing manual file copying <li>Avoid human errors during manual backups </ol> </ol> | | | | | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | <li>Everyone always has the latest code <li>Failed disk-drives cause no loss of work <li>Avoid wasting time doing manual file copying <li>Avoid human errors during manual backups </ol> </ol> <h5 id="definitions">II. Definitions</h5> <div class="indent">Moved to [./glossary.md | a separate document].</div> <h5>III. Basic Fossil commands</h5> <ul> <li><p><b>clone</b> → Make a copy of a repository. The original repository is usually (but not always) on a remote machine and the copy is on the local machine. The copy remembers the network location from which it was copied and (by default) tries to keep itself synchronized |
︙ | ︙ | |||
85 86 87 88 89 90 91 | <li><p><b>rm/mv</b> → Short for 'remove' and 'move', these commands are like "add" in that they specify pending changes to the structure of the check-out. As with "add", no changes are made to the repository until the next "commit". </ul> | | | 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | <li><p><b>rm/mv</b> → Short for 'remove' and 'move', these commands are like "add" in that they specify pending changes to the structure of the check-out. As with "add", no changes are made to the repository until the next "commit". </ul> <h5>IV. The history of a project is a Directed Acyclic Graph (DAG)</h5> <ul> <li><p>Fossil (and other distributed VCSes like Git and Mercurial, but not Subversion) represent the history of a project as a directed acyclic graph (DAG). <ul> <li><p>Each check-in is a node in the graph |
︙ | ︙ | |||
140 141 142 143 144 145 146 | humans, so best practice is to give each branch a unique name. <li><p>The name of a branch can be changed by adding special tags to the first check-in of a branch. The name assigned by this special tag automatically propagates to all direct children. </ul> </ul> | | | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 | humans, so best practice is to give each branch a unique name. <li><p>The name of a branch can be changed by adding special tags to the first check-in of a branch. The name assigned by this special tag automatically propagates to all direct children. </ul> </ul> <h5>V. Why version control is important (reprise)</h5> <ol> <li><p>Every check-in and every individual file has a unique name - its SHA1 or SHA3-256 hash. Team members can unambiguously identify any specific version of the overall project or any specific version of an individual file. |
︙ | ︙ |