Overview
Comment: | Merge trunk |
---|---|
SHA1: | a47d79e910a9c88ba76b31b121bbc38f |
User & Date: | andygoth 2016-11-05 15:53:36.945 |
Context
2016-11-05
19:25 | Merge trunk ... (check-in: e6787d1e user: andygoth tags: andygoth-changes)
15:53 | Merge trunk ... (check-in: a47d79e9 user: andygoth tags: andygoth-changes)
15:49 | Expand list of stopwords in permuted index ... (check-in: 95bb5a24 user: andygoth tags: trunk)
2016-11-02
19:26 | Minor tweaks to proposed help text for possible enhanced changes command ... (check-in: b18735fc user: andygoth tags: andygoth-changes)
Changes
Changes to Dockerfile.
1 2 3 | ### # Dockerfile for Fossil ### | | | 1 2 3 4 5 6 7 8 9 10 11 | ### # Dockerfile for Fossil ### FROM fedora:24 ### Now install some additional parts we will need for the build RUN dnf update -y && dnf install -y gcc make zlib-devel openssl-devel tar && dnf clean all && groupadd -r fossil -g 433 && useradd -u 431 -r -g fossil -d /opt/fossil -s /sbin/nologin -c "Fossil user" fossil ### If you want to build "trunk", change the next line accordingly. ENV FOSSIL_INSTALL_VERSION release |
︙ | ︙ |
Changes to Makefile.in.
︙ | ︙ | |||
35 36 37 38 39 40 41 42 43 44 45 46 47 48 | #### Tcl shell for use in running the fossil testsuite. If you do not # care about testing the end result, this can be blank. # TCLSH = tclsh LIB = @LDFLAGS@ @EXTRA_LDFLAGS@ @LIBS@ TCCFLAGS = @EXTRA_CFLAGS@ @CPPFLAGS@ @CFLAGS@ -DHAVE_AUTOCONFIG_H -D_HAVE_SQLITE_CONFIG_H INSTALLDIR = $(DESTDIR)@prefix@/bin USE_SYSTEM_SQLITE = @USE_SYSTEM_SQLITE@ USE_LINENOISE = @USE_LINENOISE@ USE_SEE = @USE_SEE@ FOSSIL_ENABLE_MINIZ = @FOSSIL_ENABLE_MINIZ@ | > | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | #### Tcl shell for use in running the fossil testsuite. If you do not # care about testing the end result, this can be blank. # TCLSH = tclsh LIB = @LDFLAGS@ @EXTRA_LDFLAGS@ @LIBS@ BCCFLAGS = @CPPFLAGS@ @CFLAGS@ TCCFLAGS = @EXTRA_CFLAGS@ @CPPFLAGS@ @CFLAGS@ -DHAVE_AUTOCONFIG_H -D_HAVE_SQLITE_CONFIG_H INSTALLDIR = $(DESTDIR)@prefix@/bin USE_SYSTEM_SQLITE = @USE_SYSTEM_SQLITE@ USE_LINENOISE = @USE_LINENOISE@ USE_SEE = @USE_SEE@ FOSSIL_ENABLE_MINIZ = @FOSSIL_ENABLE_MINIZ@ |
︙ | ︙ |
Changes to VERSION.
| | | 1 | 1.37 |
Changes to auto.def.
︙ | ︙ | |||
477 478 479 480 481 482 483 484 485 486 487 488 489 490 | } cc-check-function-in-lib dlopen dl cc-check-function-in-lib sin m # Check for the FuseFS library if {[opt-bool fusefs]} { if {[cc-check-function-in-lib fuse_mount fuse]} { define FOSSIL_HAVE_FUSEFS 1 define-append LIBS -lfuse msg-result "FuseFS support enabled" } } make-template Makefile.in | > | 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 | } cc-check-function-in-lib dlopen dl cc-check-function-in-lib sin m # Check for the FuseFS library if {[opt-bool fusefs]} { if {[cc-check-function-in-lib fuse_mount fuse]} { define-append EXTRA_CFLAGS -DFOSSIL_HAVE_FUSEFS define FOSSIL_HAVE_FUSEFS 1 define-append LIBS -lfuse msg-result "FuseFS support enabled" } } make-template Makefile.in |
︙ | ︙ |
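Note on the auto.def change above: appending -DFOSSIL_HAVE_FUSEFS to EXTRA_CFLAGS makes the FuseFS feature test visible to the C compiler as well as to the configure layer. A minimal sketch of how such a define is typically consumed in C follows; the function name is invented for illustration and is not part of the Fossil sources.

    /* Hypothetical example: compile FuseFS-specific code only when the
    ** configure step detected libfuse and added -DFOSSIL_HAVE_FUSEFS. */
    #ifdef FOSSIL_HAVE_FUSEFS
    static void example_mount_hook(void){
      /* code that may call into libfuse goes here */
    }
    #else
    static void example_mount_hook(void){
      /* built without FuseFS support: do nothing */
    }
    #endif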
Changes to skins/black_and_white/css.txt.
︙ | ︙ | |||
104 105 106 107 108 109 110 | text-align: center; border:1px solid #999; border-width:1px 0px; background-color: #eee; color: #333; } div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, | | | | 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 | text-align: center; border:1px solid #999; border-width:1px 0px; background-color: #eee; color: #333; } div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: #333; text-decoration: none; } div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #eee; background-color: #333; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/blitz/css.txt.
︙ | ︙ | |||
892 893 894 895 896 897 898 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } | | | | 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } .submenu a, .submenu label { color: #3b5c6b; padding: 5px 15px; text-decoration: none; border: 1px solid transparent; border-radius: 5px; } .submenu a:hover, .submenu label:hover { border: 1px solid #ccc; } /* Section * Cap/header to distinguish a section. Displayed within a content div. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ |
︙ | ︙ |
Changes to skins/blitz_no_logo/css.txt.
︙ | ︙ | |||
892 893 894 895 896 897 898 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } | | | | 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } .submenu a, .submenu label { color: #3b5c6b; padding: 5px 15px; text-decoration: none; border: 1px solid transparent; border-radius: 5px; } .submenu a:hover, .submenu label:hover { border: 1px solid #ccc; } /* Section * Cap/header to distinguish a section. Displayed within a content div. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ |
︙ | ︙ |
Changes to skins/default/css.txt.
︙ | ︙ | |||
102 103 104 105 106 107 108 | .submenu { font-size: .7em; margin-top: 10px; padding: 10px; border-bottom: 1px solid #ccc; } | | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | .submenu { font-size: .7em; margin-top: 10px; padding: 10px; border-bottom: 1px solid #ccc; } .submenu a, .submenu label { padding: 10px 11px; text-decoration:none; color: #777; } .submenu a:hover, .submenu label:hover { padding: 6px 10px; border: 1px solid #ccc; border-radius: 5px; color: #000; } .content { |
︙ | ︙ |
Changes to skins/eagle/css.txt.
︙ | ︙ | |||
71 72 73 74 75 76 77 | font-size: 0.9em; font-weight: bold; text-align: center; background-color: #485D7B; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | font-size: 0.9em; font-weight: bold; text-align: center; background-color: #485D7B; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { text-decoration: underline; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { padding: 0ex 1ex 0ex 2ex; |
︙ | ︙ |
Changes to skins/enhanced1/css.txt.
︙ | ︙ | |||
69 70 71 72 73 74 75 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #558195; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/khaki/css.txt.
︙ | ︙ | |||
67 68 69 70 71 72 73 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #c0af58; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #c0af58; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover div.submenu label:hover { color: #a09048; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/original/css.txt.
︙ | ︙ | |||
69 70 71 72 73 74 75 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #558195; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/plain_gray/css.txt.
︙ | ︙ | |||
69 70 71 72 73 74 75 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #404040; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/rounded1/css.txt.
︙ | ︙ | |||
82 83 84 85 86 87 88 | box-shadow: 0px 3px 4px #999; } div.mainmenu a, div.mainmenu a:visited { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } | | | | 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | box-shadow: 0px 3px 4px #999; } div.mainmenu a, div.mainmenu a:visited { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.submenu a, div.submenu a:visited, a.button, div.submenu label div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { padding: 2px 8px; color: #000; font-family: Arial; text-decoration: none; margin:auto; border-radius: 5px; background-color: #e0e0e0; text-shadow: 0px -1px 0px #eee; border: 1px solid #000; } div.mainmenu a:hover { color: #000; background-color: white; } div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { background-color: #c0c0c0; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { background-color: #fff; |
︙ | ︙ |
Changes to skins/xekri/css.txt.
︙ | ︙ | |||
156 157 158 159 160 161 162 | div.submenu { border-top: 1px solid #0a0; border-radius: 0; display: block; } | | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | div.submenu { border-top: 1px solid #0a0; border-radius: 0; display: block; } div.mainmenu a, div.submenu a, div.submenu label { color: #000; padding: 0 0.75rem; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.submenu label:hover { color: #fff; text-shadow: 0px 0px 6px #0f0; } div.submenu * { margin: 0 0.5rem; vertical-align: middle; |
︙ | ︙ |
Changes to src/allrepo.c.
︙ | ︙ | |||
171 172 173 174 175 176 177 | char *zQFilename; Blob extra; int useCheckouts = 0; int quiet = 0; int dryRunFlag = 0; int showFile = find_option("showfile",0,0)!=0; int stopOnError = find_option("dontstop",0,0)==0; | < | 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | char *zQFilename; Blob extra; int useCheckouts = 0; int quiet = 0; int dryRunFlag = 0; int showFile = find_option("showfile",0,0)!=0; int stopOnError = find_option("dontstop",0,0)==0; int nToDel = 0; int showLabel = 0; dryRunFlag = find_option("dry-run","n",0)!=0; if( !dryRunFlag ){ dryRunFlag = find_option("test",0,0)!=0; /* deprecated */ } |
︙ | ︙ | |||
376 377 378 379 380 381 382 383 384 385 386 387 388 389 | " WHERE substr(name, 1, 5)=='repo:'" " ORDER BY 1" ); } db_multi_exec("CREATE TEMP TABLE toDel(x TEXT)"); db_prepare(&q, "SELECT name, tag FROM repolist ORDER BY 1"); while( db_step(&q)==SQLITE_ROW ){ const char *zFilename = db_column_text(&q, 0); #if !USE_SEE if( sqlite3_strglob("*.efossil", zFilename)==0 ) continue; #endif if( file_access(zFilename, F_OK) || !file_is_canonical(zFilename) || (useCheckouts && file_isdir(zFilename)!=1) | > | 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 | " WHERE substr(name, 1, 5)=='repo:'" " ORDER BY 1" ); } db_multi_exec("CREATE TEMP TABLE toDel(x TEXT)"); db_prepare(&q, "SELECT name, tag FROM repolist ORDER BY 1"); while( db_step(&q)==SQLITE_ROW ){ int rc; const char *zFilename = db_column_text(&q, 0); #if !USE_SEE if( sqlite3_strglob("*.efossil", zFilename)==0 ) continue; #endif if( file_access(zFilename, F_OK) || !file_is_canonical(zFilename) || (useCheckouts && file_isdir(zFilename)!=1) |
︙ | ︙ |
Changes to src/attach.c.
︙ | ︙ | |||
450 451 452 453 454 455 456 | if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); #if 0 /* Shunning here needs to get both the attachment control artifact and ** the object that is attached. */ if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid='%q'", zUuid) ){ | | | | | | | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 | if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); #if 0 /* Shunning here needs to get both the attachment control artifact and ** the object that is attached. */ if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid='%q'", zUuid) ){ style_submenu_element("Unshun", "%s/shun?uuid=%s&sub=1", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } #endif pAttach = manifest_get(rid, CFTYPE_ATTACHMENT, 0); if( pAttach==0 ) fossil_redirect_home(); zTarget = pAttach->zAttachTarget; zSrc = pAttach->zAttachSrc; ridSrc = db_int(0,"SELECT rid FROM blob WHERE uuid='%q'", zSrc); zName = pAttach->zAttachName; zDesc = pAttach->zComment; zMime = mimetype_from_name(zName); fShowContent = zMime ? strncmp(zMime,"text/", 5)==0 : 0; if( validate16(zTarget, strlen(zTarget)) && db_exists("SELECT 1 FROM ticket WHERE tkt_uuid='%q'", zTarget) ){ zTktUuid = zTarget; if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } if( g.perm.WrTkt ){ style_submenu_element("Delete", "%R/ainfo/%s?del", zUuid); } }else if( db_exists("SELECT 1 FROM tag WHERE tagname='wiki-%q'",zTarget) ){ zWikiName = zTarget; if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } if( g.perm.WrWiki ){ style_submenu_element("Delete", "%R/ainfo/%s?del", zUuid); } }else if( db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'",zTarget) ){ zTNUuid = zTarget; if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } if( g.perm.Write && g.perm.WrWiki ){ style_submenu_element("Delete", "%R/ainfo/%s?del", zUuid); } } zDate = db_text(0, "SELECT datetime(%.12f)", pAttach->rDate); if( P("confirm") && ((zTktUuid && g.perm.WrTkt) || (zWikiName && g.perm.WrWiki) || |
︙ | ︙ | |||
549 550 551 552 553 554 555 | return; } if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Attachment Details"); | | | < | 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 | return; } if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Attachment Details"); style_submenu_element("Raw", "%R/artifact/%s", zUuid); if(fShowContent){ style_submenu_element("Line Numbers", "%R/ainfo/%s%s", zUuid, ((zLn&&*zLn) ? "" : "?ln=0")); } @ <div class="section">Overview</div> @ <p><table class="label-value"> @ <tr><th>Artifact ID:</th> @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> |
︙ | ︙ | |||
627 628 629 630 631 632 633 | @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); @ <i>(file is %d(sz) bytes of image data)</i><br /> @ <img src="%R/raw/%s(zSrc)?m=%s(zMime)"></img> | | | 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 | @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); @ <i>(file is %d(sz) bytes of image data)</i><br /> @ <img src="%R/raw/%s(zSrc)?m=%s(zMime)"></img> style_submenu_element("Image", "%R/raw/%s?m=%s", zSrc, zMime); }else{ int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); @ <i>(file is %d(sz) bytes of binary data)</i> } @ </blockquote> manifest_destroy(pAttach); blob_reset(&attach); |
︙ | ︙ |
Changes to src/bisect.c.
︙ | ︙ | |||
73 74 75 76 77 78 79 | /* ** Return the value of a boolean bisect option. */ int bisect_option(const char *zName){ unsigned int i; int r = -1; | | | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 | /* ** Return the value of a boolean bisect option. */ int bisect_option(const char *zName){ unsigned int i; int r = -1; for(i=0; i<count(aBisectOption); i++){ if( fossil_strcmp(zName, aBisectOption[i].zName)==0 ){ char *zLabel = mprintf("bisect-%s", zName); char *z = db_lget(zLabel, (char*)aBisectOption[i].zDefault); if( is_truth(z) ) r = 1; if( is_false(z) ) r = 0; if( r<0 ) r = is_truth(aBisectOption[i].zDefault); free(zLabel); |
︙ | ︙ | |||
404 405 406 407 408 409 410 | }else if( strncmp(zCmd, "log", n)==0 ){ bisect_chart(0); }else if( strncmp(zCmd, "chart", n)==0 ){ bisect_chart(1); }else if( strncmp(zCmd, "options", n)==0 ){ if( g.argc==3 ){ unsigned int i; | | | | | 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 | }else if( strncmp(zCmd, "log", n)==0 ){ bisect_chart(0); }else if( strncmp(zCmd, "chart", n)==0 ){ bisect_chart(1); }else if( strncmp(zCmd, "options", n)==0 ){ if( g.argc==3 ){ unsigned int i; for(i=0; i<count(aBisectOption); i++){ char *z = mprintf("bisect-%s", aBisectOption[i].zName); fossil_print(" %-15s %-6s ", aBisectOption[i].zName, db_lget(z, (char*)aBisectOption[i].zDefault)); fossil_free(z); comment_print(aBisectOption[i].zDesc, 0, 27, -1, g.comFmtFlags); } }else if( g.argc==4 || g.argc==5 ){ unsigned int i; n = strlen(g.argv[3]); for(i=0; i<count(aBisectOption); i++){ if( strncmp(g.argv[3], aBisectOption[i].zName, n)==0 ){ char *z = mprintf("bisect-%s", aBisectOption[i].zName); if( g.argc==5 ){ db_lset(z, g.argv[4]); } fossil_print("%s\n", db_lget(z, (char*)aBisectOption[i].zDefault)); fossil_free(z); break; } } if( i>=count(aBisectOption) ){ fossil_fatal("no such bisect option: %s", g.argv[3]); } }else{ usage("options ?NAME? ?VALUE?"); } }else if( strncmp(zCmd, "reset", n)==0 ){ db_multi_exec( |
︙ | ︙ |
Changes to src/branch.c.
︙ | ︙ | |||
152 153 154 155 156 157 158 | brid = content_put_ex(&branch, 0, 0, 0, isPrivate); if( brid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ | | | 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | brid = content_put_ex(&branch, 0, 0, 0, isPrivate); if( brid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ fossil_fatal("%s", g.zErrMsg); } assert( blob_is_reset(&branch) ); content_deltify(rootid, brid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", brid); fossil_print("New branch: %s\n", zUuid); if( g.argc==3 ){ fossil_print( |
︙ | ︙ | |||
321 322 323 324 325 326 327 | @ (SELECT tagxref.value @ FROM plink CROSS JOIN tagxref @ WHERE plink.pid=event.objid @ AND tagxref.rid=plink.cid @ AND tagxref.tagid=(SELECT tagid FROM tag WHERE tagname='branch') @ AND tagtype>0), @ count(*), | | > > > > > > > > > > > > > > | > | 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 | @ (SELECT tagxref.value @ FROM plink CROSS JOIN tagxref @ WHERE plink.pid=event.objid @ AND tagxref.rid=plink.cid @ AND tagxref.tagid=(SELECT tagid FROM tag WHERE tagname='branch') @ AND tagtype>0), @ count(*), @ (SELECT uuid FROM blob WHERE rid=tagxref.rid), @ event.bgcolor @ FROM tagxref, tag, event @ WHERE tagxref.tagid=tag.tagid @ AND tagxref.tagtype>0 @ AND tag.tagname='branch' @ AND event.objid=tagxref.rid @ GROUP BY 1 @ ORDER BY 2 DESC; ; /* ** This is the new-style branch-list page that shows the branch names ** together with their ages (time of last check-in) and whether or not ** they are closed or merged to another branch. ** ** Control jumps to this routine from brlist_page() (the /brlist handler) ** if there are no query parameters. */ static void new_brlist_page(void){ Stmt q; double rNow; int show_colors = PB("colors"); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_checkbox("colors", "Use Branch Colors", 0); login_anonymous_available(); db_prepare(&q, brlistQuery/*works-like:""*/); rNow = db_double(0.0, "SELECT julianday('now')"); @ <div class="brlist"><table id="branchlisttable"> @ <thead><tr> @ <th>Branch Name</th> @ <th>Age</th> @ <th>Check-ins</th> @ <th>Status</th> @ <th>Resolution</th> @ </tr></thead><tbody> while( db_step(&q)==SQLITE_ROW ){ const char *zBranch = db_column_text(&q, 0); double rMtime = db_column_double(&q, 1); int isClosed = db_column_int(&q, 2); const char *zMergeTo = db_column_text(&q, 3); int nCkin = db_column_int(&q, 4); const char *zLastCkin = db_column_text(&q, 5); const char *zBgClr = db_column_text(&q, 6); char *zAge = human_readable_age(rNow - rMtime); sqlite3_int64 iMtime = (sqlite3_int64)(rMtime*86400.0); if( zMergeTo && zMergeTo[0]==0 ) zMergeTo = 0; if( zBgClr == 0 ){ if( zBranch==0 || strcmp(zBranch,"trunk")==0 ){ zBgClr = 0; }else{ zBgClr = hash_color(zBranch); } } if( zBgClr && zBgClr[0] && show_colors ){ @ <tr style="background-color:%s(zBgClr)"> }else{ @ <tr> } @ <td>%z(href("%R/timeline?n=100&r=%T",zBranch))%h(zBranch)</a></td> @ <td data-sortkey="%016llx(-iMtime)">%s(zAge)</td> @ <td>%d(nCkin)</td> fossil_free(zAge); @ <td>%s(isClosed?"closed":"")</td> if( zMergeTo ){ @ <td>merged into |
︙ | ︙ | |||
425 426 427 428 429 430 431 | showAll = 1; } if( showAll ) brFlags = BRL_BOTH; if( showClosed ) brFlags = BRL_CLOSED_ONLY; style_header("%s", showClosed ? "Closed Branches" : showAll ? "All Branches" : "Open Branches"); | | | | | | | | | | | 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 | showAll = 1; } if( showAll ) brFlags = BRL_BOTH; if( showClosed ) brFlags = BRL_CLOSED_ONLY; style_header("%s", showClosed ? "Closed Branches" : showAll ? "All Branches" : "Open Branches"); style_submenu_element("Timeline", "brtimeline"); if( showClosed ){ style_submenu_element("All", "brlist?all"); style_submenu_element("Open", "brlist?open"); }else if( showAll ){ style_submenu_element("Closed", "brlist?closed"); style_submenu_element("Open", "brlist"); }else{ style_submenu_element("All", "brlist?all"); style_submenu_element("Closed", "brlist?closed"); } if( !colorTest ){ style_submenu_element("Color-Test", "brlist?colortest"); }else{ style_submenu_element("All", "brlist?all"); } login_anonymous_available(); #if 0 style_sidebox_begin("Nomenclature:", "33%"); @ <ol> @ <li> An <div class="sideboxDescribed">%z(href("brlist")) @ open branch</a></div> is a branch that has one or more |
︙ | ︙ | |||
527 528 529 530 531 532 533 | void brtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); | | | 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 | void brtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); style_submenu_element("List", "brlist"); login_anonymous_available(); @ <h2>The initial check-in for each branch:</h2> db_prepare(&q, "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype>0 AND tagid=%d AND srcid!=0)" " ORDER BY event.mtime DESC", timeline_query_for_www(), TAG_BRANCH ); www_print_timeline(&q, 0, 0, 0, 0, brtimeline_extra); db_finalize(&q); style_footer(); } |
Changes to src/browse.c.
︙ | ︙ | |||
169 170 171 172 173 174 175 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "in directory ", -1); hyperlinked_path(zD, &dirname, zCI, "dir", ""); zPrefix = mprintf("%s/", zD); | | | | < | < | < | | 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "in directory ", -1); hyperlinked_path(zD, &dirname, zCI, "dir", ""); zPrefix = mprintf("%s/", zD); style_submenu_element("Top-Level", "%s", url_render(&sURI, "name", 0, 0, 0)); }else{ blob_append(&dirname, "in the top-level directory", -1); zPrefix = ""; } if( linkTrunk ){ style_submenu_element("Trunk", "%s", url_render(&sURI, "ci", "trunk", 0, 0)); } if( linkTip ){ style_submenu_element("Tip", "%s", url_render(&sURI, "ci", "tip", 0, 0)); } if( zCI ){ @ <h2>Files of check-in [%z(href("vinfo?name=%!S",zUuid))%S(zUuid)</a>] @ %s(blob_str(&dirname))</h2> zSubdirLink = mprintf("%R/dir?ci=%!S&name=%T", zUuid, zPrefix); if( nD==0 ){ style_submenu_element("File Ages", "%R/fileage?name=%!S", zUuid); } }else{ @ <h2>The union of all files from all check-ins @ %s(blob_str(&dirname))</h2> zSubdirLink = mprintf("%R/dir?name=%T", zPrefix); } style_submenu_element("All", "%s", url_render(&sURI, "ci", 0, 0, 0)); style_submenu_element("Tree-View", "%s", url_render(&sURI, "type", "tree", 0, 0)); /* Compute the temporary table "localfiles" containing the names ** of all files and subdirectories in the zD[] directory. ** ** Subdirectory names begin with "/". This causes them to sort ** first and it also gives us an easy way to distinguish files |
︙ | ︙ | |||
614 615 616 617 618 619 620 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "within directory ", -1); hyperlinked_path(zD, &dirname, zCI, "tree", zREx); if( zRE ) blob_appendf(&dirname, " matching \"%s\"", zRE); | | | < | < | | < | | 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "within directory ", -1); hyperlinked_path(zD, &dirname, zCI, "tree", zREx); if( zRE ) blob_appendf(&dirname, " matching \"%s\"", zRE); style_submenu_element("Top-Level", "%s", url_render(&sURI, "name", 0, 0, 0)); }else{ if( zRE ){ blob_appendf(&dirname, "matching \"%s\"", zRE); } } style_submenu_binary("mtime","Sort By Time","Sort By Filename", 0); if( zCI ){ style_submenu_element("All", "%s", url_render(&sURI, "ci", 0, 0, 0)); if( nD==0 && !showDirOnly ){ style_submenu_element("File Ages", "%R/fileage?name=%s", zUuid); } } if( linkTrunk ){ style_submenu_element("Trunk", "%s", url_render(&sURI, "ci", "trunk", 0, 0)); } if( linkTip ){ style_submenu_element("Tip", "%s", url_render(&sURI, "ci", "tip", 0, 0)); } style_submenu_element("Flat-View", "%s", url_render(&sURI, "type", "flat", 0, 0)); /* Compute the file hierarchy. */ if( zCI ){ Stmt q; compute_fileage(rid, 0); |
︙ | ︙ | |||
693 694 695 696 697 698 699 | } if( showDirOnly ){ for(nFile=0, p=sTree.pFirst; p; p=p->pNext){ if( p->pChild!=0 && p->nFullName>nD ) nFile++; } zObjType = "Folders"; | < < < < > > | 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | } if( showDirOnly ){ for(nFile=0, p=sTree.pFirst; p; p=p->pNext){ if( p->pChild!=0 && p->nFullName>nD ) nFile++; } zObjType = "Folders"; }else{ zObjType = "Files"; } style_submenu_checkbox("nofiles", "Folders Only", 0); if( zCI ){ @ <h2>%s(zObjType) from if( sqlite3_strnicmp(zCI, zUuid, (int)strlen(zCI))!=0 ){ @ "%h(zCI)" } @ [%z(href("vinfo?name=%!S",zUuid))%S(zUuid)</a>] %s(blob_str(&dirname)) |
︙ | ︙ | |||
1030 1031 1032 1033 1034 1035 1036 | if( rid==0 ){ fossil_fatal("not a valid check-in: %s", zName); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); baseTime = db_double(0.0,"SELECT mtime FROM event WHERE objid=%d", rid); zNow = db_text("", "SELECT datetime(mtime,toLocal()) FROM event" " WHERE objid=%d", rid); | | < < | 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 | if( rid==0 ){ fossil_fatal("not a valid check-in: %s", zName); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); baseTime = db_double(0.0,"SELECT mtime FROM event WHERE objid=%d", rid); zNow = db_text("", "SELECT datetime(mtime,toLocal()) FROM event" " WHERE objid=%d", rid); style_submenu_element("Tree-View", "%R/tree?ci=%T&mtime=1&type=tree", zName); style_header("File Ages"); zGlob = P("glob"); compute_fileage(rid,zGlob); db_multi_exec("CREATE INDEX fileage_ix1 ON fileage(mid,pathname);"); @ <h2>Files in @ %z(href("%R/info/%!S",zUuid))[%S(zUuid)]</a> |
︙ | ︙ |
Changes to src/builtin.c.
︙ | ︙ | |||
31 32 33 34 35 36 37 | /* ** Return a pointer to built-in content */ const unsigned char *builtin_file(const char *zFilename, int *piSize){ int lwr, upr, i, c; lwr = 0; | | | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 | /* ** Return a pointer to built-in content */ const unsigned char *builtin_file(const char *zFilename, int *piSize){ int lwr, upr, i, c; lwr = 0; upr = count(aBuiltinFiles) - 1; while( upr>=lwr ){ i = (upr+lwr)/2; c = strcmp(aBuiltinFiles[i].zName,zFilename); if( c<0 ){ lwr = i+1; }else if( c>0 ){ upr = i-1; |
︙ | ︙ | |||
58 59 60 61 62 63 64 | /* ** COMMAND: test-builtin-list ** ** List the names and sizes of all built-in resources. */ void test_builtin_list(void){ int i; | | | 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | /* ** COMMAND: test-builtin-list ** ** List the names and sizes of all built-in resources. */ void test_builtin_list(void){ int i; for(i=0; i<count(aBuiltinFiles); i++){ fossil_print("%-30s %6d\n", aBuiltinFiles[i].zName,aBuiltinFiles[i].nByte); } } /* ** COMMAND: test-builtin-get ** |
︙ | ︙ |
Changes to src/cgi.c.
︙ | ︙ | |||
19 20 21 22 23 24 25 26 27 28 29 30 31 32 | ** services to CGI programs. There are procedures for parsing and ** dispensing QUERY_STRING parameters and cookies, the "mprintf()" ** formatting function and its cousins, and routines to encode and ** decode strings in HTML or HTTP. */ #include "config.h" #ifdef _WIN32 # include <winsock2.h> # include <ws2tcpip.h> #else # include <sys/socket.h> # include <netinet/in.h> # include <arpa/inet.h> # include <sys/times.h> | > > > | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | ** services to CGI programs. There are procedures for parsing and ** dispensing QUERY_STRING parameters and cookies, the "mprintf()" ** formatting function and its cousins, and routines to encode and ** decode strings in HTML or HTTP. */ #include "config.h" #ifdef _WIN32 # if !defined(_WIN32_WINNT) # define _WIN32_WINNT 0x0501 # endif # include <winsock2.h> # include <ws2tcpip.h> #else # include <sys/socket.h> # include <netinet/in.h> # include <arpa/inet.h> # include <sys/times.h> |
︙ | ︙ | |||
714 715 716 717 718 719 720 | cgi_set_parameter_nocopy(mprintf("%s:bytes", zName), mprintf("%d",nContent), 1); } } zName = 0; showBytes = 0; }else{ | | | 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 | cgi_set_parameter_nocopy(mprintf("%s:bytes", zName), mprintf("%d",nContent), 1); } } zName = 0; showBytes = 0; }else{ nArg = tokenize_line(zLine, count(azArg), azArg); for(i=0; i<nArg; i++){ int c = fossil_tolower(azArg[i][0]); int n = strlen(azArg[i]); if( c=='c' && sqlite3_strnicmp(azArg[i],"content-disposition:",n)==0 ){ i++; }else if( c=='n' && sqlite3_strnicmp(azArg[i],"name=",n)==0 ){ zName = azArg[++i]; |
︙ | ︙ |
Changes to src/checkin.c.
︙ | ︙ | |||
2280 2281 2282 2283 2284 2285 2286 | nvid = content_put(&manifest); if( nvid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nvid); if( manifest_crosslink(nvid, &manifest, dryRunFlag ? MC_NONE : MC_PERMIT_HOOKS)==0 ){ | | | 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 | nvid = content_put(&manifest); if( nvid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nvid); if( manifest_crosslink(nvid, &manifest, dryRunFlag ? MC_NONE : MC_PERMIT_HOOKS)==0 ){ fossil_fatal("%s", g.zErrMsg); } assert( blob_is_reset(&manifest) ); content_deltify(vid, nvid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid); db_prepare(&q, "SELECT uuid,merge FROM vmerge JOIN blob ON merge=rid" " WHERE id=-4"); |
︙ | ︙ |
Changes to src/db.c.
︙ | ︙ | |||
25 26 27 28 29 30 31 | ** (2) The "repository" database ** ** (3) A local checkout database named "_FOSSIL_" or ".fslckout" ** and located at the root of the local copy of the source tree. ** */ #include "config.h" | | > > > > | 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ** (2) The "repository" database ** ** (3) A local checkout database named "_FOSSIL_" or ".fslckout" ** and located at the root of the local copy of the source tree. ** */ #include "config.h" #if defined(_WIN32) # if USE_SEE # include <windows.h> # endif #else # include <pwd.h> #endif #include <sqlite3.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include <time.h> |
︙ | ︙ | |||
868 869 870 871 872 873 874 875 876 | db_now_function, 0, 0); sqlite3_create_function(db, "toLocal", 0, SQLITE_UTF8, 0, db_tolocal_function, 0, 0); sqlite3_create_function(db, "fromLocal", 0, SQLITE_UTF8, 0, db_fromlocal_function, 0, 0); } /* ** If the database file zDbFile has a name that suggests that it is | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > | < | | | | | 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 | db_now_function, 0, 0); sqlite3_create_function(db, "toLocal", 0, SQLITE_UTF8, 0, db_tolocal_function, 0, 0); sqlite3_create_function(db, "fromLocal", 0, SQLITE_UTF8, 0, db_fromlocal_function, 0, 0); } #if USE_SEE /* ** This is a pointer to the saved database encryption key string. */ static char *zSavedKey = 0; /* ** This is the size of the saved database encryption key, in bytes. */ size_t savedKeySize = 0; /* ** This function returns the saved database encryption key -OR- zero if ** no database encryption key is saved. */ char *db_get_saved_encryption_key(){ return zSavedKey; } /* ** This function returns the size of the saved database encryption key ** -OR- zero if no database encryption key is saved. */ size_t db_get_saved_encryption_key_size(){ return savedKeySize; } /* ** This function arranges for the database encryption key to be securely ** saved in non-pagable memory (on platforms where this is possible). */ static void db_save_encryption_key( Blob *pKey ){ void *p = NULL; size_t n = 0; size_t pageSize = 0; size_t blobSize = 0; blobSize = blob_size(pKey); if( blobSize==0 ) return; fossil_get_page_size(&pageSize); assert( pageSize>0 ); if( blobSize>pageSize ){ fossil_fatal("key blob too large: %u versus %u", blobSize, pageSize); } p = fossil_secure_alloc_page(&n); assert( p!=NULL ); assert( n==pageSize ); assert( n>=blobSize ); memcpy(p, blob_str(pKey), blobSize); zSavedKey = p; savedKeySize = n; } /* ** This function arranges for the saved database encryption key to be ** securely zeroed, unlocked (if necessary), and freed. */ void db_unsave_encryption_key(){ fossil_secure_free_page(zSavedKey, savedKeySize); zSavedKey = NULL; savedKeySize = 0; } /* ** This function sets the saved database encryption key to the specified ** string value, allocating or freeing the underlying memory if needed. 
*/ void db_set_saved_encryption_key( Blob *pKey ){ if( zSavedKey!=NULL ){ size_t blobSize = blob_size(pKey); if( blobSize==0 ){ db_unsave_encryption_key(); }else{ if( blobSize>savedKeySize ){ fossil_fatal("key blob too large: %u versus %u", blobSize, savedKeySize); } fossil_secure_zero(zSavedKey, savedKeySize); memcpy(zSavedKey, blob_str(pKey), blobSize); } }else{ db_save_encryption_key(pKey); } } #if defined(_WIN32) /* ** This function sets the saved database encryption key to one that gets ** read from the specified Fossil parent process. This is only necessary ** (or functional) on Windows. */ void db_read_saved_encryption_key_from_process( DWORD processId, /* Identifier for Fossil parent process. */ LPVOID pAddress, /* Pointer to saved key buffer in the parent process. */ SIZE_T nSize /* Size of saved key buffer in the parent process. */ ){ void *p = NULL; size_t n = 0; size_t pageSize = 0; HANDLE hProcess = NULL; fossil_get_page_size(&pageSize); assert( pageSize>0 ); if( nSize>pageSize ){ fossil_fatal("key too large: %u versus %u", nSize, pageSize); } p = fossil_secure_alloc_page(&n); assert( p!=NULL ); assert( n==pageSize ); assert( n>=nSize ); hProcess = OpenProcess(PROCESS_VM_READ, FALSE, processId); if( hProcess!=NULL ){ SIZE_T nRead = 0; if( ReadProcessMemory(hProcess, pAddress, p, nSize, &nRead) ){ CloseHandle(hProcess); if( nRead==nSize ){ db_unsave_encryption_key(); zSavedKey = p; savedKeySize = n; }else{ fossil_fatal("bad size read, %u out of %u bytes at %p from pid %lu", nRead, nSize, pAddress, processId); } }else{ CloseHandle(hProcess); fossil_fatal("failed read, %u bytes at %p from pid %lu: %lu", nSize, pAddress, processId, GetLastError()); } }else{ fossil_fatal("failed to open pid %lu: %lu", processId, GetLastError()); } } #endif /* defined(_WIN32) */ #endif /* USE_SEE */ /* ** If the database file zDbFile has a name that suggests that it is ** encrypted, then prompt for the database encryption key and return it ** in the blob *pKey. Or, if the encryption key has previously been ** requested, just return a copy of the previous result. The blob in ** *pKey must be initialized. */ static void db_maybe_obtain_encryption_key( const char *zDbFile, /* Name of the database file */ Blob *pKey /* Put the encryption key here */ ){ #if USE_SEE if( sqlite3_strglob("*.efossil", zDbFile)==0 ){ char *zKey = db_get_saved_encryption_key(); if( zKey ){ blob_set(pKey, zKey); }else{ char *zPrompt = mprintf("\rencryption key for '%s': ", zDbFile); prompt_for_password(zPrompt, pKey, 0); fossil_free(zPrompt); db_set_saved_encryption_key(pKey); } } #endif } /* |
︙ | ︙ | |||
913 914 915 916 917 918 919 | zDbName, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, g.zVfsName ); if( rc!=SQLITE_OK ){ db_err("[%s]: %s", zDbName, sqlite3_errmsg(db)); } | > | > | 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 | zDbName, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, g.zVfsName ); if( rc!=SQLITE_OK ){ db_err("[%s]: %s", zDbName, sqlite3_errmsg(db)); } blob_init(&key, 0, 0); db_maybe_obtain_encryption_key(zDbName, &key); if( blob_size(&key)>0 ){ char *zCmd = sqlite3_mprintf("PRAGMA key(%Q)", blob_str(&key)); sqlite3_exec(db, zCmd, 0, 0, 0); fossil_secure_zero(zCmd, strlen(zCmd)); sqlite3_free(zCmd); } blob_reset(&key); sqlite3_busy_timeout(db, 5000); sqlite3_wal_autocheckpoint(db, 1); /* Set to checkpoint frequently */ sqlite3_create_function(db, "user", 0, SQLITE_UTF8, 0, db_sql_user, 0, 0); sqlite3_create_function(db, "cgi", 1, SQLITE_UTF8, 0, db_sql_cgi, 0, 0); |
︙ | ︙ | |||
953 954 955 956 957 958 959 960 | } /* ** zDbName is the name of a database file. Attach zDbName using ** the name zLabel. */ void db_attach(const char *zDbName, const char *zLabel){ Blob key; | > > | | | > > > | 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 | } /* ** zDbName is the name of a database file. Attach zDbName using ** the name zLabel. */ void db_attach(const char *zDbName, const char *zLabel){ char *zCmd; Blob key; blob_init(&key, 0, 0); db_maybe_obtain_encryption_key(zDbName, &key); zCmd = sqlite3_mprintf("ATTACH DATABASE %Q AS %Q KEY %Q", zDbName, zLabel, blob_str(&key)); db_multi_exec(zCmd /*works-like:""*/); fossil_secure_zero(zCmd, strlen(zCmd)); sqlite3_free(zCmd); blob_reset(&key); } /* ** Change the schema name of the "main" database to zLabel. ** zLabel must be a static string that is unchanged for the life of ** the database connection. |
︙ | ︙ | |||
1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 | if( db_local_table_exists_but_lacks_column("undo", "isLink") ){ db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } return 1; } /* ** Locate the root directory of the local repository tree. The root ** directory is found by searching for a file named "_FOSSIL_" or ".fslckout" ** that contains a valid repository database. | > | 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 | if( db_local_table_exists_but_lacks_column("undo", "isLink") ){ db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } fossil_free(zVFileDef); return 1; } /* ** Locate the root directory of the local repository tree. The root ** directory is found by searching for a file named "_FOSSIL_" or ".fslckout" ** that contains a valid repository database. |
︙ | ︙ | |||
2646 2647 2648 2649 2650 2651 2652 | ** If allowPrefix is true, then the Setting returned is the first one for ** which zName is a prefix of the Setting name. */ const Setting *db_find_setting(const char *zName, int allowPrefix){ int lwr, mid, upr, c; int n = (int)strlen(zName) + !allowPrefix; lwr = 0; | | | 2797 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 | ** If allowPrefix is true, then the Setting returned is the first one for ** which zName is a prefix of the Setting name. */ const Setting *db_find_setting(const char *zName, int allowPrefix){ int lwr, mid, upr, c; int n = (int)strlen(zName) + !allowPrefix; lwr = 0; upr = count(aSetting)-2; while( upr>=lwr ){ mid = (upr+lwr)/2; c = fossil_strncmp(zName, aSetting[mid].name, n); if( c<0 ){ upr = mid - 1; }else if( c>0 ){ lwr = mid + 1; |
︙ | ︙ |
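Aside on the db.c changes above: db_maybe_obtain_encryption_key() decides whether to prompt for a key by matching the database name against the GLOB pattern "*.efossil" with sqlite3_strglob(), which returns 0 on a match. A standalone sketch of that check is shown below with a made-up filename; it is only an illustration, not Fossil code.

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      const char *zDbFile = "project.efossil";   /* hypothetical name */
      /* sqlite3_strglob() returns 0 when zDbFile matches the pattern */
      if( sqlite3_strglob("*.efossil", zDbFile)==0 ){
        printf("%s appears to be encrypted; a key would be requested\n", zDbFile);
      }else{
        printf("%s is treated as an ordinary repository\n", zDbFile);
      }
      return 0;
    }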
Changes to src/delta.c.
︙ | ︙ | |||
375 376 377 378 379 380 381 382 | } /* Compute the hash table used to locate matching sections in the ** source file. */ nHash = lenSrc/NHASH; collide = fossil_malloc( nHash*2*sizeof(int) ); landmark = &collide[nHash]; | > < < | 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 | } /* Compute the hash table used to locate matching sections in the ** source file. */ nHash = lenSrc/NHASH; collide = fossil_malloc( nHash*2*sizeof(int) ); memset(collide, -1, nHash*2*sizeof(int)); landmark = &collide[nHash]; for(i=0; i<lenSrc-NHASH; i+=NHASH){ int hv = hash_once(&zSrc[i]) % nHash; collide[i/NHASH] = landmark[hv]; landmark[hv] = i/NHASH; } /* Begin scanning the target file and generating copy commands and |
︙ | ︙ |
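The delta.c change above initializes the combined collide/landmark hash arrays with a single memset(..., -1, ...). That idiom relies on memset writing the byte 0xFF into every position, which reads back as -1 for each two's-complement int. A small standalone demonstration (not from the Fossil tree):

    #include <stdio.h>
    #include <string.h>

    int main(void){
      int a[8];
      memset(a, -1, sizeof(a));    /* every byte set to 0xFF */
      for(int i=0; i<8; i++){
        printf("%d ", a[i]);       /* prints -1 for each element */
      }
      printf("\n");
      return 0;
    }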
Changes to src/deltacmd.c.
︙ | ︙ | |||
52 53 54 55 56 57 58 | */ void delta_create_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN TARGET DELTA"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ | | | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | */ void delta_create_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN TARGET DELTA"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&target, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } blob_delta_create(&orig, &target, &delta); if( blob_write_to_file(&delta, g.argv[4])<blob_size(&delta) ){ fossil_fatal("cannot write %s", g.argv[4]); } blob_reset(&orig); blob_reset(&target); blob_reset(&delta); } /* |
︙ | ︙ | |||
83 84 85 86 87 88 89 | int nCopy = 0; int nInsert = 0; int sz1, sz2, sz3; if( g.argc!=4 ){ usage("ORIGIN TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ | | | | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | int nCopy = 0; int nInsert = 0; int sz1, sz2, sz3; if( g.argc!=4 ){ usage("ORIGIN TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&target, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } blob_delta_create(&orig, &target, &delta); delta_analyze(blob_buffer(&delta), blob_size(&delta), &nCopy, &nInsert); sz1 = blob_size(&orig); sz2 = blob_size(&target); sz3 = blob_size(&delta); blob_reset(&orig); |
︙ | ︙ | |||
151 152 153 154 155 156 157 | */ void delta_apply_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN DELTA TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ | | | | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | */ void delta_apply_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN DELTA TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&delta, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } blob_delta_apply(&orig, &delta, &target); if( blob_write_to_file(&target, g.argv[4])<blob_size(&target) ){ fossil_fatal("cannot write %s", g.argv[4]); } blob_reset(&orig); blob_reset(&target); blob_reset(&delta); } |
︙ | ︙ |
Changes to src/descendants.c.
︙ | ︙ | |||
453 454 455 456 457 458 459 | int showAll = P("all")!=0; int showClosed = P("closed")!=0; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( !showAll ){ | | | | | 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 | int showAll = P("all")!=0; int showClosed = P("closed")!=0; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( !showAll ){ style_submenu_element("All", "leaves?all"); } if( !showClosed ){ style_submenu_element("Closed", "leaves?closed"); } if( showClosed || showAll ){ style_submenu_element("Open", "leaves"); } style_header("Leaves"); login_anonymous_available(); #if 0 style_sidebox_begin("Nomenclature:", "33%"); @ <ol> @ <li> A <div class="sideboxDescribed">leaf</div> |
︙ | ︙ |
Changes to src/diff.c.
︙ | ︙ | |||
113 114 115 116 117 118 119 120 121 122 123 124 125 126 | int nEditAlloc; /* Space allocated for aEdit[] */ DLine *aFrom; /* File on left side of the diff */ int nFrom; /* Number of lines in aFrom[] */ DLine *aTo; /* File on right side of the diff */ int nTo; /* Number of lines in aTo[] */ int (*same_fn)(const DLine*,const DLine*); /* comparison function */ }; /* ** Return an array of DLine objects containing a pointer to the ** start of each line and a hash of that line. The lower ** bits of the hash store the length of each line. ** ** Trailing whitespace is removed from each line. 2010-08-20: Not any | > > > > > > > > > > > > > > > > > > > > > > > > | 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | int nEditAlloc; /* Space allocated for aEdit[] */ DLine *aFrom; /* File on left side of the diff */ int nFrom; /* Number of lines in aFrom[] */ DLine *aTo; /* File on right side of the diff */ int nTo; /* Number of lines in aTo[] */ int (*same_fn)(const DLine*,const DLine*); /* comparison function */ }; /* ** Count the number of lines in the input string. Include the last line ** in the count even if it lacks the \n terminator. If an empty string ** is specified, the number of lines is zero. For the purposes of this ** function, a string is considered empty if it contains no characters ** -OR- it contains only NUL characters. */ static int count_lines( const char *z, int n, int *pnLine ){ int nLine; const char *zNL, *z2; for(nLine=0, z2=z; (zNL = strchr(z2,'\n'))!=0; z2=zNL+1, nLine++){} if( z2[0]!='\0' ){ nLine++; do{ z2++; }while( z2[0]!='\0' ); } if( n!=(int)(z2-z) ) return 0; if( pnLine ) *pnLine = nLine; return 1; } /* ** Return an array of DLine objects containing a pointer to the ** start of each line and a hash of that line. The lower ** bits of the hash store the length of each line. ** ** Trailing whitespace is removed from each line. 2010-08-20: Not any |
︙ | ︙ | |||
138 139 140 141 142 143 144 | int n, int *pnLine, u64 diffFlags ){ int nLine, i, k, nn, s, x; unsigned int h, h2; DLine *a; | | < < < < < | < < | | > | > | | | | | 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 | int n, int *pnLine, u64 diffFlags ){ int nLine, i, k, nn, s, x; unsigned int h, h2; DLine *a; const char *zNL; if( count_lines(z, n, &nLine)==0 ){ return 0; } assert( nLine>0 || z[0]=='\0' ); a = fossil_malloc( sizeof(a[0])*nLine ); memset(a, 0, sizeof(a[0])*nLine); if( nLine==0 ){ *pnLine = 0; return a; } i = 0; do{ zNL = strchr(z,'\n'); if( zNL==0 ) zNL = z+n; nn = (int)(zNL - z); if( nn>LENGTH_MASK ){ fossil_free(a); return 0; } a[i].z = z; k = nn; if( diffFlags & DIFF_STRIP_EOLCR ){ if( k>0 && z[k-1]=='\r' ){ k--; } } a[i].n = k; s = 0; if( diffFlags & DIFF_IGNORE_EOLWS ){ while( k>0 && fossil_isspace(z[k-1]) ){ k--; } } if( (diffFlags & DIFF_IGNORE_ALLWS)==DIFF_IGNORE_ALLWS ){ int numws = 0; while( s<k && fossil_isspace(z[s]) ){ s++; } for(h=0, x=s; x<k; x++){ char c = z[x]; if( fossil_isspace(c) ){ ++numws; }else{ h += c; h *= 0x9e3779b1; } } k -= numws; }else{ for(h=0, x=s; x<k; x++){ h += z[x]; h *= 0x9e3779b1; } } a[i].indent = s; a[i].h = h = (h<<LENGTH_MASK_SZ) | (k-s); h2 = h % nLine; a[i].iNext = a[h2].iHash; a[h2].iHash = i+1; z += nn+1; n -= nn+1; i++; }while( zNL[0]!='\0' && zNL[1]!='\0' ); assert( i==nLine ); /* Return results */ *pnLine = nLine; return a; } |
︙ | ︙ | |||
1047 1048 1049 1050 1051 1052 1053 | if( nLeft*nRight>100000 ){ memset(aM, 4, mnLen); if( nLeft>mnLen ) memset(aM+mnLen, 1, nLeft-mnLen); if( nRight>mnLen ) memset(aM+mnLen, 2, nRight-mnLen); return aM; } | | | 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 | if( nLeft*nRight>100000 ){ memset(aM, 4, mnLen); if( nLeft>mnLen ) memset(aM+mnLen, 1, nLeft-mnLen); if( nRight>mnLen ) memset(aM+mnLen, 2, nRight-mnLen); return aM; } if( nRight < count(aBuf)-1 ){ pToFree = 0; a = aBuf; }else{ a = pToFree = fossil_malloc( sizeof(a[0])*(nRight+1) ); } /* Compute the best alignment */ |
︙ | ︙ | |||
2324 2325 2326 2327 2328 2329 2330 | url_add_parameter(&url, "filename", zFilename); if( iLimit!=20 ){ url_add_parameter(&url, "limit", sqlite3_mprintf("%d", iLimit)); } url_add_parameter(&url, "log", showLog ? "1" : "0"); if( ignoreWs ){ url_add_parameter(&url, "w", ""); | | | | | | < | < | | | | | | 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 | url_add_parameter(&url, "filename", zFilename); if( iLimit!=20 ){ url_add_parameter(&url, "limit", sqlite3_mprintf("%d", iLimit)); } url_add_parameter(&url, "log", showLog ? "1" : "0"); if( ignoreWs ){ url_add_parameter(&url, "w", ""); style_submenu_element("Show Whitespace Changes", "%s", url_render(&url, "w", 0, 0, 0)); }else{ style_submenu_element("Ignore Whitespace", "%s", url_render(&url, "w", "", 0, 0)); } if( showLog ){ style_submenu_element("Hide Log", "%s", url_render(&url, "log", "0", 0, 0)); }else{ style_submenu_element("Show Log", "%s", url_render(&url, "log", "1", 0, 0)); } if( ann.bLimit ){ char *z1, *z2; style_submenu_element("All Ancestors", "%s", url_render(&url, "limit", "-1", 0, 0)); z1 = sqlite3_mprintf("%d Ancestors", iLimit+20); z2 = sqlite3_mprintf("%d", iLimit+20); style_submenu_element(z1, "%s", url_render(&url, "limit", z2, 0, 0)); } if( iLimit>20 ){ style_submenu_element("20 Ancestors", "%s", url_render(&url, "limit", "20", 0, 0)); } if( skin_detail_boolean("white-foreground") ){ clr1 = 0xa04040; clr2 = 0x4059a0; }else{ clr1 = 0xffb5b5; /* Recent changes: red (hot) */ clr2 = 0xb5e0ff; /* Older changes: blue (cold) */ |
︙ | ︙ |
Changes to src/diffcmd.c.
︙ | ︙ | |||
410 411 412 413 414 415 416 417 418 419 420 421 422 423 | " WHERE vid=%d" " AND (deleted OR chnged OR rid==0)" " ORDER BY pathname /*scan*/", vid ); } db_prepare(&q, "%s", blob_sql_text(&sql)); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); int srcid = db_column_int(&q, 4); int isLink = db_column_int(&q, 5); | > | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | " WHERE vid=%d" " AND (deleted OR chnged OR rid==0)" " ORDER BY pathname /*scan*/", vid ); } db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); int srcid = db_column_int(&q, 4); int isLink = db_column_int(&q, 5); |
︙ | ︙ | |||
795 796 797 798 799 800 801 802 803 804 805 806 807 808 | ** This option overrides the "binary-glob" setting. ** ** Options: ** --binary PATTERN Treat files that match the glob PATTERN as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** --checkin VERSION Show diff of all changes in VERSION ** --context|-c N Use N lines of context ** --diff-binary BOOL Include binary files when using external commands ** --exec-abs-paths Force absolute path names with external commands. ** --exec-rel-paths Force relative path names with external commands. ** --from|-r VERSION Select VERSION as source for the diff ** --internal|-i Use internal diff logic ** --side-by-side|-y Side-by-side diff | > | 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 | ** This option overrides the "binary-glob" setting. ** ** Options: ** --binary PATTERN Treat files that match the glob PATTERN as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** --checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program - overrides "diff-command" ** --context|-c N Use N lines of context ** --diff-binary BOOL Include binary files when using external commands ** --exec-abs-paths Force absolute path names with external commands. ** --exec-rel-paths Force relative path names with external commands. ** --from|-r VERSION Select VERSION as source for the diff ** --internal|-i Use internal diff logic ** --side-by-side|-y Side-by-side diff |
︙ | ︙ | |||
866 867 868 869 870 871 872 | db_must_be_within_tree(); }else if( zFrom==0 ){ fossil_fatal("must use --from if --to is present"); }else{ db_find_and_open_repository(0, 0); } if( !isInternDiff ){ | > | | 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 | db_must_be_within_tree(); }else if( zFrom==0 ){ fossil_fatal("must use --from if --to is present"); }else{ db_find_and_open_repository(0, 0); } if( !isInternDiff ){ zDiffCmd = find_option("command", 0, 1); if( zDiffCmd==0 ) zDiffCmd = diff_command_external(isGDiff); } zBinGlob = diff_get_binary_glob(); fIncludeBinary = diff_include_binary_files(); determine_exec_relative_option(1); verify_all_options(); if( g.argc>=3 ){ int i; |
︙ | ︙ |
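The diffcmd.c hunks above add a --command PROG option to the diff command, taking precedence over the stored "diff-command" setting (find_option() is consulted first, diff_command_external() only as a fallback). A minimal standalone sketch of that precedence rule, not part of the check-in; pick_diff_command() and the FOSSIL_DIFF_COMMAND variable are illustrative stand-ins for Fossil's option and setting lookups:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: an explicit command-line value wins, otherwise fall
** back to the configured default -- the same precedence the new
** "--command PROG" flag takes over the "diff-command" setting. */
static const char *pick_diff_command(const char *zFlagValue){
  if( zFlagValue!=0 ) return zFlagValue;      /* --command PROG was given */
  return getenv("FOSSIL_DIFF_COMMAND");       /* stand-in for the setting */
}

int main(int argc, char **argv){
  const char *zCmd = pick_diff_command(argc>1 ? argv[1] : 0);
  printf("external diff: %s\n", zCmd ? zCmd : "(internal diff)");
  return 0;
}

In practice an invocation along the lines of "fossil diff --command PROG --from VERSION" would bypass the configured external diff for a single run, while runs without the flag behave as before.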
Changes to src/dispatch.c.
︙ | ︙ | |||
67 68 69 70 71 72 73 | ** contain a few webpage entries at the beginning. ** ** The page_index.h file is generated by the mkindex program which scans all ** source code files looking for header comments on the functions that ** implement command and webpages. */ #include "page_index.h" | | | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 | ** contain a few webpage entries at the beginning. ** ** The page_index.h file is generated by the mkindex program which scans all ** source code files looking for header comments on the functions that ** implement command and webpages. */ #include "page_index.h" #define MX_COMMAND count(aCommand) /* ** Given a command or webpage name in zName, find the corresponding CmdOrPage ** object and return a pointer to that object in *ppCmd. ** ** The eType field is CMDFLAG_COMMAND to lookup commands or CMDFLAG_WEBPAGE ** to look up webpages or CMDFLAG_ANY to look for either. If the CMDFLAG_PREFIX |
︙ | ︙ | |||
231 232 233 234 235 236 237 | ** Show the built-in help text for CMD. CMD can be a command-line interface ** command or a page name from the web interface. */ void help_page(void){ const char *zCmd = P("cmd"); if( zCmd==0 ) zCmd = P("name"); | < | > > | | 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 | ** Show the built-in help text for CMD. CMD can be a command-line interface ** command or a page name from the web interface. */ void help_page(void){ const char *zCmd = P("cmd"); if( zCmd==0 ) zCmd = P("name"); if( zCmd && *zCmd ){ int rc; const CmdOrPage *pCmd = 0; style_header("Help: %s", zCmd); style_submenu_element("Command-List", "%s/help", g.zTop); if( *zCmd=='/' ){ /* Some of the webpages require query parameters in order to work. ** @ <h1>The "<a href='%R%s(zCmd)'>%s(zCmd)</a>" page:</h1> */ @ <h1>The "%s(zCmd)" page:</h1> }else{ @ <h1>The "%s(zCmd)" command:</h1> } |
︙ | ︙ | |||
260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 | @ <blockquote> help_to_html(pCmd->zHelp, cgi_output_blob()); @ </blockquote> } } }else{ int i, j, n; @ <h1>Available commands:</h1> @ <table border="0"><tr> for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( '/'==*z || strncmp(z,"test",4)==0 ) continue; j++; } | > > | > > | | 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | @ <blockquote> help_to_html(pCmd->zHelp, cgi_output_blob()); @ </blockquote> } } }else{ int i, j, n; style_header("Help"); @ <h1>Available commands:</h1> @ <table border="0"><tr> for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( '/'==*z || strncmp(z,"test",4)==0 ) continue; j++; } n = (j+5)/6; for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; const char *zBoldOn = aCommand[i].eCmdFlags&CMDFLAG_1ST_TIER?"<b>" :""; const char *zBoldOff = aCommand[i].eCmdFlags&CMDFLAG_1ST_TIER?"</b>":""; if( '/'==*z || strncmp(z,"test",4)==0 ) continue; if( j==0 ){ @ <td valign="top"><ul> } @ <li><a href="%R/help?cmd=%s(z)">%s(zBoldOn)%s(z)%s(zBoldOff)</a></li> j++; if( j>=n ){ @ </ul></td> j = 0; } } if( j>0 ){ |
︙ | ︙ | |||
358 359 360 361 362 363 364 | /* ** WEBPAGE: test-all-help ** ** Show all help text on a single page. Useful for proof-reading. */ void test_all_help_page(void){ int i; | | | 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 | /* ** WEBPAGE: test-all-help ** ** Show all help text on a single page. Useful for proof-reading. */ void test_all_help_page(void){ int i; style_header("All Help Text"); for(i=0; i<MX_COMMAND; i++){ if( memcmp(aCommand[i].zName, "test", 4)==0 ) continue; @ <h2>%s(aCommand[i].zName):</h2> @ <blockquote> help_to_html(aCommand[i].zHelp, cgi_output_blob()); @ </blockquote> } |
︙ | ︙ | |||
400 401 402 403 404 405 406 | ** ** List all web pages. */ void cmd_test_webpage_list(void){ int i, nCmd; const char *aCmd[MX_COMMAND]; for(i=nCmd=0; i<MX_COMMAND; i++){ | | | 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 | ** ** List all web pages. */ void cmd_test_webpage_list(void){ int i, nCmd; const char *aCmd[MX_COMMAND]; for(i=nCmd=0; i<MX_COMMAND; i++){ if(CMDFLAG_WEBPAGE & aCommand[i].eCmdFlags){ aCmd[nCmd++] = aCommand[i].zName; } } assert(nCmd && "page list is empty?"); multi_column_list(aCmd, nCmd); } |
︙ | ︙ |
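A recurring pattern in this merge, first visible in the MX_COMMAND hunk above and repeated for aMime, azDirs, azRes, aParent, renOpts, and aColor below, is replacing hard-coded array sizes with count(). A minimal sketch of the idiom; the macro definition shown here is the conventional element-count form and stands in for whatever Fossil's common headers actually provide:

#include <stdio.h>

/* Assumed definition of the element-count macro used throughout the hunks. */
#define count(X) ((int)(sizeof(X)/sizeof(X[0])))

static const char *aCommand[] = { "add", "commit", "diff", "update" };

int main(void){
  int i;
  /* The loop bound tracks the table automatically when entries are added,
  ** which is the point of replacing literal sizes with count(). */
  for(i=0; i<count(aCommand); i++){
    printf("%d: %s\n", i, aCommand[i]);
  }
  return 0;
}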
Changes to src/doc.c.
︙ | ︙ | |||
52 53 54 55 56 57 58 | }; if( !looks_like_binary(pBlob) ) { return 0; /* Plain text */ } x = (const unsigned char*)blob_buffer(pBlob); n = blob_size(pBlob); | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | }; if( !looks_like_binary(pBlob) ) { return 0; /* Plain text */ } x = (const unsigned char*)blob_buffer(pBlob); n = blob_size(pBlob); for(i=0; i<count(aMime); i++){ if( n>=aMime[i].size && memcmp(x, aMime[i].zPrefix, aMime[i].size)==0 ){ return aMime[i].zMimetype; } } return "unknown/unknown"; } |
︙ | ︙ | |||
291 292 293 294 295 296 297 | /* ** Verify that all entries in the aMime[] table are in sorted order. ** Abort with a fatal error if any is out-of-order. */ static void mimetype_verify(void){ int i; | | | 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 | /* ** Verify that all entries in the aMime[] table are in sorted order. ** Abort with a fatal error if any is out-of-order. */ static void mimetype_verify(void){ int i; for(i=1; i<count(aMime); i++){ if( fossil_strcmp(aMime[i-1].zSuffix,aMime[i].zSuffix)>=0 ){ fossil_fatal("mimetypes out of sequence: %s before %s", aMime[i-1].zSuffix, aMime[i].zSuffix); } } } |
︙ | ︙ | |||
329 330 331 332 333 334 335 | if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len<sizeof(zSuffix)-1 ){ sqlite3_snprintf(sizeof(zSuffix), zSuffix, "%s", z); for(i=0; zSuffix[i]; i++) zSuffix[i] = fossil_tolower(zSuffix[i]); first = 0; | | | 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len<sizeof(zSuffix)-1 ){ sqlite3_snprintf(sizeof(zSuffix), zSuffix, "%s", z); for(i=0; zSuffix[i]; i++) zSuffix[i] = fossil_tolower(zSuffix[i]); first = 0; last = count(aMime) - 1; while( first<=last ){ int c; i = (first+last)/2; c = fossil_strcmp(zSuffix, aMime[i].zSuffix); if( c==0 ) return aMime[i].zMimetype; if( c<0 ){ last = i-1; |
︙ | ︙ | |||
382 383 384 385 386 387 388 | @ suffixes and the following table to guess at the appropriate mimetype @ for each document.</p> @ <table id='mimeTable' border=1 cellpadding=0 class='mimetypetable'> @ <thead> @ <tr><th>Suffix<th>Mimetype @ </thead> @ <tbody> | | | 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 | @ suffixes and the following table to guess at the appropriate mimetype @ for each document.</p> @ <table id='mimeTable' border=1 cellpadding=0 class='mimetypetable'> @ <thead> @ <tr><th>Suffix<th>Mimetype @ </thead> @ <tbody> for(i=0; i<count(aMime); i++){ @ <tr><td>%h(aMime[i].zSuffix)<td>%h(aMime[i].zMimetype)</tr> } @ </tbody></table> output_table_sorting_javascript("mimeTable","tt",1); style_footer(); } |
︙ | ︙ | |||
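The aMime hunks rely on the suffix table staying sorted, which is exactly what mimetype_verify() enforces, so that mimetype_from_name() can binary-search by file suffix. A standalone sketch of that lookup with a toy table; the table contents and helper name here are illustrative, not Fossil's:

#include <stdio.h>
#include <string.h>

#define count(X) ((int)(sizeof(X)/sizeof(X[0])))

struct mime { const char *zSuffix; const char *zMimetype; };

/* Toy table; it must be sorted by suffix for the search below to work,
** which is why the real code verifies the ordering at startup. */
static const struct mime aMime[] = {
  { "css",  "text/css"            },
  { "html", "text/html"           },
  { "md",   "text/x-markdown"     },
  { "wiki", "text/x-fossil-wiki"  },
};

static const char *mimetype_from_suffix(const char *zSuffix){
  int first = 0, last = count(aMime)-1;
  while( first<=last ){
    int i = (first+last)/2;
    int c = strcmp(zSuffix, aMime[i].zSuffix);
    if( c==0 ) return aMime[i].zMimetype;
    if( c<0 ) last = i-1; else first = i+1;
  }
  return "unknown/unknown";
}

int main(void){
  printf("%s\n", mimetype_from_suffix("md"));    /* text/x-markdown */
  printf("%s\n", mimetype_from_suffix("exe"));   /* unknown/unknown */
  return 0;
}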
591 592 593 594 595 596 597 | }; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } blob_init(&title, 0, 0); zDfltTitle = isUV ? "" : "Documentation"; db_begin_transaction(); | | | | | | 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 | }; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } blob_init(&title, 0, 0); zDfltTitle = isUV ? "" : "Documentation"; db_begin_transaction(); while( rid==0 && (++nMiss)<=count(azSuffix) ){ zName = P("name"); if( isUV ){ if( zName==0 ) zName = "index.wiki"; i = 0; }else{ if( zName==0 || zName[0]==0 ) zName = "tip/index.wiki"; for(i=0; zName[i] && zName[i]!='/'; i++){} zCheckin = mprintf("%.*s", i, zName); if( fossil_strcmp(zCheckin,"ckout")==0 && g.localOpen==0 ){ zCheckin = "tip"; } } if( nMiss==count(azSuffix) ){ zName = "404.md"; }else if( zName[i]==0 ){ assert( nMiss>=0 && nMiss<count(azSuffix) ); zName = azSuffix[nMiss]; }else if( !isUV ){ zName += i; } while( zName[0]=='/' ){ zName++; } if( isUV ){ g.zPath = mprintf("%s/%s", g.zPath, zName); }else{ g.zPath = mprintf("%s/%s/%s", g.zPath, zCheckin, zName); } if( nMiss==0 ) zOrigName = zName; if( !file_is_simple_pathname(zName, 1) ){ if( sqlite3_strglob("*/", zName)==0 ){ assert( nMiss>=0 && nMiss<count(azSuffix) ); zName = mprintf("%s%s", zName, azSuffix[nMiss]); if( !file_is_simple_pathname(zName, 1) ){ goto doc_not_found; } }else{ goto doc_not_found; } |
︙ | ︙ | |||
684 685 686 687 688 689 690 | style_footer(); }else if( fossil_strcmp(zMime, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(&filebody, &title, &tail); if( blob_size(&title)>0 ){ style_header("%s", blob_str(&title)); }else{ | | | 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 | style_footer(); }else if( fossil_strcmp(zMime, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(&filebody, &title, &tail); if( blob_size(&title)>0 ){ style_header("%s", blob_str(&title)); }else{ style_header("%s", nMiss>=count(azSuffix)? "Not Found" : zDfltTitle); } convert_href_and_output(&tail); style_footer(); }else if( fossil_strcmp(zMime, "text/plain")==0 ){ style_header("%s", zDfltTitle); @ <blockquote><pre> |
︙ | ︙ | |||
717 718 719 720 721 722 723 | style_footer(); } #endif }else{ cgi_set_content_type(zMime); cgi_set_content(&filebody); } | | | 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 | style_footer(); } #endif }else{ cgi_set_content_type(zMime); cgi_set_content(&filebody); } if( nMiss>=count(azSuffix) ) cgi_set_status(404, "Not Found"); db_end_transaction(0); return; /* Jump here when unable to locate the document */ doc_not_found: db_end_transaction(0); if( isUV && P("name")==0 ){ |
︙ | ︙ |
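The /doc hunks above walk a list of fallback names (the azSuffix entries, then 404.md as a last resort) and, once every candidate has missed, still render a reply but tag it with HTTP status 404. A simplified standalone sketch of that retry loop; doc_exists() and the file names are hypothetical stand-ins for the repository lookup:

#include <stdio.h>
#include <string.h>

#define count(X) ((int)(sizeof(X)/sizeof(X[0])))

/* Hypothetical stand-in for "does this document exist in the check-in?" */
static int doc_exists(const char *zName){
  return strcmp(zName, "index.md")==0;     /* pretend only index.md exists */
}

int main(void){
  static const char *azSuffix[] = { "index.html", "index.wiki", "index.md" };
  const char *zName = 0;
  int nMiss = -1, found = 0;
  /* Try each fallback in turn; the last resort is the 404 page itself. */
  while( !found && (++nMiss)<=count(azSuffix) ){
    zName = nMiss==count(azSuffix) ? "404.md" : azSuffix[nMiss];
    found = doc_exists(zName);
  }
  /* Mirror of the hunk above: once every suffix has missed, whatever gets
  ** rendered (404.md, if present) is served with a 404 status code. */
  if( nMiss>=count(azSuffix) ){
    printf("HTTP 404 (last candidate: %s)\n", zName);
  }else{
    printf("HTTP 200 (serving %s)\n", zName);
  }
  return 0;
}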
Changes to src/event.c.
︙ | ︙ | |||
150 151 152 153 154 155 156 | } }else{ blob_appendf(&title, "Tech-note %S", zId); tail = fullbody; } style_header("%s", blob_str(&title)); if( g.perm.WrWiki && g.perm.Write && nextRid==0 ){ | | | | < | | | | | | 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 | } }else{ blob_appendf(&title, "Tech-note %S", zId); tail = fullbody; } style_header("%s", blob_str(&title)); if( g.perm.WrWiki && g.perm.Write && nextRid==0 ){ style_submenu_element("Edit", "%R/technoteedit?name=%!S", zId); if( g.perm.Attach ){ style_submenu_element("Attach", "%R/attachadd?technote=%!S&from=%R/technote/%!S", zId, zId); } } zETime = db_text(0, "SELECT datetime(%.17g)", pTNote->rEventDate); style_submenu_element("Context", "%R/timeline?c=%.20s", zId); if( g.perm.Hyperlink ){ if( verboseFlag ){ style_submenu_element("Plain", "%R/technote?name=%!S&aid=%s&mimetype=text/plain", zId, zUuid); if( nextRid ){ char *zNext; zNext = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nextRid); style_submenu_element("Next", "%R/technote?name=%!S&aid=%s&v", zId, zNext); free(zNext); } if( prevRid ){ char *zPrev; zPrev = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", prevRid); style_submenu_element("Prev", "%R/technote?name=%!S&aid=%s&v", zId, zPrev); free(zPrev); } }else{ style_submenu_element("Detail", "%R/technote?name=%!S&aid=%s&v", zId, zUuid); } } if( verboseFlag && g.perm.Hyperlink ){ int i; const char *zClr = 0; |
︙ | ︙ |
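Most of the churn in event.c here, and in the finfo.c and info.c hunks further down, is mechanical fallout from a signature change to style_submenu_element(): judging from the before/after columns, the old second argument (a title string that in practice just repeated the label) appears to have been dropped, leaving the label, a printf-style URL format, and its arguments. A hypothetical, runnable stand-in that mimics the new call shape; submenu() is illustrative and uses only standard printf conversions, not Fossil's %R/%!S extensions:

#include <stdarg.h>
#include <stdio.h>

/* Illustrative reduction of the new-style helper: label plus a printf-style
** link format, with no separate title argument. */
static void submenu(const char *zLabel, const char *zLinkFormat, ...){
  char zUrl[256];
  va_list ap;
  va_start(ap, zLinkFormat);
  vsnprintf(zUrl, sizeof(zUrl), zLinkFormat, ap);
  va_end(ap);
  printf("submenu: %-8s -> %s\n", zLabel, zUrl);
}

int main(void){
  submenu("Edit", "/technoteedit?name=%s", "abc123");   /* two-part form */
  submenu("Context", "/timeline?c=%.20s", "abc123");
  return 0;
}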
Changes to src/export.c.
︙ | ︙ | |||
130 131 132 133 134 135 136 137 138 139 140 | ); } /* ** create_mark() ** Create a new (mark,rid,uuid) entry for the given rid in the 'xmark' table, ** and return that information as a struct mark_t in *mark. ** This function returns -1 in the case where 'rid' does not exist, otherwise ** it returns 0. ** mark->name is dynamically allocated and is owned by the caller upon return. */ | > > > > | | | > > > > | | | | | | > | | > | > > > > > > > | | | > > > | | < < | > | | > > > > > > > > > > > > > > > > > > > > > > > > > | | | < < < < < < < | < < | 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | ); } /* ** create_mark() ** Create a new (mark,rid,uuid) entry for the given rid in the 'xmark' table, ** and return that information as a struct mark_t in *mark. ** *unused_mark is a value representing a mark that is free for use--that is, ** it does not appear in the marks file, and has not been used during this ** export run. Specifically, it is the supremum of the set of used marks ** plus one. ** This function returns -1 in the case where 'rid' does not exist, otherwise ** it returns 0. ** mark->name is dynamically allocated and is owned by the caller upon return. */ int create_mark(int rid, struct mark_t *mark, unsigned int *unused_mark){ char sid[13]; char *zUuid = rid_to_uuid(rid); if( !zUuid ){ fossil_trace("Undefined rid=%d\n", rid); return -1; } mark->rid = rid; sqlite3_snprintf(sizeof(sid), sid, ":%d", *unused_mark); *unused_mark += 1; mark->name = fossil_strdup(sid); sqlite3_snprintf(sizeof(mark->uuid), mark->uuid, "%s", zUuid); free(zUuid); insert_commit_xref(mark->rid, mark->name, mark->uuid); return 0; } /* ** mark_name_from_rid() ** Find the mark associated with the given rid. Mark names always start ** with ':', and are pulled from the 'xmark' temporary table. ** If the given rid doesn't have a mark associated with it yet, one is ** created with a value of *unused_mark. ** *unused_mark functions exactly as in create_mark(). ** This function returns NULL if the rid does not have an associated UUID, ** (i.e. is not valid). Otherwise, it returns the name of the mark, which is ** dynamically allocated and is owned by the caller of this function. */ char * mark_name_from_rid(int rid, unsigned int *unused_mark){ char *zMark = db_text(0, "SELECT tname FROM xmark WHERE trid=%d", rid); if( zMark==NULL ){ struct mark_t mark; if( create_mark(rid, &mark, unused_mark)==0 ){ zMark = mark.name; }else{ return NULL; } } return zMark; } /* ** parse_mark() ** Create a new (mark,rid,uuid) entry in the 'xmark' table given a line ** from a marks file. Return the cross-ref information as a struct mark_t ** in *mark. 
** This function returns -1 in the case that the line is blank, malformed, or ** the rid/uuid named in 'line' does not match what is in the repository ** database. Otherwise, 0 is returned. ** mark->name is dynamically allocated, and owned by the caller. */ int parse_mark(char *line, struct mark_t *mark){ char *cur_tok; char type_; cur_tok = strtok(line, " \t"); if( !cur_tok || strlen(cur_tok)<2 ){ return -1; } mark->rid = atoi(&cur_tok[1]); type_ = cur_tok[0]; if( type_!='c' && type_!='b' ){ /* This is probably a blob mark */ mark->name = NULL; return 0; } cur_tok = strtok(NULL, " \t"); if( !cur_tok ){ /* This mark was generated by an older version of Fossil and doesn't ** include the mark name and uuid. create_mark() will name the new mark ** exactly as it was when exported to git, so that we should have a ** valid mapping from git sha1<->mark name<->fossil sha1. */ unsigned int mid; if( type_=='c' ){ mid = COMMITMARK(mark->rid); } else{ mid = BLOBMARK(mark->rid); } return create_mark(mark->rid, mark, &mid); }else{ mark->name = fossil_strdup(cur_tok); } cur_tok = strtok(NULL, "\n"); if( !cur_tok || strlen(cur_tok)!=40 ){ free(mark->name); fossil_trace("Invalid SHA-1 in marks file: %s\n", cur_tok); return -1; }else{ sqlite3_snprintf(sizeof(mark->uuid), mark->uuid, "%s", cur_tok); } /* make sure that rid corresponds to UUID */ if( fast_uuid_to_rid(mark->uuid)!=mark->rid ){ free(mark->name); fossil_trace("Non-existent SHA-1 in marks file: %s\n", mark->uuid); return -1; } /* insert a cross-ref into the 'xmark' table */ insert_commit_xref(mark->rid, mark->name, mark->uuid); return 0; } /* ** import_marks() ** Import the marks specified in file 'f' into the 'xmark' table. ** If 'blobs' is non-null, insert all blob marks into it. ** If 'vers' is non-null, insert all commit marks into it. ** If 'unused_marks' is non-null, upon return of this function, all values ** x >= *unused_marks are free to use as marks, i.e. they do not clash with ** any marks appearing in the marks file. ** Each line in the file must be at most 100 characters in length. This ** seems like a reasonable maximum for a 40-character uuid, and 1-13 ** character rid. ** The function returns -1 if any of the lines in file 'f' are malformed, ** or the rid/uuid information doesn't match what is in the repository ** database. Otherwise, 0 is returned. */ int import_marks(FILE* f, Bag *blobs, Bag *vers, unsigned int *unused_mark){ char line[101]; while(fgets(line, sizeof(line), f)){ struct mark_t mark; if( strlen(line)==100 && line[99]!='\n' ){ /* line too long */ return -1; } if( parse_mark(line, &mark)<0 ){ return -1; }else if( line[0]=='b' ){ if( blobs!=NULL ){ bag_insert(blobs, mark.rid); } }else{ if( vers!=NULL ){ bag_insert(vers, mark.rid); } } if( unused_mark!=NULL ){ unsigned int mid = atoi(mark.name + 1); if( mid>=*unused_mark ){ *unused_mark = mid + 1; } } free(mark.name); } return 0; } void export_mark(FILE* f, int rid, char obj_type) { unsigned int z = 0; char *zUuid = rid_to_uuid(rid); char *zMark; if( zUuid==NULL ){ fossil_trace("No uuid matching rid=%d when exporting marks\n", rid); return; } /* Since rid is already in the 'xmark' table, the value of z won't be ** used, but pass in a valid pointer just to be safe. */ zMark = mark_name_from_rid(rid, &z); fprintf(f, "%c%d %s %s\n", obj_type, rid, zMark, zUuid); free(zMark); free(zUuid); } /* ** If 'blobs' is non-null, it must point to a Bag of blob rids to be ** written to disk. Blob rids are written as 'b<rid>'. 
** If 'vers' is non-null, it must point to a Bag of commit rids to be ** written to disk. Commit rids are written as 'c<rid> :<mark> <uuid>'. ** All commit (mark,rid,uuid) tuples are stored in 'xmark' table. ** This function does not fail, but may produce errors if a uuid cannot ** be found for an rid in 'vers'. */ void export_marks(FILE* f, Bag *blobs, Bag *vers){ int rid; if( blobs!=NULL ){ rid = bag_first(blobs); if( rid!=0 ){ do{ export_mark(f, rid, 'b'); }while( (rid = bag_next(blobs, rid))!=0 ); } } if( vers!=NULL ){ rid = bag_first(vers); if( rid!=0 ){ do{ export_mark(f, rid, 'c'); }while( (rid = bag_next(vers, rid))!=0 ); } } } /* ** COMMAND: export |
︙ | ︙ | |||
334 335 336 337 338 339 340 341 342 343 344 345 346 347 | ** ** See also: import */ void export_cmd(void){ Stmt q, q2, q3; int i; Bag blobs, vers; const char *markfile_in; const char *markfile_out; bag_init(&blobs); bag_init(&vers); find_option("git", 0, 0); /* Ignore the --git option for now */ | > | 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 | ** ** See also: import */ void export_cmd(void){ Stmt q, q2, q3; int i; Bag blobs, vers; unsigned int unused_mark = 1; const char *markfile_in; const char *markfile_out; bag_init(&blobs); bag_init(&vers); find_option("git", 0, 0); /* Ignore the --git option for now */ |
︙ | ︙ | |||
360 361 362 363 364 365 366 | FILE *f; int rid; f = fossil_fopen(markfile_in, "r"); if( f==0 ){ fossil_fatal("cannot open %s for reading", markfile_in); } | | | | | | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | FILE *f; int rid; f = fossil_fopen(markfile_in, "r"); if( f==0 ){ fossil_fatal("cannot open %s for reading", markfile_in); } if( import_marks(f, &blobs, &vers, &unused_mark)<0 ){ fossil_fatal("error importing marks from file: %s", markfile_in); } db_prepare(&qb, "INSERT OR IGNORE INTO oldblob VALUES (:rid)"); db_prepare(&qc, "INSERT OR IGNORE INTO oldcommit VALUES (:rid)"); rid = bag_first(&blobs); if( rid!=0 ){ do{ db_bind_int(&qb, ":rid", rid); db_step(&qb); db_reset(&qb); }while((rid = bag_next(&blobs, rid))!=0); } rid = bag_first(&vers); if( rid!=0 ){ do{ db_bind_int(&qc, ":rid", rid); db_step(&qc); db_reset(&qc); }while((rid = bag_next(&vers, rid))!=0); } db_finalize(&qb); |
︙ | ︙ | |||
414 415 416 417 418 419 420 421 422 423 424 | db_prepare(&q2, "INSERT INTO oldblob VALUES (:rid)"); db_prepare(&q3, "SELECT rid FROM newblob WHERE srcid= (:srcid)"); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); Blob content; while( !bag_find(&blobs, rid) ){ content_get(rid, &content); db_bind_int(&q2, ":rid", rid); db_step(&q2); db_reset(&q2); | > > | > | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 | db_prepare(&q2, "INSERT INTO oldblob VALUES (:rid)"); db_prepare(&q3, "SELECT rid FROM newblob WHERE srcid= (:srcid)"); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); Blob content; while( !bag_find(&blobs, rid) ){ char *zMark; content_get(rid, &content); db_bind_int(&q2, ":rid", rid); db_step(&q2); db_reset(&q2); zMark = mark_name_from_rid(rid, &unused_mark); printf("blob\nmark %s\ndata %d\n", zMark, blob_size(&content)); free(zMark); bag_insert(&blobs, rid); fwrite(blob_buffer(&content), 1, blob_size(&content), stdout); printf("\n"); blob_reset(&content); db_bind_int(&q3, ":srcid", rid); if( db_step(&q3) != SQLITE_ROW ){ |
︙ | ︙ | |||
468 469 470 471 472 473 474 | db_step(&q2); db_reset(&q2); if( zBranch==0 ) zBranch = "trunk"; zBr = mprintf("%s", zBranch); for(i=0; zBr[i]; i++){ if( !fossil_isalnum(zBr[i]) ) zBr[i] = '_'; } | | | | | | > | > | 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 | db_step(&q2); db_reset(&q2); if( zBranch==0 ) zBranch = "trunk"; zBr = mprintf("%s", zBranch); for(i=0; zBr[i]; i++){ if( !fossil_isalnum(zBr[i]) ) zBr[i] = '_'; } zMark = mark_name_from_rid(ckinId, &unused_mark); printf("commit refs/heads/%s\nmark %s\n", zBr, zMark); free(zMark); free(zBr); printf("committer"); print_person(zUser); printf(" %s +0000\n", zSecondsSince1970); if( zComment==0 ) zComment = "null comment"; printf("data %d\n%s\n", (int)strlen(zComment), zComment); db_prepare(&q3, "SELECT pid FROM plink" " WHERE cid=%d AND isprim" " AND pid IN (SELECT objid FROM event)", ckinId ); if( db_step(&q3) == SQLITE_ROW ){ int pid = db_column_int(&q3, 0); zMark = mark_name_from_rid(pid, &unused_mark); printf("from %s\n", zMark); free(zMark); db_prepare(&q4, "SELECT pid FROM plink" " WHERE cid=%d AND NOT isprim" " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)" " ORDER BY pid", ckinId); while( db_step(&q4)==SQLITE_ROW ){ zMark = mark_name_from_rid(db_column_int(&q4, 0), &unused_mark); printf("merge %s\n", zMark); free(zMark); } db_finalize(&q4); }else{ printf("deleteall\n"); } db_prepare(&q4, "SELECT filename.name, mlink.fid, mlink.mperm FROM mlink" " JOIN filename ON filename.fnid=mlink.fnid" " WHERE mlink.mid=%d", ckinId ); while( db_step(&q4)==SQLITE_ROW ){ const char *zName = db_column_text(&q4,0); int zNew = db_column_int(&q4,1); int mPerm = db_column_int(&q4,2); if( zNew==0 ){ printf("D %s\n", zName); }else if( bag_find(&blobs, zNew) ){ const char *zPerm; zMark = mark_name_from_rid(zNew, &unused_mark); switch( mPerm ){ case PERM_LNK: zPerm = "120000"; break; case PERM_EXE: zPerm = "100755"; break; default: zPerm = "100644"; break; } printf("M %s %s %s\n", zPerm, zMark, zName); free(zMark); } } db_finalize(&q4); db_finalize(&q3); printf("\n"); } db_finalize(&q2); |
︙ | ︙ | |||
545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 | " FROM tagxref JOIN tag USING(tagid)" " WHERE tagtype=1 AND tagname GLOB 'sym-*'" ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagname = db_column_text(&q, 0); char *zEncoded = 0; int rid = db_column_int(&q, 1); const char *zSecSince1970 = db_column_text(&q, 2); int i; if( rid==0 || !bag_find(&vers, rid) ) continue; zTagname += 4; zEncoded = mprintf("%s", zTagname); for(i=0; zEncoded[i]; i++){ if( !fossil_isalnum(zEncoded[i]) ) zEncoded[i] = '_'; } printf("tag %s\n", zEncoded); | > | > | | 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 | " FROM tagxref JOIN tag USING(tagid)" " WHERE tagtype=1 AND tagname GLOB 'sym-*'" ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagname = db_column_text(&q, 0); char *zEncoded = 0; int rid = db_column_int(&q, 1); char *zMark = mark_name_from_rid(rid, &unused_mark); const char *zSecSince1970 = db_column_text(&q, 2); int i; if( rid==0 || !bag_find(&vers, rid) ) continue; zTagname += 4; zEncoded = mprintf("%s", zTagname); for(i=0; zEncoded[i]; i++){ if( !fossil_isalnum(zEncoded[i]) ) zEncoded[i] = '_'; } printf("tag %s\n", zEncoded); printf("from %s\n", zMark); free(zMark); printf("tagger <tagger> %s +0000\n", zSecSince1970); printf("data 0\n"); fossil_free(zEncoded); } db_finalize(&q); if( markfile_out!=0 ){ FILE *f; f = fossil_fopen(markfile_out, "w"); if( f == 0 ){ fossil_fatal("cannot open %s for writing", markfile_out); } export_marks(f, &blobs, &vers); if( ferror(f)!=0 || fclose(f)!=0 ){ fossil_fatal("error while writing %s", markfile_out); } } bag_clear(&blobs); bag_clear(&vers); } |
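The new export.c comments spell out the marks bookkeeping used for git fast-export: a marks file line is either "b<rid>" for a blob or "c<rid> :<mark> <sha1>" for a commit, and *unused_mark is kept one past the largest mark seen, so newly minted marks can never collide with imported ones. A standalone sketch of parsing one such line, not part of the check-in and simplified; the real parse_mark() additionally verifies the rid/uuid pair against the repository:

#include <stdio.h>
#include <string.h>

/* Parse "c<rid> :<mark> <sha1>" or "b<rid>"; returns 0 on success.
** Simplified from the parse_mark()/import_marks() pair shown above. */
static int parse_mark_line(const char *zLine, unsigned int *pUnusedMark){
  char cType;
  int rid;
  unsigned int mark;
  char zUuid[41];
  if( sscanf(zLine, "%c%d :%u %40s", &cType, &rid, &mark, zUuid)==4
   && cType=='c' ){
    if( mark>=*pUnusedMark ) *pUnusedMark = mark+1;   /* keep the supremum */
    printf("commit rid=%d mark=:%u uuid=%s\n", rid, mark, zUuid);
    return 0;
  }
  if( sscanf(zLine, "%c%d", &cType, &rid)==2 && cType=='b' ){
    printf("blob rid=%d\n", rid);
    return 0;
  }
  return -1;                                          /* malformed line */
}

int main(void){
  unsigned int unused_mark = 1;
  parse_mark_line("c42 :7 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
                  &unused_mark);
  parse_mark_line("b43", &unused_mark);
  printf("next free mark: :%u\n", unused_mark);       /* prints :8 */
  return 0;
}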
Changes to src/file.c.
︙ | ︙ | |||
861 862 863 864 865 866 867 | */ void file_getcwd(char *zBuf, int nBuf){ #ifdef _WIN32 win32_getcwd(zBuf, nBuf); #else if( getcwd(zBuf, nBuf-1)==0 ){ if( errno==ERANGE ){ | | | 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 | */ void file_getcwd(char *zBuf, int nBuf){ #ifdef _WIN32 win32_getcwd(zBuf, nBuf); #else if( getcwd(zBuf, nBuf-1)==0 ){ if( errno==ERANGE ){ fossil_fatal("pwd too big: max %d", nBuf-1); }else{ fossil_fatal("cannot find current working directory; %s", strerror(errno)); } } #endif } |
︙ | ︙ | |||
1288 1289 1290 1291 1292 1293 1294 | } azDirs[1] = fossil_getenv("TEMP"); azDirs[2] = fossil_getenv("TMP"); #endif | | | 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 | } azDirs[1] = fossil_getenv("TEMP"); azDirs[2] = fossil_getenv("TMP"); #endif for(i=0; i<count(azDirs); i++){ if( azDirs[i]==0 ) continue; if( !file_isdir(azDirs[i]) ) continue; zDir = azDirs[i]; break; } /* Check that the output buffer is large enough for the temporary file |
︙ | ︙ | |||
1405 1406 1407 1408 1409 1410 1411 | ** path element. */ const char *file_is_win_reserved(const char *zPath){ static const char *azRes[] = { "CON", "PRN", "AUX", "NUL", "COM", "LPT" }; static char zReturn[5]; int i; while( zPath[0] ){ | | | 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 | ** path element. */ const char *file_is_win_reserved(const char *zPath){ static const char *azRes[] = { "CON", "PRN", "AUX", "NUL", "COM", "LPT" }; static char zReturn[5]; int i; while( zPath[0] ){ for(i=0; i<count(azRes); i++){ if( sqlite3_strnicmp(zPath, azRes[i], 3)==0 && ((i>=4 && fossil_isdigit(zPath[3]) && (zPath[4]=='/' || zPath[4]=='.' || zPath[4]==0)) || (i<4 && (zPath[3]=='/' || zPath[3]=='.' || zPath[3]==0))) ){ sqlite3_snprintf(5,zReturn,"%.*s", i>=4 ? 4 : 3, zPath); return zReturn; |
︙ | ︙ |
Changes to src/finfo.c.
︙ | ︙ | |||
329 330 331 332 333 334 335 | fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( fnid==0 ){ @ No such file: %h(zFilename) style_footer(); return; } if( g.perm.Admin ){ | | | 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( fnid==0 ){ @ No such file: %h(zFilename) style_footer(); return; } if( g.perm.Admin ){ style_submenu_element("MLink Table", "%R/mlink?name=%t", zFilename); } if( baseCheckin ){ compute_direct_ancestors(baseCheckin); } url_add_parameter(&url, "name", zFilename); blob_zero(&sql); blob_append_sql(&sql, |
︙ | ︙ | |||
452 453 454 455 456 457 458 | char zTime[10]; int nParent = 0; int aParent[GR_MAX_RAIL]; db_bind_int(&qparent, ":fid", frid); db_bind_int(&qparent, ":mid", fmid); db_bind_int(&qparent, ":fnid", fnid); | | | 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 | char zTime[10]; int nParent = 0; int aParent[GR_MAX_RAIL]; db_bind_int(&qparent, ":fid", frid); db_bind_int(&qparent, ":mid", fmid); db_bind_int(&qparent, ":fnid", fnid); while( db_step(&qparent)==SQLITE_ROW && nParent<count(aParent) ){ aParent[nParent] = db_column_int(&qparent, 0); nParent++; } db_reset(&qparent); if( zBr==0 ) zBr = "trunk"; if( uBg ){ zBgClr = hash_color(zUser); |
︙ | ︙ |
Changes to src/fshell.c.
︙ | ︙ | |||
23 24 25 26 27 28 29 30 31 32 33 34 35 36 | ** ** The "fossil shell" command is intended for use with SEE-enabled fossil. ** It allows multiple commands to be issued without having to reenter the ** crypto phasephrase for each command. */ #include "config.h" #include "fshell.h" #include <ctype.h> #ifndef _WIN32 #include <sys/types.h> #include <sys/wait.h> #endif | > | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | ** ** The "fossil shell" command is intended for use with SEE-enabled fossil. ** It allows multiple commands to be issued without having to reenter the ** crypto phasephrase for each command. */ #include "config.h" #include "fshell.h" #include "linenoise.h" #include <ctype.h> #ifndef _WIN32 #include <sys/types.h> #include <sys/wait.h> #endif |
︙ | ︙ | |||
53 54 55 56 57 58 59 | #else int nArg; int mxArg = 0; int n, i; char **azArg = 0; int fDebug; pid_t childPid; | | > > > | | 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | #else int nArg; int mxArg = 0; int n, i; char **azArg = 0; int fDebug; pid_t childPid; char *zLine = 0; fDebug = find_option("debug", 0, 0)!=0; db_find_and_open_repository(OPEN_ANY_SCHEMA|OPEN_OK_NOT_FOUND, 0); db_close(0); sqlite3_shutdown(); while( (free(zLine), zLine = linenoise("fossil> ")) ){ /* Remember shell history within the current session */ linenoiseHistoryAdd(zLine); /* Parse the line of input */ n = (int)strlen(zLine); for(i=0, nArg=1; i<n; i++){ while( fossil_isspace(zLine[i]) ){ i++; } if( i>=n ) break; if( nArg>=mxArg ){ mxArg = nArg+10; |
︙ | ︙ |
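The fshell.c hunk swaps the old input loop for linenoise, using the (free(zLine), zLine = linenoise(...)) comma expression so the previous line is released right before the next one is read, and adding each line to the in-session history. A minimal loop in the same shape; it only echoes its input and assumes the bundled linenoise.c/linenoise.h are compiled in:

#include <stdio.h>
#include <stdlib.h>
#include "linenoise.h"   /* assumes the bundled linenoise sources are linked */

int main(void){
  char *zLine = 0;
  /* free(NULL) is a no-op, so the comma expression is harmless on the first
  ** pass; on later passes it releases the previous line before reading. */
  while( (free(zLine), zLine = linenoise("fossil> "))!=0 ){
    linenoiseHistoryAdd(zLine);     /* remembered within this session only */
    printf("you typed: %s\n", zLine);
  }
  return 0;
}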
Changes to src/fusefs.c.
︙ | ︙ | |||
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 | ** This module implements the userspace side of a Fuse Filesystem that ** contains all check-ins for a fossil repository. ** ** This module is a mostly a no-op unless compiled with -DFOSSIL_HAVE_FUSEFS. ** The FOSSIL_HAVE_FUSEFS should be omitted on systems that lack support for ** the Fuse Filesystem, of course. */ #include "config.h" #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include "fusefs.h" | > < | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 | ** This module implements the userspace side of a Fuse Filesystem that ** contains all check-ins for a fossil repository. ** ** This module is a mostly a no-op unless compiled with -DFOSSIL_HAVE_FUSEFS. ** The FOSSIL_HAVE_FUSEFS should be omitted on systems that lack support for ** the Fuse Filesystem, of course. */ #ifdef FOSSIL_HAVE_FUSEFS #include "config.h" #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include "fusefs.h" #define FUSE_USE_VERSION 26 #include <fuse.h> /* ** Global state information about the archive */ |
︙ | ︙ | |||
205 206 207 208 209 210 211 | fusefs_load_rid(rid, fusefs.az[1]); if( fusefs.pMan==0 ) return -ENOENT; filler(buf, ".", NULL, 0); filler(buf, "..", NULL, 0); manifest_file_rewind(fusefs.pMan); if( n==2 ){ while( (pFile = manifest_file_next(fusefs.pMan, 0))!=0 ){ | | > | 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 | fusefs_load_rid(rid, fusefs.az[1]); if( fusefs.pMan==0 ) return -ENOENT; filler(buf, ".", NULL, 0); filler(buf, "..", NULL, 0); manifest_file_rewind(fusefs.pMan); if( n==2 ){ while( (pFile = manifest_file_next(fusefs.pMan, 0))!=0 ){ if( nPrev>0 && strncmp(pFile->zName, zPrev, nPrev)==0 && pFile->zName[nPrev]=='/' ) continue; zPrev = pFile->zName; for(nPrev=0; zPrev[nPrev] && zPrev[nPrev]!='/'; nPrev++){} z = mprintf("%.*s", nPrev, zPrev); filler(buf, z, NULL, 0); fossil_free(z); cnt++; } |
︙ | ︙ | |||
280 281 282 283 284 285 286 | } static struct fuse_operations fusefs_methods = { .getattr = fusefs_getattr, .readdir = fusefs_readdir, .read = fusefs_read, }; | < | 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | } static struct fuse_operations fusefs_methods = { .getattr = fusefs_getattr, .readdir = fusefs_readdir, .read = fusefs_read, }; /* ** COMMAND: fusefs ** ** Usage: %fossil fusefs [--debug] DIRECTORY ** ** This command uses the Fuse Filesystem (FuseFS) to mount a directory |
︙ | ︙ | |||
312 313 314 315 316 317 318 | ** appropriate support libraries. ** ** After stopping the "fossil fusefs" command, it might also be necessary ** to run "fusermount -u DIRECTORY" to reset the FuseFS before using it ** again. */ void fusefs_cmd(void){ | < < < | 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | ** appropriate support libraries. ** ** After stopping the "fossil fusefs" command, it might also be necessary ** to run "fusermount -u DIRECTORY" to reset the FuseFS before using it ** again. */ void fusefs_cmd(void){ char *zMountPoint; char *azNewArgv[5]; int doDebug = find_option("debug","d",0)!=0; db_find_and_open_repository(0,0); verify_all_options(); blob_init(&fusefs.content, 0, 0); |
︙ | ︙ | |||
336 337 338 339 340 341 342 | azNewArgv[2] = "-s"; azNewArgv[3] = zMountPoint; azNewArgv[4] = 0; g.localOpen = 0; /* Prevent tags like "current" and "prev" */ fuse_main(4, azNewArgv, &fusefs_methods, NULL); fusefs_reset(); fusefs_clear_path(); | < > | 333 334 335 336 337 338 339 340 341 | azNewArgv[2] = "-s"; azNewArgv[3] = zMountPoint; azNewArgv[4] = 0; g.localOpen = 0; /* Prevent tags like "current" and "prev" */ fuse_main(4, azNewArgv, &fusefs_methods, NULL); fusefs_reset(); fusefs_clear_path(); } #endif /* FOSSIL_HAVE_FUSEFS */ |
Changes to src/graph.c.
︙ | ︙ | |||
256 257 258 259 260 261 262 | */ static void assignChildrenToRail(GraphRow *pBottom){ int iRail = pBottom->iRail; GraphRow *pCurrent; GraphRow *pPrior; u64 mask = ((u64)1)<<iRail; | < | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | */ static void assignChildrenToRail(GraphRow *pBottom){ int iRail = pBottom->iRail; GraphRow *pCurrent; GraphRow *pPrior; u64 mask = ((u64)1)<<iRail; pBottom->railInUse |= mask; pPrior = pBottom; for(pCurrent=pBottom->pChild; pCurrent; pCurrent=pCurrent->pChild){ assert( pPrior->idx > pCurrent->idx ); assert( pCurrent->iRail<0 ); pCurrent->iRail = iRail; pCurrent->railInUse |= mask; |
︙ | ︙ | |||
342 343 344 345 346 347 348 | /* ** Compute the complete graph */ void graph_finish(GraphContext *p, int omitDescenders){ GraphRow *pRow, *pDesc, *pDup, *pLoop, *pParent; | | > > > | 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 | /* ** Compute the complete graph */ void graph_finish(GraphContext *p, int omitDescenders){ GraphRow *pRow, *pDesc, *pDup, *pLoop, *pParent; int i, j; u64 mask; int hasDup = 0; /* True if one or more isDup entries */ const char *zTrunk; int railRid[GR_MAX_RAIL]; /* Maps rails to rids for lines that enter from bottom of screen */ if( p==0 || p->pFirst==0 || p->nErr ) return; p->nErr = 1; /* Assume an error until proven otherwise */ /* Initialize all rows */ p->nHash = p->nRow*2 + 1; p->apHash = safeMalloc( sizeof(p->apHash[0])*p->nHash ); for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( pRow->pNext ) pRow->pNext->pPrev = pRow; pRow->iRail = -1; pRow->mergeOut = -1; if( (pDup = hashFind(p, pRow->rid))!=0 ){ hasDup = 1; pDup->isDup = 1; } hashInsert(p, pRow, 1); } p->mxRail = -1; memset(railRid, 0, sizeof(railRid)); /* Purge merge-parents that are out-of-graph if descenders are not ** drawn. ** ** Each node has one primary parent and zero or more "merge" parents. ** A merge parent is a prior check-in from which changes were merged into ** the current check-in. If a merge parent is not in the visible section |
︙ | ︙ | |||
456 457 458 459 460 461 462 463 464 465 466 467 468 469 | }else{ pRow->iRail = ++p->mxRail; } if( p->mxRail>=GR_MAX_RAIL ) return; mask = BIT(pRow->iRail); if( !omitDescenders ){ pRow->bDescender = pRow->nParent>0; for(pLoop=pRow; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } assignChildrenToRail(pRow); } } | > > > | 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 | }else{ pRow->iRail = ++p->mxRail; } if( p->mxRail>=GR_MAX_RAIL ) return; mask = BIT(pRow->iRail); if( !omitDescenders ){ pRow->bDescender = pRow->nParent>0; if( pRow->bDescender ){ railRid[pRow->iRail] = pRow->aParent[0]; } for(pLoop=pRow; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } assignChildrenToRail(pRow); } } |
︙ | ︙ | |||
534 535 536 537 538 539 540 | */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ int parentRid = pRow->aParent[i]; pDesc = hashFind(p, parentRid); if( pDesc==0 ){ /* Merge from a node that is off-screen */ | > > > > > > > > | | > > | 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 | */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ int parentRid = pRow->aParent[i]; pDesc = hashFind(p, parentRid); if( pDesc==0 ){ /* Merge from a node that is off-screen */ int iMrail = -1; for(j=0; j<GR_MAX_RAIL; j++){ if( railRid[j]==parentRid ){ iMrail = j; break; } } if( iMrail==-1 ){ iMrail = findFreeRail(p, pRow->idx, p->nRow, 0); if( p->mxRail>=GR_MAX_RAIL ) return; railRid[iMrail] = parentRid; } mask = BIT(iMrail); pRow->mergeIn[iMrail] = 1; pRow->mergeDown |= mask; for(pLoop=pRow->pNext; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } }else{ |
︙ | ︙ |
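The graph.c hunks introduce railRid[], which records, for each rail that exits the bottom of the timeline graph, the check-in that rail leads to; a later row merging from the same off-screen parent reuses that rail instead of allocating another, so repeated merges from one ancestor no longer fan out across new rails. A small standalone sketch of the reuse rule; rail_for_parent() is an illustrative reduction of the findFreeRail()/railRid[] logic above:

#include <stdio.h>

#define GR_MAX_RAIL 40

static int railRid[GR_MAX_RAIL];  /* rid feeding each off-screen rail; 0 = unused */

/* Return the rail already carrying parentRid, or claim the next free one. */
static int rail_for_parent(int parentRid){
  int j, iFree = -1;
  for(j=0; j<GR_MAX_RAIL; j++){
    if( railRid[j]==parentRid ) return j;          /* reuse existing rail */
    if( railRid[j]==0 && iFree<0 ) iFree = j;      /* remember a free slot */
  }
  if( iFree<0 ) return -1;                         /* out of rails */
  railRid[iFree] = parentRid;                      /* claim a new rail */
  return iFree;
}

int main(void){
  printf("%d\n", rail_for_parent(101));   /* 0: new rail */
  printf("%d\n", rail_for_parent(202));   /* 1: new rail */
  printf("%d\n", rail_for_parent(101));   /* 0: reused   */
  return 0;
}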
Changes to src/http_socket.c.
︙ | ︙ | |||
21 22 23 24 25 26 27 | ** This file implements a singleton. A single client socket may be active ** at a time. State information is stored in static variables. The identity ** of the server is held in global variables that are set by url_parse(). ** ** Low-level sockets are abstracted out into this module because they ** are handled different on Unix and windows. */ | | > > < < < | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ** This file implements a singleton. A single client socket may be active ** at a time. State information is stored in static variables. The identity ** of the server is held in global variables that are set by url_parse(). ** ** Low-level sockets are abstracted out into this module because they ** are handled different on Unix and windows. */ #if defined(_WIN32) # define _WIN32_WINNT 0x501 #endif #ifndef __EXTENSIONS__ # define __EXTENSIONS__ 1 /* IPv6 won't compile on Solaris without this */ #endif #include "config.h" #include "http_socket.h" #if defined(_WIN32) # include <winsock2.h> # include <ws2tcpip.h> #else # include <netinet/in.h> # include <arpa/inet.h> # include <sys/socket.h> # include <netdb.h> |
︙ | ︙ |
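The http_socket.c change hoists the _WIN32_WINNT and __EXTENSIONS__ definitions above the first #include. Feature-test macros only influence system headers included after them, so defining them below "config.h" (which drags in system headers) was too late; the Solaris IPv6 remark in the comment is about exactly that. A compilable illustration of the ordering rule:

/* The macro must be visible before the first system header; defining it
** after <stdio.h>, or after a project header that itself includes system
** headers, is too late to affect what those headers declare. */
#ifndef __EXTENSIONS__
# define __EXTENSIONS__ 1
#endif

#include <stdio.h>   /* every system header from here on sees the macro */

int main(void){
  printf("feature-test macros defined before any include\n");
  return 0;
}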
Changes to src/import.c.
︙ | ︙ | |||
1610 1611 1612 1613 1614 1615 1616 | int format; /* 1=git, 2=svn, 3=any */ } renOpts[] = { {"rename-branch", &gimport.zBranchPre, "", &gimport.zBranchSuf, "", 3}, {"rename-tag" , &gimport.zTagPre , "", &gimport.zTagSuf , "", 3}, {"rename-rev" , &gsvn.zRevPre, "svn-rev-", &gsvn.zRevSuf , "", 2}, }, *renOpt = renOpts; int i; | | | 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 | int format; /* 1=git, 2=svn, 3=any */ } renOpts[] = { {"rename-branch", &gimport.zBranchPre, "", &gimport.zBranchSuf, "", 3}, {"rename-tag" , &gimport.zTagPre , "", &gimport.zTagSuf , "", 3}, {"rename-rev" , &gsvn.zRevPre, "svn-rev-", &gsvn.zRevSuf , "", 2}, }, *renOpt = renOpts; int i; for( i = 0; i < count(renOpts); ++i, ++renOpt ){ if( 1 << svnFlag & renOpt->format ){ const char *zArgument = find_option(renOpt->zOpt, 0, 1); if( zArgument ){ const char *sep = strchr(zArgument, '%'); if( !sep ){ fossil_fatal("missing '%%' in argument to --%s", renOpt->zOpt); }else if( strchr(sep + 1, '%') ){ |
︙ | ︙ | |||
1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 | ** contains the text of an artifact that will add a tag to a check-in. ** The git-fast-export file format might specify the same tag multiple ** times but only the last tag should be used. And we do not know which ** occurrence of the tag is the last until the import finishes. */ db_multi_exec( "CREATE TEMP TABLE xmark(tname TEXT UNIQUE, trid INT, tuuid TEXT);" "CREATE TEMP TABLE xbranch(tname TEXT UNIQUE, brnm TEXT);" "CREATE TEMP TABLE xtag(tname TEXT UNIQUE, tcontent TEXT);" ); if( markfile_in ){ FILE *f = fossil_fopen(markfile_in, "r"); if( !f ){ | > | | | | 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 | ** contains the text of an artifact that will add a tag to a check-in. ** The git-fast-export file format might specify the same tag multiple ** times but only the last tag should be used. And we do not know which ** occurrence of the tag is the last until the import finishes. */ db_multi_exec( "CREATE TEMP TABLE xmark(tname TEXT UNIQUE, trid INT, tuuid TEXT);" "CREATE INDEX temp.i_xmark ON xmark(trid);" "CREATE TEMP TABLE xbranch(tname TEXT UNIQUE, brnm TEXT);" "CREATE TEMP TABLE xtag(tname TEXT UNIQUE, tcontent TEXT);" ); if( markfile_in ){ FILE *f = fossil_fopen(markfile_in, "r"); if( !f ){ fossil_fatal("cannot open %s for reading", markfile_in); } if( import_marks(f, &blobs, NULL, NULL)<0 ){ fossil_fatal("error importing marks from file: %s", markfile_in); } fclose(f); } manifest_crosslink_begin(); git_fast_import(pIn); db_prepare(&q, "SELECT tcontent FROM xtag"); |
︙ | ︙ | |||
1793 1794 1795 1796 1797 1798 1799 | Stmt q_marks; FILE *f; db_prepare(&q_marks, "SELECT DISTINCT trid FROM xmark"); while( db_step(&q_marks)==SQLITE_ROW ){ rid = db_column_int(&q_marks, 0); if( db_int(0, "SELECT count(objid) FROM event" " WHERE objid=%d AND type='ci'", rid)==0 ){ | | | < > | | 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 | Stmt q_marks; FILE *f; db_prepare(&q_marks, "SELECT DISTINCT trid FROM xmark"); while( db_step(&q_marks)==SQLITE_ROW ){ rid = db_column_int(&q_marks, 0); if( db_int(0, "SELECT count(objid) FROM event" " WHERE objid=%d AND type='ci'", rid)==0 ){ /* Blob marks exported by git aren't saved between runs, so they need ** to be left free for git to re-use in the future. */ }else{ bag_insert(&vers, rid); } } db_finalize(&q_marks); f = fossil_fopen(markfile_out, "w"); if( !f ){ fossil_fatal("cannot open %s for writing", markfile_out); } export_marks(f, &blobs, &vers); fclose(f); bag_clear(&blobs); bag_clear(&vers); } manifest_crosslink_end(MC_NONE); |
︙ | ︙ |
Changes to src/info.c.
︙ | ︙ | |||
229 230 231 232 233 234 235 | } fossil_print("check-ins: %d\n", db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/")); }else{ int rid; rid = name_to_rid(g.argv[2]); if( rid==0 ){ | | | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 | } fossil_print("check-ins: %d\n", db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/")); }else{ int rid; rid = name_to_rid(g.argv[2]); if( rid==0 ){ fossil_fatal("no such object: %s", g.argv[2]); } show_common_info(rid, "uuid:", 1, 1); } } /* ** Show information about all tags on a given check-in. |
︙ | ︙ | |||
824 825 826 827 828 829 830 | if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Update of \"%h\"", pWiki->zWikiTitle); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zDate = db_text(0, "SELECT datetime(%.17g)", pWiki->rDate); | | | < | < | 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 | if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Update of \"%h\"", pWiki->zWikiTitle); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zDate = db_text(0, "SELECT datetime(%.17g)", pWiki->rDate); style_submenu_element("Raw", "artifact/%s", zUuid); style_submenu_element("History", "whistory?name=%t", pWiki->zWikiTitle); style_submenu_element("Page", "wiki?name=%t", pWiki->zWikiTitle); login_anonymous_available(); @ <div class="section">Overview</div> @ <p><table class="label-value"> @ <tr><th>Artifact ID:</th> @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> if( g.perm.Setup ){ @ (%d(rid)) |
︙ | ︙ | |||
1051 1052 1053 1054 1055 1056 1057 | zFrom = P("from"); zTo = P("to"); if(zGlob && !*zGlob){ zGlob = NULL; } diffFlags = construct_diff_flags(verboseFlag, sideBySide); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; | | < | < | | | | | < | | | 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 | zFrom = P("from"); zTo = P("to"); if(zGlob && !*zGlob){ zGlob = NULL; } diffFlags = construct_diff_flags(verboseFlag, sideBySide); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; style_submenu_element("Path", "%R/timeline?me=%T&you=%T", zFrom, zTo); if( sideBySide || verboseFlag ){ style_submenu_element("Hide Diff", "%R/vdiff?from=%T&to=%T&sbs=0%s%T%s", zFrom, zTo, zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); } if( !sideBySide ){ style_submenu_element("Side-by-Side Diff", "%R/vdiff?from=%T&to=%T&sbs=1%s%T%s", zFrom, zTo, zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); } if( sideBySide || !verboseFlag ) { style_submenu_element("Unified Diff", "%R/vdiff?from=%T&to=%T&sbs=0&v%s%T%s", zFrom, zTo, zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); } style_submenu_element("Invert", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T%s", zTo, zFrom, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); if( zGlob ){ style_submenu_element("Clear glob", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s", zFrom, zTo, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zW); }else{ style_submenu_element("Patch", "%R/vpatch?from=%T&to=%T%s", zFrom, zTo, zW); } if( sideBySide || verboseFlag ){ if( *zW ){ style_submenu_element("Show Whitespace Differences", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T", zFrom, zTo, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zGlob ? "&glob=" : "", zGlob ? zGlob : ""); }else{ style_submenu_element("Ignore Whitespace", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T&w", zFrom, zTo, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zGlob ? "&glob=" : "", zGlob ? zGlob : ""); } } style_header("Check-in Differences"); if( P("nohdr")==0 ){ |
︙ | ︙ | |||
1512 1513 1514 1515 1516 1517 1518 | zV1 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v1); zV2 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v2); diffFlags = construct_diff_flags(1, sideBySide) | DIFF_HTML; style_header("Diff"); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; if( *zW ){ | | | | | | | 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 | zV1 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v1); zV2 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v2); diffFlags = construct_diff_flags(1, sideBySide) | DIFF_HTML; style_header("Diff"); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; if( *zW ){ style_submenu_element("Show Whitespace Changes", "%s/fdiff?v1=%T&v2=%T&sbs=%d", g.zTop, P("v1"), P("v2"), sideBySide); }else{ style_submenu_element("Ignore Whitespace", "%s/fdiff?v1=%T&v2=%T&sbs=%d&w", g.zTop, P("v1"), P("v2"), sideBySide); } style_submenu_element("Patch", "%s/fdiff?v1=%T&v2=%T&patch", g.zTop, P("v1"), P("v2")); if( !sideBySide ){ style_submenu_element("Side-by-Side Diff", "%s/fdiff?v1=%T&v2=%T&sbs=1%s", g.zTop, P("v1"), P("v2"), zW); }else{ style_submenu_element("Unified Diff", "%s/fdiff?v1=%T&v2=%T&sbs=0%s", g.zTop, P("v1"), P("v2"), zW); } if( P("smhdr")!=0 ){ @ <h2>Differences From Artifact @ %z(href("%R/artifact/%!S",zV1))[%S(zV1)]</a> To |
︙ | ︙ | |||
1670 1671 1672 1673 1674 1675 1676 | rid = name_to_rid_www("name"); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( rid==0 ) fossil_redirect_home(); if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ | | | < | | | 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 | rid = name_to_rid_www("name"); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( rid==0 ) fossil_redirect_home(); if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ style_submenu_element("Unshun", "%s/shun?accept=%s&sub=1#delshun", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } style_header("Hex Artifact Content"); zUuid = db_text("?","SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Setup ){ @ <h2>Artifact %s(zUuid) (%d(rid)):</h2> }else{ @ <h2>Artifact %s(zUuid):</h2> } blob_zero(&downloadName); if( P("verbose")!=0 ) objdescFlags |= OBJDESC_DETAIL; object_description(rid, objdescFlags, &downloadName); style_submenu_element("Download", "%s/raw/%T?name=%s", g.zTop, blob_str(&downloadName), zUuid); @ <hr /> content_get(rid, &content); @ <blockquote><pre> hexdump(&content); @ </pre></blockquote> style_footer(); } |
︙ | ︙ | |||
1893 1894 1895 1896 1897 1898 1899 | cgi_redirectf("%R/raw/%T?name=%s", blob_str(&downloadName), db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid)); /*NOTREACHED*/ } if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ | | | < | 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 | cgi_redirectf("%R/raw/%T?name=%s", blob_str(&downloadName), db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid)); /*NOTREACHED*/ } if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ style_submenu_element("Unshun", "%s/shun?accept=%s&sub=1#accshun", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } style_header("%s", descOnly ? "Artifact Description" : "Artifact Content"); zUuid = db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Setup ){ @ <h2>Artifact %s(zUuid) (%d(rid)):</h2> }else{ |
︙ | ︙ | |||
1923 1924 1925 1926 1927 1928 1929 | const char *zUser = db_column_text(&q,0); const char *zDate = db_column_text(&q,1); const char *zIp = db_column_text(&q,2); @ <p>Received on %s(zDate) from %h(zUser) at %h(zIp).</p> } db_finalize(&q); } | | | | < | < | < | < | < | | | < | | < | 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 | const char *zUser = db_column_text(&q,0); const char *zDate = db_column_text(&q,1); const char *zIp = db_column_text(&q,2); @ <p>Received on %s(zDate) from %h(zUser) at %h(zIp).</p> } db_finalize(&q); } style_submenu_element("Download", "%R/raw/%T?name=%s", blob_str(&downloadName), zUuid); if( db_exists("SELECT 1 FROM mlink WHERE fid=%d", rid) ){ style_submenu_element("Check-ins Using", "%R/timeline?n=200&uf=%s", zUuid); } asText = P("txt")!=0; zMime = mimetype_from_name(blob_str(&downloadName)); if( zMime ){ if( fossil_strcmp(zMime, "text/html")==0 ){ if( asText ){ style_submenu_element("Html", "%s/artifact/%s", g.zTop, zUuid); }else{ renderAsHtml = 1; style_submenu_element("Text", "%s/artifact/%s?txt=1", g.zTop, zUuid); } }else if( fossil_strcmp(zMime, "text/x-fossil-wiki")==0 || fossil_strcmp(zMime, "text/x-markdown")==0 ){ if( asText ){ style_submenu_element("Wiki", "%s/artifact/%s", g.zTop, zUuid); }else{ renderAsWiki = 1; style_submenu_element("Text", "%s/artifact/%s?txt=1", g.zTop, zUuid); } } } if( (objType & (OBJTYPE_WIKI|OBJTYPE_TICKET))!=0 ){ style_submenu_element("Parsed", "%R/info/%s", zUuid); } if( descOnly ){ style_submenu_element("Content", "%R/artifact/%s", zUuid); }else{ style_submenu_element("Line Numbers", "%R/artifact/%s%s", zUuid, ((zLn&&*zLn) ? "" : "?txt=1&ln=0")); @ <hr /> content_get(rid, &content); if( renderAsWiki ){ wiki_render_by_mimetype(&content, zMime); }else if( renderAsHtml ){ @ <iframe src="%R/raw/%T(blob_str(&downloadName))?name=%s(zUuid)" @ width="100%%" frameborder="0" marginwidth="0" marginheight="0" @ sandbox="allow-same-origin" @ onload="this.height=this.contentDocument.documentElement.scrollHeight;"> @ </iframe> }else{ style_submenu_element("Hex", "%s/hexdump?name=%s", g.zTop, zUuid); blob_to_utf8_no_bom(&content, 0); zMime = mimetype_from_content(&content); @ <blockquote> if( zMime==0 ){ const char *z; z = blob_str(&content); if( zLn ){ output_text_with_line_numbers(z, zLn); }else{ @ <pre> @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ @ <i>(file is %d(blob_size(&content)) bytes of image data)</i><br /> @ <img src="%R/raw/%s(zUuid)?m=%s(zMime)" /> style_submenu_element("Image", "%R/raw/%s?m=%s", zUuid, zMime); }else{ @ <i>(file is %d(blob_size(&content)) bytes of binary data)</i> } @ </blockquote> } } style_footer(); |
︙ | ︙ | |||
2023 2024 2025 2026 2027 2028 2029 | login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } rid = name_to_rid_www("name"); if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ | | | < | 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 | login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } rid = name_to_rid_www("name"); if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ style_submenu_element("Unshun", "%s/shun?accept=%s&sub=1#accshun", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } pTktChng = manifest_get(rid, CFTYPE_TICKET, 0); if( pTktChng==0 ) fossil_redirect_home(); zDate = db_text(0, "SELECT datetime(%.12f)", pTktChng->rDate); memcpy(zTktName, pTktChng->zTicketUuid, UUID_SIZE+1); if( g.perm.ModTkt && (zModAction = P("modaction"))!=0 ){ |
︙ | ︙ | |||
2058 2059 2060 2061 2062 2063 2064 | moderation_approve(rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) : 0; style_header("Ticket Change Details"); | | | | | | | < | 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 | moderation_approve(rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) : 0; style_header("Ticket Change Details"); style_submenu_element("Raw", "%R/artifact/%s", zUuid); style_submenu_element("History", "%R/tkthistory/%s", zTktName); style_submenu_element("Page", "%R/tktview/%t", zTktName); style_submenu_element("Timeline", "%R/tkttimeline/%t", zTktName); if( P("plaintext") ){ style_submenu_element("Formatted", "%R/info/%s", zUuid); }else{ style_submenu_element("Plaintext", "%R/info/%s?plaintext", zUuid); } @ <div class="section">Overview</div> @ <p><table class="label-value"> @ <tr><th>Artifact ID:</th> @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> if( g.perm.Setup ){ |
︙ | ︙ | |||
2262 2263 2264 2265 2266 2267 2268 | { "#d69b80", 0 }, { "#d1d680", 0 }, { "#91d680", 0 }, { "custom", "##" }, }; | | | 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 | { "#d69b80", 0 }, { "#d1d680", 0 }, { "#91d680", 0 }, { "custom", "##" }, }; int nColor = count(aColor)-1; int stdClrFound = 0; int i; if( zIdPropagate ){ @ <div><label> if( fPropagate ){ @ <input type="checkbox" name="%s(zIdPropagate)" checked="checked" /> |
︙ | ︙ |
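Several hunks in this check-in (the color-picker code above, and similar spots in src/main.c, src/markdown.c, src/merge3.c, src/moderate.c and src/name.c below) replace open-coded array-length arithmetic with a count() helper. The macro's definition is not part of these hunks; the sketch below assumes the conventional sizeof-based form.

    /* Assumed definition -- the real macro lives elsewhere in the Fossil tree. */
    #define count(X)  (sizeof(X)/sizeof(X[0]))

    static const char *const azDemo[] = { "red", "green", "custom" };
    /* count(azDemo)==3.  The color-picker hunk subtracts 1, apparently to keep
    ** the trailing "custom" entry out of the standard-color loop. */

As with any sizeof-based length macro, this only works on true arrays, never on pointers, which matches how the call sites in these hunks use it.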
Changes to src/json_branch.c.
︙ | ︙ | |||
290 291 292 293 294 295 296 | brid = content_put(&branch); if( brid==0 ){ fossil_fatal("Problem committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ | | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 | brid = content_put(&branch); if( brid==0 ){ fossil_fatal("Problem committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ fossil_fatal("%s", g.zErrMsg); } assert( blob_is_reset(&branch) ); content_deltify(rootid, brid, 0); if( zNewRid ){ *zNewRid = brid; } |
︙ | ︙ |
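The only change in this hunk routes the stored error message through an explicit "%s" format. Since fossil_fatal() takes printf-style arguments (as other hunks in this check-in show), the two-argument form prints the message verbatim even if it happens to contain '%' characters. A two-line illustration with a made-up message:

    const char *zMsg = "commit failed: disk 100% full";  /* hypothetical text */
    fossil_fatal("%s", zMsg);  /* safe: zMsg is treated as data, not a format */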
Changes to src/json_wiki.c.
︙ | ︙ | |||
113 114 115 116 117 118 119 120 | json_new_int((cson_int_t)(zBody?strlen(zBody):0))); }else{ if( contentFormat>0 ){/*HTML-ize it*/ Blob content = empty_blob; Blob raw = empty_blob; zFormat = "html"; if(zBody && *zBody){ blob_append(&raw,zBody,-1); | > > > > | > > > > > > > > > > > > > > | 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | json_new_int((cson_int_t)(zBody?strlen(zBody):0))); }else{ if( contentFormat>0 ){/*HTML-ize it*/ Blob content = empty_blob; Blob raw = empty_blob; zFormat = "html"; if(zBody && *zBody){ const char *zMimetype = pWiki->zMimetype; if( zMimetype==0 ) zMimetype = "text/plain"; zMimetype = wiki_filter_mimetypes(zMimetype); blob_append(&raw,zBody,-1); if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ wiki_convert(&raw,&content,0); }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ markdown_to_html(&raw,0,&content); }else if( fossil_strcmp(zMimetype, "text/plain")==0 ){ htmlize_to_blob(&content,blob_str(&raw),blob_size(&raw)); }else{ json_set_err( FSL_JSON_E_UNKNOWN, "Unsupported MIME type '%s' for wiki page '%s'.", zMimetype, pWiki->zWikiTitle ); blob_reset(&content); blob_reset(&raw); cson_free_object(pay); manifest_destroy(pWiki); return NULL; } len = (unsigned int)blob_size(&content); } cson_object_set(pay,"size",json_new_int((cson_int_t)len)); cson_object_set(pay,"content", cson_value_new_string(blob_buffer(&content),len)); blob_reset(&content); blob_reset(&raw); |
︙ | ︙ |
Changes to src/linenoise.c.
︙ | ︙ | |||
103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | * Sequence: ESC [ 2 J * Effect: clear the whole screen * */ #include <termios.h> #include <unistd.h> #include <stdlib.h> #include <stdio.h> #include <errno.h> #include <string.h> #include <stdlib.h> #include <ctype.h> #include <sys/types.h> #include <sys/ioctl.h> #include <unistd.h> #include "linenoise.h" #define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100 #define LINENOISE_MAX_LINE 4096 static const char *unsupported_term[] = {"dumb","cons25","emacs",NULL}; static linenoiseCompletionCallback *completionCallback = NULL; static struct termios orig_termios; /* In order to restore at exit.*/ | > > | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | * Sequence: ESC [ 2 J * Effect: clear the whole screen * */ #include <termios.h> #include <unistd.h> #include <stdarg.h> #include <stdlib.h> #include <stdio.h> #include <errno.h> #include <string.h> #include <stdlib.h> #include <ctype.h> #include <sys/types.h> #include <sys/ioctl.h> #include <unistd.h> #include "linenoise.h" #include "sqlite3.h" #define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100 #define LINENOISE_MAX_LINE 4096 static const char *unsupported_term[] = {"dumb","cons25","emacs",NULL}; static linenoiseCompletionCallback *completionCallback = NULL; static struct termios orig_termios; /* In order to restore at exit.*/ |
︙ | ︙ | |||
189 190 191 192 193 194 195 196 197 198 199 200 201 202 | } \ fprintf(lndebug_fp, ", " fmt, arg1); \ fflush(lndebug_fp); \ } while (0) #else #define lndebug(fmt, arg1) #endif /* ======================= Low level terminal handling ====================== */ /* Set if to use or not the multi line mode. */ void linenoiseSetMultiLine(int ml) { mlmode = ml; } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 | } \ fprintf(lndebug_fp, ", " fmt, arg1); \ fflush(lndebug_fp); \ } while (0) #else #define lndebug(fmt, arg1) #endif /* =========================== C89 compatibility ============================ */ /* snprintf() is not C89, but sqlite3_vsnprintf() can be adapted. */ static int linenoiseSnprintf(char *str, size_t size, const char *format, ...) { va_list ap; int result; va_start(ap,format); result = (int)strlen(sqlite3_vsnprintf((int)size,str,format,ap)); va_end(ap); return result; } #undef snprintf #define snprintf linenoiseSnprintf /* strdup() is technically not standard C89 despite being in POSIX. */ static char *linenoiseStrdup(const char *s) { size_t size = strlen(s)+1; char *result = malloc(size); if (result) memcpy(result,s,size); return result; } #undef strdup #define strdup linenoiseStrdup /* strcasecmp() is not standard C89. SQLite offers a direct replacement. */ #undef strcasecmp #define strcasecmp sqlite3_stricmp /* ======================= Low level terminal handling ====================== */ /* Set if to use or not the multi line mode. */ void linenoiseSetMultiLine(int ml) { mlmode = ml; } |
︙ | ︙ |
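The block added above gives linenoise.c C89-friendly stand-ins for snprintf(), strdup() and strcasecmp(), built on SQLite's sqlite3_vsnprintf() and sqlite3_stricmp(), and the #define lines reroute the existing call sites without touching them. A small sketch of the effect on a typical caller (the escape sequence is illustrative, not taken from these hunks):

    char seq[64];
    /* Compiles as a call to linenoiseSnprintf(), which formats through
    ** sqlite3_vsnprintf() and returns the length actually written. */
    snprintf(seq, sizeof(seq), "\x1b[%dC", 5);

One difference from C99 snprintf(): the shim reports the length of the (possibly truncated) result rather than the length that would have been written, which should be harmless for linenoise's small fixed-size buffers.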
Changes to src/main.c.
︙ | ︙ | |||
16 17 18 19 20 21 22 23 24 25 26 27 28 29 | ******************************************************************************* ** ** This module codes the main() procedure that runs first when the ** program is invoked. */ #include "VERSION.h" #include "config.h" #include "main.h" #include <string.h> #include <time.h> #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #include <stdlib.h> /* atexit() */ | > > > | < < | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | ******************************************************************************* ** ** This module codes the main() procedure that runs first when the ** program is invoked. */ #include "VERSION.h" #include "config.h" #if defined(_WIN32) # include <windows.h> #endif #include "main.h" #include <string.h> #include <time.h> #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #include <stdlib.h> /* atexit() */ #if !defined(_WIN32) # include <errno.h> /* errno global */ #endif #ifdef FOSSIL_ENABLE_SSL # include "openssl/crypto.h" #endif #if defined(FOSSIL_ENABLE_MINIZ) # define MINIZ_HEADER_FILE_ONLY |
︙ | ︙ | |||
297 298 299 300 301 302 303 304 305 306 307 308 309 310 | Global g; /* ** atexit() handler which frees up "some" of the resources ** used by fossil. */ static void fossil_atexit(void) { #if defined(_WIN32) && !defined(_WIN64) && defined(FOSSIL_ENABLE_TCL) && \ defined(USE_TCL_STUBS) /* ** If Tcl is compiled on Windows using the latest MinGW, Fossil can crash ** when exiting while a stubs-enabled Tcl is still loaded. This is due to ** a bug in MinGW, see: ** | > > > > > > > > > > > > | 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 | Global g; /* ** atexit() handler which frees up "some" of the resources ** used by fossil. */ static void fossil_atexit(void) { #if USE_SEE /* ** Zero, unlock, and free the saved database encryption key now. */ db_unsave_encryption_key(); #endif #if defined(_WIN32) || defined(__BIONIC__) /* ** Free the secure getpass() buffer now. */ freepass(); #endif #if defined(_WIN32) && !defined(_WIN64) && defined(FOSSIL_ENABLE_TCL) && \ defined(USE_TCL_STUBS) /* ** If Tcl is compiled on Windows using the latest MinGW, Fossil can crash ** when exiting while a stubs-enabled Tcl is still loaded. This is due to ** a bug in MinGW, see: ** |
︙ | ︙ | |||
1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 | }else{ db_open_repository(zRepo); } } } } /* ** undocumented format: ** ** fossil http INFILE OUTFILE IPADDR ?REPOSITORY? ** ** The argv==6 form (with no options) is used by the win32 server only. ** | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 | }else{ db_open_repository(zRepo); } } } } #if defined(_WIN32) && USE_SEE /* ** This function attempts to parse a string value in the following ** format: ** ** "%lu:%p:%u" ** ** There are three parts, which must be delimited by colons. The ** first part is an unsigned long integer in base-10 (decimal) format. ** The second part is a numerical representation of a native pointer, ** in the appropriate implementation defined format. The third part ** is an unsigned integer in base-10 (decimal) format. ** ** If the specified value cannot be parsed, for any reason, a fatal ** error will be raised and the process will be terminated. */ void parse_pid_key_value( const char *zPidKey, /* The value to be parsed. */ DWORD *pProcessId, /* The extracted process identifier. */ LPVOID *ppAddress, /* The extracted pointer value. */ SIZE_T *pnSize /* The extracted size value. */ ){ unsigned int nSize = 0; if( sscanf(zPidKey, "%lu:%p:%u", pProcessId, ppAddress, &nSize)==3 ){ *pnSize = (SIZE_T)nSize; }else{ fossil_fatal("failed to parse pid key"); } } #endif /* ** undocumented format: ** ** fossil http INFILE OUTFILE IPADDR ?REPOSITORY? ** ** The argv==6 form (with no options) is used by the win32 server only. ** |
︙ | ︙ | |||
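The header comment above pins down the "%lu:%p:%u" format used to hand the saved encryption key's location from a parent Fossil process to a child. The producing side is not shown in these hunks; a hypothetical sketch of how such a string could be assembled so that parse_pid_key_value() reads it back:

    #include <stdio.h>

    /* Hypothetical producer, for illustration only.  The parameters mirror
    ** the three values that parse_pid_key_value() extracts. */
    static void format_pid_key(char *zBuf, size_t nBuf,
                               unsigned long processId,  /* parent process id */
                               void *pAddress,           /* key address       */
                               unsigned int nSize){      /* key size in bytes */
      snprintf(zBuf, nBuf, "%lu:%p:%u", processId, pAddress, nSize);
    }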
1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 | ** --nojail drop root privilege but do not enter the chroot jail ** --nossl signal that no SSL connections are available ** --notfound URL use URL as "HTTP 404, object not found" page. ** --repolist If REPOSITORY is directory, URL "/" lists all repos ** --scgi Interpret input as SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --th-trace trace TH1 execution (for debugging purposes) ** ** See also: cgi, server, winsrv */ void cmd_http(void){ const char *zIpAddr = 0; const char *zNotFound; const char *zHost; const char *zAltBase; const char *zFileGlob; int useSCGI; int noJail; int allowRepoList; Th_InitTraceLog(); /* The winhttp module passes the --files option as --files-urlenc with ** the argument being URL encoded, to avoid wildcard expansion in the ** shell. This option is for internal use and is undocumented. */ | > > > > > | 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 | ** --nojail drop root privilege but do not enter the chroot jail ** --nossl signal that no SSL connections are available ** --notfound URL use URL as "HTTP 404, object not found" page. ** --repolist If REPOSITORY is directory, URL "/" lists all repos ** --scgi Interpret input as SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --th-trace trace TH1 execution (for debugging purposes) ** --usepidkey Use saved encryption key from parent process. This is ** only necessary when using SEE on Windows. ** ** See also: cgi, server, winsrv */ void cmd_http(void){ const char *zIpAddr = 0; const char *zNotFound; const char *zHost; const char *zAltBase; const char *zFileGlob; int useSCGI; int noJail; int allowRepoList; #if defined(_WIN32) && USE_SEE const char *zPidKey; #endif Th_InitTraceLog(); /* The winhttp module passes the --files option as --files-urlenc with ** the argument being URL encoded, to avoid wildcard expansion in the ** shell. This option is for internal use and is undocumented. */ |
︙ | ︙ | |||
2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 | if( zAltBase ) set_base_url(zAltBase); if( find_option("https",0,0)!=0 ){ zIpAddr = fossil_getenv("REMOTE_HOST"); /* From stunnel */ cgi_replace_parameter("HTTPS","on"); } zHost = find_option("host", 0, 1); if( zHost ) cgi_replace_parameter("HTTP_HOST",zHost); /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 && g.argc!=5 && g.argc!=6 ){ fossil_fatal("no repository specified"); } | > > > > > > > > > > > | 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 | if( zAltBase ) set_base_url(zAltBase); if( find_option("https",0,0)!=0 ){ zIpAddr = fossil_getenv("REMOTE_HOST"); /* From stunnel */ cgi_replace_parameter("HTTPS","on"); } zHost = find_option("host", 0, 1); if( zHost ) cgi_replace_parameter("HTTP_HOST",zHost); #if defined(_WIN32) && USE_SEE zPidKey = find_option("usepidkey", 0, 1); if( zPidKey ){ DWORD processId = 0; LPVOID pAddress = NULL; SIZE_T nSize = 0; parse_pid_key_value(zPidKey, &processId, &pAddress, &nSize); db_read_saved_encryption_key_from_process(processId, pAddress, nSize); } #endif /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 && g.argc!=5 && g.argc!=6 ){ fossil_fatal("no repository specified"); } |
︙ | ︙ | |||
2169 2170 2171 2172 2173 2174 2175 | ** --nossl signal that no SSL connections are available ** --notfound URL Redirect ** -P|--port TCPPORT listen to request on port TCPPORT ** --th-trace trace TH1 execution (for debugging purposes) ** --repolist If REPOSITORY is dir, URL "/" lists repos. ** --scgi Accept SCGI rather than HTTP ** --skin LABEL Use override skin LABEL | | > > > > | 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 | ** --nossl signal that no SSL connections are available ** --notfound URL Redirect ** -P|--port TCPPORT listen to request on port TCPPORT ** --th-trace trace TH1 execution (for debugging purposes) ** --repolist If REPOSITORY is dir, URL "/" lists repos. ** --scgi Accept SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --usepidkey Use saved encryption key from parent process. This is ** only necessary when using SEE on Windows. ** ** See also: cgi, http, winsrv */ void cmd_webserver(void){ int iPort, mxPort; /* Range of TCP ports allowed */ const char *zPort; /* Value of the --port option */ const char *zBrowser; /* Name of web browser program */ char *zBrowserCmd = 0; /* Command to launch the web browser */ int isUiCmd; /* True if command is "ui", not "server' */ const char *zNotFound; /* The --notfound option or NULL */ int flags = 0; /* Server flags */ #if !defined(_WIN32) int noJail; /* Do not enter the chroot jail */ #endif int allowRepoList; /* List repositories on URL "/" */ const char *zAltBase; /* Argument to the --baseurl option */ const char *zFileGlob; /* Static content must match this */ char *zIpAddr = 0; /* Bind to this IP address */ int fCreate = 0; /* The --create flag */ const char *zInitPage = 0; /* Start on this page. --page option */ #if defined(_WIN32) && USE_SEE const char *zPidKey; #endif #if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ zStopperFile = find_option("stopper", 0, 1); #endif zFileGlob = find_option("files-urlenc",0,1); |
︙ | ︙ | |||
2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 | }else{ /* without --https, defaults to not available. */ g.sslNotAvailable = 1; } if( find_option("localhost", 0, 0)!=0 ){ flags |= HTTP_SERVER_LOCALHOST; } /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); if( isUiCmd ){ flags |= HTTP_SERVER_LOCALHOST|HTTP_SERVER_REPOLIST; | > > > > > > > > > > > | 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 | }else{ /* without --https, defaults to not available. */ g.sslNotAvailable = 1; } if( find_option("localhost", 0, 0)!=0 ){ flags |= HTTP_SERVER_LOCALHOST; } #if defined(_WIN32) && USE_SEE zPidKey = find_option("usepidkey", 0, 1); if( zPidKey ){ DWORD processId = 0; LPVOID pAddress = NULL; SIZE_T nSize = 0; parse_pid_key_value(zPidKey, &processId, &pAddress, &nSize); db_read_saved_encryption_key_from_process(processId, pAddress, nSize); } #endif /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); if( isUiCmd ){ flags |= HTTP_SERVER_LOCALHOST|HTTP_SERVER_REPOLIST; |
︙ | ︙ | |||
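With both the http and server/ui entry points now accepting --usepidkey, a SEE-enabled parent on Windows can apparently start a child that reads the saved key out of the parent's address space, given the parent's process id, the key's address and its size. The values below are invented purely to show the shape of the argument:

    fossil server REPOSITORY --port 8080 --usepidkey 1234:000001C8F2A40000:32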
2273 2274 2275 2276 2277 2278 2279 | #if !defined(__DARWIN__) && !defined(__APPLE__) && !defined(__HAIKU__) zBrowser = db_get("web-browser", 0); if( zBrowser==0 ){ static const char *const azBrowserProg[] = { "xdg-open", "gnome-open", "firefox", "google-chrome" }; int i; zBrowser = "echo"; | | | 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 | #if !defined(__DARWIN__) && !defined(__APPLE__) && !defined(__HAIKU__) zBrowser = db_get("web-browser", 0); if( zBrowser==0 ){ static const char *const azBrowserProg[] = { "xdg-open", "gnome-open", "firefox", "google-chrome" }; int i; zBrowser = "echo"; for(i=0; i<count(azBrowserProg); i++){ if( binaryOnPath(azBrowserProg[i]) ){ zBrowser = azBrowserProg[i]; break; } } } #else |
︙ | ︙ |
Changes to src/main.mk.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 | # ############################################################################## # WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") ############################################################################## # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | # ############################################################################## # WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") ############################################################################## # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XBCC = $(BCC) $(BCCFLAGS) $(CFLAGS) XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ |
︙ | ︙ | |||
449 450 451 452 453 454 455 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c | | | | | | | | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(XBCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(XBCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(XBCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c $(OBJDIR)/mkbuiltin: $(SRCDIR)/mkbuiltin.c $(XBCC) -o $(OBJDIR)/mkbuiltin $(SRCDIR)/mkbuiltin.c $(OBJDIR)/mkversion: $(SRCDIR)/mkversion.c $(XBCC) -o $(OBJDIR)/mkversion $(SRCDIR)/mkversion.c $(OBJDIR)/codecheck1: $(SRCDIR)/codecheck1.c $(XBCC) -o $(OBJDIR)/codecheck1 $(SRCDIR)/codecheck1.c # Run the test suite. # Other flags that can be included in TESTFLAGS are: # # -halt Stop testing after the first failed test # -keep Keep the temporary workspace for debugging # -prot Write a detailed log of the tests to the file ./prot |
︙ | ︙ |
Changes to src/makeheaders.html.
1 2 3 4 5 6 7 8 | <html> <head><title>The Makeheaders Program</title></head> <body bgcolor=white> <h1 align=center>The Makeheaders Program</h1> <p> This document describes <em>makeheaders</em>, | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 | <html> <head><title>The Makeheaders Program</title></head> <body bgcolor=white> <h1 align=center>The Makeheaders Program</h1> <p> This document describes <em>makeheaders</em>, a tool that automatically generates “<code>.h</code>” files for a C or C++ programming project. </p> <h2>Table Of Contents</h2> <ul> |
︙ | ︙ | |||
65 66 67 68 69 70 71 | <p> Declarations in C include things such as the following: <ul> <li> Typedefs. <li> Structure, union and enumeration declarations. <li> Function and procedure prototypes. <li> Preprocessor macros and #defines. | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | <p> Declarations in C include things such as the following: <ul> <li> Typedefs. <li> Structure, union and enumeration declarations. <li> Function and procedure prototypes. <li> Preprocessor macros and #defines. <li> “<code>extern</code>” variable declarations. </ul> </p> <p> Definitions in C, on the other hand, include these kinds of things: <ul> <li> Variable definitions. |
︙ | ︙ | |||
87 88 89 90 91 92 93 | modern software engineering. Another way of looking at the difference is that the declaration is the <em>interface</em> and the definition is the <em>implementation</em>. </p> <p> In C programs, it has always been the tradition that declarations are | | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 | modern software engineering. Another way of looking at the difference is that the declaration is the <em>interface</em> and the definition is the <em>implementation</em>. </p> <p> In C programs, it has always been the tradition that declarations are put in files with the “<code>.h</code>” suffix and definitions are placed in “<code>.c</code>” files. The .c files contain “<code>#include</code>” preprocessor statements that cause the contents of .h files to be included as part of the source code when the .c file is compiled. In this way, the .h files define the interface to a subsystem and the .c files define how the subsystem is implemented. </p> <a name="H0003"></a> |
︙ | ︙ | |||
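A minimal illustration of the declaration/definition split described above, using made-up names; the first group is what would traditionally live in a .h file, the second in a .c file:

    /* Declarations -- the interface. */
    extern int nUser;                  /* "extern" variable declaration */
    int user_count(void);              /* function prototype            */
    typedef struct User User;         /* type declaration              */

    /* Definitions -- the implementation. */
    int nUser = 0;
    int user_count(void){ return nUser; }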
260 261 262 263 264 265 266 | into the file of your choice. </p> <p> A similar option is -H. Like the lower-case -h option, big -H generates a single include file on standard output. But unlike small -h, the big -H only emits prototypes and declarations that | | | 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 | into the file of your choice. </p> <p> A similar option is -H. Like the lower-case -h option, big -H generates a single include file on standard output. But unlike small -h, the big -H only emits prototypes and declarations that have been designated as “exportable”. The idea is that -H will generate an include file that defines the interface to a library. More will be said about this in section 3.4. </p> <p> Sometimes you want the base name of the .c file and the .h file to |
︙ | ︙ | |||
291 292 293 294 295 296 297 | If you want a particular file to be scanned by makeheaders but you don't want makeheaders to generate a header file for that file, then you can supply an empty header filename, like this: <pre> makeheaders alpha.c beta.c gamma.c: </pre> In this example, makeheaders will scan the three files named | | | | | | | | | | > | > | | | | | | | > | | | | | | | | | 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 | If you want a particular file to be scanned by makeheaders but you don't want makeheaders to generate a header file for that file, then you can supply an empty header filename, like this: <pre> makeheaders alpha.c beta.c gamma.c: </pre> In this example, makeheaders will scan the three files named “<code>alpha.c</code>”, “<code>beta.c</code>” and “<code>gamma.c</code>” but because of the colon on the end of third filename it will only generate headers for the first two files. Unfortunately, it is not possible to get makeheaders to process any file whose name contains a colon. </p> <p> In a large project, the length of the command line for makeheaders can become very long. If the operating system doesn't support long command lines (example: DOS and Win32) you may not be able to list all of the input files in the space available. In that case, you can use the “<code>-f</code>” option followed by the name of a file to cause makeheaders to read command line options and filename from the file instead of from the command line. For example, you might prepare a file named “<code>mkhdr.dat</code>” that contains text like this: <pre> src/alpha.c:hdr/alpha.h src/beta.c:hdr/beta.h src/gamma.c:hdr/gamma.h ... </pre> Then invoke makeheaders as follows: <pre> makeheaders -f mkhdr.dat </pre> </p> <p> The “<code>-local</code>” option causes makeheaders to generate of prototypes for “<code>static</code>” functions and procedures. Such prototypes are normally omitted. </p> <p> Finally, makeheaders also includes a “<code>-doc</code>” option. This command line option prevents makeheaders from generating any headers at all. Instead, makeheaders will write to standard output information about every definition and declaration that it encounters in its scan of source files. The information output includes the type of the definition or declaration and any comment that preceeds the definition or declaration. The output is in a format that can be easily parsed, and is intended to be read by another program that will generate documentation about the program. We'll talk more about this feature later. </p> <p> If you forget what command line options are available, or forget their exact name, you can invoke makeheaders using an unknown command line option (like “<code>--help</code>” or “<code>-?</code>”) and it will print a summary of the available options on standard error. 
If you need to process a file whose name begins with “<code>-</code>”, you can prepend a “<code>./</code>” to its name in order to get it accepted by the command line parser. Or, you can insert the special option “<code>--</code>” on the command line to cause all subsequent command line arguments to be treated as filenames even if their names begin with “<code>-</code>”. </p> <a name="H0006"></a> <h2>3.0 Preparing Source Files For Use With Makeheaders</h2> <p> Very little has to be done to prepare source files for use with makeheaders since makeheaders will read and understand ordinary C code. But it is important that you structure your files in a way that makes sense in the makeheaders context. This section will describe several typical uses of makeheaders. </p> <a name="H0007"></a> <h3>3.1 The Basic Setup</h3> <p> The simplest way to use makeheaders is to put all definitions in one or more .c files and all structure and type declarations in separate .h files. The only restriction is that you should take care to chose basenames for your .h files that are different from the basenames for your .c files. Recall that if your .c file is named (for example) “<code>alpha.c</code>” makeheaders will attempt to generate a corresponding header file named “<code>alpha.h</code>”. For that reason, you don't want to use that name for any of the .h files you write since that will prevent makeheaders from generating the .h file automatically. </p> <p> The structure of a .c file intented for use with makeheaders is very simple. All you have to do is add a single “<code>#include</code>” to the top of the file that sources the header file that makeheaders will generate. Hence, the beginning of a source file named “<code>alpha.c</code>” might look something like this: </p> <pre> /* * Introductory comment... */ #include "alpha.h" /* The rest of your code... */ </pre> <p> Your manually generated header files require no special attention at all. Code them as you normally would. However, makeheaders will work better if you omit the “<code>#if</code>” statements people often put around the outside of header files that prevent the files from being included more than once. For example, to create a header file named “<code>beta.h</code>”, many people will habitually write the following: <pre> #ifndef BETA_H #define BETA_H /* declarations for beta.h go here */ #endif </pre> You can forego this cleverness with makeheaders. Remember that the header files you write will never really be included by any C code. Instead, makeheaders will scan your header files to extract only those declarations that are needed by individual .c files and then copy those declarations to the .h files corresponding to the .c files. Hence, the “<code>#if</code>” wrapper serves no useful purpose. But it does make makeheaders work harder, forcing it to put the statements <pre> #if !defined(BETA_H) #endif </pre> |
︙ | ︙ | |||
463 464 465 466 467 468 469 | manually written .h files and then automatically generate .h files corresponding to all .c files. </p> <p> Note that the wildcard expression used in the above example, | | | 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 | manually written .h files and then automatically generate .h files corresponding to all .c files. </p> <p> Note that the wildcard expression used in the above example, “<code>*.[ch]</code>”, will expand to include all .h files in the current directory, both those entered manually be the programmer and others generated automatically by a prior run of makeheaders. But that is not a problem. The makeheaders program will recognize and ignore any files it has previously generated that show up on its input list. </p> |
︙ | ︙ | |||
487 488 489 490 491 492 493 | <ul> <p><li> When a function is defined in any .c file, a prototype of that function is placed in the generated .h file of every .c file that calls the function.</p> | | | | | | | | | | | > | > | 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 | <ul> <p><li> When a function is defined in any .c file, a prototype of that function is placed in the generated .h file of every .c file that calls the function.</p> <P>If the “<code>static</code>” keyword of C appears at the beginning of the function definition, the prototype is suppressed. If you use the “<code>LOCAL</code>” keyword where you would normally say “<code>static</code>”, then a prototype is generated, but it will only appear in the single header file that corresponds to the source file containing the function. For example, if the file <code>alpha.c</code> contains the following: <pre> LOCAL int testFunc(void){ return 0; } </pre> Then the header file <code>alpha.h</code> will contain <pre> #define LOCAL static LOCAL int testFunc(void); </pre> However, no other generated header files will contain a prototype for <code>testFunc()</code> since the function has only file scope.</p> <p>When the “<code>LOCAL</code>” keyword is used, makeheaders will also generate a #define for LOCAL, like this: <pre> #define LOCAL static </pre> so that the C compiler will know what it means.</p> <p>If you invoke makeheaders with a “<code>-local</code>” command-line option, then it treats the “<code>static</code>” keyword like “<code>LOCAL</code>” and generates prototypes in the header file that corresponds to the source file containing the function definition.</p> <p><li> When a global variable is defined in a .c file, an “<code>extern</code>” declaration of that variable is placed in the header of every .c file that uses the variable. </p> <p><li> When a structure, union or enumeration declaration or a function prototype or a C++ class declaration appears in a |
︙ | ︙ | |||
548 549 550 551 552 553 554 | those files and are not copied. </p> <p><li> When a structure, union or enumeration declaration appears in a .h file, makeheaders will automatically generate a typedef that allows the declaration to be referenced without | | | | | | | 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 | those files and are not copied. </p> <p><li> When a structure, union or enumeration declaration appears in a .h file, makeheaders will automatically generate a typedef that allows the declaration to be referenced without the “<code>struct</code>”, “<code>union</code>” or “<code>enum</code>” qualifier. In other words, if makeheaders sees the code: <pre> struct Examp { /* ... */ }; </pre> it will automatically generate a corresponding typedef like this: <pre> typedef struct Examp Examp; </pre> </p> <p><li> Makeheaders generates an error message if it encounters a function or variable definition within a .h file. The .h files are suppose to contain only interface, not implementation. C compilers will not enforce this convention, but makeheaders does. </ul> <p> As a final note, we observe that automatically generated declarations are ordered as required by the ANSI-C programming language. If the declaration of some structure “<code>X</code>” requires a prior declaration of another structure “<code>Y</code>”, then Y will appear first in the generated headers. </p> <a name="H0009"></a> <h3>3.3 How To Avoid Having To Write Any Header Files</h3> <p> In my experience, large projects work better if all of the manually |
︙ | ︙ | |||
609 610 611 612 613 614 615 | You can instruct makeheaders to treat any part of a .c file as if it were a .h file by enclosing that part of the .c file within: <pre> #if INTERFACE #endif </pre> Thus any structure definitions that appear after the | | | | | | | 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 | You can instruct makeheaders to treat any part of a .c file as if it were a .h file by enclosing that part of the .c file within: <pre> #if INTERFACE #endif </pre> Thus any structure definitions that appear after the “<code>#if INTERFACE</code>” but before the corresponding “<code>#endif</code>” are eligable to be copied into the automatically generated .h files of other .c files. </p> <p> If you use the “<code>#if INTERFACE</code>” mechanism in a .c file, then the generated header for that .c file will contain a line like this: <pre> #define INTERFACE 0 </pre> In other words, the C compiler will never see any of the text that defines the interface. But makeheaders will copy all necessary definitions and declarations into the .h file it generates, so .c files will compile as if the declarations were really there. This approach has the advantage that you don't have to worry with putting the declarations in the correct ANSI-C order -- makeheaders will do that for you automatically. </p> <p> Note that you don't have to use this approach exclusively. You can put some declarations in .h files and others within the “<code>#if INTERFACE</code>” regions of .c files. Makeheaders treats all declarations alike, no matter where they come from. You should also note that a single .c file can contain as many “<code>#if INTERFACE</code>” regions as desired. </p> <a name="H0010"></a> <h3>3.4 Designating Declarations For Export</h3> <p> In a large project, one will often construct a hierarchy of |
︙ | ︙ | |||
662 663 664 665 666 667 668 | (The second interface is normally a subset of the first.) Ordinary C does not provide support for a tiered interface like this, but makeheaders does. </p> <p> Using makeheaders, it is possible to designate routines and data | | | 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 | (The second interface is normally a subset of the first.) Ordinary C does not provide support for a tiered interface like this, but makeheaders does. </p> <p> Using makeheaders, it is possible to designate routines and data structures as being for “<code>export</code>”. Exported objects are visible not only to other files within the same library or subassembly but also to other libraries and subassemblies in the larger program. By default, makeheaders only makes objects visible to other members of the same library. </p> |
︙ | ︙ | |||
688 689 690 691 692 693 694 | This is not a perfect solution, but it works well in practice. </p> <p> But trouble quickly arises when we attempt to devise a mechanism for telling makeheaders which prototypes it should export and which it should keep local. | | | | | 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 | This is not a perfect solution, but it works well in practice. </p> <p> But trouble quickly arises when we attempt to devise a mechanism for telling makeheaders which prototypes it should export and which it should keep local. The built-in “<code>static</code>” keyword of C works well for prohibiting prototypes from leaving a single source file, but because C doesn't support a linkage hierarchy, there is nothing in the C language to help us. We'll have to invite our own keyword: “<code>EXPORT</code>” </p> <p> Makeheaders allows the EXPORT keyword to precede any function or procedure definition. The routine following the EXPORT keyword is then eligable to appear in the header file generated using the -H command line option. |
︙ | ︙ | |||
724 725 726 727 728 729 730 | are visible to all files within the library, any declarations or definitions within <pre> #if EXPORT_INTERFACE #endif </pre> will become part of the exported interface. | | | | | | | | 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 | are visible to all files within the library, any declarations or definitions within <pre> #if EXPORT_INTERFACE #endif </pre> will become part of the exported interface. The “<code>#if EXPORT_INTERFACE</code>” mechanism can be used in either .c or .h files. (The “<code>#if INTERFACE</code>” can also be used in both .h and .c files, but since it's use in a .h file would be redundant, we haven't mentioned it before.) </p> <a name="H0011"></a> <h3>3.5 Local declarations processed by makeheaders</h3> <p> Structure declarations and typedefs that appear in .c files are normally ignored by makeheaders. Such declarations are only intended for use by the source file in which they appear and so makeheaders doesn't need to copy them into any generated header files. We call such declarations “<code>private</code>”. </p> <p> Sometimes it is convenient to have makeheaders sort a sequence of private declarations into the correct order for us automatically. Or, we could have static functions and procedures for which we would like makeheaders to generate prototypes, but the arguments to these |
︙ | ︙ | |||
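A short, hypothetical .c-file fragment combining the two mechanisms described above: an EXPORT_INTERFACE block for a type plus the EXPORT keyword on a function definition. It assumes the build supplies a suitable #define for EXPORT, by analogy with the LOCAL define shown earlier; the names are invented.

    #if EXPORT_INTERFACE
    struct ExportedThing {
      int value;
    };
    #endif

    EXPORT int exported_value(struct ExportedThing *p){
      return p->value;
    }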
762 763 764 765 766 767 768 | <p> When this situation arises, enclose the private declarations within <pre> #if LOCAL_INTERFACE #endif </pre> | | > | | > | 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 | <p> When this situation arises, enclose the private declarations within <pre> #if LOCAL_INTERFACE #endif </pre> A “<code>LOCAL_INTERFACE</code>” block works very much like the “<code>INTERFACE</code>” and “<code>EXPORT_INTERFACE</code>” blocks described above, except that makeheaders insures that the objects declared in a LOCAL_INTERFACE are only visible to the file containing the LOCAL_INTERFACE. </p> <a name="H0012"></a> <h3>3.6 Using Makeheaders With C++ Code</h3> <p> You can use makeheaders to generate header files for C++ code, in addition to C. Makeheaders will recognize and copy both “<code>class</code>” declarations and inline function definitions, and it knows not to try to generate prototypes for methods. </p> <p> In fact, makeheaders is smart enough to be used in projects that employ a mixture of C and C++. |
︙ | ︙ | |||
803 804 805 806 807 808 809 | <p> No special command-line options are required to use makeheaders with C++ input. Makeheaders will recognize that its source code is C++ by the suffix on the source code filename. Simple ".c" or ".h" suffixes are assumed to be ANSI-C. Anything else, including ".cc", ".C" and ".cpp" is assumed to be C++. The name of the header file generated by makeheaders is derived from | | | | | | 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 | <p> No special command-line options are required to use makeheaders with C++ input. Makeheaders will recognize that its source code is C++ by the suffix on the source code filename. Simple ".c" or ".h" suffixes are assumed to be ANSI-C. Anything else, including ".cc", ".C" and ".cpp" is assumed to be C++. The name of the header file generated by makeheaders is derived from the name of the source file by converting every "c" to "h" and every "C" to "H" in the suffix of the filename. Thus the C++ source file “<code>alpha.cpp</code>” will induce makeheaders to generate a header file named “<code>alpha.hpp</code>”. </p> <p> Makeheaders augments class definitions by inserting prototypes to methods where appropriate. If a method definition begins with one of the special keywords <b>PUBLIC</b>, <b>PROTECTED</b>, or <b>PRIVATE</b> (in upper-case to distinguish them from the regular C++ keywords with the same meaning) then a prototype for that method will be inserted into the class definition. If none of these keywords appear, then the prototype is not inserted. For example, in the following code, the constructor is not explicitly declared in the class definition but makeheaders will add it there |
︙ | ︙ | |||
865 866 867 868 869 870 871 | </p> <h4>3.6.1 C++ Limitations</h4> <p> Makeheaders does not understand more recent C++ syntax such as templates and namespaces. | | | | | 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 | </p> <h4>3.6.1 C++ Limitations</h4> <p> Makeheaders does not understand more recent C++ syntax such as templates and namespaces. Perhaps these issues will be addressed in future revisions. </p> <a name="H0013"></a> <h3>3.7 Conditional Compilation</h3> <p> The makeheaders program understands and tracks the conditional compilation constructs in the source code files it scans. Hence, if the following code appears in a source file <pre> #ifdef UNIX # define WORKS_WELL 1 #else # define WORKS_WELL 0 #endif </pre> then the next patch of code will appear in the generated header for every .c file that uses the WORKS_WELL constant: <pre> #if defined(UNIX) # define WORKS_WELL 1 |
︙ | ︙ | |||
916 917 918 919 920 921 922 | </p> <p> Makeheaders does not understand the old K&R style of function and procedure definitions. It only understands the modern ANSI-C style, and will probably become very confused if it encounters an old K&R function. | | | | 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 | </p> <p> Makeheaders does not understand the old K&R style of function and procedure definitions. It only understands the modern ANSI-C style, and will probably become very confused if it encounters an old K&R function. Therefore you should take care to avoid putting K&R function definitions in your code. </p> <p> Makeheaders does not understand when you define more than one global variable with the same type separated by a comma. In other words, makeheaders does not understand this: <pre> |
︙ | ︙ | |||
991 992 993 994 995 996 997 | unrelated file in a different part of the source tree. <li> Information is kept in only one place. When a change occurs in the code, it is not necessary to make a corresponding change in a separate document. Just rerun the documentation generator. </ul> The makeheaders program does not generate program documentation itself. But you can use makeheaders to parse the program source code, extract | | | | | | | | | | > | | 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 | unrelated file in a different part of the source tree. <li> Information is kept in only one place. When a change occurs in the code, it is not necessary to make a corresponding change in a separate document. Just rerun the documentation generator. </ul> The makeheaders program does not generate program documentation itself. But you can use makeheaders to parse the program source code, extract the information that is relevant to the documentation and to pass this information to another tool to do the actual documentation preparation. </p> <p> When makeheaders is run with the “<code>-doc</code>” option, it emits no header files at all. Instead, it does a complete dump of its internal tables to standard output in a form that is easily parsed. This output can then be used by another program (the implementation of which is left as an exercise to the reader) that will use the information to prepare suitable documentation. </p> <p> The “<code>-doc</code>” option causes makeheaders to print information to standard output about all of the following objects: <ul> <li> C++ class declarations <li> Structure and union declarations <li> Enumerations <li> Typedefs <li> Procedure and function definitions <li> Global variables <li> Preprocessor macros (ex: “<code>#define</code>”) </ul> For each of these objects, the following information is output: <ul> <li> The name of the object. <li> The type of the object. (Structure, typedef, macro, etc.) <li> Flags to indicate if the declaration is exported (contained within an EXPORT_INTERFACE block) or local (contained with LOCAL_INTERFACE). <li> A flag to indicate if the object is declared in a C++ file. <li> The name of the file in which the object was declared. <li> The complete text of any block comment that preceeds the declarations. <li> If the declaration occurred inside a preprocessor conditional (“<code>#if</code>”) then the text of that conditional is provided. <li> The complete text of a declaration for the object. </ul> The exact output format will not be described here. It is simple to understand and parse and should be obvious to anyone who inspects some sample output. </p> <a name="H0016"></a> <h2>5.0 Compiling The Makeheaders Program</h2> <p> The source code for makeheaders is a single file of ANSI-C code, approximately 3000 lines in length. The program makes only modest demands of the system and C library and should compile without alteration on most ANSI C compilers and on most operating systems. It is known to compile using several variations of GCC for Unix as well as Cygwin32 and MSVC 5.0 for Win32. </p> |
︙ | ︙ |
Changes to src/makemake.tcl.
︙ | ︙ | |||
253 254 255 256 257 258 259 260 261 262 263 264 265 266 | # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } | > | 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 | # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XBCC = $(BCC) $(BCCFLAGS) $(CFLAGS) XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } |
︙ | ︙ | |||
296 297 298 299 300 301 302 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c | | | | | | | | 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(XBCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(XBCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(XBCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c $(OBJDIR)/mkbuiltin: $(SRCDIR)/mkbuiltin.c $(XBCC) -o $(OBJDIR)/mkbuiltin $(SRCDIR)/mkbuiltin.c $(OBJDIR)/mkversion: $(SRCDIR)/mkversion.c $(XBCC) -o $(OBJDIR)/mkversion $(SRCDIR)/mkversion.c $(OBJDIR)/codecheck1: $(SRCDIR)/codecheck1.c $(XBCC) -o $(OBJDIR)/codecheck1 $(SRCDIR)/codecheck1.c # Run the test suite. # Other flags that can be included in TESTFLAGS are: # # -halt Stop testing after the first failed test # -keep Keep the temporary workspace for debugging # -prot Write a detailed log of the tests to the file ./prot |
︙ | ︙ | |||
915 916 917 918 919 920 921 922 923 924 925 926 927 928 | #### Include a configuration file that can override any one of these settings. # -include config.w32 # STOP HERE # You should not need to change anything below this line #-------------------------------------------------------- XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } writeln "\n" | > | 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 | #### Include a configuration file that can override any one of these settings. # -include config.w32 # STOP HERE # You should not need to change anything below this line #-------------------------------------------------------- XBCC = $(BCC) $(CFLAGS) XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } writeln "\n" |
︙ | ︙ | |||
1011 1012 1013 1014 1015 1016 1017 | ifdef USE_WINDOWS $(MKDIR) $(subst /,\,$(OBJDIR)) else $(MKDIR) $(OBJDIR) endif $(TRANSLATE): $(SRCDIR)/translate.c | | | | | | | | 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 | ifdef USE_WINDOWS $(MKDIR) $(subst /,\,$(OBJDIR)) else $(MKDIR) $(OBJDIR) endif $(TRANSLATE): $(SRCDIR)/translate.c $(XBCC) -o $@ $(SRCDIR)/translate.c $(MAKEHEADERS): $(SRCDIR)/makeheaders.c $(XBCC) -o $@ $(SRCDIR)/makeheaders.c $(MKINDEX): $(SRCDIR)/mkindex.c $(XBCC) -o $@ $(SRCDIR)/mkindex.c $(MKBUILTIN): $(SRCDIR)/mkbuiltin.c $(XBCC) -o $@ $(SRCDIR)/mkbuiltin.c $(MKVERSION): $(SRCDIR)/mkversion.c $(XBCC) -o $@ $(SRCDIR)/mkversion.c $(CODECHECK1): $(SRCDIR)/codecheck1.c $(XBCC) -o $@ $(SRCDIR)/codecheck1.c # WARNING. DANGER. Running the test suite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. test: $(OBJDIR) $(APPNAME) $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) |
︙ | ︙ | |||
1303 1304 1305 1306 1307 1308 1309 | writeln "\t+echo fossil >> \$@" writeln "\t+echo \$(LIBS) >> \$@" writeln "\t+echo. >> \$@" writeln "\t+echo fossil >> \$@" writeln { translate$E: $(SRCDIR)\translate.c | | | | | | | | 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 | writeln "\t+echo fossil >> \$@" writeln "\t+echo \$(LIBS) >> \$@" writeln "\t+echo. >> \$@" writeln "\t+echo fossil >> \$@" writeln { translate$E: $(SRCDIR)\translate.c $(XBCC) -o$@ $** makeheaders$E: $(SRCDIR)\makeheaders.c $(XBCC) -o$@ $** mkindex$E: $(SRCDIR)\mkindex.c $(XBCC) -o$@ $** mkbuiltin$E: $(SRCDIR)\mkbuiltin.c $(XBCC) -o$@ $** mkversion$E: $(SRCDIR)\mkversion.c $(XBCC) -o$@ $** codecheck1$E: $(SRCDIR)\codecheck1.c $(XBCC) -o$@ $** $(OBJDIR)\shell$O : $(SRCDIR)\shell.c $(TCC) -o$@ -c $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) $** $(OBJDIR)\sqlite3$O : $(SRCDIR)\sqlite3.c $(TCC) -o$@ -c $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $** |
︙ | ︙ | |||
1825 1826 1827 1828 1829 1830 1831 | writeln "!endif" writeln "\techo \$(LIBS) $redir \$@" writeln { $(OX): @-mkdir $@ translate$E: $(SRCDIR)\translate.c | | | | | | | | 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 | writeln "!endif" writeln "\techo \$(LIBS) $redir \$@" writeln { $(OX): @-mkdir $@ translate$E: $(SRCDIR)\translate.c $(XBCC) $** makeheaders$E: $(SRCDIR)\makeheaders.c $(XBCC) $** mkindex$E: $(SRCDIR)\mkindex.c $(XBCC) $** mkbuiltin$E: $(SRCDIR)\mkbuiltin.c $(XBCC) $** mkversion$E: $(SRCDIR)\mkversion.c $(XBCC) $** codecheck1$E: $(SRCDIR)\codecheck1.c $(XBCC) $** !if $(USE_SEE)!=0 SQLITE3_SHELL_SRC = $(SRCDIR)\shell-see.c !else SQLITE3_SHELL_SRC = $(SRCDIR)\shell.c !endif |
︙ | ︙ |
Changes to src/markdown.c.
︙ | ︙ | |||
293 294 295 296 297 298 299 | if( i>=size ) return 0; /* binary search of the tag */ key.text = data; key.size = i; return bsearch(&key, block_tags, | | | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 | if( i>=size ) return 0; /* binary search of the tag */ key.text = data; key.size = i; return bsearch(&key, block_tags, count(block_tags), sizeof block_tags[0], cmp_html_tag); } /* new_work_buffer -- get a new working buffer from the stack or create one */ static struct Blob *new_work_buffer(struct render *rndr){ |
︙ | ︙ |
Changes to src/merge3.c.
︙ | ︙ | |||
314 315 316 317 318 319 320 | int i, j; int len = (int)strlen(mergeMarker[0]); const char *z = blob_buffer(p); int n = blob_size(p) - len + 1; assert( len==(int)strlen(mergeMarker[1]) ); assert( len==(int)strlen(mergeMarker[2]) ); assert( len==(int)strlen(mergeMarker[3]) ); | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 | int i, j; int len = (int)strlen(mergeMarker[0]); const char *z = blob_buffer(p); int n = blob_size(p) - len + 1; assert( len==(int)strlen(mergeMarker[1]) ); assert( len==(int)strlen(mergeMarker[2]) ); assert( len==(int)strlen(mergeMarker[3]) ); assert( count(mergeMarker)==4 ); for(i=0; i<n; ){ for(j=0; j<4; j++){ if( memcmp(&z[i], mergeMarker[j], len)==0 ) return 1; } while( i<n && z[i]!='\n' ){ i++; } while( i<n && z[i]=='\n' ){ i++; } } |
︙ | ︙ | |||
374 375 376 377 378 379 380 | /* We should be done with options.. */ verify_all_options(); if( g.argc!=6 ){ usage("PIVOT V1 V2 MERGED"); } if( blob_read_from_file(&pivot, g.argv[2])<0 ){ | | | | | | 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | /* We should be done with options.. */ verify_all_options(); if( g.argc!=6 ){ usage("PIVOT V1 V2 MERGED"); } if( blob_read_from_file(&pivot, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&v1, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } if( blob_read_from_file(&v2, g.argv[4])<0 ){ fossil_fatal("cannot read %s", g.argv[4]); } blob_merge(&pivot, &v1, &v2, &merged); if( blob_write_to_file(&merged, g.argv[5])<blob_size(&merged) ){ fossil_fatal("cannot write %s", g.argv[4]); } blob_reset(&pivot); blob_reset(&v1); blob_reset(&v2); blob_reset(&merged); } |
︙ | ︙ |
Changes to src/mkindex.c.
︙ | ︙ | |||
200 201 202 203 204 205 206 | aEntry[nUsed].eType |= CMDFLAG_2ND_TIER; }else{ /* Otherwise, this is a first-tier command */ aEntry[nUsed].eType |= CMDFLAG_1ST_TIER; } } | | | | 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 | aEntry[nUsed].eType |= CMDFLAG_2ND_TIER; }else{ /* Otherwise, this is a first-tier command */ aEntry[nUsed].eType |= CMDFLAG_1ST_TIER; } } /* Process additional flags that might follow the command name */ while( zLine[i+j]!=0 ){ i += j; while( fossil_isspace(zLine[i]) ){ i++; } if( zLine[i]==0 ) break; for(j=0; zLine[i+j] && !fossil_isspace(zLine[i+j]); j++){} if( j==8 && strncmp(&zLine[i], "1st-tier", j)==0 ){ aEntry[nUsed].eType &= ~(CMDFLAG_2ND_TIER|CMDFLAG_TEST); aEntry[nUsed].eType |= CMDFLAG_1ST_TIER; }else if( j==8 && strncmp(&zLine[i], "2nd-tier", j)==0 ){ aEntry[nUsed].eType &= ~(CMDFLAG_1ST_TIER|CMDFLAG_TEST); aEntry[nUsed].eType |= CMDFLAG_2ND_TIER; }else if( j==4 && strncmp(&zLine[i], "test", j)==0 ){ aEntry[nUsed].eType &= ~(CMDFLAG_1ST_TIER|CMDFLAG_2ND_TIER); aEntry[nUsed].eType |= CMDFLAG_TEST; }else{ fprintf(stderr, "%s:%d: unknown option: '%.*s'\n", zFile, nLine, j, &zLine[i]); nErr++; } } nUsed++; } /* ** Check to see if the current line is an #if and if it is, add it to ** the zIf[] string. If the current line is an #endif or #else or #elif ** then cancel the current zIf[] string. |
︙ | ︙ |
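The new loop in the mkindex.c hunk above scans whatever follows the command or page name on its registration line, recognizes the words 1st-tier, 2nd-tier, and test, adjusts the entry's eType flags accordingly, and reports any other word as an error at build time. A sketch of a header comment that this scanner would accept; the command name here is hypothetical:

    /*
    ** COMMAND: example-cmd  2nd-tier
    **
    ** Usage: %fossil example-cmd ?OPTIONS?
    */

Each whitespace-separated word after the name is matched by length and strncmp(), so a misspelled flag is caught immediately by the unknown-option message added in this hunk.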
Changes to src/mkversion.c.
︙ | ︙ | |||
62 63 64 65 66 67 68 69 70 71 72 73 74 75 | } for(z=vx; z[0]=='0'; z++){} printf("#define RELEASE_VERSION_NUMBER %s\n", z); memset(vx,0,sizeof(vx)); strcpy(vx,b); d = 0; for(z=vx; z[0]; z++){ if( z[0]!='.' ) continue; if ( d<3 ){ z[0] = ','; d++; }else{ z[0] = '\0'; break; | > > > > | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | } for(z=vx; z[0]=='0'; z++){} printf("#define RELEASE_VERSION_NUMBER %s\n", z); memset(vx,0,sizeof(vx)); strcpy(vx,b); d = 0; for(z=vx; z[0]; z++){ if( z[0]=='-' ){ z[0] = 0; break; } if( z[0]!='.' ) continue; if ( d<3 ){ z[0] = ','; d++; }else{ z[0] = '\0'; break; |
︙ | ︙ |
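The added check in mkversion.c stops the dot-to-comma conversion at the first '-' in the version string. The surrounding code builds a comma-separated numeric version from the release string, so a suffixed version (for example a hypothetical "1.37-beta") would otherwise carry the suffix into that list. A standalone sketch of the transformation under that assumption:

    #include <stdio.h>
    #include <string.h>

    int main(void){
      char vx[64];
      char *z;
      int d = 0;
      strcpy(vx, "1.37-beta");                     /* hypothetical suffixed version */
      for(z=vx; z[0]; z++){
        if( z[0]=='-' ){ z[0] = 0; break; }        /* new: drop "-beta" and stop */
        if( z[0]!='.' ) continue;
        if( d<3 ){ z[0] = ','; d++; }else{ z[0] = 0; break; }
      }
      printf("%s\n", vx);                          /* prints "1,37" */
      return 0;
    }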
Changes to src/moderate.c.
︙ | ︙ | |||
65 66 67 68 69 70 71 | "modreq", "attachRid", "mlink", "mid", "mlink", "fid", "tagxref", "srcid", "tagxref", "rid", }; int i; | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | "modreq", "attachRid", "mlink", "mid", "mlink", "fid", "tagxref", "srcid", "tagxref", "rid", }; int i; for(i=0; i<count(aTabField); i+=2){ if( db_exists("SELECT 1 FROM \"%w\" WHERE \"%w\"=%d", aTabField[i], aTabField[i+1], rid) ) return 1; } return 0; } /* |
︙ | ︙ |
Changes to src/name.c.
︙ | ︙ | |||
1004 1005 1006 1007 1008 1009 1010 | int mx = db_int(0, "SELECT max(rid) FROM blob"); int unpubOnly = PB("unpub"); char *zRange; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("List Of Artifacts"); | | | | 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 | int mx = db_int(0, "SELECT max(rid) FROM blob"); int unpubOnly = PB("unpub"); char *zRange; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("List Of Artifacts"); style_submenu_element("250 Largest", "bigbloblist"); if( !unpubOnly && mx>n && P("s")==0 ){ int i; @ <p>Select a range of artifacts to view:</p> @ <ul> for(i=1; i<=mx; i+=n){ @ <li> %z(href("%R/bloblist?s=%d&n=%d",i,n)) @ %d(i)..%d(i+n-1<mx?i+n-1:mx)</a> } @ </ul> style_footer(); return; } if( !unpubOnly && mx>n ){ style_submenu_element("Index", "bloblist"); } if( unpubOnly ){ zRange = mprintf("IN private"); }else{ zRange = mprintf("BETWEEN %d AND %d", s, s+n-1); } describe_artifacts(zRange); |
︙ | ︙ | |||
1202 1203 1204 1205 1206 1207 1208 | } for(j=0; j<aCollide[i].cnt && j<MAX_COLLIDE; j++){ char *zId = aCollide[i].azHit[j]; if( zId==0 ) continue; @ %z(href("%R/whatis/%s",zId))%h(zId)</a> } } | | | | | 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 | } for(j=0; j<aCollide[i].cnt && j<MAX_COLLIDE; j++){ char *zId = aCollide[i].azHit[j]; if( zId==0 ) continue; @ %z(href("%R/whatis/%s",zId))%h(zId)</a> } } for(i=4; i<count(aCollide); i++){ for(j=0; j<aCollide[i].cnt && j<MAX_COLLIDE; j++){ fossil_free(aCollide[i].azHit[j]); } } } /* ** WEBPAGE: hash-collisions ** ** Show the number of hash collisions for hash prefixes of various lengths. */ void hash_collisions_webpage(void){ login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("SHA1 Prefix Collisions"); style_submenu_element("Activity Reports", "reports"); style_submenu_element("Stats", "stat"); @ <h1>Hash Prefix Collisions on Check-ins</h1> collision_report("SELECT (SELECT uuid FROM blob WHERE rid=objid)" " FROM event WHERE event.type='ci'" " ORDER BY 1"); @ <h1>Hash Prefix Collisions on All Artifacts</h1> collision_report("SELECT uuid FROM blob ORDER BY 1"); style_footer(); } |
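The two name.c hunks above, and similar hunks in report.c, setup.c, shun.c, and skins.c further down, rewrite calls to style_submenu_element(). Judging only from the new-side call sites, the function now takes a label plus a printf-style link format and its arguments; the old argument list is collapsed out of this view, so the prototype below is inferred rather than quoted:

    /* Signature inferred from the new-side call sites; not shown in the diff. */
    void style_submenu_element(const char *zLabel, const char *zLink, ...);

    void example_page(int ofst){
      style_submenu_element("250 Largest", "bigbloblist");          /* fixed link */
      style_submenu_element("Older", "rcvfromlist?ofst=%d", ofst);  /* formatted  */
    }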
Changes to src/printf.c.
︙ | ︙ | |||
156 157 158 159 160 161 162 | { 'G', 0, 1, etGENERIC, 14, 0 }, { 'i', 10, 1, etRADIX, 0, 0 }, { 'n', 0, 0, etSIZE, 0, 0 }, { '%', 0, 0, etPERCENT, 0, 0 }, { 'p', 16, 0, etPOINTER, 0, 1 }, { '/', 0, 0, etPATH, 0, 0 }, }; | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | { 'G', 0, 1, etGENERIC, 14, 0 }, { 'i', 10, 1, etRADIX, 0, 0 }, { 'n', 0, 0, etSIZE, 0, 0 }, { '%', 0, 0, etPERCENT, 0, 0 }, { 'p', 16, 0, etPOINTER, 0, 1 }, { '/', 0, 0, etPATH, 0, 0 }, }; #define etNINFO count(fmtinfo) /* ** "*val" is a double such that 0.1 <= *val < 10.0 ** Return the ascii code for the leading digit of *val, then ** multiply "*val" by 10.0 to renormalize. ** ** Example: |
︙ | ︙ | |||
873 874 875 876 877 878 879 | static int stdoutAtBOL = 1; /* ** Write to standard output or standard error. ** ** On windows, transform the output into the current terminal encoding ** if the output is going to the screen. If output is redirected into | | > > > > > > > < | > | > > | 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 | static int stdoutAtBOL = 1; /* ** Write to standard output or standard error. ** ** On windows, transform the output into the current terminal encoding ** if the output is going to the screen. If output is redirected into ** a file, no translation occurs. Switch output mode to binary to ** properly process line-endings, make sure to switch the mode back to ** text when done. ** No translation ever occurs on unix. */ void fossil_puts(const char *z, int toStdErr){ FILE* out = (toStdErr ? stderr : stdout); int n = (int)strlen(z); if( n==0 ) return; assert( toStdErr==0 || toStdErr==1 ); if( toStdErr==0 ) stdoutAtBOL = (z[n-1]=='\n'); #if defined(_WIN32) if( fossil_utf8_to_console(z, n, toStdErr) >= 0 ){ return; } fflush(out); _setmode(_fileno(out), _O_BINARY); #endif fwrite(z, 1, n, out); #if defined(_WIN32) fflush(out); _setmode(_fileno(out), _O_TEXT); #endif } /* ** Force the standard output cursor to move to the beginning ** of a line, if it is not there already. */ int fossil_force_newline(void){ |
︙ | ︙ | |||
968 969 970 971 972 973 974 | fprintf(out, "------------- %04d-%02d-%02d %02d:%02d:%02d UTC ------------\n", pNow->tm_year+1900, pNow->tm_mon+1, pNow->tm_mday+1, pNow->tm_hour, pNow->tm_min, pNow->tm_sec); va_start(ap, zFormat); vfprintf(out, zFormat, ap); fprintf(out, "\n"); va_end(ap); | | | 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 | fprintf(out, "------------- %04d-%02d-%02d %02d:%02d:%02d UTC ------------\n", pNow->tm_year+1900, pNow->tm_mon+1, pNow->tm_mday+1, pNow->tm_hour, pNow->tm_min, pNow->tm_sec); va_start(ap, zFormat); vfprintf(out, zFormat, ap); fprintf(out, "\n"); va_end(ap); for(i=0; i<count(azEnv); i++){ char *p; if( (p = fossil_getenv(azEnv[i]))!=0 ){ fprintf(out, "%s=%s\n", azEnv[i], p); fossil_path_free(p); }else if( (z = P(azEnv[i]))!=0 ){ fprintf(out, "%s=%s\n", azEnv[i], z); } |
︙ | ︙ |
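The reworked fossil_puts() above now applies to both stdout and stderr and, when the console-specific UTF-8 path does not handle the write itself, flushes the stream, flips it into binary mode around the fwrite() on Windows, and then restores text mode, so the bytes reach the destination without an extra line-ending translation. The _setmode()/_fileno() pattern is standard Windows CRT usage; a minimal standalone illustration, assuming <io.h> and <fcntl.h>:

    #include <stdio.h>
    #if defined(_WIN32)
    #include <fcntl.h>
    #include <io.h>
    #endif

    /* Write a buffer verbatim, avoiding the CRT's text-mode "\n" -> "\r\n"
    ** translation on Windows.  Behaves like a plain fwrite() on Unix. */
    static void write_raw(FILE *out, const char *z, size_t n){
    #if defined(_WIN32)
      fflush(out);
      _setmode(_fileno(out), _O_BINARY);
    #endif
      fwrite(z, 1, n, out);
    #if defined(_WIN32)
      fflush(out);
      _setmode(_fileno(out), _O_TEXT);
    #endif
    }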
Changes to src/regexp.c.
︙ | ︙ | |||
203 204 205 206 207 208 209 | strncmp((const char*)zIn+in.i, (const char*)pRe->zInit, pRe->nInit)!=0) ){ in.i++; } if( in.i+pRe->nInit>in.mx ) return 0; } | | | 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | strncmp((const char*)zIn+in.i, (const char*)pRe->zInit, pRe->nInit)!=0) ){ in.i++; } if( in.i+pRe->nInit>in.mx ) return 0; } if( pRe->nState<=count(aSpace)*2 ){ pToFree = 0; aStateSet[0].aState = aSpace; }else{ pToFree = fossil_malloc( sizeof(ReStateNumber)*2*pRe->nState ); if( pToFree==0 ) return -1; aStateSet[0].aState = pToFree; } |
︙ | ︙ |
Changes to src/report.c.
︙ | ︙ | |||
196 197 198 199 200 201 202 | "tagxref", "unversioned", }; int i; if( fossil_strncmp(zArg1, "fx_", 3)==0 ){ break; } | | | | 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | "tagxref", "unversioned", }; int i; if( fossil_strncmp(zArg1, "fx_", 3)==0 ){ break; } for(i=0; i<count(azAllowed); i++){ if( fossil_stricmp(zArg1, azAllowed[i])==0 ) break; } if( i>=count(azAllowed) ){ *(char**)pError = mprintf("access to table \"%s\" is restricted",zArg1); rc = SQLITE_DENY; }else if( !g.perm.RdAddr && strncmp(zArg2, "private_", 8)==0 ){ rc = SQLITE_IGNORE; } break; } |
︙ | ︙ | |||
448 449 450 451 452 453 454 | if( P("copy") ){ rn = 0; zTitle = mprintf("Copy Of %s", zTitle); zOwner = g.zLogin; } } if( zOwner==0 ) zOwner = g.zLogin; | | | | 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 | if( P("copy") ){ rn = 0; zTitle = mprintf("Copy Of %s", zTitle); zOwner = g.zLogin; } } if( zOwner==0 ) zOwner = g.zLogin; style_submenu_element("Cancel", "reportlist"); if( rn>0 ){ style_submenu_element("Delete", "rptedit?rn=%d&del1=1", rn); } style_header("%s", rn>0 ? "Edit Report Format":"Create New Report Format"); if( zErr ){ @ <blockquote class="reportError">%h(zErr)</blockquote> } @ <form action="rptedit" method="post"><div> @ <input type="hidden" name="rn" value="%d(rn)" /> |
︙ | ︙ | |||
698 699 700 701 702 703 704 | pState->wikiFlags = WIKI_NOBADLINKS; pState->zWikiStart = ""; pState->zWikiEnd = ""; if( P("plaintext") ){ pState->wikiFlags |= WIKI_LINKSONLY; pState->zWikiStart = "<pre class='verbatim'>"; pState->zWikiEnd = "</pre>"; | | < | | | 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 | pState->wikiFlags = WIKI_NOBADLINKS; pState->zWikiStart = ""; pState->zWikiEnd = ""; if( P("plaintext") ){ pState->wikiFlags |= WIKI_LINKSONLY; pState->zWikiStart = "<pre class='verbatim'>"; pState->zWikiEnd = "</pre>"; style_submenu_element("Formatted", "%R/rptview?rn=%d", pState->rn); }else{ style_submenu_element("Plaintext", "%R/rptview?rn=%d&plaintext", pState->rn); } }else{ pState->nCol++; } } } |
︙ | ︙ | |||
1195 1196 1197 1198 1199 1200 1201 | } count = 0; if( !tabs ){ struct GenerateHTML sState = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }; db_multi_exec("PRAGMA empty_result_callbacks=ON"); | | < | | | < | 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 | } count = 0; if( !tabs ){ struct GenerateHTML sState = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }; db_multi_exec("PRAGMA empty_result_callbacks=ON"); style_submenu_element("Raw", "rptview?tablist=1&%h", PD("QUERY_STRING","")); if( g.perm.Admin || (g.perm.TktFmt && g.zLogin && fossil_strcmp(g.zLogin,zOwner)==0) ){ style_submenu_element("Edit", "rptedit?rn=%d", rn); } if( g.perm.TktFmt ){ style_submenu_element("SQL", "rptsql?rn=%d",rn); } if( g.perm.NewTkt ){ style_submenu_element("New Ticket", "%s/tktnew", g.zTop); } style_header("%s", zTitle); output_color_key(zClrKey, 1, "border=\"0\" cellpadding=\"3\" cellspacing=\"0\" class=\"report\""); @ <table border="1" cellpadding="2" cellspacing="0" class="report" @ id="reportTable"> sState.rn = rn; |
︙ | ︙ |
Changes to src/rss.c.
︙ | ︙ | |||
214 215 216 217 218 219 220 | if( zFreeProjectName != 0 ){ free( zFreeProjectName ); } } /* | | | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 | if( zFreeProjectName != 0 ){ free( zFreeProjectName ); } } /* ** COMMAND: rss* ** ** Usage: %fossil rss ?OPTIONS? ** ** The CLI variant of the /timeline.rss page, this produces an RSS ** feed of the timeline to stdout. Options: ** ** -type|y FLAG |
︙ | ︙ |
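The registration comment above gains a trailing '*' (the old text is collapsed in this view, so the exact before state is not visible). That marker is consumed by the mkindex.c generator whose flag-scanning loop appears earlier in this check-in, presumably shifting the command out of the first-tier help listing; the precise handling of the '*' suffix is outside this excerpt. The command itself is unchanged; a usage sketch based only on the help text shown, with the FLAG value assumed:

    fossil rss -type ci > timeline.xml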
Changes to src/search.c.
︙ | ︙ | |||
405 406 407 408 409 410 411 | } } /* search_match(TEXT, TEXT, ....) ** ** Using the full-scan search engine created by the most recent call ** to search_init(), match the input the TEXT arguments. | | | | 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 | } } /* search_match(TEXT, TEXT, ....) ** ** Using the full-scan search engine created by the most recent call ** to search_init(), match the input the TEXT arguments. ** Remember the results global full-scan search object. ** Return non-zero on a match and zero on a miss. */ static void search_match_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *azDoc[5]; int nDoc; int rc; for(nDoc=0; nDoc<count(azDoc) && nDoc<argc; nDoc++){ azDoc[nDoc] = (const char*)sqlite3_value_text(argv[nDoc]); if( azDoc[nDoc]==0 ) azDoc[nDoc] = ""; } rc = search_match(&gSearch, nDoc, azDoc); sqlite3_result_int(context, rc); } |
︙ | ︙ | |||
655 656 657 658 659 660 661 | { SRCH_TKT, "search-tkt" }, { SRCH_WIKI, "search-wiki" }, }; int i; if( g.perm.Read==0 ) srchFlags &= ~(SRCH_CKIN|SRCH_DOC); if( g.perm.RdTkt==0 ) srchFlags &= ~(SRCH_TKT); if( g.perm.RdWiki==0 ) srchFlags &= ~(SRCH_WIKI); | | | 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 | { SRCH_TKT, "search-tkt" }, { SRCH_WIKI, "search-wiki" }, }; int i; if( g.perm.Read==0 ) srchFlags &= ~(SRCH_CKIN|SRCH_DOC); if( g.perm.RdTkt==0 ) srchFlags &= ~(SRCH_TKT); if( g.perm.RdWiki==0 ) srchFlags &= ~(SRCH_WIKI); for(i=0; i<count(aSetng); i++){ unsigned int m = aSetng[i].m; if( (srchFlags & m)==0 ) continue; if( ((knownGood|knownBad) & m)!=0 ) continue; if( db_get_boolean(aSetng[i].zKey,0) ){ knownGood |= m; }else{ knownBad |= m; |
︙ | ︙ | |||
890 891 892 893 894 895 896 | static const struct { unsigned m; char c; } aMask[] = { { SRCH_CKIN, 'c' }, { SRCH_DOC, 'd' }, { SRCH_TKT, 't' }, { SRCH_WIKI, 'w' }, }; int i; | | | 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 | static const struct { unsigned m; char c; } aMask[] = { { SRCH_CKIN, 'c' }, { SRCH_DOC, 'd' }, { SRCH_TKT, 't' }, { SRCH_WIKI, 'w' }, }; int i; for(i=0; i<count(aMask); i++){ if( srchFlags & aMask[i].m ){ blob_appendf(&sql, "%sftsdocs.type='%c'", zSep, aMask[i].c); zSep = " OR "; } } blob_append(&sql,")",1); } |
︙ | ︙ | |||
1068 1069 1070 1071 1072 1073 1074 | { "t", "Tickets", SRCH_TKT }, { "w", "Wiki", SRCH_WIKI }, }; const char *zY = PD("y","all"); unsigned newFlags = srchFlags; int i; @ <select size='1' name='y'> | | | 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 | { "t", "Tickets", SRCH_TKT }, { "w", "Wiki", SRCH_WIKI }, }; const char *zY = PD("y","all"); unsigned newFlags = srchFlags; int i; @ <select size='1' name='y'> for(i=0; i<count(aY); i++){ if( (aY[i].m & srchFlags)==0 ) continue; cgi_printf("<option value='%s'", aY[i].z); if( fossil_strcmp(zY,aY[i].z)==0 ){ newFlags &= aY[i].m; cgi_printf(" selected"); } cgi_printf(">%s</option>\n", aY[i].zNm); |
︙ | ︙ | |||
1725 1726 1727 1728 1729 1730 1731 | int i, j, n; int iCmd = 0; int iAction = 0; db_find_and_open_repository(0, 0); if( g.argc>2 ){ zSubCmd = g.argv[2]; n = (int)strlen(zSubCmd); | | | | | 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 | int i, j, n; int iCmd = 0; int iAction = 0; db_find_and_open_repository(0, 0); if( g.argc>2 ){ zSubCmd = g.argv[2]; n = (int)strlen(zSubCmd); for(i=0; i<count(aCmd); i++){ if( fossil_strncmp(aCmd[i].z, zSubCmd, n)==0 ) break; } if( i>=count(aCmd) ){ Blob all; blob_init(&all,0,0); for(i=0; i<count(aCmd); i++) blob_appendf(&all, " %s", aCmd[i].z); fossil_fatal("unknown \"%s\" - should be on of:%s", zSubCmd, blob_str(&all)); return; } iCmd = aCmd[i].iCmd; } g.perm.Read = 1; |
︙ | ︙ | |||
1755 1756 1757 1758 1759 1760 1761 | db_begin_transaction(); /* Adjust search settings */ if( iCmd==3 || iCmd==4 ){ const char *zCtrl; if( g.argc<4 ) usage(mprintf("%s STRING",zSubCmd)); zCtrl = g.argv[3]; | | | | 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 | db_begin_transaction(); /* Adjust search settings */ if( iCmd==3 || iCmd==4 ){ const char *zCtrl; if( g.argc<4 ) usage(mprintf("%s STRING",zSubCmd)); zCtrl = g.argv[3]; for(j=0; j<count(aSetng); j++){ if( strchr(zCtrl, aSetng[j].zSw[0])!=0 ){ db_set_int(aSetng[j].zSetting, iCmd-3, 0); } } } if( iCmd==5 ){ if( g.argc<4 ) usage("porter ON/OFF"); db_set_int("search-stemmer", is_truth(g.argv[3]), 0); } /* destroy or rebuild the index, if requested */ if( iAction>=1 ){ search_drop_index(); } if( iAction>=2 ){ search_rebuild_index(); } /* Always show the status before ending */ for(i=0; i<count(aSetng); i++){ fossil_print("%-16s %s\n", aSetng[i].zName, db_get_boolean(aSetng[i].zSetting,0) ? "on" : "off"); } fossil_print("%-16s %s\n", "Porter stemmer:", db_get_boolean("search-stemmer",0) ? "on" : "off"); if( search_index_exists() ){ fossil_print("%-16s enabled\n", "full-text index:"); |
︙ | ︙ |
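search_match_sqlfunc() above is written as an SQLite application-defined function: it collects up to five TEXT arguments, forwards them to search_match() against the global gSearch object, and returns the integer result. Such a function only becomes visible to SQL once it is registered on the connection; how Fossil registers it is not shown in this excerpt, but with the public SQLite API the registration would look roughly like this (the SQL name and variable arity are assumptions):

    /* Hypothetical registration; Fossil's actual call site is elsewhere
    ** in search.c.  g.db is Fossil's global database handle. */
    sqlite3_create_function(g.db, "search_match", -1, SQLITE_UTF8, 0,
                            search_match_sqlfunc, 0, 0);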
Changes to src/setup.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Implementation of the Setup page */ #include "config.h" #include <assert.h> #include "setup.h" | < < < < | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ** ** Implementation of the Setup page */ #include "config.h" #include <assert.h> #include "setup.h" /* ** Output a single entry for a menu generated using an HTML table. ** If zLink is not NULL or an empty string, then it is the page that ** the menu entry will hyperlink to. If zLink is NULL or "", then ** the menu entry has no hyperlink - it is disabled. */ void setup_menu_entry( |
︙ | ︙ | |||
147 148 149 150 151 152 153 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } | | | | | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_submenu_element("Add", "setup_uedit"); style_submenu_element("Log", "access_log"); style_submenu_element("Help", "setup_ulist_notes"); style_header("User List"); @ <table border=1 cellpadding=2 cellspacing=0 class='userTable'> @ <thead><tr> @ <th>UID <th>Category @ <th>Capabilities (<a href='%R/setup_ucap_list'>key</a>) @ <th>Info <th>Last Change</tr></thead> @ <tbody> |
︙ | ︙ | |||
555 556 557 558 559 560 561 | "<span class=\"ueditInheritNobody\"><sub>[N]</sub></span>"; } free(z2); } /* Begin generating the page */ | | | 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 | "<span class=\"ueditInheritNobody\"><sub>[N]</sub></span>"; } free(z2); } /* Begin generating the page */ style_submenu_element("Cancel", "setup_ulist"); if( uid ){ style_header("Edit User %h", zLogin); }else{ style_header("Add A New User"); } @ <div class="ueditCapBox"> @ <form action="%s(g.zPath)" method="post"><div> |
︙ | ︙ | |||
942 943 944 945 946 947 948 | login_verify_csrf_secret(); db_set(zVar, iQ ? "1" : "0", 0); admin_log("Set option [%q] to [%q].", zVar, iQ ? "on" : "off"); iVal = iQ; } } | | | | 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 | login_verify_csrf_secret(); db_set(zVar, iQ ? "1" : "0", 0); admin_log("Set option [%q] to [%q].", zVar, iQ ? "on" : "off"); iVal = iQ; } } @ <label><input type="checkbox" name="%s(zQParm)" if( iVal ){ @ checked="checked" } if( disabled ){ @ disabled="disabled" } @ /> <b>%s(zLabel)</b></label> } /* ** Generate an entry box for an attribute. */ void entry_attribute( const char *zLabel, /* The text label on the entry box */ |
︙ | ︙ | |||
1429 1430 1431 1432 1433 1434 1435 | @ %s(zTmDiff) hours behind UTC.</p> }else{ @ %s(zTmDiff) hours ahead of UTC.</p> } @ <hr /> multiple_choice_attribute("Per-Item Time Format", "timeline-date-format", | | | 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 | @ %s(zTmDiff) hours behind UTC.</p> }else{ @ %s(zTmDiff) hours ahead of UTC.</p> } @ <hr /> multiple_choice_attribute("Per-Item Time Format", "timeline-date-format", "tdf", "0", count(azTimeFormats)/2, azTimeFormats); @ <p>If the "HH:MM" or "HH:MM:SS" format is selected, then the date is shown @ in a separate box (using CSS class "timelineDate") whenever the date changes. @ With the "YYYY-MM-DD HH:MM" and "YYMMDD ..." formats, the complete date @ and time is shown on every timeline entry (using the CSS class "timelineTime").</p> @ <hr /> onoff_attribute("Show version differences by default", |
︙ | ︙ | |||
1535 1536 1537 1538 1539 1540 1541 | @ </td></tr></table> @ </div></form> @ <p>Settings marked with (v) are 'versionable' and will be overridden @ by the contents of files named <tt>.fossil-settings/PROPERTY</tt> @ in the check-out root. @ If such a file is present, the corresponding field above is not @ editable.</p><hr /><p> | | | 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 | @ </td></tr></table> @ </div></form> @ <p>Settings marked with (v) are 'versionable' and will be overridden @ by the contents of files named <tt>.fossil-settings/PROPERTY</tt> @ in the check-out root. @ If such a file is present, the corresponding field above is not @ editable.</p><hr /><p> @ These settings work the same as the @ <a href='%R/help?cmd=settings'>fossil set</a> command. db_end_transaction(0); style_footer(); } /* ** WEBPAGE: setup_config |
︙ | ︙ |
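In the timeline-settings hunk above, the number of choices passed to multiple_choice_attribute() is count(azTimeFormats)/2, which implies azTimeFormats holds value/label pairs. The array itself is not in this excerpt; a hypothetical shape consistent with that calling convention and with the format names mentioned in the help text:

    /* Hypothetical contents; only the (value, label) pairing is implied
    ** by the count(azTimeFormats)/2 argument above. */
    static const char *const azTimeFormats[] = {
      "0", "HH:MM",
      "1", "HH:MM:SS",
      "2", "YYYY-MM-DD HH:MM",
    };
    /* number of selectable formats = count(azTimeFormats)/2 = 3 */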
Changes to src/shell.c.
︙ | ︙ | |||
664 665 666 667 668 669 670 | */ #define MODE_Line 0 /* One column per line. Blank line between records */ #define MODE_Column 1 /* One record per line in neat columns */ #define MODE_List 2 /* One record per line with a separator */ #define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */ #define MODE_Html 4 /* Generate an XHTML table */ #define MODE_Insert 5 /* Generate SQL "insert" statements */ | > | | | | | > | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 | */ #define MODE_Line 0 /* One column per line. Blank line between records */ #define MODE_Column 1 /* One record per line in neat columns */ #define MODE_List 2 /* One record per line with a separator */ #define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */ #define MODE_Html 4 /* Generate an XHTML table */ #define MODE_Insert 5 /* Generate SQL "insert" statements */ #define MODE_Quote 6 /* Quote values as for SQL */ #define MODE_Tcl 7 /* Generate ANSI-C or TCL quoted elements */ #define MODE_Csv 8 /* Quote strings, numbers are plain */ #define MODE_Explain 9 /* Like MODE_Column, but do not truncate data */ #define MODE_Ascii 10 /* Use ASCII unit and record separators (0x1F/0x1E) */ #define MODE_Pretty 11 /* Pretty-print schemas */ static const char *modeDescr[] = { "line", "column", "list", "semi", "html", "insert", "quote", "tcl", "csv", "explain", "ascii", "prettyprint", }; |
︙ | ︙ | |||
1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 | output_csv(p, azArg[i], i<nArg-1); } utf8_printf(p->out, "%s", p->rowSeparator); } setTextMode(p->out, 1); break; } case MODE_Insert: { p->cnt++; if( azArg==0 ) break; | > > | | | | | | | | | | > | 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 | output_csv(p, azArg[i], i<nArg-1); } utf8_printf(p->out, "%s", p->rowSeparator); } setTextMode(p->out, 1); break; } case MODE_Quote: case MODE_Insert: { p->cnt++; if( azArg==0 ) break; if( p->cMode==MODE_Insert ){ utf8_printf(p->out,"INSERT INTO %s",p->zDestTable); if( p->showHeader ){ raw_printf(p->out,"("); for(i=0; i<nArg; i++){ char *zSep = i>0 ? ",": ""; utf8_printf(p->out, "%s%s", zSep, azCol[i]); } raw_printf(p->out,")"); } raw_printf(p->out," VALUES("); } for(i=0; i<nArg; i++){ char *zSep = i>0 ? ",": ""; if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){ utf8_printf(p->out,"%sNULL",zSep); }else if( aiType && aiType[i]==SQLITE_TEXT ){ if( zSep[0] ) utf8_printf(p->out,"%s",zSep); output_quoted_string(p->out, azArg[i]); |
︙ | ︙ | |||
1229 1230 1231 1232 1233 1234 1235 | }else if( isNumber(azArg[i], 0) ){ utf8_printf(p->out,"%s%s",zSep, azArg[i]); }else{ if( zSep[0] ) utf8_printf(p->out,"%s",zSep); output_quoted_string(p->out, azArg[i]); } } | | | 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 | }else if( isNumber(azArg[i], 0) ){ utf8_printf(p->out,"%s%s",zSep, azArg[i]); }else{ if( zSep[0] ) utf8_printf(p->out,"%s",zSep); output_quoted_string(p->out, azArg[i]); } } raw_printf(p->out,p->cMode==MODE_Quote?"\n":");\n"); break; } case MODE_Ascii: { if( p->cnt++==0 && p->showHeader ){ for(i=0; i<nArg; i++){ if( i>0 ) utf8_printf(p->out, "%s", p->colSeparator); utf8_printf(p->out,"%s",azCol[i] ? azCol[i] : ""); |
︙ | ︙ | |||
2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 | " ascii Columns/rows delimited by 0x1F and 0x1E\n" " csv Comma-separated values\n" " column Left-aligned columns. (See .width)\n" " html HTML <table> code\n" " insert SQL insert statements for TABLE\n" " line One value per line\n" " list Values delimited by .separator strings\n" " tabs Tab-separated values\n" " tcl TCL list elements\n" ".nullvalue STRING Use STRING in place of NULL values\n" ".once FILENAME Output for the next SQL command only to FILENAME\n" ".open ?--new? ?FILE? Close existing database and reopen FILE\n" " The --new starts with an empty file\n" ".output ?FILENAME? Send output to FILENAME or stdout\n" | > | 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 | " ascii Columns/rows delimited by 0x1F and 0x1E\n" " csv Comma-separated values\n" " column Left-aligned columns. (See .width)\n" " html HTML <table> code\n" " insert SQL insert statements for TABLE\n" " line One value per line\n" " list Values delimited by .separator strings\n" " quote Escape answers as for SQL\n" " tabs Tab-separated values\n" " tcl TCL list elements\n" ".nullvalue STRING Use STRING in place of NULL values\n" ".once FILENAME Output for the next SQL command only to FILENAME\n" ".open ?--new? ?FILE? Close existing database and reopen FILE\n" " The --new starts with an empty file\n" ".output ?FILENAME? Send output to FILENAME or stdout\n" |
︙ | ︙ | |||
3973 3974 3975 3976 3977 3978 3979 3980 3981 3982 3983 3984 3985 3986 | sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_CrLf); }else if( c2=='t' && strncmp(azArg[1],"tabs",n2)==0 ){ p->mode = MODE_List; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Tab); }else if( c2=='i' && strncmp(azArg[1],"insert",n2)==0 ){ p->mode = MODE_Insert; set_table_name(p, nArg>=3 ? azArg[2] : "table"); }else if( c2=='a' && strncmp(azArg[1],"ascii",n2)==0 ){ p->mode = MODE_Ascii; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Unit); sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Record); }else { raw_printf(stderr, "Error: mode should be one of: " "ascii column csv html insert line list tabs tcl\n"); | > > | 3979 3980 3981 3982 3983 3984 3985 3986 3987 3988 3989 3990 3991 3992 3993 3994 | sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_CrLf); }else if( c2=='t' && strncmp(azArg[1],"tabs",n2)==0 ){ p->mode = MODE_List; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Tab); }else if( c2=='i' && strncmp(azArg[1],"insert",n2)==0 ){ p->mode = MODE_Insert; set_table_name(p, nArg>=3 ? azArg[2] : "table"); }else if( c2=='q' && strncmp(azArg[1],"quote",n2)==0 ){ p->mode = MODE_Quote; }else if( c2=='a' && strncmp(azArg[1],"ascii",n2)==0 ){ p->mode = MODE_Ascii; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Unit); sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Record); }else { raw_printf(stderr, "Error: mode should be one of: " "ascii column csv html insert line list tabs tcl\n"); |
︙ | ︙ |
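The shell.c hunks above add a new "quote" output mode (MODE_Quote) alongside the existing ones. It shares the MODE_Insert rendering path but skips the INSERT INTO prefix, emitting only the SQL-quoted values separated by commas and ending each row with a newline instead of ");". A rough interactive sketch of what that produces; exact formatting of blobs and of headers may differ:

    sqlite> .mode quote
    sqlite> SELECT 1, 2.5, 'it''s', NULL;
    1,2.5,'it''s',NULL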
Changes to src/shun.c.
︙ | ︙ | |||
315 316 317 318 319 320 321 | login_needed(0); return; } style_header("Artifact Receipts"); if( showAll ){ ofst = 0; }else{ | | | | 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 | login_needed(0); return; } style_header("Artifact Receipts"); if( showAll ){ ofst = 0; }else{ style_submenu_element("All", "rcvfromlist?all=1"); } if( ofst>0 ){ style_submenu_element("Newer", "rcvfromlist?ofst=%d", ofst>30 ? ofst-30 : 0); } db_multi_exec( "CREATE TEMP TABLE rcvidUsed(x INTEGER PRIMARY KEY);" "INSERT OR IGNORE INTO rcvidUsed(x) SELECT rcvid FROM blob;" ); if( db_table_exists("repository","unversioned") ){ |
︙ | ︙ | |||
361 362 363 364 365 366 367 | cnt = 0; while( db_step(&q)==SQLITE_ROW ){ int rcvid = db_column_int(&q, 0); const char *zUser = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); const char *zIpAddr = db_column_text(&q, 3); if( cnt==30 && !showAll ){ | | < | 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 | cnt = 0; while( db_step(&q)==SQLITE_ROW ){ int rcvid = db_column_int(&q, 0); const char *zUser = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); const char *zIpAddr = db_column_text(&q, 3); if( cnt==30 && !showAll ){ style_submenu_element("Older", "rcvfromlist?ofst=%d", ofst+30); }else{ cnt++; @ <tr> if( db_column_int(&q,4) ){ @ <td style="padding-right: 15px;text-align: right;"> @ <a href="rcvfrom?rcvid=%d(rcvid)">%d(rcvid)</a></td> }else{ |
︙ | ︙ | |||
404 405 406 407 408 409 410 | return; } style_header("Artifact Receipt %d", rcvid); if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " NOT EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ | | < | < | 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | return; } style_header("Artifact Receipt %d", rcvid); if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " NOT EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ style_submenu_element("Shun All", "shun?shun&rcvid=%d#addshun", rcvid); } if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ style_submenu_element("Unshun All", "shun?accept&rcvid=%d#delshun", rcvid); } db_prepare(&q, "SELECT login, datetime(rcvfrom.mtime), rcvfrom.ipaddr" " FROM rcvfrom LEFT JOIN user USING(uid)" " WHERE rcvid=%d", rcvid ); |
︙ | ︙ |
Changes to src/skins.c.
︙ | ︙ | |||
98 99 100 101 102 103 104 | char *skin_use_alternative(const char *zName){ int i; Blob err = BLOB_INITIALIZER; if( strchr(zName, '/')!=0 ){ zAltSkinDir = fossil_strdup(zName); return 0; } | | | | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | char *skin_use_alternative(const char *zName){ int i; Blob err = BLOB_INITIALIZER; if( strchr(zName, '/')!=0 ){ zAltSkinDir = fossil_strdup(zName); return 0; } for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zLabel, zName)==0 ){ pAltSkin = &aBuiltinSkin[i]; return 0; } } blob_appendf(&err, "available skins: %s", aBuiltinSkin[0].zLabel); for(i=1; i<count(aBuiltinSkin); i++){ blob_append(&err, " ", 1); blob_append(&err, aBuiltinSkin[i].zLabel, -1); } return blob_str(&err); } /* |
︙ | ︙ | |||
161 162 163 164 165 166 167 | } /* ** Return a pointer to a SkinDetail element. Return 0 if not found. */ static struct SkinDetail *skin_detail_find(const char *zName){ int lwr = 0; | | | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 | } /* ** Return a pointer to a SkinDetail element. Return 0 if not found. */ static struct SkinDetail *skin_detail_find(const char *zName){ int lwr = 0; int upr = count(aSkinDetail); while( upr>=lwr ){ int mid = (upr+lwr)/2; int c = fossil_strcmp(aSkinDetail[mid].zName, zName); if( c==0 ) return &aSkinDetail[mid]; if( c<0 ){ lwr = mid+1; }else{ |
︙ | ︙ | |||
281 282 283 284 285 286 287 | /* ** Return true if there exists a skin name "zSkinName". */ static int skinExists(const char *zSkinName){ int i; if( zSkinName==0 ) return 0; | | | | 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 | /* ** Return true if there exists a skin name "zSkinName". */ static int skinExists(const char *zSkinName){ int i; if( zSkinName==0 ) return 0; for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(zSkinName, aBuiltinSkin[i].zDesc)==0 ) return 1; } return db_exists("SELECT 1 FROM config WHERE name='skin:%q'", zSkinName); } /* ** Construct and return an string of SQL statements that represents ** a "skin" setting. If zName==0 then return the skin currently ** installed. Otherwise, return one of the built-in skins designated ** by zName. ** ** Memory to hold the returned string is obtained from malloc. */ static char *getSkin(const char *zName){ const char *z; char *zLabel; static const char *azType[] = { "css", "header", "footer", "details" }; int i; Blob val; blob_zero(&val); for(i=0; i<count(azType); i++){ if( zName ){ zLabel = mprintf("skins/%s/%s.txt", zName, azType[i]); z = builtin_text(zLabel); fossil_free(zLabel); }else{ z = db_get(azType[i], 0); if( z==0 ){ |
︙ | ︙ | |||
425 426 427 428 429 430 431 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } db_begin_transaction(); zCurrent = getSkin(0); | | | 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } db_begin_transaction(); zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ aBuiltinSkin[i].zSQL = getSkin(aBuiltinSkin[i].zLabel); } /* Process requests to delete a user-defined skin */ if( P("del1") && (zName = skinVarName(P("sn"), 1))!=0 ){ style_header("Confirm Custom Skin Delete"); @ <form action="%s(g.zTop)/setup_skin" method="post"><div> |
︙ | ︙ | |||
456 457 458 459 460 461 462 | /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; /* Check to see if the current skin is already saved. If it is, there ** is no need to create a backup */ zCurrent = getSkin(0); | | | | 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 | /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; /* Check to see if the current skin is already saved. If it is, there ** is no need to create a backup */ zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ seen = 1; break; } } if( !seen ){ seen = db_exists("SELECT 1 FROM config WHERE name GLOB 'skin:*'" " AND value=%Q", zCurrent); if( !seen ){ db_multi_exec( "INSERT INTO config(name,value,mtime) VALUES(" " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," " %Q,now())", zCurrent ); } } seen = 0; for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zDesc, z)==0 ){ seen = 1; zCurrent = aBuiltinSkin[i].zSQL; db_multi_exec("%s", zCurrent/*safe-for-%s*/); break; } } |
︙ | ︙ | |||
511 512 513 514 515 516 517 | @ "%h(pAltSkin->zLabel)". You can change the skin configuration @ below, but the changes will not take effect until the Fossil server @ is restarted without the override.</p> @ } @ <h2>Available Skins:</h2> @ <table border="0"> | | | 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 | @ "%h(pAltSkin->zLabel)". You can change the skin configuration @ below, but the changes will not take effect until the Fossil server @ is restarted without the override.</p> @ } @ <h2>Available Skins:</h2> @ <table border="0"> for(i=0; i<count(aBuiltinSkin); i++){ z = aBuiltinSkin[i].zDesc; @ <tr><td>%d(i+1).<td>%h(z)<td> <td> if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ @ (Currently In Use) seenCurrent = 1; }else{ @ <form action="%s(g.zTop)/setup_skin" method="post"> |
︙ | ︙ | |||
595 596 597 598 599 600 601 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } ii = atoi(PD("w","0")); | | | | | | | 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } ii = atoi(PD("w","0")); if( ii<0 || ii>count(aSkinAttr) ) ii = 0; zBasis = PD("basis","default"); zDflt = mprintf("skins/%s/%s.txt", zBasis, aSkinAttr[ii].zFile); db_begin_transaction(); if( P("revert")!=0 ){ db_multi_exec("DELETE FROM config WHERE name=%Q", aSkinAttr[ii].zFile); cgi_replace_parameter(aSkinAttr[ii].zFile, builtin_text(zDflt)); } style_header("%s", aSkinAttr[ii].zTitle); for(j=0; j<count(aSkinAttr); j++){ if( j==ii ) continue; style_submenu_element(aSkinAttr[j].zSubmenu, "%R/setup_skinedit?w=%d&basis=%h",j,zBasis); } style_submenu_element("Skins", "%R/setup_skin"); @ <form action="%s(g.zTop)/setup_skinedit" method="post"><div> login_insert_csrf_secret(); @ <input type='hidden' name='w' value='%d(ii)'> @ <h2>Edit %s(aSkinAttr[ii].zTitle):</h2> zContent = textarea_attribute("", 10, 80, aSkinAttr[ii].zFile, aSkinAttr[ii].zFile, builtin_text(zDflt), 0); @ <br /> @ <input type="submit" name="submit" value="Apply Changes" /> @ <hr /> @ Baseline: <select size='1' name='basis'> for(j=0; j<count(aBuiltinSkin); j++){ cgi_printf("<option value='%h'%s>%h</option>\n", aBuiltinSkin[j].zLabel, fossil_strcmp(zBasis,aBuiltinSkin[j].zLabel)==0 ? " selected" : "", aBuiltinSkin[j].zDesc ); } @ </select> |
︙ | ︙ |
Changes to src/sqlite3.c.
1 2 | /****************************************************************************** ** This file is an amalgamation of many separate C source files from SQLite | | | 1 2 3 4 5 6 7 8 9 10 | /****************************************************************************** ** This file is an amalgamation of many separate C source files from SQLite ** version 3.16.0. By combining all the individual C code files into this ** single large file, the entire code can be compiled as a single translation ** unit. This allows many compilers to do optimizations that would not be ** possible if the files were compiled separately. Performance improvements ** of 5% or more are commonly seen when SQLite is compiled as a single ** translation unit. ** ** This file is all you need to compile SQLite. To use SQLite in other |
︙ | ︙ | |||
377 378 379 380 381 382 383 | ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ | | | | | 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 | ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.16.0" #define SQLITE_VERSION_NUMBER 3016000 #define SQLITE_SOURCE_ID "2016-11-02 14:50:19 3028845329c9b7acdec2ec8b01d00d782347454c" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version, sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
︙ | ︙ | |||
1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 | ** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. ** ** <li>[[SQLITE_FCNTL_HAS_MOVED]] ** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a ** pointer to an integer and it writes a boolean into that integer depending ** on whether or not the file has been renamed, moved, or deleted since it ** was first opened. ** ** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This ** opcode causes the xFileControl method to swap the file handle with the one ** pointed to by the pArg argument. This capability is used during testing ** and only needs to be supported when SQLITE_TEST is defined. ** | > > > > > > | 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 | ** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. ** ** <li>[[SQLITE_FCNTL_HAS_MOVED]] ** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a ** pointer to an integer and it writes a boolean into that integer depending ** on whether or not the file has been renamed, moved, or deleted since it ** was first opened. ** ** <li>[[SQLITE_FCNTL_WIN32_GET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_GET_HANDLE] opcode can be used to obtain the ** underlying native file handle associated with a file handle. This file ** control interprets its argument as a pointer to a native file handle and ** writes the resulting value there. ** ** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This ** opcode causes the xFileControl method to swap the file handle with the one ** pointed to by the pArg argument. This capability is used during testing ** and only needs to be supported when SQLITE_TEST is defined. ** |
︙ | ︙ | |||
1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 | #define SQLITE_FCNTL_COMMIT_PHASETWO 22 #define SQLITE_FCNTL_WIN32_SET_HANDLE 23 #define SQLITE_FCNTL_WAL_BLOCK 24 #define SQLITE_FCNTL_ZIPVFS 25 #define SQLITE_FCNTL_RBU 26 #define SQLITE_FCNTL_VFS_POINTER 27 #define SQLITE_FCNTL_JOURNAL_POINTER 28 /* deprecated names */ #define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE #define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE #define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO | > > | 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 | #define SQLITE_FCNTL_COMMIT_PHASETWO 22 #define SQLITE_FCNTL_WIN32_SET_HANDLE 23 #define SQLITE_FCNTL_WAL_BLOCK 24 #define SQLITE_FCNTL_ZIPVFS 25 #define SQLITE_FCNTL_RBU 26 #define SQLITE_FCNTL_VFS_POINTER 27 #define SQLITE_FCNTL_JOURNAL_POINTER 28 #define SQLITE_FCNTL_WIN32_GET_HANDLE 29 #define SQLITE_FCNTL_PDB 30 /* deprecated names */ #define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE #define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE #define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO |
︙ | ︙ | |||
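The two hunks above introduce SQLITE_FCNTL_WIN32_GET_HANDLE, a file-control verb that writes the native Win32 HANDLE underlying a database file into the pointer passed as its argument. Going by the documentation text added above, a caller would reach it through sqlite3_file_control(), roughly as follows (Windows-only, and assuming db is an open connection):

    #if defined(_WIN32)
    #include <windows.h>
    #include "sqlite3.h"

    /* Fetch the OS handle for the "main" database of an open connection.
    ** Sketch only; any error handling beyond the return code is omitted. */
    static HANDLE get_db_handle(sqlite3 *db){
      HANDLE h = INVALID_HANDLE_VALUE;
      int rc = sqlite3_file_control(db, "main",
                                    SQLITE_FCNTL_WIN32_GET_HANDLE, &h);
      return rc==SQLITE_OK ? h : INVALID_HANDLE_VALUE;
    }
    #endif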
2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 | ** schema. ^The sole argument is a pointer to a constant UTF8 string ** which will become the new schema name in place of "main". ^SQLite ** does not make a copy of the new main schema name string, so the application ** must ensure that the argument passed into this DBCONFIG option is unchanged ** until after the database connection closes. ** </dd> ** ** </dl> */ #define SQLITE_DBCONFIG_MAINDBNAME 1000 /* const char* */ #define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ #define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER 1004 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION 1005 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes ** METHOD: sqlite3 ** ** ^The sqlite3_extended_result_codes() routine enables or disables the | > > > > > > > > > > > > > | 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 | ** schema. ^The sole argument is a pointer to a constant UTF8 string ** which will become the new schema name in place of "main". ^SQLite ** does not make a copy of the new main schema name string, so the application ** must ensure that the argument passed into this DBCONFIG option is unchanged ** until after the database connection closes. ** </dd> ** ** <dt>SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE</dt> ** <dd> Usually, when a database in wal mode is closed or detached from a ** database handle, SQLite checks if this will mean that there are now no ** connections at all to the database. If so, it performs a checkpoint ** operation before closing the connection. This option may be used to ** override this behaviour. The first parameter passed to this operation ** is an integer - non-zero to disable checkpoints-on-close, or zero (the ** default) to enable them. The second parameter is a pointer to an integer ** into which is written 0 or 1 to indicate whether checkpoints-on-close ** have been disabled - 0 if they are not disabled, 1 if they are. ** </dd> ** ** </dl> */ #define SQLITE_DBCONFIG_MAINDBNAME 1000 /* const char* */ #define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ #define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER 1004 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION 1005 /* int int* */ #define SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE 1006 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes ** METHOD: sqlite3 ** ** ^The sqlite3_extended_result_codes() routine enables or disables the |
︙ | ︙ | |||
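The documentation hunk above adds SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE, which disables the automatic WAL checkpoint that normally runs when the last connection to a database is closed or detached; the second argument receives the resulting 0/1 state. Per that description, the call would look like this sketch:

    /* "db" is assumed to be an open sqlite3* connection handle. */
    static void disable_ckpt_on_close(sqlite3 *db){
      int isDisabled = 0;
      sqlite3_db_config(db, SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE, 1, &isDisabled);
      /* isDisabled is now 1 if checkpoints-on-close are disabled */
    }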
12673 12674 12675 12676 12677 12678 12679 | #define OP_ResetCount 118 #define OP_SorterCompare 119 /* synopsis: if key(P1)!=trim(r[P3],P4) goto P2 */ #define OP_SorterData 120 /* synopsis: r[P2]=data */ #define OP_RowKey 121 /* synopsis: r[P2]=key */ #define OP_RowData 122 /* synopsis: r[P2]=data */ #define OP_Rowid 123 /* synopsis: r[P2]=rowid */ #define OP_NullRow 124 | | | 12694 12695 12696 12697 12698 12699 12700 12701 12702 12703 12704 12705 12706 12707 12708 | #define OP_ResetCount 118 #define OP_SorterCompare 119 /* synopsis: if key(P1)!=trim(r[P3],P4) goto P2 */ #define OP_SorterData 120 /* synopsis: r[P2]=data */ #define OP_RowKey 121 /* synopsis: r[P2]=key */ #define OP_RowData 122 /* synopsis: r[P2]=data */ #define OP_Rowid 123 /* synopsis: r[P2]=rowid */ #define OP_NullRow 124 #define OP_SorterInsert 125 /* synopsis: key=r[P2] */ #define OP_IdxInsert 126 /* synopsis: key=r[P2] */ #define OP_IdxDelete 127 /* synopsis: key=r[P2@P3] */ #define OP_Seek 128 /* synopsis: Move P3 to P1.rowid */ #define OP_IdxRowid 129 /* synopsis: r[P2]=rowid */ #define OP_Destroy 130 #define OP_Clear 131 #define OP_Real 132 /* same as TK_FLOAT, synopsis: r[P2]=P4 */ |
︙ | ︙ | |||
13028 13029 13030 13031 13032 13033 13034 | Pager **ppPager, const char*, int, int, int, void(*)(DbPage*) ); | | | 13049 13050 13051 13052 13053 13054 13055 13056 13057 13058 13059 13060 13061 13062 13063 | Pager **ppPager, const char*, int, int, int, void(*)(DbPage*) ); SQLITE_PRIVATE int sqlite3PagerClose(Pager *pPager, sqlite3*); SQLITE_PRIVATE int sqlite3PagerReadFileheader(Pager*, int, unsigned char*); /* Functions used to configure a Pager object. */ SQLITE_PRIVATE void sqlite3PagerSetBusyhandler(Pager*, int(*)(void *), void *); SQLITE_PRIVATE int sqlite3PagerSetPagesize(Pager*, u32*, int); #ifdef SQLITE_HAS_CODEC SQLITE_PRIVATE void sqlite3PagerAlignReserve(Pager*,Pager*); |
︙ | ︙ | |||
13079 13080 13081 13082 13083 13084 13085 | SQLITE_PRIVATE int sqlite3PagerCommitPhaseTwo(Pager*); SQLITE_PRIVATE int sqlite3PagerRollback(Pager*); SQLITE_PRIVATE int sqlite3PagerOpenSavepoint(Pager *pPager, int n); SQLITE_PRIVATE int sqlite3PagerSavepoint(Pager *pPager, int op, int iSavepoint); SQLITE_PRIVATE int sqlite3PagerSharedLock(Pager *pPager); #ifndef SQLITE_OMIT_WAL | | | > > > | 13100 13101 13102 13103 13104 13105 13106 13107 13108 13109 13110 13111 13112 13113 13114 13115 13116 13117 13118 13119 13120 13121 13122 13123 13124 13125 | SQLITE_PRIVATE int sqlite3PagerCommitPhaseTwo(Pager*); SQLITE_PRIVATE int sqlite3PagerRollback(Pager*); SQLITE_PRIVATE int sqlite3PagerOpenSavepoint(Pager *pPager, int n); SQLITE_PRIVATE int sqlite3PagerSavepoint(Pager *pPager, int op, int iSavepoint); SQLITE_PRIVATE int sqlite3PagerSharedLock(Pager *pPager); #ifndef SQLITE_OMIT_WAL SQLITE_PRIVATE int sqlite3PagerCheckpoint(Pager *pPager, sqlite3*, int, int*, int*); SQLITE_PRIVATE int sqlite3PagerWalSupported(Pager *pPager); SQLITE_PRIVATE int sqlite3PagerWalCallback(Pager *pPager); SQLITE_PRIVATE int sqlite3PagerOpenWal(Pager *pPager, int *pisOpen); SQLITE_PRIVATE int sqlite3PagerCloseWal(Pager *pPager, sqlite3*); SQLITE_PRIVATE int sqlite3PagerUseWal(Pager *pPager); # ifdef SQLITE_ENABLE_SNAPSHOT SQLITE_PRIVATE int sqlite3PagerSnapshotGet(Pager *pPager, sqlite3_snapshot **ppSnapshot); SQLITE_PRIVATE int sqlite3PagerSnapshotOpen(Pager *pPager, sqlite3_snapshot *pSnapshot); # endif #else # define sqlite3PagerUseWal(x) 0 #endif #ifdef SQLITE_ENABLE_ZIPVFS SQLITE_PRIVATE int sqlite3PagerWalFramesize(Pager *pPager); #endif /* Functions used to query pager state and configuration. */ |
︙ | ︙ | |||
14058 14059 14060 14061 14062 14063 14064 14065 14066 14067 14068 14069 14070 14071 | #define SQLITE_EnableTrigger 0x01000000 /* True to enable triggers */ #define SQLITE_DeferFKs 0x02000000 /* Defer all FK constraints */ #define SQLITE_QueryOnly 0x04000000 /* Disable database changes */ #define SQLITE_VdbeEQP 0x08000000 /* Debug EXPLAIN QUERY PLAN */ #define SQLITE_Vacuum 0x10000000 /* Currently in a VACUUM */ #define SQLITE_CellSizeCk 0x20000000 /* Check btree cell sizes on load */ #define SQLITE_Fts3Tokenizer 0x40000000 /* Enable fts3_tokenizer(2) */ /* ** Bits of the sqlite3.dbOptFlags field that are used by the ** sqlite3_test_control(SQLITE_TESTCTRL_OPTIMIZATIONS,...) interface to ** selectively disable various optimizations. */ | > | 14082 14083 14084 14085 14086 14087 14088 14089 14090 14091 14092 14093 14094 14095 14096 | #define SQLITE_EnableTrigger 0x01000000 /* True to enable triggers */ #define SQLITE_DeferFKs 0x02000000 /* Defer all FK constraints */ #define SQLITE_QueryOnly 0x04000000 /* Disable database changes */ #define SQLITE_VdbeEQP 0x08000000 /* Debug EXPLAIN QUERY PLAN */ #define SQLITE_Vacuum 0x10000000 /* Currently in a VACUUM */ #define SQLITE_CellSizeCk 0x20000000 /* Check btree cell sizes on load */ #define SQLITE_Fts3Tokenizer 0x40000000 /* Enable fts3_tokenizer(2) */ #define SQLITE_NoCkptOnClose 0x80000000 /* No checkpoint on close()/DETACH */ /* ** Bits of the sqlite3.dbOptFlags field that are used by the ** sqlite3_test_control(SQLITE_TESTCTRL_OPTIMIZATIONS,...) interface to ** selectively disable various optimizations. */ |
︙ | ︙ | |||
17266 17267 17268 17269 17270 17271 17272 17273 17274 17275 17276 17277 17278 17279 | "DEBUG", #endif #if SQLITE_DEFAULT_LOCKING_MODE "DEFAULT_LOCKING_MODE=" CTIMEOPT_VAL(SQLITE_DEFAULT_LOCKING_MODE), #endif #if defined(SQLITE_DEFAULT_MMAP_SIZE) && !defined(SQLITE_DEFAULT_MMAP_SIZE_xc) "DEFAULT_MMAP_SIZE=" CTIMEOPT_VAL(SQLITE_DEFAULT_MMAP_SIZE), #endif #if SQLITE_DISABLE_DIRSYNC "DISABLE_DIRSYNC", #endif #if SQLITE_DISABLE_LFS "DISABLE_LFS", #endif | > > > | 17291 17292 17293 17294 17295 17296 17297 17298 17299 17300 17301 17302 17303 17304 17305 17306 17307 | "DEBUG", #endif #if SQLITE_DEFAULT_LOCKING_MODE "DEFAULT_LOCKING_MODE=" CTIMEOPT_VAL(SQLITE_DEFAULT_LOCKING_MODE), #endif #if defined(SQLITE_DEFAULT_MMAP_SIZE) && !defined(SQLITE_DEFAULT_MMAP_SIZE_xc) "DEFAULT_MMAP_SIZE=" CTIMEOPT_VAL(SQLITE_DEFAULT_MMAP_SIZE), #endif #if SQLITE_DIRECT_OVERFLOW_READ "DIRECT_OVERFLOW_READ", #endif #if SQLITE_DISABLE_DIRSYNC "DISABLE_DIRSYNC", #endif #if SQLITE_DISABLE_LFS "DISABLE_LFS", #endif |
︙ | ︙ | |||
17352 17353 17354 17355 17356 17357 17358 17359 17360 17361 17362 17363 17364 17365 | "ENABLE_STAT3", #endif #if SQLITE_ENABLE_UNLOCK_NOTIFY "ENABLE_UNLOCK_NOTIFY", #endif #if SQLITE_ENABLE_UPDATE_DELETE_LIMIT "ENABLE_UPDATE_DELETE_LIMIT", #endif #if SQLITE_HAS_CODEC "HAS_CODEC", #endif #if HAVE_ISNAN || SQLITE_HAVE_ISNAN "HAVE_ISNAN", #endif | > > > | 17380 17381 17382 17383 17384 17385 17386 17387 17388 17389 17390 17391 17392 17393 17394 17395 17396 | "ENABLE_STAT3", #endif #if SQLITE_ENABLE_UNLOCK_NOTIFY "ENABLE_UNLOCK_NOTIFY", #endif #if SQLITE_ENABLE_UPDATE_DELETE_LIMIT "ENABLE_UPDATE_DELETE_LIMIT", #endif #if defined(SQLITE_ENABLE_URI_00_ERROR) "ENABLE_URI_00_ERROR", #endif #if SQLITE_HAS_CODEC "HAS_CODEC", #endif #if HAVE_ISNAN || SQLITE_HAVE_ISNAN "HAVE_ISNAN", #endif |
︙ | ︙ | |||
18103 18104 18105 18106 18107 18108 18109 | u8 *aRecord; /* old.* database record */ KeyInfo keyinfo; UnpackedRecord *pUnpacked; /* Unpacked version of aRecord[] */ UnpackedRecord *pNewUnpacked; /* Unpacked version of new.* record */ int iNewReg; /* Register for new.* values */ i64 iKey1; /* First key value passed to hook */ i64 iKey2; /* Second key value passed to hook */ | < > | 18134 18135 18136 18137 18138 18139 18140 18141 18142 18143 18144 18145 18146 18147 18148 18149 | u8 *aRecord; /* old.* database record */ KeyInfo keyinfo; UnpackedRecord *pUnpacked; /* Unpacked version of aRecord[] */ UnpackedRecord *pNewUnpacked; /* Unpacked version of new.* record */ int iNewReg; /* Register for new.* values */ i64 iKey1; /* First key value passed to hook */ i64 iKey2; /* Second key value passed to hook */ Mem *aNew; /* Array of new.* values */ Table *pTab; /* Schema object being upated */ }; /* ** Function prototypes */ SQLITE_PRIVATE void sqlite3VdbeError(Vdbe*, const char *, ...); SQLITE_PRIVATE void sqlite3VdbeFreeCursor(Vdbe *, VdbeCursor*); |
︙ | ︙ | |||
24596 24597 24598 24599 24600 24601 24602 | */ SQLITE_PRIVATE char *sqlite3DbStrDup(sqlite3 *db, const char *z){ char *zNew; size_t n; if( z==0 ){ return 0; } | | < | | 24627 24628 24629 24630 24631 24632 24633 24634 24635 24636 24637 24638 24639 24640 24641 24642 | */ SQLITE_PRIVATE char *sqlite3DbStrDup(sqlite3 *db, const char *z){ char *zNew; size_t n; if( z==0 ){ return 0; } n = strlen(z) + 1; zNew = sqlite3DbMallocRaw(db, n); if( zNew ){ memcpy(zNew, z, n); } return zNew; } SQLITE_PRIVATE char *sqlite3DbStrNDup(sqlite3 *db, const char *z, u64 n){ char *zNew; |
︙ | ︙ | |||
29125 29126 29127 29128 29129 29130 29131 | /* 118 */ "ResetCount" OpHelp(""), /* 119 */ "SorterCompare" OpHelp("if key(P1)!=trim(r[P3],P4) goto P2"), /* 120 */ "SorterData" OpHelp("r[P2]=data"), /* 121 */ "RowKey" OpHelp("r[P2]=key"), /* 122 */ "RowData" OpHelp("r[P2]=data"), /* 123 */ "Rowid" OpHelp("r[P2]=rowid"), /* 124 */ "NullRow" OpHelp(""), | | | 29155 29156 29157 29158 29159 29160 29161 29162 29163 29164 29165 29166 29167 29168 29169 | /* 118 */ "ResetCount" OpHelp(""), /* 119 */ "SorterCompare" OpHelp("if key(P1)!=trim(r[P3],P4) goto P2"), /* 120 */ "SorterData" OpHelp("r[P2]=data"), /* 121 */ "RowKey" OpHelp("r[P2]=key"), /* 122 */ "RowData" OpHelp("r[P2]=data"), /* 123 */ "Rowid" OpHelp("r[P2]=rowid"), /* 124 */ "NullRow" OpHelp(""), /* 125 */ "SorterInsert" OpHelp("key=r[P2]"), /* 126 */ "IdxInsert" OpHelp("key=r[P2]"), /* 127 */ "IdxDelete" OpHelp("key=r[P2@P3]"), /* 128 */ "Seek" OpHelp("Move P3 to P1.rowid"), /* 129 */ "IdxRowid" OpHelp("r[P2]=rowid"), /* 130 */ "Destroy" OpHelp(""), /* 131 */ "Clear" OpHelp(""), /* 132 */ "Real" OpHelp("r[P2]=P4"), |
︙ | ︙ | |||
40672 40673 40674 40675 40676 40677 40678 40679 40680 40681 40682 40683 40684 40685 | winIoerrRetryDelay = a[1]; }else{ a[1] = winIoerrRetryDelay; } OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); return SQLITE_OK; } #ifdef SQLITE_TEST case SQLITE_FCNTL_WIN32_SET_HANDLE: { LPHANDLE phFile = (LPHANDLE)pArg; HANDLE hOldFile = pFile->h; pFile->h = *phFile; *phFile = hOldFile; OSTRACE(("FCNTL oldFile=%p, newFile=%p, rc=SQLITE_OK\n", | > > > > > > | 40702 40703 40704 40705 40706 40707 40708 40709 40710 40711 40712 40713 40714 40715 40716 40717 40718 40719 40720 40721 | winIoerrRetryDelay = a[1]; }else{ a[1] = winIoerrRetryDelay; } OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); return SQLITE_OK; } case SQLITE_FCNTL_WIN32_GET_HANDLE: { LPHANDLE phFile = (LPHANDLE)pArg; *phFile = pFile->h; OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); return SQLITE_OK; } #ifdef SQLITE_TEST case SQLITE_FCNTL_WIN32_SET_HANDLE: { LPHANDLE phFile = (LPHANDLE)pArg; HANDLE hOldFile = pFile->h; pFile->h = *phFile; *phFile = hOldFile; OSTRACE(("FCNTL oldFile=%p, newFile=%p, rc=SQLITE_OK\n", |
︙ | ︙ | |||
44713 44714 44715 44716 44717 44718 44719 | sqlite3BeginBenignMalloc(); if( pcache1.nInitPage>0 ){ szBulk = pCache->szAlloc * (i64)pcache1.nInitPage; }else{ szBulk = -1024 * (i64)pcache1.nInitPage; } if( szBulk > pCache->szAlloc*(i64)pCache->nMax ){ | | | 44749 44750 44751 44752 44753 44754 44755 44756 44757 44758 44759 44760 44761 44762 44763 | sqlite3BeginBenignMalloc(); if( pcache1.nInitPage>0 ){ szBulk = pCache->szAlloc * (i64)pcache1.nInitPage; }else{ szBulk = -1024 * (i64)pcache1.nInitPage; } if( szBulk > pCache->szAlloc*(i64)pCache->nMax ){ szBulk = pCache->szAlloc*(i64)pCache->nMax; } zBulk = pCache->pBulk = sqlite3Malloc( szBulk ); sqlite3EndBenignMalloc(); if( zBulk ){ int nBulk = sqlite3MallocSize(zBulk)/pCache->szAlloc; int i; for(i=0; i<nBulk; i++){ |
︙ | ︙ | |||
46247 46248 46249 46250 46251 46252 46253 | */ #define WAL_SYNC_TRANSACTIONS 0x20 /* Sync at the end of each transaction */ #define SQLITE_SYNC_MASK 0x13 /* Mask off the SQLITE_SYNC_* values */ #ifdef SQLITE_OMIT_WAL # define sqlite3WalOpen(x,y,z) 0 # define sqlite3WalLimit(x,y) | | | | | 46283 46284 46285 46286 46287 46288 46289 46290 46291 46292 46293 46294 46295 46296 46297 46298 46299 46300 46301 46302 46303 46304 46305 46306 46307 46308 46309 46310 46311 46312 46313 46314 46315 46316 46317 46318 46319 46320 46321 46322 46323 46324 46325 | */ #define WAL_SYNC_TRANSACTIONS 0x20 /* Sync at the end of each transaction */ #define SQLITE_SYNC_MASK 0x13 /* Mask off the SQLITE_SYNC_* values */ #ifdef SQLITE_OMIT_WAL # define sqlite3WalOpen(x,y,z) 0 # define sqlite3WalLimit(x,y) # define sqlite3WalClose(v,w,x,y,z) 0 # define sqlite3WalBeginReadTransaction(y,z) 0 # define sqlite3WalEndReadTransaction(z) # define sqlite3WalDbsize(y) 0 # define sqlite3WalBeginWriteTransaction(y) 0 # define sqlite3WalEndWriteTransaction(x) 0 # define sqlite3WalUndo(x,y,z) 0 # define sqlite3WalSavepoint(y,z) # define sqlite3WalSavepointUndo(y,z) 0 # define sqlite3WalFrames(u,v,w,x,y,z) 0 # define sqlite3WalCheckpoint(q,r,s,t,u,v,w,x,y,z) 0 # define sqlite3WalCallback(z) 0 # define sqlite3WalExclusiveMode(y,z) 0 # define sqlite3WalHeapMemory(z) 0 # define sqlite3WalFramesize(z) 0 # define sqlite3WalFindFrame(x,y,z) 0 # define sqlite3WalFile(x) 0 #else #define WAL_SAVEPOINT_NDATA 4 /* Connection to a write-ahead log (WAL) file. ** There is one object of this type for each pager. */ typedef struct Wal Wal; /* Open and close a connection to a write-ahead log. */ SQLITE_PRIVATE int sqlite3WalOpen(sqlite3_vfs*, sqlite3_file*, const char *, int, i64, Wal**); SQLITE_PRIVATE int sqlite3WalClose(Wal *pWal, sqlite3*, int sync_flags, int, u8 *); /* Set the limiting size of a WAL file. */ SQLITE_PRIVATE void sqlite3WalLimit(Wal*, i64); /* Used by readers to open (lock) and close (unlock) a snapshot. A ** snapshot is like a read-transaction. It is the state of the database ** at an instant in time. sqlite3WalOpenSnapshot gets a read lock and |
︙ | ︙ | |||
46318 46319 46320 46321 46322 46323 46324 46325 46326 46327 46328 46329 46330 46331 | /* Write a frame or frames to the log. */ SQLITE_PRIVATE int sqlite3WalFrames(Wal *pWal, int, PgHdr *, Pgno, int, int); /* Copy pages from the log to the database file */ SQLITE_PRIVATE int sqlite3WalCheckpoint( Wal *pWal, /* Write-ahead log connection */ int eMode, /* One of PASSIVE, FULL and RESTART */ int (*xBusy)(void*), /* Function to call when busy */ void *pBusyArg, /* Context argument for xBusyHandler */ int sync_flags, /* Flags to sync db file with (or 0) */ int nBuf, /* Size of buffer nBuf */ u8 *zBuf, /* Temporary buffer to use */ int *pnLog, /* OUT: Number of frames in WAL */ | > | 46354 46355 46356 46357 46358 46359 46360 46361 46362 46363 46364 46365 46366 46367 46368 | /* Write a frame or frames to the log. */ SQLITE_PRIVATE int sqlite3WalFrames(Wal *pWal, int, PgHdr *, Pgno, int, int); /* Copy pages from the log to the database file */ SQLITE_PRIVATE int sqlite3WalCheckpoint( Wal *pWal, /* Write-ahead log connection */ sqlite3 *db, /* Check this handle's interrupt flag */ int eMode, /* One of PASSIVE, FULL and RESTART */ int (*xBusy)(void*), /* Function to call when busy */ void *pBusyArg, /* Context argument for xBusyHandler */ int sync_flags, /* Flags to sync db file with (or 0) */ int nBuf, /* Size of buffer nBuf */ u8 *zBuf, /* Temporary buffer to use */ int *pnLog, /* OUT: Number of frames in WAL */ |
︙ | ︙ | |||
47162 47163 47164 47165 47166 47167 47168 | #define isOpen(pFd) ((pFd)->pMethods!=0) /* ** Return true if this pager uses a write-ahead log instead of the usual ** rollback journal. Otherwise false. */ #ifndef SQLITE_OMIT_WAL | | > | 47199 47200 47201 47202 47203 47204 47205 47206 47207 47208 47209 47210 47211 47212 47213 47214 47215 47216 | #define isOpen(pFd) ((pFd)->pMethods!=0) /* ** Return true if this pager uses a write-ahead log instead of the usual ** rollback journal. Otherwise false. */ #ifndef SQLITE_OMIT_WAL SQLITE_PRIVATE int sqlite3PagerUseWal(Pager *pPager){ return (pPager->pWal!=0); } # define pagerUseWal(x) sqlite3PagerUseWal(x) #else # define pagerUseWal(x) 0 # define pagerRollbackWal(x) 0 # define pagerWalFrames(v,w,x,y) 0 # define pagerOpenWalIfPresent(z) SQLITE_OK # define pagerBeginReadTransaction(z) SQLITE_OK #endif |
︙ | ︙ | |||
50366 50367 50368 50369 50370 50371 50372 | ** result in a coredump. ** ** This function always succeeds. If a transaction is active an attempt ** is made to roll it back. If an error occurs during the rollback ** a hot journal may be left in the filesystem but no error is returned ** to the caller. */ | | > | | 50404 50405 50406 50407 50408 50409 50410 50411 50412 50413 50414 50415 50416 50417 50418 50419 50420 50421 50422 50423 50424 50425 50426 50427 50428 50429 | ** result in a coredump. ** ** This function always succeeds. If a transaction is active an attempt ** is made to roll it back. If an error occurs during the rollback ** a hot journal may be left in the filesystem but no error is returned ** to the caller. */ SQLITE_PRIVATE int sqlite3PagerClose(Pager *pPager, sqlite3 *db){ u8 *pTmp = (u8 *)pPager->pTmpSpace; assert( db || pagerUseWal(pPager)==0 ); assert( assert_pager_state(pPager) ); disable_simulated_io_errors(); sqlite3BeginBenignMalloc(); pagerFreeMapHdrs(pPager); /* pPager->errCode = 0; */ pPager->exclusiveMode = 0; #ifndef SQLITE_OMIT_WAL sqlite3WalClose(pPager->pWal,db,pPager->ckptSyncFlags,pPager->pageSize,pTmp); pPager->pWal = 0; #endif pager_reset(pPager); if( MEMDB ){ pager_unlock(pPager); }else{ /* If it is open, sync the journal file before calling UnlockAndRollback. |
︙ | ︙ | |||
53539 53540 53541 53542 53543 53544 53545 | /* ** This function is called when the user invokes "PRAGMA wal_checkpoint", ** "PRAGMA wal_blocking_checkpoint" or calls the sqlite3_wal_checkpoint() ** or wal_blocking_checkpoint() API functions. ** ** Parameter eMode is one of SQLITE_CHECKPOINT_PASSIVE, FULL or RESTART. */ | | > > > > > > | | 53578 53579 53580 53581 53582 53583 53584 53585 53586 53587 53588 53589 53590 53591 53592 53593 53594 53595 53596 53597 53598 53599 53600 53601 | /* ** This function is called when the user invokes "PRAGMA wal_checkpoint", ** "PRAGMA wal_blocking_checkpoint" or calls the sqlite3_wal_checkpoint() ** or wal_blocking_checkpoint() API functions. ** ** Parameter eMode is one of SQLITE_CHECKPOINT_PASSIVE, FULL or RESTART. */ SQLITE_PRIVATE int sqlite3PagerCheckpoint( Pager *pPager, /* Checkpoint on this pager */ sqlite3 *db, /* Db handle used to check for interrupts */ int eMode, /* Type of checkpoint */ int *pnLog, /* OUT: Final number of frames in log */ int *pnCkpt /* OUT: Final number of checkpointed frames */ ){ int rc = SQLITE_OK; if( pPager->pWal ){ rc = sqlite3WalCheckpoint(pPager->pWal, db, eMode, (eMode==SQLITE_CHECKPOINT_PASSIVE ? 0 : pPager->xBusyHandler), pPager->pBusyHandlerArg, pPager->ckptSyncFlags, pPager->pageSize, (u8 *)pPager->pTmpSpace, pnLog, pnCkpt ); } return rc; |
︙ | ︙ | |||
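A caller-side illustration of the checkpoint path changed above (a sketch only; the function name checkpoint_main_db and the choice of SQLITE_CHECKPOINT_TRUNCATE are assumptions, not part of this check-in). sqlite3_wal_checkpoint_v2() is the public entry that reaches sqlite3PagerCheckpoint() and, with the db handle now threaded through, can be stopped by sqlite3_interrupt():

    /* Sketch: run a blocking checkpoint on the "main" database.  With the
    ** change above, calling sqlite3_interrupt(db) from another thread while
    ** this runs can end the checkpoint early with SQLITE_INTERRUPT. */
    #include <stdio.h>
    #include "sqlite3.h"

    static int checkpoint_main_db(sqlite3 *db){
      int nLog = 0, nCkpt = 0;
      int rc = sqlite3_wal_checkpoint_v2(db, "main",
                   SQLITE_CHECKPOINT_TRUNCATE, &nLog, &nCkpt);
      if( rc!=SQLITE_OK ){
        fprintf(stderr, "checkpoint failed: %s\n", sqlite3_errmsg(db));
      }
      return rc;
    }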
53674 53675 53676 53677 53678 53679 53680 | ** to switching from WAL to rollback mode. ** ** Before closing the log file, this function attempts to take an ** EXCLUSIVE lock on the database file. If this cannot be obtained, an ** error (SQLITE_BUSY) is returned and the log connection is not closed. ** If successful, the EXCLUSIVE lock is not released before returning. */ | | | 53719 53720 53721 53722 53723 53724 53725 53726 53727 53728 53729 53730 53731 53732 53733 | ** to switching from WAL to rollback mode. ** ** Before closing the log file, this function attempts to take an ** EXCLUSIVE lock on the database file. If this cannot be obtained, an ** error (SQLITE_BUSY) is returned and the log connection is not closed. ** If successful, the EXCLUSIVE lock is not released before returning. */ SQLITE_PRIVATE int sqlite3PagerCloseWal(Pager *pPager, sqlite3 *db){ int rc = SQLITE_OK; assert( pPager->journalMode==PAGER_JOURNALMODE_WAL ); /* If the log file is not already open, but does exist in the file-system, ** it may need to be checkpointed before the connection can switch to ** rollback mode. Open it now so this can happen. |
︙ | ︙ | |||
53702 53703 53704 53705 53706 53707 53708 | /* Checkpoint and close the log. Because an EXCLUSIVE lock is held on ** the database file, the log and log-summary files will be deleted. */ if( rc==SQLITE_OK && pPager->pWal ){ rc = pagerExclusiveLock(pPager); if( rc==SQLITE_OK ){ | | | 53747 53748 53749 53750 53751 53752 53753 53754 53755 53756 53757 53758 53759 53760 53761 | /* Checkpoint and close the log. Because an EXCLUSIVE lock is held on ** the database file, the log and log-summary files will be deleted. */ if( rc==SQLITE_OK && pPager->pWal ){ rc = pagerExclusiveLock(pPager); if( rc==SQLITE_OK ){ rc = sqlite3WalClose(pPager->pWal, db, pPager->ckptSyncFlags, pPager->pageSize, (u8*)pPager->pTmpSpace); pPager->pWal = 0; pagerFixMaplimit(pPager); if( rc && !pPager->exclusiveMode ) pagerUnlockDb(pPager, SHARED_LOCK); } } return rc; |
︙ | ︙ | |||
55485 55486 55487 55488 55489 55490 55491 55492 55493 55494 55495 55496 55497 55498 | ** ** The caller must be holding sufficient locks to ensure that no other ** checkpoint is running (in any other thread or process) at the same ** time. */ static int walCheckpoint( Wal *pWal, /* Wal connection */ int eMode, /* One of PASSIVE, FULL or RESTART */ int (*xBusy)(void*), /* Function to call when busy */ void *pBusyArg, /* Context argument for xBusyHandler */ int sync_flags, /* Flags for OsSync() (or 0) */ u8 *zBuf /* Temporary buffer to use */ ){ int rc = SQLITE_OK; /* Return code */ | > | 55530 55531 55532 55533 55534 55535 55536 55537 55538 55539 55540 55541 55542 55543 55544 | ** ** The caller must be holding sufficient locks to ensure that no other ** checkpoint is running (in any other thread or process) at the same ** time. */ static int walCheckpoint( Wal *pWal, /* Wal connection */ sqlite3 *db, /* Check for interrupts on this handle */ int eMode, /* One of PASSIVE, FULL or RESTART */ int (*xBusy)(void*), /* Function to call when busy */ void *pBusyArg, /* Context argument for xBusyHandler */ int sync_flags, /* Flags for OsSync() (or 0) */ u8 *zBuf /* Temporary buffer to use */ ){ int rc = SQLITE_OK; /* Return code */ |
︙ | ︙ | |||
55579 55580 55581 55582 55583 55584 55585 55586 55587 55588 55589 55590 55591 55592 | } /* Iterate through the contents of the WAL, copying data to the db file */ while( rc==SQLITE_OK && 0==walIteratorNext(pIter, &iDbpage, &iFrame) ){ i64 iOffset; assert( walFramePgno(pWal, iFrame)==iDbpage ); if( iFrame<=nBackfill || iFrame>mxSafeFrame || iDbpage>mxPage ){ continue; } iOffset = walFrameOffset(iFrame, szPage) + WAL_FRAME_HDRSIZE; /* testcase( IS_BIG_INT(iOffset) ); // requires a 4GiB WAL file */ rc = sqlite3OsRead(pWal->pWalFd, zBuf, szPage, iOffset); if( rc!=SQLITE_OK ) break; | > > > > | 55625 55626 55627 55628 55629 55630 55631 55632 55633 55634 55635 55636 55637 55638 55639 55640 55641 55642 | } /* Iterate through the contents of the WAL, copying data to the db file */ while( rc==SQLITE_OK && 0==walIteratorNext(pIter, &iDbpage, &iFrame) ){ i64 iOffset; assert( walFramePgno(pWal, iFrame)==iDbpage ); if( db->u1.isInterrupted ){ rc = db->mallocFailed ? SQLITE_NOMEM_BKPT : SQLITE_INTERRUPT; break; } if( iFrame<=nBackfill || iFrame>mxSafeFrame || iDbpage>mxPage ){ continue; } iOffset = walFrameOffset(iFrame, szPage) + WAL_FRAME_HDRSIZE; /* testcase( IS_BIG_INT(iOffset) ); // requires a 4GiB WAL file */ rc = sqlite3OsRead(pWal->pWalFd, zBuf, szPage, iOffset); if( rc!=SQLITE_OK ) break; |
︙ | ︙ | |||
55683 55684 55685 55686 55687 55688 55689 55690 55691 55692 55693 55694 55695 55696 55697 55698 55699 55700 55701 55702 55703 55704 55705 | } /* ** Close a connection to a log file. */ SQLITE_PRIVATE int sqlite3WalClose( Wal *pWal, /* Wal to close */ int sync_flags, /* Flags to pass to OsSync() (or 0) */ int nBuf, u8 *zBuf /* Buffer of at least nBuf bytes */ ){ int rc = SQLITE_OK; if( pWal ){ int isDelete = 0; /* True to unlink wal and wal-index files */ /* If an EXCLUSIVE lock can be obtained on the database file (using the ** ordinary, rollback-mode locking methods, this guarantees that the ** connection associated with this log file is the only connection to ** the database. In this case checkpoint the database and unlink both ** the wal and wal-index files. ** ** The EXCLUSIVE lock is not released before returning. */ | > > | | | | | 55733 55734 55735 55736 55737 55738 55739 55740 55741 55742 55743 55744 55745 55746 55747 55748 55749 55750 55751 55752 55753 55754 55755 55756 55757 55758 55759 55760 55761 55762 55763 55764 55765 55766 55767 55768 55769 55770 55771 | } /* ** Close a connection to a log file. */ SQLITE_PRIVATE int sqlite3WalClose( Wal *pWal, /* Wal to close */ sqlite3 *db, /* For interrupt flag */ int sync_flags, /* Flags to pass to OsSync() (or 0) */ int nBuf, u8 *zBuf /* Buffer of at least nBuf bytes */ ){ int rc = SQLITE_OK; if( pWal ){ int isDelete = 0; /* True to unlink wal and wal-index files */ /* If an EXCLUSIVE lock can be obtained on the database file (using the ** ordinary, rollback-mode locking methods, this guarantees that the ** connection associated with this log file is the only connection to ** the database. In this case checkpoint the database and unlink both ** the wal and wal-index files. ** ** The EXCLUSIVE lock is not released before returning. */ if( (db->flags & SQLITE_NoCkptOnClose)==0 && SQLITE_OK==(rc = sqlite3OsLock(pWal->pDbFd, SQLITE_LOCK_EXCLUSIVE)) ){ if( pWal->exclusiveMode==WAL_NORMAL_MODE ){ pWal->exclusiveMode = WAL_EXCLUSIVE_MODE; } rc = sqlite3WalCheckpoint(pWal, db, SQLITE_CHECKPOINT_PASSIVE, 0, 0, sync_flags, nBuf, zBuf, 0, 0 ); if( rc==SQLITE_OK ){ int bPersist = -1; sqlite3OsFileControlHint( pWal->pDbFd, SQLITE_FCNTL_PERSIST_WAL, &bPersist ); if( bPersist!=1 ){ |
︙ | ︙ | |||
56953 56954 56955 56956 56957 56958 56959 56960 56961 56962 56963 56964 56965 56966 | ** we can from WAL into the database. ** ** If parameter xBusy is not NULL, it is a pointer to a busy-handler ** callback. In this case this function runs a blocking checkpoint. */ SQLITE_PRIVATE int sqlite3WalCheckpoint( Wal *pWal, /* Wal connection */ int eMode, /* PASSIVE, FULL, RESTART, or TRUNCATE */ int (*xBusy)(void*), /* Function to call when busy */ void *pBusyArg, /* Context argument for xBusyHandler */ int sync_flags, /* Flags to sync db file with (or 0) */ int nBuf, /* Size of temporary buffer */ u8 *zBuf, /* Temporary buffer to use */ int *pnLog, /* OUT: Number of frames in WAL */ | > | 57005 57006 57007 57008 57009 57010 57011 57012 57013 57014 57015 57016 57017 57018 57019 | ** we can from WAL into the database. ** ** If parameter xBusy is not NULL, it is a pointer to a busy-handler ** callback. In this case this function runs a blocking checkpoint. */ SQLITE_PRIVATE int sqlite3WalCheckpoint( Wal *pWal, /* Wal connection */ sqlite3 *db, /* Check this handle's interrupt flag */ int eMode, /* PASSIVE, FULL, RESTART, or TRUNCATE */ int (*xBusy)(void*), /* Function to call when busy */ void *pBusyArg, /* Context argument for xBusyHandler */ int sync_flags, /* Flags to sync db file with (or 0) */ int nBuf, /* Size of temporary buffer */ u8 *zBuf, /* Temporary buffer to use */ int *pnLog, /* OUT: Number of frames in WAL */ |
︙ | ︙ | |||
57027 57028 57029 57030 57031 57032 57033 | /* Copy data from the log to the database file. */ if( rc==SQLITE_OK ){ if( pWal->hdr.mxFrame && walPagesize(pWal)!=nBuf ){ rc = SQLITE_CORRUPT_BKPT; }else{ | | | 57080 57081 57082 57083 57084 57085 57086 57087 57088 57089 57090 57091 57092 57093 57094 | /* Copy data from the log to the database file. */ if( rc==SQLITE_OK ){ if( pWal->hdr.mxFrame && walPagesize(pWal)!=nBuf ){ rc = SQLITE_CORRUPT_BKPT; }else{ rc = walCheckpoint(pWal, db, eMode2, xBusy2, pBusyArg, sync_flags, zBuf); } /* If no error occurred, set the output variables. */ if( rc==SQLITE_OK || rc==SQLITE_BUSY ){ if( pnLog ) *pnLog = (int)pWal->hdr.mxFrame; if( pnCkpt ) *pnCkpt = (int)(walCkptInfo(pWal)->nBackfill); } |
︙ | ︙ | |||
60617 60618 60619 60620 60621 60622 60623 | } #endif *ppBtree = p; btree_open_out: if( rc!=SQLITE_OK ){ if( pBt && pBt->pPager ){ | | > > > > > > > | 60670 60671 60672 60673 60674 60675 60676 60677 60678 60679 60680 60681 60682 60683 60684 60685 60686 60687 60688 60689 60690 60691 60692 60693 60694 60695 60696 60697 60698 60699 60700 60701 60702 60703 | } #endif *ppBtree = p; btree_open_out: if( rc!=SQLITE_OK ){ if( pBt && pBt->pPager ){ sqlite3PagerClose(pBt->pPager, 0); } sqlite3_free(pBt); sqlite3_free(p); *ppBtree = 0; }else{ sqlite3_file *pFile; /* If the B-Tree was successfully opened, set the pager-cache size to the ** default value. Except, when opening on an existing shared pager-cache, ** do not change the pager-cache size. */ if( sqlite3BtreeSchema(p, 0, 0)==0 ){ sqlite3PagerSetCachesize(p->pBt->pPager, SQLITE_DEFAULT_CACHE_SIZE); } pFile = sqlite3PagerFile(pBt->pPager); if( pFile->pMethods ){ sqlite3OsFileControlHint(pFile, SQLITE_FCNTL_PDB, (void*)&pBt->db); } } if( mutexOpen ){ assert( sqlite3_mutex_held(mutexOpen) ); sqlite3_mutex_leave(mutexOpen); } assert( rc!=SQLITE_OK || sqlite3BtreeConnectionCount(*ppBtree)>0 ); return rc; |
︙ | ︙ | |||
60759 60760 60761 60762 60763 60764 60765 | if( !p->sharable || removeFromSharingList(pBt) ){ /* The pBt is no longer on the sharing list, so we can access ** it without having to hold the mutex. ** ** Clean out and delete the BtShared object. */ assert( !pBt->pCursor ); | | | 60819 60820 60821 60822 60823 60824 60825 60826 60827 60828 60829 60830 60831 60832 60833 | if( !p->sharable || removeFromSharingList(pBt) ){ /* The pBt is no longer on the sharing list, so we can access ** it without having to hold the mutex. ** ** Clean out and delete the BtShared object. */ assert( !pBt->pCursor ); sqlite3PagerClose(pBt->pPager, p->db); if( pBt->xFreeSchema && pBt->pSchema ){ pBt->xFreeSchema(pBt->pSchema); } sqlite3DbFree(0, pBt->pSchema); freeTempSpace(pBt); sqlite3_free(pBt); } |
︙ | ︙ | |||
62823 62824 62825 62826 62827 62828 62829 | ** up loading large records that span many overflow pages. */ if( (eOp&0x01)==0 /* (1) */ && offset==0 /* (2) */ && (bEnd || a==ovflSize) /* (6) */ && pBt->inTransaction==TRANS_READ /* (4) */ && (fd = sqlite3PagerFile(pBt->pPager))->pMethods /* (3) */ | | | 62883 62884 62885 62886 62887 62888 62889 62890 62891 62892 62893 62894 62895 62896 62897 | ** up loading large records that span many overflow pages. */ if( (eOp&0x01)==0 /* (1) */ && offset==0 /* (2) */ && (bEnd || a==ovflSize) /* (6) */ && pBt->inTransaction==TRANS_READ /* (4) */ && (fd = sqlite3PagerFile(pBt->pPager))->pMethods /* (3) */ && 0==sqlite3PagerUseWal(pBt->pPager) /* (5) */ && &pBuf[-4]>=pBufStart /* (7) */ ){ u8 aSave[4]; u8 *aWrite = &pBuf[-4]; assert( aWrite>=pBufStart ); /* hence (7) */ memcpy(aSave, aWrite, 4); rc = sqlite3OsRead(fd, aWrite, a+4, (i64)pBt->pageSize*(nextPage-1)); |
︙ | ︙ | |||
63079 63080 63081 63082 63083 63084 63085 | assert( pCur->skipNext!=SQLITE_OK ); return pCur->skipNext; } sqlite3BtreeClearCursor(pCur); } if( pCur->iPage>=0 ){ | | > | | > > | | 63139 63140 63141 63142 63143 63144 63145 63146 63147 63148 63149 63150 63151 63152 63153 63154 63155 63156 63157 63158 63159 63160 63161 63162 63163 63164 63165 63166 63167 63168 63169 | assert( pCur->skipNext!=SQLITE_OK ); return pCur->skipNext; } sqlite3BtreeClearCursor(pCur); } if( pCur->iPage>=0 ){ if( pCur->iPage ){ do{ assert( pCur->apPage[pCur->iPage]!=0 ); releasePageNotNull(pCur->apPage[pCur->iPage--]); }while( pCur->iPage); goto skip_init; } }else if( pCur->pgnoRoot==0 ){ pCur->eState = CURSOR_INVALID; return SQLITE_OK; }else{ assert( pCur->iPage==(-1) ); rc = getAndInitPage(pCur->pBtree->pBt, pCur->pgnoRoot, &pCur->apPage[0], 0, pCur->curPagerFlags); if( rc!=SQLITE_OK ){ pCur->eState = CURSOR_INVALID; return rc; } pCur->iPage = 0; pCur->curIntKey = pCur->apPage[0]->intKey; } pRoot = pCur->apPage[0]; assert( pRoot->pgno==pCur->pgnoRoot ); |
︙ | ︙ | |||
63115 63116 63117 63118 63119 63120 63121 63122 63123 63124 63125 63126 63127 63128 63129 63130 63131 63132 | ** in such a way that page pRoot is linked into a second b-tree table ** (or the freelist). */ assert( pRoot->intKey==1 || pRoot->intKey==0 ); if( pRoot->isInit==0 || (pCur->pKeyInfo==0)!=pRoot->intKey ){ return SQLITE_CORRUPT_BKPT; } pCur->aiIdx[0] = 0; pCur->info.nSize = 0; pCur->curFlags &= ~(BTCF_AtLast|BTCF_ValidNKey|BTCF_ValidOvfl); if( pRoot->nCell>0 ){ pCur->eState = CURSOR_VALID; }else if( !pRoot->leaf ){ Pgno subpage; if( pRoot->pgno!=1 ) return SQLITE_CORRUPT_BKPT; subpage = get4byte(&pRoot->aData[pRoot->hdrOffset+8]); pCur->eState = CURSOR_VALID; | > > | 63178 63179 63180 63181 63182 63183 63184 63185 63186 63187 63188 63189 63190 63191 63192 63193 63194 63195 63196 63197 | ** in such a way that page pRoot is linked into a second b-tree table ** (or the freelist). */ assert( pRoot->intKey==1 || pRoot->intKey==0 ); if( pRoot->isInit==0 || (pCur->pKeyInfo==0)!=pRoot->intKey ){ return SQLITE_CORRUPT_BKPT; } skip_init: pCur->aiIdx[0] = 0; pCur->info.nSize = 0; pCur->curFlags &= ~(BTCF_AtLast|BTCF_ValidNKey|BTCF_ValidOvfl); pRoot = pCur->apPage[0]; if( pRoot->nCell>0 ){ pCur->eState = CURSOR_VALID; }else if( !pRoot->leaf ){ Pgno subpage; if( pRoot->pgno!=1 ) return SQLITE_CORRUPT_BKPT; subpage = get4byte(&pRoot->aData[pRoot->hdrOffset+8]); pCur->eState = CURSOR_VALID; |
︙ | ︙ | |||
67699 67700 67701 67702 67703 67704 67705 | int rc = SQLITE_OK; if( p ){ BtShared *pBt = p->pBt; sqlite3BtreeEnter(p); if( pBt->inTransaction!=TRANS_NONE ){ rc = SQLITE_LOCKED; }else{ | | | 67764 67765 67766 67767 67768 67769 67770 67771 67772 67773 67774 67775 67776 67777 67778 | int rc = SQLITE_OK; if( p ){ BtShared *pBt = p->pBt; sqlite3BtreeEnter(p); if( pBt->inTransaction!=TRANS_NONE ){ rc = SQLITE_LOCKED; }else{ rc = sqlite3PagerCheckpoint(pBt->pPager, p->db, eMode, pnLog, pnCkpt); } sqlite3BtreeLeave(p); } return rc; } #endif |
︙ | ︙ | |||
69334 69335 69336 69337 69338 69339 69340 | SQLITE_PRIVATE void sqlite3VdbeMemCast(Mem *pMem, u8 aff, u8 encoding){ if( pMem->flags & MEM_Null ) return; switch( aff ){ case SQLITE_AFF_BLOB: { /* Really a cast to BLOB */ if( (pMem->flags & MEM_Blob)==0 ){ sqlite3ValueApplyAffinity(pMem, SQLITE_AFF_TEXT, encoding); assert( pMem->flags & MEM_Str || pMem->db->mallocFailed ); | | | 69399 69400 69401 69402 69403 69404 69405 69406 69407 69408 69409 69410 69411 69412 69413 | SQLITE_PRIVATE void sqlite3VdbeMemCast(Mem *pMem, u8 aff, u8 encoding){ if( pMem->flags & MEM_Null ) return; switch( aff ){ case SQLITE_AFF_BLOB: { /* Really a cast to BLOB */ if( (pMem->flags & MEM_Blob)==0 ){ sqlite3ValueApplyAffinity(pMem, SQLITE_AFF_TEXT, encoding); assert( pMem->flags & MEM_Str || pMem->db->mallocFailed ); if( pMem->flags & MEM_Str ) MemSetTypeFlag(pMem, MEM_Blob); }else{ pMem->flags &= ~(MEM_TypeMask&~MEM_Blob); } break; } case SQLITE_AFF_NUMERIC: { sqlite3VdbeMemNumerify(pMem); |
︙ | ︙ | |||
75074 75075 75076 75077 75078 75079 75080 | preupdate.iNewReg = iReg; preupdate.keyinfo.db = db; preupdate.keyinfo.enc = ENC(db); preupdate.keyinfo.nField = pTab->nCol; preupdate.keyinfo.aSortOrder = (u8*)&fakeSortOrder; preupdate.iKey1 = iKey1; preupdate.iKey2 = iKey2; | | | 75139 75140 75141 75142 75143 75144 75145 75146 75147 75148 75149 75150 75151 75152 75153 | preupdate.iNewReg = iReg; preupdate.keyinfo.db = db; preupdate.keyinfo.enc = ENC(db); preupdate.keyinfo.nField = pTab->nCol; preupdate.keyinfo.aSortOrder = (u8*)&fakeSortOrder; preupdate.iKey1 = iKey1; preupdate.iKey2 = iKey2; preupdate.pTab = pTab; db->pPreUpdate = &preupdate; db->xPreUpdateCallback(db->pPreUpdateArg, db, op, zDb, zTbl, iKey1, iKey2); db->pPreUpdate = 0; sqlite3DbFree(db, preupdate.aRecord); vdbeFreeUnpacked(db, preupdate.pUnpacked); vdbeFreeUnpacked(db, preupdate.pNewUnpacked); |
︙ | ︙ | |||
76806 76807 76808 76809 76810 76811 76812 76813 | } p->aRecord = aRec; } if( iIdx>=p->pUnpacked->nField ){ *ppValue = (sqlite3_value *)columnNullValue(); }else{ *ppValue = &p->pUnpacked->aMem[iIdx]; | > | | > > > > | 76871 76872 76873 76874 76875 76876 76877 76878 76879 76880 76881 76882 76883 76884 76885 76886 76887 76888 76889 76890 76891 76892 | } p->aRecord = aRec; } if( iIdx>=p->pUnpacked->nField ){ *ppValue = (sqlite3_value *)columnNullValue(); }else{ Mem *pMem = *ppValue = &p->pUnpacked->aMem[iIdx]; *ppValue = &p->pUnpacked->aMem[iIdx]; if( iIdx==p->pTab->iPKey ){ sqlite3VdbeMemSetInt64(pMem, p->iKey1); }else if( p->pTab->aCol[iIdx].affinity==SQLITE_AFF_REAL ){ if( pMem->flags & MEM_Int ){ sqlite3VdbeMemRealify(pMem); } } } preupdate_old_out: sqlite3Error(db, rc); return sqlite3ApiExit(db, rc); } |
︙ | ︙ | |||
76885 76886 76887 76888 76889 76890 76891 | } p->pNewUnpacked = pUnpack; } if( iIdx>=pUnpack->nField ){ pMem = (sqlite3_value *)columnNullValue(); }else{ pMem = &pUnpack->aMem[iIdx]; | | | | 76955 76956 76957 76958 76959 76960 76961 76962 76963 76964 76965 76966 76967 76968 76969 76970 76971 76972 76973 76974 76975 76976 76977 76978 76979 76980 76981 76982 76983 76984 76985 76986 76987 76988 76989 76990 | } p->pNewUnpacked = pUnpack; } if( iIdx>=pUnpack->nField ){ pMem = (sqlite3_value *)columnNullValue(); }else{ pMem = &pUnpack->aMem[iIdx]; if( iIdx==p->pTab->iPKey ){ sqlite3VdbeMemSetInt64(pMem, p->iKey2); } } }else{ /* For an UPDATE, memory cell (p->iNewReg+1+iIdx) contains the required ** value. Make a copy of the cell contents and return a pointer to it. ** It is not safe to return a pointer to the memory cell itself as the ** caller may modify the value text encoding. */ assert( p->op==SQLITE_UPDATE ); if( !p->aNew ){ p->aNew = (Mem *)sqlite3DbMallocZero(db, sizeof(Mem) * p->pCsr->nField); if( !p->aNew ){ rc = SQLITE_NOMEM; goto preupdate_new_out; } } assert( iIdx>=0 && iIdx<p->pCsr->nField ); pMem = &p->aNew[iIdx]; if( pMem->flags==0 ){ if( iIdx==p->pTab->iPKey ){ sqlite3VdbeMemSetInt64(pMem, p->iKey2); }else{ rc = sqlite3VdbeMemCopy(pMem, &p->v->aMem[p->iNewReg+1+iIdx]); if( rc!=SQLITE_OK ) goto preupdate_new_out; } } } |
︙ | ︙ | |||
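A minimal sketch of a caller of the pre-update interfaces patched above, which now substitute the rowid for an INTEGER PRIMARY KEY column (and, for sqlite3_preupdate_old(), apply declared REAL affinity). It assumes a build with SQLITE_ENABLE_PREUPDATE_HOOK; the callback name and the choice of column 0 are illustrative, not from this check-in:

    /* Sketch: a pre-update hook whose callback reads the old value of
    ** column 0 via sqlite3_preupdate_old().  Requires a library built
    ** with SQLITE_ENABLE_PREUPDATE_HOOK. */
    #include "sqlite3.h"

    static void xPreUpdate(
      void *pCtx, sqlite3 *db, int op,
      const char *zDb, const char *zTab,
      sqlite3_int64 iKey1, sqlite3_int64 iKey2
    ){
      sqlite3_value *pOld = 0;
      if( op!=SQLITE_INSERT && sqlite3_preupdate_old(db, 0, &pOld)==SQLITE_OK ){
        /* If column 0 is the INTEGER PRIMARY KEY, pOld now carries iKey1. */
      }
      (void)pCtx; (void)zDb; (void)zTab; (void)iKey2;
    }

    /* Registered with:  sqlite3_preupdate_hook(db, xPreUpdate, 0);  */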
79284 79285 79286 79287 79288 79289 79290 | /* If SQLITE_NULLEQ is set (which will only happen if the operator is ** OP_Eq or OP_Ne) then take the jump or not depending on whether ** or not both operands are null. */ assert( pOp->opcode==OP_Eq || pOp->opcode==OP_Ne ); assert( (flags1 & MEM_Cleared)==0 ); assert( (pOp->p5 & SQLITE_JUMPIFNULL)==0 ); | < | | 79354 79355 79356 79357 79358 79359 79360 79361 79362 79363 79364 79365 79366 79367 79368 | /* If SQLITE_NULLEQ is set (which will only happen if the operator is ** OP_Eq or OP_Ne) then take the jump or not depending on whether ** or not both operands are null. */ assert( pOp->opcode==OP_Eq || pOp->opcode==OP_Ne ); assert( (flags1 & MEM_Cleared)==0 ); assert( (pOp->p5 & SQLITE_JUMPIFNULL)==0 ); if( (flags1&flags3&MEM_Null)!=0 && (flags3&MEM_Cleared)==0 ){ res = 0; /* Operands are equal */ }else{ res = 1; /* Operands are not equal */ } }else{ |
︙ | ︙ | |||
81552 81553 81554 81555 81556 81557 81558 | } assert( memIsValid(pMem) ); REGISTER_TRACE(pOp->p3, pMem); sqlite3VdbeMemIntegerify(pMem); assert( (pMem->flags & MEM_Int)!=0 ); /* mem(P3) holds an integer */ if( pMem->u.i==MAX_ROWID || pC->useRandomRowid ){ | | | 81621 81622 81623 81624 81625 81626 81627 81628 81629 81630 81631 81632 81633 81634 81635 | } assert( memIsValid(pMem) ); REGISTER_TRACE(pOp->p3, pMem); sqlite3VdbeMemIntegerify(pMem); assert( (pMem->flags & MEM_Int)!=0 ); /* mem(P3) holds an integer */ if( pMem->u.i==MAX_ROWID || pC->useRandomRowid ){ rc = SQLITE_FULL; /* IMP: R-17817-00630 */ goto abort_due_to_error; } if( v<pMem->u.i+1 ){ v = pMem->u.i + 1; } pMem->u.i = v; } |
︙ | ︙ | |||
82319 82320 82321 82322 82323 82324 82325 82326 82327 82328 82329 82330 82331 82332 | ** If P5 has the OPFLAG_USESEEKRESULT bit set, then the cursor must have ** just done a seek to the spot where the new entry is to be inserted. ** This flag avoids doing an extra seek. ** ** This instruction only works for indices. The equivalent instruction ** for tables is OP_Insert. */ case OP_SorterInsert: /* in2 */ case OP_IdxInsert: { /* in2 */ VdbeCursor *pC; BtreePayload x; assert( pOp->p1>=0 && pOp->p1<p->nCursor ); pC = p->apCsr[pOp->p1]; | > > > > > > > | 82388 82389 82390 82391 82392 82393 82394 82395 82396 82397 82398 82399 82400 82401 82402 82403 82404 82405 82406 82407 82408 | ** If P5 has the OPFLAG_USESEEKRESULT bit set, then the cursor must have ** just done a seek to the spot where the new entry is to be inserted. ** This flag avoids doing an extra seek. ** ** This instruction only works for indices. The equivalent instruction ** for tables is OP_Insert. */ /* Opcode: SorterInsert P1 P2 * * * ** Synopsis: key=r[P2] ** ** Register P2 holds an SQL index key made using the ** MakeRecord instructions. This opcode writes that key ** into the sorter P1. Data for the entry is nil. */ case OP_SorterInsert: /* in2 */ case OP_IdxInsert: { /* in2 */ VdbeCursor *pC; BtreePayload x; assert( pOp->p1>=0 && pOp->p1<p->nCursor ); pC = p->apCsr[pOp->p1]; |
︙ | ︙ | |||
83547 83548 83549 83550 83551 83552 83553 | if( eOld==PAGER_JOURNALMODE_WAL ){ /* If leaving WAL mode, close the log file. If successful, the call ** to PagerCloseWal() checkpoints and deletes the write-ahead-log ** file. An EXCLUSIVE lock may still be held on the database file ** after a successful return. */ | | | 83623 83624 83625 83626 83627 83628 83629 83630 83631 83632 83633 83634 83635 83636 83637 | if( eOld==PAGER_JOURNALMODE_WAL ){ /* If leaving WAL mode, close the log file. If successful, the call ** to PagerCloseWal() checkpoints and deletes the write-ahead-log ** file. An EXCLUSIVE lock may still be held on the database file ** after a successful return. */ rc = sqlite3PagerCloseWal(pPager, db); if( rc==SQLITE_OK ){ sqlite3PagerSetJournalMode(pPager, eNew); } }else if( eOld==PAGER_JOURNALMODE_MEMORY ){ /* Cannot transition directly from MEMORY to WAL. Use mode OFF ** as an intermediate */ sqlite3PagerSetJournalMode(pPager, PAGER_JOURNALMODE_OFF); |
︙ | ︙ | |||
88810 88811 88812 88813 88814 88815 88816 | int is_agg = 0; /* True if is an aggregate function */ int nId; /* Number of characters in function name */ const char *zId; /* The function name. */ FuncDef *pDef; /* Information about the function */ u8 enc = ENC(pParse->db); /* The database encoding */ assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); | < | 88886 88887 88888 88889 88890 88891 88892 88893 88894 88895 88896 88897 88898 88899 | int is_agg = 0; /* True if is an aggregate function */ int nId; /* Number of characters in function name */ const char *zId; /* The function name. */ FuncDef *pDef; /* Information about the function */ u8 enc = ENC(pParse->db); /* The database encoding */ assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); zId = pExpr->u.zToken; nId = sqlite3Strlen30(zId); pDef = sqlite3FindFunction(pParse->db, zId, n, enc, 0); if( pDef==0 ){ pDef = sqlite3FindFunction(pParse->db, zId, -2, enc, 0); if( pDef==0 ){ no_such_func = 1; |
︙ | ︙ | |||
94291 94292 94293 94294 94295 94296 94297 | } if( pE2->op==TK_OR && (sqlite3ExprImpliesExpr(pE1, pE2->pLeft, iTab) || sqlite3ExprImpliesExpr(pE1, pE2->pRight, iTab) ) ){ return 1; } | | | | < | | 94366 94367 94368 94369 94370 94371 94372 94373 94374 94375 94376 94377 94378 94379 94380 94381 94382 94383 | } if( pE2->op==TK_OR && (sqlite3ExprImpliesExpr(pE1, pE2->pLeft, iTab) || sqlite3ExprImpliesExpr(pE1, pE2->pRight, iTab) ) ){ return 1; } if( pE2->op==TK_NOTNULL && pE1->op!=TK_ISNULL && pE1->op!=TK_IS ){ Expr *pX = sqlite3ExprSkipCollate(pE1->pLeft); testcase( pX!=pE1->pLeft ); if( sqlite3ExprCompare(pX, pE2->pLeft, iTab)==0 ) return 1; } return 0; } /* ** An instance of the following structure is used by the tree walker ** to determine if an expression can be evaluated by reference to the |
︙ | ︙ | |||
122353 122354 122355 122356 122357 122358 122359 | sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey); if( nKey ) db->nextPagesize = 0; } #endif sqlite3BtreeSetCacheSize(pTemp, db->aDb[iDb].pSchema->cache_size); sqlite3BtreeSetSpillSize(pTemp, sqlite3BtreeSetSpillSize(pMain,0)); | | | 122427 122428 122429 122430 122431 122432 122433 122434 122435 122436 122437 122438 122439 122440 122441 | sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey); if( nKey ) db->nextPagesize = 0; } #endif sqlite3BtreeSetCacheSize(pTemp, db->aDb[iDb].pSchema->cache_size); sqlite3BtreeSetSpillSize(pTemp, sqlite3BtreeSetSpillSize(pMain,0)); sqlite3BtreeSetPagerFlags(pTemp, PAGER_SYNCHRONOUS_OFF|PAGER_CACHESPILL); /* Begin a transaction and take an exclusive lock on the main database ** file. This is done before the sqlite3BtreeGetPageSize(pMain) call below, ** to ensure that we do not try to change the page-size on a WAL database. */ rc = execSql(db, pzErrMsg, "BEGIN"); if( rc!=SQLITE_OK ) goto end_of_vacuum; |
︙ | ︙ | |||
124661 124662 124663 124664 124665 124666 124667 | ** to the pRight values. This function modifies characters within the ** affinity string to SQLITE_AFF_BLOB if either: ** ** * the comparison will be performed with no affinity, or ** * the affinity change in zAff is guaranteed not to change the value. */ static void updateRangeAffinityStr( | < | 124735 124736 124737 124738 124739 124740 124741 124742 124743 124744 124745 124746 124747 124748 | ** to the pRight values. This function modifies characters within the ** affinity string to SQLITE_AFF_BLOB if either: ** ** * the comparison will be performed with no affinity, or ** * the affinity change in zAff is guaranteed not to change the value. */ static void updateRangeAffinityStr( Expr *pRight, /* RHS of comparison */ int n, /* Number of vector elements in comparison */ char *zAff /* Affinity string to modify */ ){ int i; for(i=0; i<n; i++){ Expr *p = sqlite3VectorFieldSubexpr(pRight, i); |
︙ | ︙ | |||
125747 125748 125749 125750 125751 125752 125753 | testcase( bRev ); testcase( pIdx->aSortOrder[nEq]==SQLITE_SO_DESC ); assert( (bRev & ~1)==0 ); pLevel->iLikeRepCntr <<=1; pLevel->iLikeRepCntr |= bRev ^ (pIdx->aSortOrder[nEq]==SQLITE_SO_DESC); } #endif | | | | < | > | 125820 125821 125822 125823 125824 125825 125826 125827 125828 125829 125830 125831 125832 125833 125834 125835 125836 125837 125838 | testcase( bRev ); testcase( pIdx->aSortOrder[nEq]==SQLITE_SO_DESC ); assert( (bRev & ~1)==0 ); pLevel->iLikeRepCntr <<=1; pLevel->iLikeRepCntr |= bRev ^ (pIdx->aSortOrder[nEq]==SQLITE_SO_DESC); } #endif if( pRangeStart==0 ){ j = pIdx->aiColumn[nEq]; if( (j>=0 && pIdx->pTable->aCol[j].notNull==0) || j==XN_EXPR ){ bSeekPastNull = 1; } } } assert( pRangeEnd==0 || (pRangeEnd->wtFlags & TERM_VNULL)==0 ); /* If we are doing a reverse order scan on an ascending index, or ** a forward order scan on a descending index, interchange the ** start and end terms (pRangeStart and pRangeEnd). |
︙ | ︙ | |||
125801 125802 125803 125804 125805 125806 125807 | if( (pRangeStart->wtFlags & TERM_VNULL)==0 && sqlite3ExprCanBeNull(pRight) ){ sqlite3VdbeAddOp2(v, OP_IsNull, regBase+nEq, addrNxt); VdbeCoverage(v); } if( zStartAff ){ | | | 125874 125875 125876 125877 125878 125879 125880 125881 125882 125883 125884 125885 125886 125887 125888 | if( (pRangeStart->wtFlags & TERM_VNULL)==0 && sqlite3ExprCanBeNull(pRight) ){ sqlite3VdbeAddOp2(v, OP_IsNull, regBase+nEq, addrNxt); VdbeCoverage(v); } if( zStartAff ){ updateRangeAffinityStr(pRight, nBtm, &zStartAff[nEq]); } nConstraint += nBtm; testcase( pRangeStart->wtFlags & TERM_VIRTUAL ); if( sqlite3ExprIsVector(pRight)==0 ){ disableTerm(pLevel, pRangeStart); }else{ startEq = 1; |
︙ | ︙ | |||
125851 125852 125853 125854 125855 125856 125857 | if( (pRangeEnd->wtFlags & TERM_VNULL)==0 && sqlite3ExprCanBeNull(pRight) ){ sqlite3VdbeAddOp2(v, OP_IsNull, regBase+nEq, addrNxt); VdbeCoverage(v); } if( zEndAff ){ | | | 125924 125925 125926 125927 125928 125929 125930 125931 125932 125933 125934 125935 125936 125937 125938 | if( (pRangeEnd->wtFlags & TERM_VNULL)==0 && sqlite3ExprCanBeNull(pRight) ){ sqlite3VdbeAddOp2(v, OP_IsNull, regBase+nEq, addrNxt); VdbeCoverage(v); } if( zEndAff ){ updateRangeAffinityStr(pRight, nTop, zEndAff); codeApplyAffinity(pParse, regBase+nEq, nTop, zEndAff); }else{ assert( pParse->db->mallocFailed ); } nConstraint += nTop; testcase( pRangeEnd->wtFlags & TERM_VIRTUAL ); |
︙ | ︙ | |||
127537 127538 127539 127540 127541 127542 127543 127544 127545 127546 127547 127548 127549 127550 | for(i=0; i<nLeft; i++){ int idxNew; Expr *pNew; Expr *pLeft = sqlite3ExprForVectorField(pParse, pExpr->pLeft, i); Expr *pRight = sqlite3ExprForVectorField(pParse, pExpr->pRight, i); pNew = sqlite3PExpr(pParse, pExpr->op, pLeft, pRight, 0); idxNew = whereClauseInsert(pWC, pNew, TERM_DYNAMIC); exprAnalyze(pSrc, pWC, idxNew); } pTerm = &pWC->a[idxTerm]; pTerm->wtFlags = TERM_CODED|TERM_VIRTUAL; /* Disable the original */ pTerm->eOperator = 0; } | > | 127610 127611 127612 127613 127614 127615 127616 127617 127618 127619 127620 127621 127622 127623 127624 | for(i=0; i<nLeft; i++){ int idxNew; Expr *pNew; Expr *pLeft = sqlite3ExprForVectorField(pParse, pExpr->pLeft, i); Expr *pRight = sqlite3ExprForVectorField(pParse, pExpr->pRight, i); pNew = sqlite3PExpr(pParse, pExpr->op, pLeft, pRight, 0); transferJoinMarkings(pNew, pExpr); idxNew = whereClauseInsert(pWC, pNew, TERM_DYNAMIC); exprAnalyze(pSrc, pWC, idxNew); } pTerm = &pWC->a[idxTerm]; pTerm->wtFlags = TERM_CODED|TERM_VIRTUAL; /* Disable the original */ pTerm->eOperator = 0; } |
︙ | ︙ | |||
127978 127979 127980 127981 127982 127983 127984 | int iCur; /* The cursor on the LHS of the term */ i16 iColumn; /* The column on the LHS of the term. -1 for IPK */ Expr *pX; /* An expression being tested */ WhereClause *pWC; /* Shorthand for pScan->pWC */ WhereTerm *pTerm; /* The term being tested */ int k = pScan->k; /* Where to start scanning */ | | | > | | > | 128052 128053 128054 128055 128056 128057 128058 128059 128060 128061 128062 128063 128064 128065 128066 128067 128068 128069 128070 128071 128072 | int iCur; /* The cursor on the LHS of the term */ i16 iColumn; /* The column on the LHS of the term. -1 for IPK */ Expr *pX; /* An expression being tested */ WhereClause *pWC; /* Shorthand for pScan->pWC */ WhereTerm *pTerm; /* The term being tested */ int k = pScan->k; /* Where to start scanning */ assert( pScan->iEquiv<=pScan->nEquiv ); pWC = pScan->pWC; while(1){ iColumn = pScan->aiColumn[pScan->iEquiv-1]; iCur = pScan->aiCur[pScan->iEquiv-1]; assert( pWC!=0 ); do{ for(pTerm=pWC->a+k; k<pWC->nTerm; k++, pTerm++){ if( pTerm->leftCursor==iCur && pTerm->u.leftColumn==iColumn && (iColumn!=XN_EXPR || sqlite3ExprCompare(pTerm->pExpr->pLeft,pScan->pIdxExpr,iCur)==0) && (pScan->iEquiv<=1 || !ExprHasProperty(pTerm->pExpr, EP_FromJoin)) ){ |
︙ | ︙ | |||
128032 128033 128034 128035 128036 128037 128038 128039 128040 128041 128042 128043 | && (pX = pTerm->pExpr->pRight)->op==TK_COLUMN && pX->iTable==pScan->aiCur[0] && pX->iColumn==pScan->aiColumn[0] ){ testcase( pTerm->eOperator & WO_IS ); continue; } pScan->k = k+1; return pTerm; } } } | > | | > | | 128108 128109 128110 128111 128112 128113 128114 128115 128116 128117 128118 128119 128120 128121 128122 128123 128124 128125 128126 128127 128128 128129 128130 128131 128132 | && (pX = pTerm->pExpr->pRight)->op==TK_COLUMN && pX->iTable==pScan->aiCur[0] && pX->iColumn==pScan->aiColumn[0] ){ testcase( pTerm->eOperator & WO_IS ); continue; } pScan->pWC = pWC; pScan->k = k+1; return pTerm; } } } pWC = pWC->pOuter; k = 0; }while( pWC!=0 ); if( pScan->iEquiv>=pScan->nEquiv ) break; pWC = pScan->pOrigWC; k = 0; pScan->iEquiv++; } return 0; } /* |
︙ | ︙ | |||
128074 128075 128076 128077 128078 128079 128080 | WhereScan *pScan, /* The WhereScan object being initialized */ WhereClause *pWC, /* The WHERE clause to be scanned */ int iCur, /* Cursor to scan for */ int iColumn, /* Column to scan for */ u32 opMask, /* Operator(s) to scan for */ Index *pIdx /* Must be compatible with this index */ ){ | < < < > > | > | | < > | | | > | | < | 128152 128153 128154 128155 128156 128157 128158 128159 128160 128161 128162 128163 128164 128165 128166 128167 128168 128169 128170 128171 128172 128173 128174 128175 128176 128177 128178 128179 128180 128181 128182 128183 | WhereScan *pScan, /* The WhereScan object being initialized */ WhereClause *pWC, /* The WHERE clause to be scanned */ int iCur, /* Cursor to scan for */ int iColumn, /* Column to scan for */ u32 opMask, /* Operator(s) to scan for */ Index *pIdx /* Must be compatible with this index */ ){ pScan->pOrigWC = pWC; pScan->pWC = pWC; pScan->pIdxExpr = 0; pScan->idxaff = 0; pScan->zCollName = 0; if( pIdx ){ int j = iColumn; iColumn = pIdx->aiColumn[j]; if( iColumn==XN_EXPR ){ pScan->pIdxExpr = pIdx->aColExpr->a[j].pExpr; }else if( iColumn==pIdx->pTable->iPKey ){ iColumn = XN_ROWID; }else if( iColumn>=0 ){ pScan->idxaff = pIdx->pTable->aCol[iColumn].affinity; pScan->zCollName = pIdx->azColl[j]; } }else if( iColumn==XN_EXPR ){ return 0; } pScan->opMask = opMask; pScan->k = 0; pScan->aiCur[0] = iCur; pScan->aiColumn[0] = iColumn; pScan->nEquiv = 1; pScan->iEquiv = 1; |
︙ | ︙ | |||
130003 130004 130005 130006 130007 130008 130009 | ** and the index: ** ** CREATE INDEX ... ON (a, b, c, d, e) ** ** then this function would be invoked with nEq=1. The value returned in ** this case is 3. */ | | | 130081 130082 130083 130084 130085 130086 130087 130088 130089 130090 130091 130092 130093 130094 130095 | ** and the index: ** ** CREATE INDEX ... ON (a, b, c, d, e) ** ** then this function would be invoked with nEq=1. The value returned in ** this case is 3. */ static int whereRangeVectorLen( Parse *pParse, /* Parsing context */ int iCur, /* Cursor open on pIdx */ Index *pIdx, /* The index to be used for a inequality constraint */ int nEq, /* Number of prior equality constraints on same index */ WhereTerm *pTerm /* The vector inequality constraint */ ){ int nCmp = sqlite3ExprVectorSize(pTerm->pExpr->pLeft); |
︙ | ︙ | |||
131449 131450 131451 131452 131453 131454 131455 | }else{ rev = revIdx ^ pOrderBy->a[i].sortOrder; if( rev ) *pRevMask |= MASKBIT(iLoop); revSet = 1; } } if( isMatch ){ | | | 131527 131528 131529 131530 131531 131532 131533 131534 131535 131536 131537 131538 131539 131540 131541 | }else{ rev = revIdx ^ pOrderBy->a[i].sortOrder; if( rev ) *pRevMask |= MASKBIT(iLoop); revSet = 1; } } if( isMatch ){ if( iColumn==XN_ROWID ){ testcase( distinctColumns==0 ); distinctColumns = 1; } obSat |= MASKBIT(i); }else{ /* No match found */ if( j==0 || j<nKeyCol ){ |
︙ | ︙ | |||
131904 131905 131906 131907 131908 131909 131910 | pWInfo->eDistinct = WHERE_DISTINCT_ORDERED; } }else{ pWInfo->nOBSat = pFrom->isOrdered; pWInfo->revMask = pFrom->revLoop; if( pWInfo->nOBSat<=0 ){ pWInfo->nOBSat = 0; | > | > > > | | > > | | | > | 131982 131983 131984 131985 131986 131987 131988 131989 131990 131991 131992 131993 131994 131995 131996 131997 131998 131999 132000 132001 132002 132003 132004 132005 132006 132007 132008 132009 | pWInfo->eDistinct = WHERE_DISTINCT_ORDERED; } }else{ pWInfo->nOBSat = pFrom->isOrdered; pWInfo->revMask = pFrom->revLoop; if( pWInfo->nOBSat<=0 ){ pWInfo->nOBSat = 0; if( nLoop>0 ){ u32 wsFlags = pFrom->aLoop[nLoop-1]->wsFlags; if( (wsFlags & WHERE_ONEROW)==0 && (wsFlags&(WHERE_IPK|WHERE_COLUMN_IN))!=(WHERE_IPK|WHERE_COLUMN_IN) ){ Bitmask m = 0; int rc = wherePathSatisfiesOrderBy(pWInfo, pWInfo->pOrderBy, pFrom, WHERE_ORDERBY_LIMIT, nLoop-1, pFrom->aLoop[nLoop-1], &m); testcase( wsFlags & WHERE_IPK ); testcase( wsFlags & WHERE_COLUMN_IN ); if( rc==pWInfo->pOrderBy->nExpr ){ pWInfo->bOrderedInnerLoop = 1; pWInfo->revMask = m; } } } } } if( (pWInfo->wctrlFlags & WHERE_SORTBYGROUP) && pWInfo->nOBSat==pWInfo->pOrderBy->nExpr && nLoop>0 ){ |
︙ | ︙ | |||
132633 132634 132635 132636 132637 132638 132639 132640 | if( pLevel->addrLikeRep ){ sqlite3VdbeAddOp2(v, OP_DecrJumpZero, (int)(pLevel->iLikeRepCntr>>1), pLevel->addrLikeRep); VdbeCoverage(v); } #endif if( pLevel->iLeftJoin ){ addr = sqlite3VdbeAddOp1(v, OP_IfPos, pLevel->iLeftJoin); VdbeCoverage(v); | > < | | | > > | 132718 132719 132720 132721 132722 132723 132724 132725 132726 132727 132728 132729 132730 132731 132732 132733 132734 132735 132736 132737 132738 132739 132740 | if( pLevel->addrLikeRep ){ sqlite3VdbeAddOp2(v, OP_DecrJumpZero, (int)(pLevel->iLikeRepCntr>>1), pLevel->addrLikeRep); VdbeCoverage(v); } #endif if( pLevel->iLeftJoin ){ int ws = pLoop->wsFlags; addr = sqlite3VdbeAddOp1(v, OP_IfPos, pLevel->iLeftJoin); VdbeCoverage(v); assert( (ws & WHERE_IDX_ONLY)==0 || (ws & WHERE_INDEXED)!=0 ); if( (ws & WHERE_IDX_ONLY)==0 ){ sqlite3VdbeAddOp1(v, OP_NullRow, pTabList->a[i].iCursor); } if( (ws & WHERE_INDEXED) || ((ws & WHERE_MULTI_OR) && pLevel->u.pCovidx) ){ sqlite3VdbeAddOp1(v, OP_NullRow, pLevel->iIdxCur); } if( pLevel->op==OP_Return ){ sqlite3VdbeAddOp2(v, OP_Gosub, pLevel->p1, pLevel->addrFirst); }else{ sqlite3VdbeGoto(v, pLevel->addrFirst); } |
︙ | ︙ | |||
138605 138606 138607 138608 138609 138610 138611 138612 138613 138614 138615 138616 138617 138618 | int op; /* The opcode */ u32 mask; /* Mask of the bit in sqlite3.flags to set/clear */ } aFlagOp[] = { { SQLITE_DBCONFIG_ENABLE_FKEY, SQLITE_ForeignKeys }, { SQLITE_DBCONFIG_ENABLE_TRIGGER, SQLITE_EnableTrigger }, { SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER, SQLITE_Fts3Tokenizer }, { SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION, SQLITE_LoadExtension }, }; unsigned int i; rc = SQLITE_ERROR; /* IMP: R-42790-23372 */ for(i=0; i<ArraySize(aFlagOp); i++){ if( aFlagOp[i].op==op ){ int onoff = va_arg(ap, int); int *pRes = va_arg(ap, int*); | > | 138692 138693 138694 138695 138696 138697 138698 138699 138700 138701 138702 138703 138704 138705 138706 | int op; /* The opcode */ u32 mask; /* Mask of the bit in sqlite3.flags to set/clear */ } aFlagOp[] = { { SQLITE_DBCONFIG_ENABLE_FKEY, SQLITE_ForeignKeys }, { SQLITE_DBCONFIG_ENABLE_TRIGGER, SQLITE_EnableTrigger }, { SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER, SQLITE_Fts3Tokenizer }, { SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION, SQLITE_LoadExtension }, { SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE, SQLITE_NoCkptOnClose }, }; unsigned int i; rc = SQLITE_ERROR; /* IMP: R-42790-23372 */ for(i=0; i<ArraySize(aFlagOp); i++){ if( aFlagOp[i].op==op ){ int onoff = va_arg(ap, int); int *pRes = va_arg(ap, int*); |
︙ | ︙ | |||
139901 139902 139903 139904 139905 139906 139907 139908 139909 139910 139911 139912 139913 139914 | sqlite3ErrorWithMsg(db, SQLITE_ERROR, "unknown database: %s", zDb); }else{ db->busyHandler.nBusy = 0; rc = sqlite3Checkpoint(db, iDb, eMode, pnLog, pnCkpt); sqlite3Error(db, rc); } rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; #endif } /* | > > > > > > > | 139989 139990 139991 139992 139993 139994 139995 139996 139997 139998 139999 140000 140001 140002 140003 140004 140005 140006 140007 140008 140009 | sqlite3ErrorWithMsg(db, SQLITE_ERROR, "unknown database: %s", zDb); }else{ db->busyHandler.nBusy = 0; rc = sqlite3Checkpoint(db, iDb, eMode, pnLog, pnCkpt); sqlite3Error(db, rc); } rc = sqlite3ApiExit(db, rc); /* If there are no active statements, clear the interrupt flag at this ** point. */ if( db->nVdbeActive==0 ){ db->u1.isInterrupted = 0; } sqlite3_mutex_leave(db->mutex); return rc; #endif } /* |
︙ | ︙ | |||
140403 140404 140405 140406 140407 140408 140409 140410 140411 140412 140413 140414 140415 140416 140417 140418 140419 140420 140421 140422 140423 140424 140425 140426 140427 140428 | && sqlite3Isxdigit(zUri[iIn+1]) ){ int octet = (sqlite3HexToInt(zUri[iIn++]) << 4); octet += sqlite3HexToInt(zUri[iIn++]); assert( octet>=0 && octet<256 ); if( octet==0 ){ /* This branch is taken when "%00" appears within the URI. In this ** case we ignore all text in the remainder of the path, name or ** value currently being parsed. So ignore the current character ** and skip to the next "?", "=" or "&", as appropriate. */ while( (c = zUri[iIn])!=0 && c!='#' && (eState!=0 || c!='?') && (eState!=1 || (c!='=' && c!='&')) && (eState!=2 || c!='&') ){ iIn++; } continue; } c = octet; }else if( eState==1 && (c=='&' || c=='=') ){ if( zFile[iOut-1]==0 ){ /* An empty option name. Ignore this option altogether. */ while( zUri[iIn] && zUri[iIn]!='#' && zUri[iIn-1]!='&' ) iIn++; continue; | > > > > > > > | 140498 140499 140500 140501 140502 140503 140504 140505 140506 140507 140508 140509 140510 140511 140512 140513 140514 140515 140516 140517 140518 140519 140520 140521 140522 140523 140524 140525 140526 140527 140528 140529 140530 | && sqlite3Isxdigit(zUri[iIn+1]) ){ int octet = (sqlite3HexToInt(zUri[iIn++]) << 4); octet += sqlite3HexToInt(zUri[iIn++]); assert( octet>=0 && octet<256 ); if( octet==0 ){ #ifndef SQLITE_ENABLE_URI_00_ERROR /* This branch is taken when "%00" appears within the URI. In this ** case we ignore all text in the remainder of the path, name or ** value currently being parsed. So ignore the current character ** and skip to the next "?", "=" or "&", as appropriate. */ while( (c = zUri[iIn])!=0 && c!='#' && (eState!=0 || c!='?') && (eState!=1 || (c!='=' && c!='&')) && (eState!=2 || c!='&') ){ iIn++; } continue; #else /* If ENABLE_URI_00_ERROR is defined, "%00" in a URI is an error. */ *pzErrMsg = sqlite3_mprintf("unexpected %%00 in uri"); rc = SQLITE_ERROR; goto parse_uri_out; #endif } c = octet; }else if( eState==1 && (c=='&' || c=='=') ){ if( zFile[iOut-1]==0 ){ /* An empty option name. Ignore this option altogether. */ while( zUri[iIn] && zUri[iIn]!='#' && zUri[iIn-1]!='&' ) iIn++; continue; |
︙ | ︙ | |||
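A sketch of the caller-visible effect of the SQLITE_ENABLE_URI_00_ERROR branch added above; the filename and helper name are illustrative, not from this check-in:

    /* Sketch: with -DSQLITE_ENABLE_URI_00_ERROR the embedded "%00" makes the
    ** open fail ("unexpected %00 in uri"); without it, URI parsing skips to
    ** the next delimiter as before. */
    #include "sqlite3.h"

    static int open_uri_with_nul(sqlite3 **pDb){
      int rc = sqlite3_open_v2("file:test.db?mode%00=ro", pDb,
                   SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|SQLITE_OPEN_URI, 0);
      if( rc!=SQLITE_OK ){
        sqlite3_close(*pDb);   /* handle is still allocated on error */
        *pDb = 0;
      }
      return rc;
    }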
164294 164295 164296 164297 164298 164299 164300 | static int rtreeQueryStat1(sqlite3 *db, Rtree *pRtree){ const char *zFmt = "SELECT stat FROM %Q.sqlite_stat1 WHERE tbl = '%q_rowid'"; char *zSql; sqlite3_stmt *p; int rc; i64 nRow = 0; | | > > | | | 164396 164397 164398 164399 164400 164401 164402 164403 164404 164405 164406 164407 164408 164409 164410 164411 164412 164413 164414 164415 | static int rtreeQueryStat1(sqlite3 *db, Rtree *pRtree){ const char *zFmt = "SELECT stat FROM %Q.sqlite_stat1 WHERE tbl = '%q_rowid'"; char *zSql; sqlite3_stmt *p; int rc; i64 nRow = 0; rc = sqlite3_table_column_metadata( db, pRtree->zDb, "sqlite_stat1",0,0,0,0,0,0 ); if( rc!=SQLITE_OK ){ pRtree->nRowEst = RTREE_DEFAULT_ROWEST; return rc==SQLITE_ERROR ? SQLITE_OK : rc; } zSql = sqlite3_mprintf(zFmt, pRtree->zDb, pRtree->zName); if( zSql==0 ){ rc = SQLITE_NOMEM; }else{ rc = sqlite3_prepare_v2(db, zSql, -1, &p, 0); if( rc==SQLITE_OK ){ |
︙ | ︙ | |||
165198 165199 165200 165201 165202 165203 165204 | ** To access ICU "language specific" case mapping, upper() or lower() ** should be invoked with two arguments. The second argument is the name ** of the locale to use. Passing an empty string ("") or SQL NULL value ** as the second argument is the same as invoking the 1 argument version ** of upper() or lower(). ** ** lower('I', 'en_us') -> 'i' | | | 165302 165303 165304 165305 165306 165307 165308 165309 165310 165311 165312 165313 165314 165315 165316 | ** To access ICU "language specific" case mapping, upper() or lower() ** should be invoked with two arguments. The second argument is the name ** of the locale to use. Passing an empty string ("") or SQL NULL value ** as the second argument is the same as invoking the 1 argument version ** of upper() or lower(). ** ** lower('I', 'en_us') -> 'i' ** lower('I', 'tr_tr') -> '\u131' (small dotless i) ** ** http://www.icu-project.org/userguide/posix.html#case_mappings */ static void icuCaseFunc16(sqlite3_context *p, int nArg, sqlite3_value **apArg){ const UChar *zInput; /* Pointer to input string */ UChar *zOutput = 0; /* Pointer to output buffer */ int nInput; /* Size of utf-16 input string in bytes */ |
︙ | ︙ | |||
181494 181495 181496 181497 181498 181499 181500 181501 181502 181503 181504 181505 181506 181507 | int tflags, /* Mask of FTS5_TOKEN_* flags */ const char *pToken, /* Buffer containing token */ int nToken, /* Size of token in bytes */ int iStartOff, /* Start offset of token */ int iEndOff /* End offset of token */ ){ int rc = SQLITE_OK; if( (tflags & FTS5_TOKEN_COLOCATED)==0 ){ Fts5SFinder *p = (Fts5SFinder*)pContext; if( p->iPos>0 ){ int i; char c = 0; for(i=iStartOff-1; i>=0; i--){ | > > > | 181598 181599 181600 181601 181602 181603 181604 181605 181606 181607 181608 181609 181610 181611 181612 181613 181614 | int tflags, /* Mask of FTS5_TOKEN_* flags */ const char *pToken, /* Buffer containing token */ int nToken, /* Size of token in bytes */ int iStartOff, /* Start offset of token */ int iEndOff /* End offset of token */ ){ int rc = SQLITE_OK; UNUSED_PARAM2(pToken, nToken); UNUSED_PARAM(iEndOff); if( (tflags & FTS5_TOKEN_COLOCATED)==0 ){ Fts5SFinder *p = (Fts5SFinder*)pContext; if( p->iPos>0 ){ int i; char c = 0; for(i=iStartOff-1; i>=0; i--){ |
︙ | ︙ | |||
181650 181651 181652 181653 181654 181655 181656 | if( rc==SQLITE_OK && sFinder.nFirst && nDocsize>nToken ){ for(jj=0; jj<(sFinder.nFirst-1); jj++){ if( sFinder.aFirst[jj+1]>io ) break; } if( sFinder.aFirst[jj]<io ){ | < | 181757 181758 181759 181760 181761 181762 181763 181764 181765 181766 181767 181768 181769 181770 | if( rc==SQLITE_OK && sFinder.nFirst && nDocsize>nToken ){ for(jj=0; jj<(sFinder.nFirst-1); jj++){ if( sFinder.aFirst[jj+1]>io ) break; } if( sFinder.aFirst[jj]<io ){ memset(aSeen, 0, nPhrase); rc = fts5SnippetScore(pApi, pFts, nDocsize, aSeen, i, sFinder.aFirst[jj], nToken, &nScore, 0 ); nScore += (sFinder.aFirst[jj]==0 ? 120 : 100); if( rc==SQLITE_OK && nScore>nBestScore ){ |
︙ | ︙ | |||
195585 195586 195587 195588 195589 195590 195591 | static void fts5SourceIdFunc( sqlite3_context *pCtx, /* Function call context */ int nArg, /* Number of args */ sqlite3_value **apUnused /* Function arguments */ ){ assert( nArg==0 ); UNUSED_PARAM2(nArg, apUnused); | | | 195691 195692 195693 195694 195695 195696 195697 195698 195699 195700 195701 195702 195703 195704 195705 | static void fts5SourceIdFunc( sqlite3_context *pCtx, /* Function call context */ int nArg, /* Number of args */ sqlite3_value **apUnused /* Function arguments */ ){ assert( nArg==0 ); UNUSED_PARAM2(nArg, apUnused); sqlite3_result_text(pCtx, "fts5: 2016-10-26 16:05:10 ec9dab8054c71d112c68f58a45821b38c2a45677", -1, SQLITE_TRANSIENT); } static int fts5Init(sqlite3 *db){ static const sqlite3_module fts5Mod = { /* iVersion */ 2, /* xCreate */ fts5CreateMethod, /* xConnect */ fts5ConnectMethod, |
︙ | ︙ |
Changes to src/sqlite3.h.
︙ | ︙ | |||
117 118 119 120 121 122 123 | ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ | | | | | 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 | ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.16.0" #define SQLITE_VERSION_NUMBER 3016000 #define SQLITE_SOURCE_ID "2016-11-02 14:50:19 3028845329c9b7acdec2ec8b01d00d782347454c" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version, sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
︙ | ︙ | |||
973 974 975 976 977 978 979 980 981 982 983 984 985 986 | ** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. ** ** <li>[[SQLITE_FCNTL_HAS_MOVED]] ** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a ** pointer to an integer and it writes a boolean into that integer depending ** on whether or not the file has been renamed, moved, or deleted since it ** was first opened. ** ** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This ** opcode causes the xFileControl method to swap the file handle with the one ** pointed to by the pArg argument. This capability is used during testing ** and only needs to be supported when SQLITE_TEST is defined. ** | > > > > > > | 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 | ** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. ** ** <li>[[SQLITE_FCNTL_HAS_MOVED]] ** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a ** pointer to an integer and it writes a boolean into that integer depending ** on whether or not the file has been renamed, moved, or deleted since it ** was first opened. ** ** <li>[[SQLITE_FCNTL_WIN32_GET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_GET_HANDLE] opcode can be used to obtain the ** underlying native file handle associated with a file handle. This file ** control interprets its argument as a pointer to a native file handle and ** writes the resulting value there. ** ** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This ** opcode causes the xFileControl method to swap the file handle with the one ** pointed to by the pArg argument. This capability is used during testing ** and only needs to be supported when SQLITE_TEST is defined. ** |
︙ | ︙ | |||
1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 | #define SQLITE_FCNTL_COMMIT_PHASETWO 22 #define SQLITE_FCNTL_WIN32_SET_HANDLE 23 #define SQLITE_FCNTL_WAL_BLOCK 24 #define SQLITE_FCNTL_ZIPVFS 25 #define SQLITE_FCNTL_RBU 26 #define SQLITE_FCNTL_VFS_POINTER 27 #define SQLITE_FCNTL_JOURNAL_POINTER 28 /* deprecated names */ #define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE #define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE #define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO | > > | 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 | #define SQLITE_FCNTL_COMMIT_PHASETWO 22 #define SQLITE_FCNTL_WIN32_SET_HANDLE 23 #define SQLITE_FCNTL_WAL_BLOCK 24 #define SQLITE_FCNTL_ZIPVFS 25 #define SQLITE_FCNTL_RBU 26 #define SQLITE_FCNTL_VFS_POINTER 27 #define SQLITE_FCNTL_JOURNAL_POINTER 28 #define SQLITE_FCNTL_WIN32_GET_HANDLE 29 #define SQLITE_FCNTL_PDB 30 /* deprecated names */ #define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE #define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE #define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO |
︙ | ︙ | |||
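A sketch of how the SQLITE_FCNTL_WIN32_GET_HANDLE opcode documented and numbered above would be queried through sqlite3_file_control(); the helper name and the use of the "main" schema are assumptions, and the code is Windows-only:

    /* Sketch: ask the Win32 VFS for the native HANDLE behind the "main"
    ** database file using the new file control.  A VFS that does not
    ** recognize the opcode returns SQLITE_NOTFOUND. */
    #ifdef _WIN32
    #include <windows.h>
    #include "sqlite3.h"

    static HANDLE db_native_handle(sqlite3 *db){
      HANDLE h = INVALID_HANDLE_VALUE;
      if( sqlite3_file_control(db, "main",
                               SQLITE_FCNTL_WIN32_GET_HANDLE, &h)!=SQLITE_OK ){
        h = INVALID_HANDLE_VALUE;
      }
      return h;
    }
    #endif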
1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 | ** schema. ^The sole argument is a pointer to a constant UTF8 string ** which will become the new schema name in place of "main". ^SQLite ** does not make a copy of the new main schema name string, so the application ** must ensure that the argument passed into this DBCONFIG option is unchanged ** until after the database connection closes. ** </dd> ** ** </dl> */ #define SQLITE_DBCONFIG_MAINDBNAME 1000 /* const char* */ #define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ #define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER 1004 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION 1005 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes ** METHOD: sqlite3 ** ** ^The sqlite3_extended_result_codes() routine enables or disables the | > > > > > > > > > > > > > | 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 | ** schema. ^The sole argument is a pointer to a constant UTF8 string ** which will become the new schema name in place of "main". ^SQLite ** does not make a copy of the new main schema name string, so the application ** must ensure that the argument passed into this DBCONFIG option is unchanged ** until after the database connection closes. ** </dd> ** ** <dt>SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE</dt> ** <dd> Usually, when a database in wal mode is closed or detached from a ** database handle, SQLite checks if this will mean that there are now no ** connections at all to the database. If so, it performs a checkpoint ** operation before closing the connection. This option may be used to ** override this behaviour. The first parameter passed to this operation ** is an integer - non-zero to disable checkpoints-on-close, or zero (the ** default) to enable them. The second parameter is a pointer to an integer ** into which is written 0 or 1 to indicate whether checkpoints-on-close ** have been disabled - 0 if they are not disabled, 1 if they are. ** </dd> ** ** </dl> */ #define SQLITE_DBCONFIG_MAINDBNAME 1000 /* const char* */ #define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ #define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER 1004 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION 1005 /* int int* */ #define SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE 1006 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes ** METHOD: sqlite3 ** ** ^The sqlite3_extended_result_codes() routine enables or disables the |
︙ | ︙ |
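The imported SQLite header above adds the SQLITE_FCNTL_WIN32_GET_HANDLE and SQLITE_FCNTL_PDB file-control opcodes plus the SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE verb. Based only on the comment in that hunk, a caller would disable the automatic checkpoint-on-close roughly as follows; this is a minimal sketch, and db is assumed to be an open WAL-mode handle.

    #include <sqlite3.h>

    /* Turn off the checkpoint that normally runs when the last connection
    ** to a WAL-mode database closes; isDisabled receives 0 or 1. */
    static int disable_ckpt_on_close(sqlite3 *db){
      int isDisabled = -1;
      return sqlite3_db_config(db, SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE,
                               1, &isDisabled);
    }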
Changes to src/stash.c.
︙ | ︙ | |||
23 24 25 26 27 28 29 | /* ** SQL code to implement the tables needed by the stash. */ static const char zStashInit[] = @ CREATE TABLE IF NOT EXISTS localdb.stash( @ stashid INTEGER PRIMARY KEY, -- Unique stash identifier | | | | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | /* ** SQL code to implement the tables needed by the stash. */ static const char zStashInit[] = @ CREATE TABLE IF NOT EXISTS localdb.stash( @ stashid INTEGER PRIMARY KEY, -- Unique stash identifier @ vid INTEGER, -- The baseline checkout for this stash @ comment TEXT, -- Comment for this stash. Or NULL @ ctime TIMESTAMP -- When the stash was created @ ); @ CREATE TABLE IF NOT EXISTS localdb.stashfile( @ stashid INTEGER REFERENCES stash, -- Stash that contains this file @ rid INTEGER, -- Baseline content in BLOB table or 0. @ isAdded BOOLEAN, -- True if this is an added file @ isRemoved BOOLEAN, -- True if this file is deleted @ isExec BOOLEAN, -- True if file is executable @ isLink BOOLEAN, -- True if file is a symlink @ origname TEXT, -- Original filename @ newname TEXT, -- New name for file at next check-in @ delta BLOB, -- Delta from baseline. Content if rid=0 @ PRIMARY KEY(newname, stashid) @ ); @ INSERT OR IGNORE INTO vvar(name, value) VALUES('stash-next', 1); ; /* ** Add zFName to the stash given by stashid. zFName might be the name of a ** file or a directory. If a directory, add all changed files contained |
︙ | ︙ | |||
194 195 196 197 198 199 200 | }else{ stash_add_file_or_dir(stashid, vid, g.zLocalRoot); } return stashid; } /* | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | }else{ stash_add_file_or_dir(stashid, vid, g.zLocalRoot); } return stashid; } /* ** Apply a stash to the current checkout. */ static void stash_apply(int stashid, int nConflict){ int vid; Stmt q; db_prepare(&q, "SELECT rid, isRemoved, isExec, isLink, origname, newname, delta" " FROM stashfile WHERE stashid=%d", |
︙ | ︙ | |||
276 277 278 279 280 281 282 283 284 285 286 287 288 289 | blob_reset(&b); blob_reset(&disk); } blob_reset(&delta); if( fossil_strcmp(zOrig,zNew)!=0 ){ undo_save(zOrig); file_delete(zOPath); } } stash_add_files_in_sfile(vid); db_finalize(&q); if( nConflict ){ fossil_print( "WARNING: %d merge conflicts - see messages above for details.\n", | > > > > > | 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | blob_reset(&b); blob_reset(&disk); } blob_reset(&delta); if( fossil_strcmp(zOrig,zNew)!=0 ){ undo_save(zOrig); file_delete(zOPath); db_multi_exec( "UPDATE vfile SET pathname='%q', origname='%q'" " WHERE pathname='%q' %s AND vid=%d", zNew, zOrig, zOrig, filename_collation(), vid ); } } stash_add_files_in_sfile(vid); db_finalize(&q); if( nConflict ){ fossil_print( "WARNING: %d merge conflicts - see messages above for details.\n", |
︙ | ︙ | |||
391 392 393 394 395 396 397 | /* ** If zStashId is non-NULL then interpret is as a stash number and ** return that number. Or throw a fatal error if it is not a valid ** stash number. If it is NULL, return the most recent stash or ** throw an error if the stash is empty. */ static int stash_get_id(const char *zStashId){ | | | | | < | | | | < | | | | | | | > > > > > > > > > > > > > > > | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 | /* ** If zStashId is non-NULL then interpret is as a stash number and ** return that number. Or throw a fatal error if it is not a valid ** stash number. If it is NULL, return the most recent stash or ** throw an error if the stash is empty. */ static int stash_get_id(const char *zStashId){ int stashid; if( zStashId==0 ){ stashid = db_int(0, "SELECT max(stashid) FROM stash"); if( stashid==0 ) fossil_fatal("empty stash"); }else{ stashid = atoi(zStashId); if( !db_exists("SELECT 1 FROM stash WHERE stashid=%d", stashid) ){ fossil_fatal("no such stash: %s", zStashId); } } return stashid; } /* ** COMMAND: stash ** ** Usage: %fossil stash SUBCOMMAND ARGS... ** ** fossil stash ** fossil stash save ?-m|--comment COMMENT? ?FILES...? ** fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? ** ** Save the current changes in the working tree as a new stash. ** Then revert the changes back to the last check-in. If FILES ** are listed, then only stash and revert the named files. The ** "save" verb can be omitted if and only if there are no other ** arguments. The "snapshot" verb works the same as "save" but ** omits the revert, keeping the checkout unchanged. ** ** fossil stash list|ls ?-v|--verbose? ?-W|--width <num>? ** ** List all changes sets currently stashed. Show information about ** individual files in each changeset if -v or --verbose is used. ** ** fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? ** ** Show the contents of a stash. ** ** fossil stash pop ** fossil stash apply ?STASHID? ** ** Apply STASHID or the most recently create stash to the current ** working checkout. The "pop" command deletes that changeset from ** the stash after applying it but the "apply" command retains the ** changeset. ** ** fossil stash goto ?STASHID? ** ** Update to the baseline checkout for STASHID then apply the ** changes of STASHID. Keep STASHID so that it can be reused ** This command is undoable. ** ** fossil stash drop|rm ?STASHID? ?-a|--all? ** ** Forget everything about STASHID. Forget the whole stash if the ** -a|--all flag is used. Individual drops are undoable but -a|--all ** is not. ** ** fossil stash diff ?STASHID? ?DIFF-OPTIONS? ** fossil stash gdiff ?STASHID? ?DIFF-OPTIONS? ** ** Show diffs of the current working directory and what that ** directory would be if STASHID were applied. ** ** SUMMARY: ** fossil stash ** fossil stash save ?-m|--comment COMMENT? ?FILES...? ** fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? ** fossil stash list|ls ?-v|--verbose? ?-W|--width <num>? ** fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? ** fossil stash pop ** fossil stash apply|goto ?STASHID? ** fossil stash drop|rm ?STASHID? ?-a|--all? ** fossil stash diff ?STASHID? 
?DIFF-OPTIONS? ** fossil stash gdiff ?STASHID? ?DIFF-OPTIONS? */ void stash_cmd(void){ const char *zCmd; int nCmd; int stashid = 0; int rc; undo_capture_command_line(); db_must_be_within_tree(); db_open_config(0, 0); db_begin_transaction(); db_multi_exec(zStashInit /*works-like:""*/); rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='stashfile'" " AND sql GLOB '* PRIMARY KEY(origname, stashid)*'"); if( rc!=0 ){ db_multi_exec( "CREATE TABLE localdb.stashfile_tmp AS SELECT * FROM stashfile;" "DROP TABLE stashfile;" ); db_multi_exec(zStashInit /*works-like:""*/); db_multi_exec( "INSERT INTO stashfile SELECT * FROM stashfile_tmp;" "DROP TABLE stashfile_tmp;" ); } if( g.argc<=2 ){ zCmd = "save"; }else{ zCmd = g.argv[2]; } nCmd = strlen(zCmd); if( memcmp(zCmd, "save", nCmd)==0 ){ |
︙ | ︙ | |||
639 640 641 642 643 644 645 646 647 648 649 | || memcmp(zCmd, "gdiff", nCmd)==0 || memcmp(zCmd, "show", nCmd)==0 || memcmp(zCmd, "cat", nCmd)==0 ){ const char *zDiffCmd = 0; const char *zBinGlob = 0; int fIncludeBinary = 0; u64 diffFlags; if( find_option("tk",0,0)!=0 ){ db_close(0); | > < < < < < | < < < | | | 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 | || memcmp(zCmd, "gdiff", nCmd)==0 || memcmp(zCmd, "show", nCmd)==0 || memcmp(zCmd, "cat", nCmd)==0 ){ const char *zDiffCmd = 0; const char *zBinGlob = 0; int fIncludeBinary = 0; int fBaseline = zCmd[0]=='s' || zCmd[0]=='c'; u64 diffFlags; if( find_option("tk",0,0)!=0 ){ db_close(0); diff_tk(fBaseline ? "stash show" : "stash diff", 3); return; } if( find_option("internal","i",0)==0 ){ zDiffCmd = diff_command_external(memcmp(zCmd, "gdiff", nCmd)==0); } diffFlags = diff_options(); if( find_option("verbose","v",0)!=0 ) diffFlags |= DIFF_VERBOSE; if( g.argc>4 ) usage(mprintf("%s ?STASHID? ?DIFF-OPTIONS?", zCmd)); if( zDiffCmd ){ zBinGlob = diff_get_binary_glob(); fIncludeBinary = diff_include_binary_files(); } stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); stash_diff(stashid, zDiffCmd, zBinGlob, fBaseline, fIncludeBinary, diffFlags); }else if( memcmp(zCmd, "help", nCmd)==0 ){ g.argv[1] = "help"; g.argv[2] = "stash"; g.argc = 3; help_cmd(); }else { usage("SUBCOMMAND ARGS..."); } db_end_transaction(0); } |
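The stash.c changes key the stashfile table on (newname, stashid), and stash_cmd() now migrates any legacy table still keyed on (origname, stashid) by copying it through a temporary table, as shown above; stash_apply() also updates the matching vfile row when a stashed file carries a rename. Since SQLite cannot change a PRIMARY KEY in place, the copy-through-a-temp-table rebuild is the standard approach. A sketch of the same pattern using the plain SQLite API instead of Fossil's db_multi_exec() and zStashInit; the column layout is assumed unchanged, which is what makes the INSERT ... SELECT * step safe.

    #include <sqlite3.h>

    /* Rebuild stashfile with the new PRIMARY KEY while keeping all rows. */
    static int migrate_stashfile(sqlite3 *db){
      return sqlite3_exec(db,
        "CREATE TABLE stashfile_tmp AS SELECT * FROM stashfile;"
        "DROP TABLE stashfile;"
        "CREATE TABLE stashfile("
        "  stashid INTEGER REFERENCES stash,"
        "  rid INTEGER,"
        "  isAdded BOOLEAN, isRemoved BOOLEAN, isExec BOOLEAN, isLink BOOLEAN,"
        "  origname TEXT, newname TEXT, delta BLOB,"
        "  PRIMARY KEY(newname, stashid)"
        ");"
        "INSERT INTO stashfile SELECT * FROM stashfile_tmp;"
        "DROP TABLE stashfile_tmp;",
        0, 0, 0);
    }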
Changes to src/stat.c.
︙ | ︙ | |||
69 70 71 72 73 74 75 | login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } brief = P("brief")!=0; style_header("Repository Statistics"); style_adunit_config(ADUNIT_RIGHT_OK); if( g.perm.Admin ){ | | | | | | | | | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } brief = P("brief")!=0; style_header("Repository Statistics"); style_adunit_config(ADUNIT_RIGHT_OK); if( g.perm.Admin ){ style_submenu_element("URLs", "urllist"); style_submenu_element("Schema", "repo_schema"); style_submenu_element("Web-Cache", "cachestat"); } style_submenu_element("Activity Reports", "reports"); style_submenu_element("SHA1 Collisions", "hash-collisions"); if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){ style_submenu_element("Table Sizes", "repo-tabsize"); } if( g.perm.Admin || g.perm.Setup || db_get_boolean("test_env_enable",0) ){ style_submenu_element("Environment", "test_env"); } @ <table class="label-value"> @ <tr><th>Repository Size:</th><td> fsize = file_size(g.zRepositoryName); bigSizeName(sizeof(zBuf), zBuf, fsize); @ %s(zBuf) @ </td></tr> |
︙ | ︙ | |||
338 339 340 341 342 343 344 | Stmt q; int cnt; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("URLs and Checkouts"); style_adunit_config(ADUNIT_RIGHT_OK); | | | | 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | Stmt q; int cnt; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("URLs and Checkouts"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); style_submenu_element("Schema", "repo_schema"); @ <div class="section">URLs</div> @ <table border="0" width='100%%'> db_prepare(&q, "SELECT substr(name,9), datetime(mtime,'unixepoch')" " FROM config WHERE name GLOB 'baseurl:*' ORDER BY 2 DESC"); cnt = 0; while( db_step(&q)==SQLITE_ROW ){ @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td> |
︙ | ︙ | |||
385 386 387 388 389 390 391 | void repo_schema_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Repository Schema"); style_adunit_config(ADUNIT_RIGHT_OK); | | | | | 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | void repo_schema_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Repository Schema"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); style_submenu_element("URLs", "urllist"); if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){ style_submenu_element("Table Sizes", "repo-tabsize"); } db_prepare(&q, "SELECT sql FROM repository.sqlite_master WHERE sql IS NOT NULL"); @ <pre> while( db_step(&q)==SQLITE_ROW ){ @ %h(db_column_text(&q, 0)); } |
︙ | ︙ | |||
415 416 417 418 419 420 421 | sqlite3_int64 fsize; char zBuf[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Repository Table Sizes"); style_adunit_config(ADUNIT_RIGHT_OK); | | | | 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 | sqlite3_int64 fsize; char zBuf[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Repository Table Sizes"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); if( g.perm.Admin ){ style_submenu_element("Schema", "repo_schema"); } db_multi_exec( "CREATE TEMP TABLE trans(name TEXT PRIMARY KEY,tabname TEXT)WITHOUT ROWID;" "INSERT INTO trans(name,tabname)" " SELECT name, tbl_name FROM repository.sqlite_master;" "CREATE TEMP TABLE piechart(amt REAL, label TEXT);" "INSERT INTO piechart(amt,label)" |
︙ | ︙ |
Changes to src/statrep.c.
︙ | ︙ | |||
704 705 706 707 708 709 710 | zUserName = P("user"); if( zUserName==0 ) zUserName = P("u"); if( zUserName && zUserName[0]==0 ) zUserName = 0; if( zView==0 ){ zView = "byuser"; cgi_replace_query_parameter("view","byuser"); } | | | | | | 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 | zUserName = P("user"); if( zUserName==0 ) zUserName = P("u"); if( zUserName && zUserName[0]==0 ) zUserName = 0; if( zView==0 ){ zView = "byuser"; cgi_replace_query_parameter("view","byuser"); } for(i=0; i<count(aViewType); i++){ if( fossil_strcmp(zView, aViewType[i].zVal)==0 ){ eType = aViewType[i].eType; break; } } if( eType!=RPT_NONE ){ int nView = 0; /* Slots used in azView[] */ for(i=0; i<count(aViewType); i++){ azView[nView++] = aViewType[i].zVal; azView[nView++] = aViewType[i].zName; } if( eType!=RPT_BYFILE ){ style_submenu_multichoice("type", count(azType)/2, azType, 0); } style_submenu_multichoice("view", nView/2, azView, 0); if( eType!=RPT_BYUSER ){ style_submenu_sql("user","User:", "SELECT '', 'All Users' UNION ALL " "SELECT x, x FROM (" " SELECT DISTINCT trim(coalesce(euser,user)) AS x FROM event %s" " ORDER BY 1 COLLATE nocase) WHERE x!=''", eType==RPT_BYFILE ? "WHERE type='ci'" : "" ); } } style_submenu_element("Stats", "%R/stat"); style_header("Activity Reports"); switch( eType ){ case RPT_BYYEAR: stats_report_by_month_year(0, 0, zUserName); break; case RPT_BYMONTH: stats_report_by_month_year(1, 0, zUserName); |
︙ | ︙ |
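This hunk, like several below (th_main.c, th_tcl.c, timeline.c, unicode.c), replaces the previous array-length expressions with a count() helper. The macro itself is not shown in these hunks; presumably it is the usual compile-time array-length idiom:

    /* Presumed definition, not part of the hunks shown here. */
    #define count(X)  (sizeof(X)/sizeof(X[0]))

so loops such as for(i=0; i<count(aViewType); i++) stay correct if elements are added to the static table.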
Changes to src/style.c.
︙ | ︙ | |||
28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | ** structure and displayed below the main menu. ** ** Populate these structure with calls to ** ** style_submenu_element() ** style_submenu_entry() ** style_submenu_checkbox() ** style_submenu_multichoice() ** ** prior to calling style_footer(). The style_footer() routine ** will generate the appropriate HTML text just below the main ** menu. */ static struct Submenu { const char *zLabel; /* Button label */ | > > < | | | > | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | ** structure and displayed below the main menu. ** ** Populate these structure with calls to ** ** style_submenu_element() ** style_submenu_entry() ** style_submenu_checkbox() ** style_submenu_binary() ** style_submenu_multichoice() ** style_submenu_sql() ** ** prior to calling style_footer(). The style_footer() routine ** will generate the appropriate HTML text just below the main ** menu. */ static struct Submenu { const char *zLabel; /* Button label */ const char *zLink; /* Jump to this link when button is pressed */ } aSubmenu[30]; static int nSubmenu = 0; /* Number of buttons */ static struct SubmenuCtrl { const char *zName; /* Form query parameter */ const char *zLabel; /* Label. Might be NULL for FF_MULTI */ unsigned char eType; /* FF_ENTRY, FF_MULTI, FF_BINARY */ unsigned char isDisabled; /* True if this control is grayed out */ short int iSize; /* Width for FF_ENTRY. Count for FF_MULTI */ const char *const *azChoice;/* value/display pairs for FF_MULTI */ const char *zFalse; /* FF_BINARY label when false */ } aSubmenuCtrl[20]; static int nSubmenuCtrl = 0; #define FF_ENTRY 1 #define FF_MULTI 2 #define FF_BINARY 3 #define FF_CHECKBOX 4 /* ** Remember that the header has been generated. The footer is omitted ** if an error occurs before the header. */ static int headerHasBeenGenerated = 0; |
︙ | ︙ | |||
228 229 230 231 232 233 234 | } /* ** Add a new element to the submenu */ void style_submenu_element( const char *zLabel, | < | < | > > > > > > > > > > > > | | | 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 | } /* ** Add a new element to the submenu */ void style_submenu_element( const char *zLabel, const char *zLink, ... ){ va_list ap; assert( nSubmenu < count(aSubmenu) ); aSubmenu[nSubmenu].zLabel = zLabel; va_start(ap, zLink); aSubmenu[nSubmenu].zLink = vmprintf(zLink, ap); va_end(ap); nSubmenu++; } void style_submenu_entry( const char *zName, /* Query parameter name */ const char *zLabel, /* Label before the entry box */ int iSize, /* Size of the entry box */ int isDisabled /* True if disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].zLabel = zLabel; aSubmenuCtrl[nSubmenuCtrl].iSize = iSize; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_ENTRY; nSubmenuCtrl++; } void style_submenu_checkbox( const char *zName, /* Query parameter name */ const char *zLabel, /* Label to display after the checkbox */ int isDisabled /* True if disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].zLabel = zLabel; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_CHECKBOX; nSubmenuCtrl++; } void style_submenu_binary( const char *zName, /* Query parameter name */ const char *zTrue, /* Label to show when parameter is true */ const char *zFalse, /* Label to show when the parameter is false */ int isDisabled /* True if this control is disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].zLabel = zTrue; aSubmenuCtrl[nSubmenuCtrl].zFalse = zFalse; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_BINARY; nSubmenuCtrl++; } void style_submenu_multichoice( const char *zName, /* Query parameter name */ int nChoice, /* Number of options */ const char *const *azChoice,/* value/display pairs. 2*nChoice entries */ int isDisabled /* True if this control is disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].iSize = nChoice; aSubmenuCtrl[nSubmenuCtrl].azChoice = azChoice; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_MULTI; nSubmenuCtrl++; } |
︙ | ︙ | |||
398 399 400 401 402 403 404 405 406 407 408 409 410 411 | @ <!DOCTYPE html> if( g.thTrace ) Th_Trace("BEGIN_HEADER<br />\n", -1); /* Generate the header up through the main menu */ Th_Store("project_name", db_get("project-name","Unnamed Fossil Project")); Th_Store("title", zTitle); Th_Store("baseurl", g.zBaseURL); Th_Store("secureurl", login_wants_https_redirect()? g.zHttpsURL: g.zBaseURL); Th_Store("home", g.zTop); Th_Store("index_page", db_get("index-page","/home")); if( local_zCurrentPage==0 ) style_set_current_page("%T", g.zPath); Th_Store("current_page", local_zCurrentPage); | > | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | @ <!DOCTYPE html> if( g.thTrace ) Th_Trace("BEGIN_HEADER<br />\n", -1); /* Generate the header up through the main menu */ Th_Store("project_name", db_get("project-name","Unnamed Fossil Project")); Th_Store("project_description", db_get("project-description","")); Th_Store("title", zTitle); Th_Store("baseurl", g.zBaseURL); Th_Store("secureurl", login_wants_https_redirect()? g.zHttpsURL: g.zBaseURL); Th_Store("home", g.zTop); Th_Store("index_page", db_get("index-page","/home")); if( local_zCurrentPage==0 ) style_set_current_page("%T", g.zPath); Th_Store("current_page", local_zCurrentPage); |
︙ | ︙ | |||
521 522 523 524 525 526 527 | if( p->zLink==0 ){ @ <span class="label">%h(p->zLabel)</span> }else{ @ <a class="label" href="%h(p->zLink)">%h(p->zLabel)</a> } } } | < | | | | | | | | | < | > | > | | < | | > | < | < | | | | | | < | | < < | | < | | | > | < | | | | | | < | | < < < | > > > | < < | > > > | < | | | > > > > > | > > > | 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 | if( p->zLink==0 ){ @ <span class="label">%h(p->zLabel)</span> }else{ @ <a class="label" href="%h(p->zLink)">%h(p->zLabel)</a> } } } for(i=0; i<nSubmenuCtrl; i++){ const char *zQPN = aSubmenuCtrl[i].zName; const char *zDisabled = " disabled"; if( !aSubmenuCtrl[i].isDisabled ){ zDisabled = ""; cgi_tag_query_parameter(zQPN); } switch( aSubmenuCtrl[i].eType ){ case FF_ENTRY: @ <span class='submenuctrl'>\ @ %h(aSubmenuCtrl[i].zLabel)\ @ <input type='text' name='%s(zQPN)' value='%h(PD(zQPN, ""))' \ if( aSubmenuCtrl[i].iSize<0 ){ @ size='%d(-aSubmenuCtrl[i].iSize)' \ }else if( aSubmenuCtrl[i].iSize>0 ){ @ size='%d(aSubmenuCtrl[i].iSize)' \ @ maxlength='%d(aSubmenuCtrl[i].iSize)' \ } @ onchange='gebi("f01").submit();'%s(zDisabled)></span> break; case FF_MULTI: { int j; const char *zVal = P(zQPN); if( aSubmenuCtrl[i].zLabel ){ @ %h(aSubmenuCtrl[i].zLabel)\ } @ <select class='submenuctrl' size='1' name='%s(zQPN)' \ @ onchange='gebi("f01").submit();'%s(zDisabled)> for(j=0; j<aSubmenuCtrl[i].iSize*2; j+=2){ const char *zQPV = aSubmenuCtrl[i].azChoice[j]; @ <option value='%h(zQPV)'\ if( fossil_strcmp(zVal, zQPV)==0 ){ @ selected\ } @ >%h(aSubmenuCtrl[i].azChoice[j+1])</option> } @ </select> break; } case FF_BINARY: { int isTrue = PB(zQPN); @ <select class='submenuctrl' size='1' name='%s(zQPN)' \ @ onchange='gebi("f01").submit();'%s(zDisabled)> @ <option value='1'\ if( isTrue ){ @ selected\ } @ >%h(aSubmenuCtrl[i].zLabel)</option> @ <option value='0'\ if( !isTrue ){ @ selected\ } @ >%h(aSubmenuCtrl[i].zFalse)</option> @ </select> break; } case FF_CHECKBOX: @ <label class='submenuctrl'>\ @ <input type='checkbox' name='%s(zQPN)' value='1' \ if( PB(zQPN) ){ @ checked \ } @ onchange='gebi("f01").submit();'%s(zDisabled)>\ @ %h(aSubmenuCtrl[i].zLabel)</label> break; } } @ </div> if( nSubmenuCtrl ){ cgi_query_parameters_to_hidden(); cgi_tag_query_parameter(0); @ </form> |
︙ | ︙ | |||
701 702 703 704 705 706 707 | @ border-collapse: collapse; }, { "td.timelineTableCell", "the format for the timeline data cells", @ vertical-align: top; @ text-align: left; }, | | > | 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 | @ border-collapse: collapse; }, { "td.timelineTableCell", "the format for the timeline data cells", @ vertical-align: top; @ text-align: left; }, { "tr.timelineCurrent", "the format for the timeline data cell of the current checkout", @ padding: .1em .2em; @ border: 1px dashed #446979; @ box-shadow: 1px 1px 4px #888; }, { "tr.timelineSelected", "The row in the timeline table that contains the entry of interest", @ padding: .1em .2em; @ border: 2px solid lightgray; @ background-color: #ffc; @ box-shadow: 4px 4px 2px #888; |
︙ | ︙ | |||
1572 1573 1574 1575 1576 1577 1578 | login_check_credentials(); if( !g.perm.Admin && !g.perm.Setup && !db_get_boolean("test_env_enable",0) ){ login_needed(0); return; } for(i=0; i<count(azCgiVars); i++) (void)P(azCgiVars[i]); style_header("Environment Test"); | < | < < | < | | 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 | login_check_credentials(); if( !g.perm.Admin && !g.perm.Setup && !db_get_boolean("test_env_enable",0) ){ login_needed(0); return; } for(i=0; i<count(azCgiVars); i++) (void)P(azCgiVars[i]); style_header("Environment Test"); showAll = PB("showall"); style_submenu_checkbox("showall", "Cookies", 0); style_submenu_element("Stats", "%R/stat"); #if !defined(_WIN32) @ uid=%d(getuid()), gid=%d(getgid())<br /> #endif @ g.zBaseURL = %h(g.zBaseURL)<br /> @ g.zHttpsURL = %h(g.zHttpsURL)<br /> @ g.zTop = %h(g.zTop)<br /> |
︙ | ︙ |
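In style.c the submenu machinery gains FF_BINARY controls (style_submenu_binary) alongside the existing entry, checkbox, and multichoice forms, and style_submenu_element() is reworked to take just a label and a printf-style link format that is expanded with vmprintf(). That is why call sites throughout this check-in collapse to direct calls such as style_submenu_element("Stats", "%R/stat"). A rough standalone sketch of the varargs pattern, with vsnprintf() standing in for Fossil's vmprintf() (which additionally understands extensions such as %R and %T); the helper name and buffer handling here are hypothetical.

    #include <stdarg.h>
    #include <stdio.h>

    /* Build a submenu link from a format string, mirroring the new API shape. */
    static void submenu_link(char *zOut, size_t nOut, const char *zFmt, ...){
      va_list ap;
      va_start(ap, zFmt);
      vsnprintf(zOut, nOut, zFmt, ap);
      va_end(ap);
    }

    /* e.g. submenu_link(zBuf, sizeof(zBuf), "/tkttimeline?name=%s", zUuid); */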
Changes to src/tag.c.
︙ | ︙ | |||
652 653 654 655 656 657 658 | login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); } login_anonymous_available(); style_header("Tags"); style_adunit_config(ADUNIT_RIGHT_OK); | | | 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 | login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); } login_anonymous_available(); style_header("Tags"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Timeline", "tagtimeline"); @ <h2>Non-propagating tags:</h2> db_prepare(&q, "SELECT substr(tagname,5)" " FROM tag" " WHERE EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=tag.tagid" " AND tagtype=1)" |
︙ | ︙ | |||
691 692 693 694 695 696 697 | void tagtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Tagged Check-ins"); | | | 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | void tagtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Tagged Check-ins"); style_submenu_element("List", "taglist"); login_anonymous_available(); @ <h2>Check-ins with non-propagating tags:</h2> db_prepare(&q, "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype=1 AND srcid>0" " AND tagid IN (SELECT tagid FROM tag " " WHERE tagname GLOB 'sym-*'))" |
︙ | ︙ |
Changes to src/th_main.c.
︙ | ︙ | |||
1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 | return TH_ERROR; } }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } #ifdef _WIN32 # include <windows.h> #else # include <sys/time.h> # include <sys/resource.h> #endif | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 | return TH_ERROR; } }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } /* ** TH1 command: unversioned content FILENAME ** ** Attempts to locate the specified unversioned file and return its contents. ** An error is generated if the repository is not open or the unversioned file ** cannot be found. */ static int unversionedContentCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ if( argc!=3 ){ return Th_WrongNumArgs(interp, "unversioned content FILENAME"); } if( Th_IsRepositoryOpen() ){ Blob content; if( unversioned_content(argv[2], &content)==0 ){ Th_SetResult(interp, blob_str(&content), blob_size(&content)); blob_reset(&content); return TH_OK; }else{ return TH_ERROR; } }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } /* ** TH1 command: unversioned list ** ** Returns a list of the names of all unversioned files held in the local ** repository. An error is generated if the repository is not open. */ static int unversionedListCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ if( argc!=2 ){ return Th_WrongNumArgs(interp, "unversioned list"); } if( Th_IsRepositoryOpen() ){ Stmt q; char *zList = 0; int nList = 0; db_prepare(&q, "SELECT name FROM unversioned WHERE hash IS NOT NULL" " ORDER BY name"); while( db_step(&q)==SQLITE_ROW ){ Th_ListAppend(interp, &zList, &nList, db_column_text(&q,0), -1); } db_finalize(&q); Th_SetResult(interp, zList, nList); Th_Free(interp, zList); return TH_OK; }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } static int unversionedCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ static const Th_SubCommand aSub[] = { { "content", unversionedContentCmd }, { "list", unversionedListCmd }, { 0, 0 } }; return Th_CallSubCommand(interp, p, argc, argv, argl, aSub); } #ifdef _WIN32 # include <windows.h> #else # include <sys/time.h> # include <sys/resource.h> #endif |
︙ | ︙ | |||
1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 | {"setParameter", setParameterCmd, 0}, {"setting", settingCmd, 0}, {"styleHeader", styleHeaderCmd, 0}, {"styleFooter", styleFooterCmd, 0}, {"tclReady", tclReadyCmd, 0}, {"trace", traceCmd, 0}, {"stime", stimeCmd, 0}, {"utime", utimeCmd, 0}, {"verifyCsrf", verifyCsrfCmd, 0}, {"wiki", wikiCmd, (void*)&aFlags[0]}, {0, 0, 0} }; if( g.thTrace ){ Th_Trace("th1-init 0x%x => 0x%x<br />\n", g.th1Flags, flags); | > | 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 | {"setParameter", setParameterCmd, 0}, {"setting", settingCmd, 0}, {"styleHeader", styleHeaderCmd, 0}, {"styleFooter", styleFooterCmd, 0}, {"tclReady", tclReadyCmd, 0}, {"trace", traceCmd, 0}, {"stime", stimeCmd, 0}, {"unversioned", unversionedCmd, 0}, {"utime", utimeCmd, 0}, {"verifyCsrf", verifyCsrfCmd, 0}, {"wiki", wikiCmd, (void*)&aFlags[0]}, {0, 0, 0} }; if( g.thTrace ){ Th_Trace("th1-init 0x%x => 0x%x<br />\n", g.th1Flags, flags); |
︙ | ︙ | |||
1920 1921 1922 1923 1924 1925 1926 | db_get_boolean("tcl", 0) ){ if( !g.tcl.setup ){ g.tcl.setup = db_get("tcl-setup", 0); /* Grab Tcl setup script. */ } th_register_tcl(g.interp, &g.tcl); /* Tcl integration commands. */ } #endif | | | 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 | db_get_boolean("tcl", 0) ){ if( !g.tcl.setup ){ g.tcl.setup = db_get("tcl-setup", 0); /* Grab Tcl setup script. */ } th_register_tcl(g.interp, &g.tcl); /* Tcl integration commands. */ } #endif for(i=0; i<count(aCommand); i++){ if ( !aCommand[i].zName || !aCommand[i].xProc ) continue; Th_CreateCommand(g.interp, aCommand[i].zName, aCommand[i].xProc, aCommand[i].pContext, 0); } }else{ wasInit = 1; } |
︙ | ︙ | |||
2591 2592 2593 2594 2595 2596 2597 | }else if( fossil_stricmp(g.argv[2], "cmdnotify")==0 ){ rc = Th_CommandNotify(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "webhook")==0 ){ rc = Th_WebpageHook(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "webnotify")==0 ){ rc = Th_WebpageNotify(g.argv[3], (unsigned int)atoi(g.argv[4])); }else{ | | | 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 | }else if( fossil_stricmp(g.argv[2], "cmdnotify")==0 ){ rc = Th_CommandNotify(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "webhook")==0 ){ rc = Th_WebpageHook(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "webnotify")==0 ){ rc = Th_WebpageNotify(g.argv[3], (unsigned int)atoi(g.argv[4])); }else{ fossil_fatal("Unknown TH1 hook %s", g.argv[2]); } if( g.interp ){ zResult = (char*)Th_GetResult(g.interp, &nResult); } sendText("RESULT (", -1, 0); sendText(Th_ReturnCodeName(rc, 0), -1, 0); sendText(")", -1, 0); |
︙ | ︙ |
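th_main.c adds a TH1 "unversioned" command with two subcommands dispatched through Th_CallSubCommand: "unversioned list" returns a TH1 list of the names of all unversioned files whose content is still present (the query skips rows with a NULL hash, i.e. deleted unversioned files), and "unversioned content FILENAME" returns the file body, raising an error when the repository is not open or the file cannot be found. A skin or TH1-enabled page could presumably iterate over the result of [unversioned list] and fetch individual bodies with [unversioned content FILENAME]; that usage is an assumption, only the command forms themselves appear in the hunk above.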
Changes to src/th_tcl.c.
︙ | ︙ | |||
831 832 833 834 835 836 837 | Tcl_Interp *interp ){ int i; Th_Interp *th1Interp = (Th_Interp *)clientData; if( !th1Interp ) return; /* Remove the Tcl integration commands. */ | | | 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 | Tcl_Interp *interp ){ int i; Th_Interp *th1Interp = (Th_Interp *)clientData; if( !th1Interp ) return; /* Remove the Tcl integration commands. */ for(i=0; i<count(aCommand); i++){ Th_RenameCommand(th1Interp, aCommand[i].zName, -1, NULL, 0); } } /* ** When Tcl stubs support is enabled, attempts to dynamically load the Tcl ** shared library and fetch the function pointers necessary to create an |
︙ | ︙ | |||
1259 1260 1261 1262 1263 1264 1265 | int th_register_tcl( Th_Interp *interp, void *pContext ){ int i; /* Add the Tcl integration commands to TH1. */ | | | 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 | int th_register_tcl( Th_Interp *interp, void *pContext ){ int i; /* Add the Tcl integration commands to TH1. */ for(i=0; i<count(aCommand); i++){ void *ctx; if( !aCommand[i].zName || !aCommand[i].xProc ) continue; ctx = aCommand[i].pContext; /* Use Tcl interpreter for context? */ if( !ctx ) ctx = pContext; Th_CreateCommand(interp, aCommand[i].zName, aCommand[i].xProc, ctx, 0); } return TH_OK; } #endif /* FOSSIL_ENABLE_TCL */ |
Changes to src/timeline.c.
︙ | ︙ | |||
388 389 390 391 392 393 394 | static Stmt qparent; db_static_prepare(&qparent, "SELECT pid FROM plink" " WHERE cid=:rid AND pid NOT IN phantom" " ORDER BY isprim DESC /*sort*/" ); db_bind_int(&qparent, ":rid", rid); | | | 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | static Stmt qparent; db_static_prepare(&qparent, "SELECT pid FROM plink" " WHERE cid=:rid AND pid NOT IN phantom" " ORDER BY isprim DESC /*sort*/" ); db_bind_int(&qparent, ":rid", rid); while( db_step(&qparent)==SQLITE_ROW && nParent<count(aParent) ){ aParent[nParent++] = db_column_int(&qparent, 0); } db_reset(&qparent); gidx = graph_add_row(pGraph, rid, nParent, aParent, zBr, zBgClr, zUuid, isLeaf); db_reset(&qbranch); @ <div id="m%d(gidx)" class="tl-nodemark"></div> |
︙ | ︙ | |||
735 736 737 738 739 740 741 | pRow->iRail, /* r */ pRow->bDescender, /* d */ pRow->mergeOut, /* mo */ pRow->mergeUpto, /* mu */ pRow->aiRiser[pRow->iRail], /* u */ pRow->isLeaf ? 1 : 0 /* f */ ); | | | | 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 | pRow->iRail, /* r */ pRow->bDescender, /* d */ pRow->mergeOut, /* mo */ pRow->mergeUpto, /* mu */ pRow->aiRiser[pRow->iRail], /* u */ pRow->isLeaf ? 1 : 0 /* f */ ); /* au */ cSep = '['; for(i=0; i<GR_MAX_RAIL; i++){ if( i==pRow->iRail ) continue; if( pRow->aiRiser[i]>0 ){ cgi_printf("%c%d,%d", cSep, i, pRow->aiRiser[i]); cSep = ','; } } if( cSep=='[' ) cgi_printf("["); cgi_printf("],"); if( colorGraph && pRow->zBgClr[0]=='#' ){ cgi_printf("fg:\"%s\",", bg_to_fg(pRow->zBgClr)); } /* mi */ cgi_printf("mi:"); cSep = '['; for(i=0; i<GR_MAX_RAIL; i++){ if( pRow->mergeIn[i] ){ int mi = i; if( (pRow->mergeDown >> i) & 1 ) mi = -mi; cgi_printf("%c%d", cSep, mi); cSep = ','; } } if( cSep=='[' ) cgi_printf("["); cgi_printf("],h:\"%!S\"}%s", pRow->zUuid, pRow->pNext ? ",\n" : "];\n"); } |
︙ | ︙ | |||
1081 1082 1083 1084 1085 1086 1087 | static void timeline_submenu( HQuery *pUrl, /* Base URL */ const char *zMenuName, /* Submenu name */ const char *zParam, /* Parameter value to add or change */ const char *zValue, /* Value of the new parameter */ const char *zRemove /* Parameter to omit */ ){ | | | 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 | static void timeline_submenu( HQuery *pUrl, /* Base URL */ const char *zMenuName, /* Submenu name */ const char *zParam, /* Parameter value to add or change */ const char *zValue, /* Value of the new parameter */ const char *zRemove /* Parameter to omit */ ){ style_submenu_element(zMenuName, "%s", url_render(pUrl, zParam, zValue, zRemove, 0)); } /* ** Convert a symbolic name used as an argument to the a=, b=, or c= ** query parameters of timeline into a julianday mtime value. |
︙ | ︙ | |||
1184 1185 1186 1187 1188 1189 1190 | az[i++] = "t"; az[i++] = "Tickets"; } if( g.perm.RdWiki ){ az[i++] = "w"; az[i++] = "Wiki"; } | | | 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 | az[i++] = "t"; az[i++] = "Tickets"; } if( g.perm.RdWiki ){ az[i++] = "w"; az[i++] = "Wiki"; } assert( i<=count(az) ); } if( i>2 ){ style_submenu_multichoice("y", i/2, az, isDisabled); } } /* |
︙ | ︙ | |||
1222 1223 1224 1225 1226 1227 1228 | /* ** WEBPAGE: timeline ** ** Query parameters: ** | | | | | | | | | | > | | | > > | 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 | /* ** WEBPAGE: timeline ** ** Query parameters: ** ** a=TIMEORTAG After this event ** b=TIMEORTAG Before this event ** c=TIMEORTAG "Circa" this event ** m=TIMEORTAG Mark this event ** n=COUNT Suggested number of events in output ** p=CHECKIN Parents and ancestors of CHECKIN ** d=CHECKIN Descendants of CHECIN ** dp=CHECKIN The same as d=CHECKIN&p=CHECKIN ** t=TAG Show only check-ins with the given TAG ** r=TAG Show check-ins related to TAG ** mionly Limit r=TAG to show ancestors but not descendants ** u=USER Only show items associated with USER ** y=TYPE 'ci', 'w', 't', 'e', or (default) 'all' ** ng No Graph. ** nd Do not highlight the focus check-in ** v Show details of files changed ** f=CHECKIN Show family (immediate parents and children) of CHECKIN ** from=CHECKIN Path from... ** to=CHECKIN ... to this ** shortest ... show only the shortest path ** uf=FILE_SHA1 Show only check-ins that contain the given file version ** chng=GLOBLIST Show only check-ins that involve changes to a file whose ** name matches one of the comma-separate GLOBLIST ** brbg Background color from branch name ** ubg Background color from user ** namechng Show only check-ins that have filename changes ** forks Show only forks and their children ** ym=YYYY-MM Show only events for the given year/month ** yw=YYYY-WW Show only events for the given week of the given year ** ymd=YYYY-MM-DD Show only events on the given day ** datefmt=N Override the date format ** bisect Show the check-ins that are in the current bisect ** showid Show RIDs ** showsql Show the SQL text ** ** p= and d= can appear individually or together. If either p= or d= ** appear, then u=, y=, a=, and b= are ignored. ** ** If both a= and b= appear then both upper and lower bounds are honored. */ void page_timeline(void){ |
︙ | ︙ | |||
1340 1341 1342 1343 1344 1345 1346 | return; } url_initialize(&url, "timeline"); cgi_query_parameters_to_url(&url); if( zTagName && g.perm.Read ){ tagid = db_int(-1,"SELECT tagid FROM tag WHERE tagname='sym-%q'",zTagName); zThisTag = zTagName; | | < | < | 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 | return; } url_initialize(&url, "timeline"); cgi_query_parameters_to_url(&url); if( zTagName && g.perm.Read ){ tagid = db_int(-1,"SELECT tagid FROM tag WHERE tagname='sym-%q'",zTagName); zThisTag = zTagName; timeline_submenu(&url, "Related", "r", zTagName, "t"); }else if( zBrName && g.perm.Read ){ tagid = db_int(-1,"SELECT tagid FROM tag WHERE tagname='sym-%q'",zBrName); zThisTag = zBrName; timeline_submenu(&url, "Branch Only", "t", zBrName, "r"); }else{ tagid = 0; } if( zMark && zMark[0]==0 ){ if( zAfter ) zMark = zAfter; if( zBefore ) zMark = zBefore; if( zCirca ) zMark = zCirca; |
︙ | ︙ | |||
1476 1477 1478 1479 1480 1481 1482 | p = p->u.pTo; } blob_append(&sql, ")", -1); path_reset(); addFileGlobExclusion(zChng, &sql); tmFlags |= TIMELINE_DISJOINT; db_multi_exec("%s", blob_sql_text(&sql)); | < | | 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 | p = p->u.pTo; } blob_append(&sql, ")", -1); path_reset(); addFileGlobExclusion(zChng, &sql); tmFlags |= TIMELINE_DISJOINT; db_multi_exec("%s", blob_sql_text(&sql)); style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); blob_appendf(&desc, "%d check-ins going from ", db_int(0, "SELECT count(*) FROM timeline")); blob_appendf(&desc, "%z[%h]</a>", href("%R/info/%h", zFrom), zFrom); blob_append(&desc, " to ", -1); blob_appendf(&desc, "%z[%h]</a>", href("%R/info/%h",zTo), zTo); addFileGlobDescription(zChng, &desc); }else if( (p_rid || d_rid) && g.perm.Read ){ |
︙ | ︙ | |||
1527 1528 1529 1530 1531 1532 1533 1534 1535 | href("%R/info/%!S", zUuid), zUuid); if( d_rid ){ if( p_rid ){ /* If both p= and d= are set, we don't have the uuid of d yet. */ zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", d_rid); } } style_submenu_entry("n","Max:",4,0); timeline_y_submenu(1); | > < < | | < < < | 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 | href("%R/info/%!S", zUuid), zUuid); if( d_rid ){ if( p_rid ){ /* If both p= and d= are set, we don't have the uuid of d yet. */ zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", d_rid); } } style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); style_submenu_entry("n","Max:",4,0); timeline_y_submenu(1); }else if( f_rid && g.perm.Read ){ /* If f= is present, ignore all other parameters other than n= */ char *zUuid; db_multi_exec( "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY);" "INSERT INTO ok VALUES(%d);" "INSERT OR IGNORE INTO ok SELECT pid FROM plink WHERE cid=%d;" "INSERT OR IGNORE INTO ok SELECT cid FROM plink WHERE pid=%d;", f_rid, f_rid, f_rid ); blob_append_sql(&sql, " AND event.objid IN ok"); db_multi_exec("%s", blob_sql_text(&sql)); if( useDividers ) selectedRid = f_rid; blob_appendf(&desc, "Parents and children of check-in "); zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", f_rid); blob_appendf(&desc, "%z[%S]</a>", href("%R/info/%!S", zUuid), zUuid); tmFlags |= TIMELINE_DISJOINT; style_submenu_checkbox("unhide", "Unhide", 0); style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); }else{ /* Otherwise, a timeline based on a span of time */ int n; const char *zEType = "timeline item"; char *zDate; Blob cond; blob_zero(&cond); |
︙ | ︙ | |||
1821 1822 1823 1824 1825 1826 1827 | rDate+ONE_SECOND, blob_sql_text(&cond)) ){ timeline_submenu(&url, "Newer", "a", zDate, "b"); } free(zDate); } if( zType[0]=='a' || zType[0]=='c' ){ | < | | < > < < | | 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 | rDate+ONE_SECOND, blob_sql_text(&cond)) ){ timeline_submenu(&url, "Newer", "a", zDate, "b"); } free(zDate); } if( zType[0]=='a' || zType[0]=='c' ){ style_submenu_checkbox("unhide", "Unhide", 0); } style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); style_submenu_entry("n","Max:",4,0); timeline_y_submenu(disableY); } blob_zero(&cond); } if( PB("showsql") ){ @ <pre>%h(blob_sql_text(&sql))</pre> } if( search_restrict(SRCH_CKIN)!=0 ){ style_submenu_element("Search", "%R/search?y=c"); } if( PB("showid") ) tmFlags |= TIMELINE_SHOWRID; if( useDividers && zMark && zMark[0] ){ double r = symbolic_name_to_mtime(zMark); if( r>0.0 ) selectedRid = timeline_add_divider(r); } blob_zero(&sql); |
︙ | ︙ |
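The /timeline parameter table above documents, among others, nd, chng=GLOBLIST, ym=/yw=/ymd=, bisect, showid, and showsql, and the Files and Unhide toggles become style_submenu_checkbox() controls. As an illustration only, with the semantics taken straight from that table, a request such as /timeline?r=trunk&n=25&v asks for roughly 25 events related to the trunk branch with per-file details.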
Changes to src/tkt.c.
︙ | ︙ | |||
214 215 216 217 218 219 220 | for(i=0; i<p->nField; i++){ const char *zName = p->aField[i].zName; const char *zBaseName = zName[0]=='+' ? zName+1 : zName; j = fieldId(zBaseName); if( j<0 ) continue; aUsed[j] = 1; if( aField[j].mUsed & USEDBY_TICKET ){ | > | | | | > > > > | | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 | for(i=0; i<p->nField; i++){ const char *zName = p->aField[i].zName; const char *zBaseName = zName[0]=='+' ? zName+1 : zName; j = fieldId(zBaseName); if( j<0 ) continue; aUsed[j] = 1; if( aField[j].mUsed & USEDBY_TICKET ){ const char *zUsedByName = zName; if( zUsedByName[0]=='+' ){ zUsedByName++; blob_append_sql(&sql1,", \"%w\"=coalesce(\"%w\",'') || %Q", zUsedByName, zUsedByName, p->aField[i].zValue); }else{ blob_append_sql(&sql1,", \"%w\"=%Q", zUsedByName, p->aField[i].zValue); } } if( aField[j].mUsed & USEDBY_TICKETCHNG ){ const char *zUsedByName = zName; if( zUsedByName[0]=='+' ){ zUsedByName++; } blob_append_sql(&sql2, ",\"%w\"", zUsedByName); blob_append_sql(&sql3, ",%Q", p->aField[i].zValue); } if( rid>0 ){ wiki_extract_links(p->aField[i].zValue, rid, 1, p->rDate, i==0, 0); } } blob_append_sql(&sql1, " WHERE tkt_id=%d", tktid); |
︙ | ︙ | |||
449 450 451 452 453 454 455 | const char *zScript; char *zFullName; const char *zUuid = PD("name",""); login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } if( g.anon.WrTkt || g.anon.ApndTkt ){ | < | | < | < | < | < | < | | < | 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 | const char *zScript; char *zFullName; const char *zUuid = PD("name",""); login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } if( g.anon.WrTkt || g.anon.ApndTkt ){ style_submenu_element("Edit", "%s/tktedit?name=%T", g.zTop, PD("name","")); } if( g.perm.Hyperlink ){ style_submenu_element("History", "%s/tkthistory/%T", g.zTop, zUuid); style_submenu_element("Timeline", "%s/tkttimeline/%T", g.zTop, zUuid); style_submenu_element("Check-ins", "%s/tkttimeline/%T?y=ci", g.zTop, zUuid); } if( g.anon.NewTkt ){ style_submenu_element("New Ticket", "%s/tktnew", g.zTop); } if( g.anon.ApndTkt && g.anon.Attach ){ style_submenu_element("Attach", "%s/attachadd?tkt=%T&from=%s/tktview/%t", g.zTop, zUuid, g.zTop, zUuid); } if( P("plaintext") ){ style_submenu_element("Formatted", "%R/tktview/%s", zUuid); }else{ style_submenu_element("Plaintext", "%R/tktview/%s?plaintext", zUuid); } style_header("View Ticket"); if( g.thTrace ) Th_Trace("BEGIN_TKTVIEW<br />\n", -1); ticket_init(); initializeVariablesFromCGI(); getAllTicketFields(); initializeVariablesFromDb(); |
︙ | ︙ | |||
544 545 546 547 548 549 550 | */ static int ticket_put( Blob *pTicket, /* The text of the ticket change record */ const char *zTktId, /* The ticket to which this change is applied */ int needMod /* True if moderation is needed */ ){ int result; | > > | < | 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 | */ static int ticket_put( Blob *pTicket, /* The text of the ticket change record */ const char *zTktId, /* The ticket to which this change is applied */ int needMod /* True if moderation is needed */ ){ int result; int rid; manifest_crosslink_begin(); rid = content_put_ex(pTicket, 0, 0, 0, needMod); if( rid==0 ){ fossil_fatal("trouble committing ticket: %s", g.zErrMsg); } if( needMod ){ moderation_table_create(); db_multi_exec( "INSERT INTO modreq(objid, tktid) VALUES(%d,%Q)", rid, zTktId ); }else{ db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d);", rid); db_multi_exec("INSERT OR IGNORE INTO unclustered VALUES(%d);", rid); } result = (manifest_crosslink(rid, pTicket, MC_NONE)==0); assert( blob_is_reset(pTicket) ); if( !result ){ result = manifest_crosslink_end(MC_PERMIT_HOOKS); }else{ manifest_crosslink_end(MC_NONE); } |
︙ | ︙ | |||
848 849 850 851 852 853 854 | if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zType = PD("y","a"); if( zType[0]!='c' ){ | | | | < | < | < | 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 | if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zType = PD("y","a"); if( zType[0]!='c' ){ style_submenu_element("Check-ins", "%s/tkttimeline?name=%T&y=ci", g.zTop, zUuid); }else{ style_submenu_element("Timeline", "%s/tkttimeline?name=%T", g.zTop, zUuid); } style_submenu_element("History", "%s/tkthistory/%s", g.zTop, zUuid); style_submenu_element("Status", "%s/info/%s", g.zTop, zUuid); if( zType[0]=='c' ){ zTitle = mprintf("Check-ins Associated With Ticket %h", zUuid); }else{ zTitle = mprintf("Timeline Of Ticket %h", zUuid); } style_header("%z", zTitle); |
︙ | ︙ | |||
923 924 925 926 927 928 929 | login_check_credentials(); if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zTitle = mprintf("History Of Ticket %h", zUuid); | | < | | | < | < | < | 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 | login_check_credentials(); if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zTitle = mprintf("History Of Ticket %h", zUuid); style_submenu_element("Status", "%s/info/%s", g.zTop, zUuid); style_submenu_element("Check-ins", "%s/tkttimeline?name=%s&y=ci", g.zTop, zUuid); style_submenu_element("Timeline", "%s/tkttimeline?name=%s", g.zTop, zUuid); if( P("plaintext")!=0 ){ style_submenu_element("Formatted", "%R/tkthistory/%s", zUuid); }else{ style_submenu_element("Plaintext", "%R/tkthistory/%s?plaintext", zUuid); } style_header("%z", zTitle); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'",zUuid); if( tagid==0 ){ @ No such ticket: %h(zUuid) style_footer(); |
︙ | ︙ | |||
1384 1385 1386 1387 1388 1389 1390 | } } blob_appendf(&tktchng, "K %s\n", zTktUuid); blob_appendf(&tktchng, "U %F\n", zUser); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); if( ticket_put(&tktchng, zTktUuid, ticket_need_moderation(1)) ){ | | | 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 | } } blob_appendf(&tktchng, "K %s\n", zTktUuid); blob_appendf(&tktchng, "U %F\n", zUser); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); if( ticket_put(&tktchng, zTktUuid, ticket_need_moderation(1)) ){ fossil_fatal("%s", g.zErrMsg); }else{ fossil_print("ticket %s succeeded for %s\n", (eCmd==set?"set":"add"),zTktUuid); } } } } |
︙ | ︙ | |||
1408 1409 1410 1411 1412 1413 1414 | #endif /* ** Add some standard submenu elements for ticket screens. */ void ticket_standard_submenu(unsigned int ok){ if( (ok & T_SRCH)!=0 && search_restrict(SRCH_TKT)!=0 ){ | | | | | 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 | #endif /* ** Add some standard submenu elements for ticket screens. */ void ticket_standard_submenu(unsigned int ok){ if( (ok & T_SRCH)!=0 && search_restrict(SRCH_TKT)!=0 ){ style_submenu_element("Search", "%R/tktsrch"); } if( (ok & T_REPLIST)!=0 ){ style_submenu_element("Reports", "%R/reportlist"); } if( (ok & T_NEW)!=0 && g.anon.NewTkt ){ style_submenu_element("New", "%R/tktnew"); } } /* ** WEBPAGE: ticket ** ** This is intended to be the primary "Ticket" page. Render as |
︙ | ︙ |
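One detail worth noting in the first tkt.c hunk: the UPDATE built for '+'-prefixed (append) ticket fields now uses "field"=coalesce("field",'') || 'value'. Since NULL || 'text' evaluates to NULL in SQLite, the coalesce() ensures that appending to a field that was never set yields just the new text instead of silently producing NULL; appending 'More info' to comment, for example, generates a clause equivalent to "comment"=coalesce("comment",'') || 'More info'.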
Changes to src/translate.c.
︙ | ︙ | |||
44 45 46 47 48 49 50 51 52 53 54 55 56 57 | ** ** Comments of the form: "|* @-comment: CC" (where "|" is really "/") ** cause CC to become a comment character for the @-substitution. ** Typical values for CC are "--" (for SQL text) or "#" (for Tcl script) ** or "//" (for C++ code). Lines of subsequent @-blocks that begin with ** CC are omitted from the output. ** */ #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> /* | > > > > > | 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | ** ** Comments of the form: "|* @-comment: CC" (where "|" is really "/") ** cause CC to become a comment character for the @-substitution. ** Typical values for CC are "--" (for SQL text) or "#" (for Tcl script) ** or "//" (for C++ code). Lines of subsequent @-blocks that begin with ** CC are omitted from the output. ** ** Enhancement #3: ** ** If a non-enhancement #1 line ends in backslash, the backslash and the ** newline (\n) are not included in the argument to cgi_printf(). This ** is used to split one long output line across multiple source lines. */ #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> /* |
︙ | ︙ | |||
139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | }else{ /* Otherwise (if the last non-whitespace was not '=') then generate ** a cgi_printf() statement whose format is the text following the '@'. ** Substrings of the form "%C(...)" (where C is any sequence of ** characters other than \000 and '(') will put "%C" in the ** format and add the "(...)" as an argument to the cgi_printf call. */ int indent; int nC; char c; i++; if( isspace(zLine[i]) ){ i++; } indent = i; for(j=0; zLine[i] && zLine[i]!='\r' && zLine[i]!='\n'; i++){ if( zLine[i]=='"' || zLine[i]=='\\' ){ zOut[j++] = '\\'; } zOut[j++] = zLine[i]; if( zLine[i]!='%' || zLine[i+1]=='%' || zLine[i+1]==0 ) continue; for(nC=1; zLine[i+nC] && zLine[i+nC]!='('; nC++){} if( zLine[i+nC]!='(' || !isalpha(zLine[i+nC-1]) ) continue; while( --nC ) zOut[j++] = zLine[++i]; zArg[nArg++] = ','; | > > > > > > | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | }else{ /* Otherwise (if the last non-whitespace was not '=') then generate ** a cgi_printf() statement whose format is the text following the '@'. ** Substrings of the form "%C(...)" (where C is any sequence of ** characters other than \000 and '(') will put "%C" in the ** format and add the "(...)" as an argument to the cgi_printf call. */ const char *zNewline = "\\n"; int indent; int nC; char c; i++; if( isspace(zLine[i]) ){ i++; } indent = i; for(j=0; zLine[i] && zLine[i]!='\r' && zLine[i]!='\n'; i++){ if( zLine[i]=='\\' && (!zLine[i+1] || zLine[i+1]=='\r' || zLine[i+1]=='\n') ){ zNewline = ""; break; } if( zLine[i]=='"' || zLine[i]=='\\' ){ zOut[j++] = '\\'; } zOut[j++] = zLine[i]; if( zLine[i]!='%' || zLine[i+1]=='%' || zLine[i+1]==0 ) continue; for(nC=1; zLine[i+nC] && zLine[i+nC]!='('; nC++){} if( zLine[i+nC]!='(' || !isalpha(zLine[i+nC-1]) ) continue; while( --nC ) zOut[j++] = zLine[++i]; zArg[nArg++] = ','; |
︙ | ︙ | |||
167 168 169 170 171 172 173 | k++; } i++; } } zOut[j] = 0; if( !inPrint ){ | | | | 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 | k++; } i++; } } zOut[j] = 0; if( !inPrint ){ fprintf(out,"%*scgi_printf(\"%s%s\"",indent-2,"", zOut, zNewline); inPrint = 1; }else{ fprintf(out,"\n%*s\"%s%s\"",indent+5, "", zOut, zNewline); } } } } int main(int argc, char **argv){ if( argc==2 ){ |
︙ | ︙ |
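Enhancement #3 in translate.c means a long @-line can now be wrapped in the C source: a trailing backslash drops both the backslash and the "\n" that would otherwise terminate that fragment of the generated format string. A hypothetical input/output pair (the exact indentation of the generated call depends on parts of translate.c not shown here):

      @ <input type='text' name='%s(zQPN)' \
      @ value='%h(PD(zQPN,""))'>

    comes out approximately as

      cgi_printf("<input type='text' name='%s' "
                 "value='%h'>\n", zQPN, PD(zQPN,""));

The style.c hunks earlier in this check-in already rely on this, splitting their long form-control markup across several @-lines.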
Changes to src/unicode.c.
︙ | ︙ | |||
142 143 144 145 146 147 148 | }; if( (unsigned int)c<128 ){ return ( (aAscii[c >> 5] & (1 << (c & 0x001F)))==0 ); }else if( (unsigned int)c<(1<<22) ){ unsigned int key = (((unsigned int)c)<<10) | 0x000003FF; int iRes = 0; | | | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | }; if( (unsigned int)c<128 ){ return ( (aAscii[c >> 5] & (1 << (c & 0x001F)))==0 ); }else if( (unsigned int)c<(1<<22) ){ unsigned int key = (((unsigned int)c)<<10) | 0x000003FF; int iRes = 0; int iHi = count(aEntry) - 1; int iLo = 0; while( iHi>=iLo ){ int iTest = (iHi + iLo) / 2; if( key >= aEntry[iTest] ){ iRes = iTest; iLo = iTest+1; }else{ |
︙ | ︙ | |||
199 200 201 202 203 204 205 | 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', 'e', 'i', 'o', 'u', 'y', }; unsigned int key = (((unsigned int)c)<<3) | 0x00000007; int iRes = 0; | | | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', 'e', 'i', 'o', 'u', 'y', }; unsigned int key = (((unsigned int)c)<<3) | 0x00000007; int iRes = 0; int iHi = count(aDia) - 1; int iLo = 0; while( iHi>=iLo ){ int iTest = (iHi + iLo) / 2; if( key >= aDia[iTest] ){ iRes = iTest; iLo = iTest+1; }else{ |
︙ | ︙ | |||
346 347 348 349 350 351 352 | assert( sizeof(unsigned short)==2 && sizeof(unsigned char)==1 ); if( c<128 ){ if( c>='A' && c<='Z' ) ret = c + ('a' - 'A'); }else if( c<65536 ){ const struct TableEntry *p; | | | 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 | assert( sizeof(unsigned short)==2 && sizeof(unsigned char)==1 ); if( c<128 ){ if( c>='A' && c<='Z' ) ret = c + ('a' - 'A'); }else if( c<65536 ){ const struct TableEntry *p; int iHi = count(aEntry) - 1; int iLo = 0; int iRes = -1; assert( c>aEntry[0].iCode ); while( iHi>=iLo ){ int iTest = (iHi + iLo) / 2; int cmp = (c - aEntry[iTest].iCode); |
︙ | ︙ |
Changes to src/unversioned.c.
︙ | ︙ | |||
450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 | /* ** WEBPAGE: uvlist ** ** Display a list of all unversioned files in the repository. ** Query parameters: ** ** byage=1 Order the initial display be decreasing age */ void uvstat_page(void){ Stmt q; sqlite3_int64 iNow; sqlite3_int64 iTotalSz = 0; int cnt = 0; int n = 0; const char *zOrderBy = "name"; char zSzName[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Unversioned Files"); if( !db_table_exists("repository","unversioned") ){ @ No unversioned files on this server style_footer(); return; } if( PB("byage") ) zOrderBy = "mtime DESC"; db_prepare(&q, "SELECT" " name," " mtime," " hash," " sz," " (SELECT login FROM rcvfrom, user" " WHERE user.uid=rcvfrom.uid AND rcvfrom.rcvid=unversioned.rcvid)," " rcvid" | > > > | > | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 | /* ** WEBPAGE: uvlist ** ** Display a list of all unversioned files in the repository. ** Query parameters: ** ** byage=1 Order the initial display be decreasing age ** showdel=0 Show deleted files */ void uvstat_page(void){ Stmt q; sqlite3_int64 iNow; sqlite3_int64 iTotalSz = 0; int cnt = 0; int n = 0; const char *zOrderBy = "name"; int showDel = 0; char zSzName[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Unversioned Files"); if( !db_table_exists("repository","unversioned") ){ @ No unversioned files on this server style_footer(); return; } if( PB("byage") ) zOrderBy = "mtime DESC"; if( PB("showdel") ) showDel = 1; db_prepare(&q, "SELECT" " name," " mtime," " hash," " sz," " (SELECT login FROM rcvfrom, user" " WHERE user.uid=rcvfrom.uid AND rcvfrom.rcvid=unversioned.rcvid)," " rcvid" " FROM unversioned %s ORDER BY %s", showDel ? "" : "WHERE hash IS NOT NULL" /*safe-for-%s*/, zOrderBy/*safe-for-%s*/ ); iNow = db_int64(0, "SELECT strftime('%%s','now');"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); sqlite3_int64 mtime = db_column_int(&q, 1); const char *zHash = db_column_text(&q, 2); |
︙ | ︙ |
Changes to src/user.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Commands and procedures used for creating, processing, editing, and ** querying information about users. */ #include "config.h" #include "user.h" | < < < < | > | > > > | > > > > > > > > > > > | | | | | | | | > | > > > > > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | ** ** Commands and procedures used for creating, processing, editing, and ** querying information about users. */ #include "config.h" #include "user.h" /* ** Strip leading and trailing space from a string and add the string ** onto the end of a blob. */ static void strip_string(Blob *pBlob, char *z){ int i; blob_reset(pBlob); while( fossil_isspace(*z) ){ z++; } for(i=0; z[i]; i++){ if( z[i]=='\r' || z[i]=='\n' ){ while( i>0 && fossil_isspace(z[i-1]) ){ i--; } z[i] = 0; break; } if( z[i]>0 && z[i]<' ' ) z[i] = ' '; } blob_append(pBlob, z, -1); } #if defined(_WIN32) || defined(__BIONIC__) #ifdef _WIN32 #include <conio.h> #endif /* ** getpass() for Windows and Android. */ static char *zPwdBuffer = 0; static size_t nPwdBuffer = 0; static char *getpass(const char *prompt){ char *zPwd; size_t nPwd; size_t i; if( zPwdBuffer==0 ){ zPwdBuffer = fossil_secure_alloc_page(&nPwdBuffer); assert( zPwdBuffer ); }else{ fossil_secure_zero(zPwdBuffer, nPwdBuffer); } zPwd = zPwdBuffer; nPwd = nPwdBuffer; fputs(prompt,stderr); fflush(stderr); assert( zPwd!=0 ); assert( nPwd>0 ); for(i=0; i<nPwd-1; ++i){ #if defined(_WIN32) zPwd[i] = _getch(); #else zPwd[i] = getc(stdin); #endif if(zPwd[i]=='\r' || zPwd[i]=='\n'){ break; } /* BS or DEL */ else if(i>0 && (zPwd[i]==8 || zPwd[i]==127)){ i -= 2; continue; } /* CTRL-C */ else if(zPwd[i]==3) { i=0; break; } /* ESC */ else if(zPwd[i]==27){ i=0; break; } else{ fputc('*',stderr); } } zPwd[i]='\0'; fputs("\n", stderr); assert( zPwd==zPwdBuffer ); return zPwd; } void freepass(){ if( !zPwdBuffer ) return; assert( nPwdBuffer>0 ); fossil_secure_free_page(zPwdBuffer, nPwdBuffer); } #endif #if defined(_WIN32) || defined(WIN32) # include <io.h> # include <fcntl.h> # undef popen |
︙ | ︙ | |||
551 552 553 554 555 556 557 558 559 560 561 562 563 564 | sqlite3_create_function(g.db, "shared_secret", 2, SQLITE_UTF8, 0, sha1_shared_secret_sql_function, 0, 0); db_multi_exec( "UPDATE user SET pw=shared_secret(pw,login), mtime=now()" " WHERE length(pw)>0 AND length(pw)!=40" ); } /* ** WEBPAGE: access_log ** ** Show login attempts, including timestamp and IP address. ** Requires Admin privileges. ** | > > > > > > > > > > > > > > > | 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 | sqlite3_create_function(g.db, "shared_secret", 2, SQLITE_UTF8, 0, sha1_shared_secret_sql_function, 0, 0); db_multi_exec( "UPDATE user SET pw=shared_secret(pw,login), mtime=now()" " WHERE length(pw)>0 AND length(pw)!=40" ); } /* ** COMMAND: test-prompt-user ** ** Usage: %fossil test-prompt-user PROMPT ** ** Prompts the user for input and then prints it verbatim (i.e. without ** a trailing line terminator). */ void test_prompt_user_cmd(void){ Blob answer; if( g.argc!=3 ) usage("PROMPT"); prompt_user(g.argv[2], &answer); fossil_print("%s", blob_str(&answer)); } /* ** WEBPAGE: access_log ** ** Show login attempts, including timestamp and IP address. ** Requires Admin privileges. ** |
︙ | ︙ | |||
614 615 616 617 618 619 620 | if( y==1 ){ blob_append(&sql, " WHERE success", -1); }else if( y==2 ){ blob_append(&sql, " WHERE NOT success", -1); } blob_append_sql(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); if( skip ){ | | | < | | | < | 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 | if( y==1 ){ blob_append(&sql, " WHERE success", -1); }else if( y==2 ){ blob_append(&sql, " WHERE NOT success", -1); } blob_append_sql(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); if( skip ){ style_submenu_element("Newer", "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip>=n ? skip-n : 0, n, y); } rc = db_prepare_ignore_error(&q, "%s", blob_sql_text(&sql)); fLogEnabled = db_get_boolean("access-log", 0); @ <div align="center">Access logging is %s(fLogEnabled?"on":"off"). @ (Change this on the <a href="setup_settings">settings</a> page.)</div> @ <table border="1" cellpadding="5" id="logtable" align="center"> @ <thead><tr><th width="33%%">Date</th><th width="34%%">User</th> @ <th width="33%%">IP Address</th></tr></thead><tbody> while( rc==SQLITE_OK && db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zIP = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); int bSuccess = db_column_int(&q, 3); cnt++; if( cnt>n ){ style_submenu_element("Older", "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip+n, n, y); break; } if( bSuccess ){ @ <tr> }else{ @ <tr bgcolor="#ffacc0"> } @ <td>%s(zDate)</td><td>%h(zName)</td><td>%h(zIP)</td></tr> } if( skip>0 || cnt>n ){ style_submenu_element("All", "%s/access_log?n=10000000", g.zTop); } @ </tbody></table> db_finalize(&q); @ <hr /> @ <form method="post" action="%s(g.zTop)/access_log"> @ <label><input type="checkbox" name="delold"> @ Delete all but the most recent 200 entries</input></label> |
︙ | ︙ |
Changes to src/utf8.c.
︙ | ︙ | |||
315 316 317 318 319 320 321 322 323 324 325 326 327 328 | #ifdef _WIN32 int nChar, written = 0; wchar_t *zUnicode; /* Unicode version of zUtf8 */ DWORD dummy; Blob blob; static int istty[2] = { -1, -1 }; if( istty[toStdErr]==-1 ){ istty[toStdErr] = _isatty(toStdErr + 1) != 0; } if( !istty[toStdErr] ){ /* stdout/stderr is not a console. */ return -1; } | > | 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 | #ifdef _WIN32 int nChar, written = 0; wchar_t *zUnicode; /* Unicode version of zUtf8 */ DWORD dummy; Blob blob; static int istty[2] = { -1, -1 }; assert( toStdErr==0 || toStdErr==1 ); if( istty[toStdErr]==-1 ){ istty[toStdErr] = _isatty(toStdErr + 1) != 0; } if( !istty[toStdErr] ){ /* stdout/stderr is not a console. */ return -1; } |
︙ | ︙ |
Changes to src/util.c.
︙ | ︙ | |||
53 54 55 56 57 58 59 60 61 62 63 64 65 66 | void fossil_free(void *p){ free(p); } void *fossil_realloc(void *p, size_t n){ p = realloc(p, n); if( p==0 ) fossil_panic("out of memory"); return p; } /* ** This function implements a cross-platform "system()" interface. */ int fossil_system(const char *zOrigCmd){ int rc; | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 | void fossil_free(void *p){ free(p); } void *fossil_realloc(void *p, size_t n){ p = realloc(p, n); if( p==0 ) fossil_panic("out of memory"); return p; } void fossil_secure_zero(void *p, size_t n){ volatile unsigned char *vp = (volatile unsigned char *)p; size_t i; if( p==0 ) return; assert( n>0 ); if( n==0 ) return; for(i=0; i<n; i++){ vp[i] ^= 0xFF; } for(i=0; i<n; i++){ vp[i] ^= vp[i]; } } void fossil_get_page_size(size_t *piPageSize){ #if defined(_WIN32) SYSTEM_INFO sysInfo; memset(&sysInfo, 0, sizeof(SYSTEM_INFO)); GetSystemInfo(&sysInfo); *piPageSize = (size_t)sysInfo.dwPageSize; #else *piPageSize = 4096; /* FIXME: What for POSIX? */ #endif } void *fossil_secure_alloc_page(size_t *pN){ void *p; size_t pageSize; fossil_get_page_size(&pageSize); assert( pageSize>0 ); assert( pageSize%2==0 ); #if defined(_WIN32) p = VirtualAlloc(NULL, pageSize, MEM_COMMIT|MEM_RESERVE, PAGE_READWRITE); if( p==NULL ){ fossil_fatal("VirtualAlloc failed: %lu\n", GetLastError()); } if( !VirtualLock(p, pageSize) ){ fossil_fatal("VirtualLock failed: %lu\n", GetLastError()); } #else p = fossil_malloc(pageSize); #endif fossil_secure_zero(p, pageSize); if( pN ) *pN = pageSize; return p; } void fossil_secure_free_page(void *p, size_t n){ if( !p ) return; assert( n>0 ); fossil_secure_zero(p, n); #if defined(_WIN32) if( !VirtualUnlock(p, n) ){ fossil_fatal("VirtualUnlock failed: %lu\n", GetLastError()); } if( !VirtualFree(p, 0, MEM_RELEASE) ){ fossil_fatal("VirtualFree failed: %lu\n", GetLastError()); } #else fossil_free(p); #endif } /* ** This function implements a cross-platform "system()" interface. */ int fossil_system(const char *zOrigCmd){ int rc; |
︙ | ︙ |
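The new fossil_secure_zero(), fossil_secure_alloc_page(), and fossil_secure_free_page() helpers above keep sensitive data in a page that is wiped before release and, on Windows, locked so it is not paged to disk. Below is a small self-contained sketch of the same pattern under the same assumptions the hunk makes (VirtualAlloc/VirtualLock on Windows, plain malloc() with a 4096-byte page guess elsewhere); it illustrates the technique and is not a copy of Fossil's code.

<verbatim>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Wipe a buffer through a volatile pointer so the stores cannot be
** optimized away by the compiler. */
static void secure_zero(void *p, size_t n){
  volatile unsigned char *vp = (volatile unsigned char*)p;
  size_t i;
  for(i=0; i<n; i++) vp[i] = 0;
}

int main(void){
  size_t pageSize;
  char *zKey;
#ifdef _WIN32
  SYSTEM_INFO si;
  GetSystemInfo(&si);
  pageSize = (size_t)si.dwPageSize;
  zKey = VirtualAlloc(NULL, pageSize, MEM_COMMIT|MEM_RESERVE, PAGE_READWRITE);
  if( zKey==NULL || !VirtualLock(zKey, pageSize) ) return 1;
#else
  pageSize = 4096;                /* same POSIX fallback as the hunk above */
  zKey = malloc(pageSize);
  if( zKey==NULL ) return 1;
#endif
  strcpy(zKey, "correct horse battery staple");  /* pretend secret */
  printf("secret is %u bytes\n", (unsigned)strlen(zKey));
  secure_zero(zKey, pageSize);    /* always wipe before releasing */
#ifdef _WIN32
  VirtualUnlock(zKey, pageSize);
  VirtualFree(zKey, 0, MEM_RELEASE);
#else
  free(zKey);
#endif
  return 0;
}
</verbatim>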
Changes to src/vfile.c.
︙ | ︙ | |||
348 349 350 351 352 353 354 | blob_reset(&content); continue; } } if( verbose ) fossil_print("%s\n", &zName[nRepos]); if( file_wd_isdir(zName)==1 ){ /*TODO(dchest): remove directories? */ | | | 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 | blob_reset(&content); continue; } } if( verbose ) fossil_print("%s\n", &zName[nRepos]); if( file_wd_isdir(zName)==1 ){ /*TODO(dchest): remove directories? */ fossil_fatal("%s is directory, cannot overwrite", zName); } if( file_wd_size(zName)>=0 && (isLink || file_wd_islink(0)) ){ file_delete(zName); } if( isLink ){ symlink_create(blob_str(&content), zName); }else{ |
︙ | ︙ | |||
433 434 435 436 437 438 439 | if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ return 1; } if( zName[0]!='-' ) continue; | | | 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 | if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ return 1; } if( zName[0]!='-' ) continue; for(i=0; i<count(azTemp); i++){ n = (int)strlen(azTemp[i]); if( memcmp(azTemp[i], zName+1, n) ) continue; if( zName[n+1]==0 ) return 1; if( zName[n+1]=='-' ){ for(j=n+2; zName[j] && fossil_isdigit(zName[j]); j++){} if( zName[j]==0 ) return 1; } |
︙ | ︙ | |||
916 917 918 919 920 921 922 | blob_zero(&err); if( pManOut ){ blob_zero(pManOut); } db_must_be_within_tree(); pManifest = manifest_get(vid, CFTYPE_MANIFEST, &err); if( pManifest==0 ){ | | | 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 | blob_zero(&err); if( pManOut ){ blob_zero(pManOut); } db_must_be_within_tree(); pManifest = manifest_get(vid, CFTYPE_MANIFEST, &err); if( pManifest==0 ){ fossil_fatal("manifest file (%d) is malformed:\n%s", vid, blob_str(&err)); } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ if( pFile->zUuid==0 ) continue; fid = uuid_to_rid(pFile->zUuid, 0); md5sum_step_text(pFile->zName, -1); |
︙ | ︙ |
Changes to src/wiki.c.
︙ | ︙ | |||
136 137 138 139 140 141 142 | /* ** Only allow certain mimetypes through. ** All others become "text/x-fossil-wiki" */ const char *wiki_filter_mimetypes(const char *zMimetype){ if( zMimetype!=0 ){ int i; | | | 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | /* ** Only allow certain mimetypes through. ** All others become "text/x-fossil-wiki" */ const char *wiki_filter_mimetypes(const char *zMimetype){ if( zMimetype!=0 ){ int i; for(i=0; i<count(azStyles); i+=3){ if( fossil_strcmp(zMimetype,azStyles[i+2])==0 ){ return azStyles[i]; } } if( fossil_strcmp(zMimetype, "text/x-markdown")==0 || fossil_strcmp(zMimetype, "text/plain")==0 ){ return zMimetype; |
︙ | ︙ | |||
181 182 183 184 185 186 187 | ** Show a summary of the Markdown wiki formatting rules. */ void markdown_rules_page(void){ Blob x; int fTxt = P("txt")!=0; style_header("Markdown Formatting Rules"); if( fTxt ){ | | | | 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | ** Show a summary of the Markdown wiki formatting rules. */ void markdown_rules_page(void){ Blob x; int fTxt = P("txt")!=0; style_header("Markdown Formatting Rules"); if( fTxt ){ style_submenu_element("Formatted", "%R/md_rules"); }else{ style_submenu_element("Plain-Text", "%R/md_rules?txt=1"); } blob_init(&x, builtin_text("markdown.md"), -1); wiki_render_by_mimetype(&x, fTxt ? "text/plain" : "text/x-markdown"); blob_reset(&x); style_footer(); } |
︙ | ︙ | |||
229 230 231 232 233 234 235 | #define W_ALL_BUT(x) (W_ALL&~(x)) /* ** Add some standard submenu elements for wiki screens. */ static void wiki_standard_submenu(unsigned int ok){ if( (ok & W_SRCH)!=0 && search_restrict(SRCH_WIKI)!=0 ){ | | | | | | | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 | #define W_ALL_BUT(x) (W_ALL&~(x)) /* ** Add some standard submenu elements for wiki screens. */ static void wiki_standard_submenu(unsigned int ok){ if( (ok & W_SRCH)!=0 && search_restrict(SRCH_WIKI)!=0 ){ style_submenu_element("Search", "%R/wikisrch"); } if( (ok & W_LIST)!=0 ){ style_submenu_element("List", "%R/wcontent"); } if( (ok & W_HELP)!=0 ){ style_submenu_element("Help", "%R/wikihelp"); } if( (ok & W_NEW)!=0 && g.anon.NewWiki ){ style_submenu_element("New", "%R/wikinew"); } #if 0 if( (ok & W_BLOG)!=0 #endif if( (ok & W_SANDBOX)!=0 ){ style_submenu_element("Sandbox", "%R/wiki?name=Sandbox"); } } /* ** WEBPAGE: wikihelp ** A generic landing page for wiki. */ |
︙ | ︙ | |||
364 365 366 367 368 369 370 | zBody = pWiki->zWiki; zMimetype = pWiki->zMimetype; } } zMimetype = wiki_filter_mimetypes(zMimetype); if( !g.isHome ){ if( rid ){ | | < | < | < | < < | | < | | 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 | zBody = pWiki->zWiki; zMimetype = pWiki->zMimetype; } } zMimetype = wiki_filter_mimetypes(zMimetype); if( !g.isHome ){ if( rid ){ style_submenu_element("Diff", "%R/wdiff?name=%T&a=%d", zPageName, rid); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); style_submenu_element("Details", "%R/info/%s", zUuid); } if( (rid && g.anon.WrWiki) || (!rid && g.anon.NewWiki) ){ if( db_get_boolean("wysiwyg-wiki", 0) ){ style_submenu_element("Edit", "%s/wikiedit?name=%T&wysiwyg=1", g.zTop, zPageName); }else{ style_submenu_element("Edit", "%s/wikiedit?name=%T", g.zTop, zPageName); } } if( rid && g.anon.ApndWiki && g.anon.Attach ){ style_submenu_element("Attach", "%s/attachadd?page=%T&from=%s/wiki%%3fname=%T", g.zTop, zPageName, g.zTop, zPageName); } if( rid && g.anon.ApndWiki ){ style_submenu_element("Append", "%s/wikiappend?name=%T&mimetype=%s", g.zTop, zPageName, zMimetype); } if( g.perm.Hyperlink ){ style_submenu_element("History", "%s/whistory?name=%T", g.zTop, zPageName); } } style_set_current_page("%T?name=%T", g.zPath, zPageName); style_header("%s", zPageName); wiki_standard_submenu(submenuFlags); blob_init(&wiki, zBody, -1); |
︙ | ︙ | |||
432 433 434 435 436 437 438 | /* ** Output a selection box from which the user can select the ** wiki mimetype. */ void mimetype_option_menu(const char *zMimetype){ unsigned i; @ <select name="mimetype" size="1"> | | | 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 | /* ** Output a selection box from which the user can select the ** wiki mimetype. */ void mimetype_option_menu(const char *zMimetype){ unsigned i; @ <select name="mimetype" size="1"> for(i=0; i<count(azStyles); i+=3){ if( fossil_strcmp(zMimetype,azStyles[i])==0 ){ @ <option value="%s(azStyles[i])" selected>%s(azStyles[i+1])</option> }else{ @ <option value="%s(azStyles[i])">%s(azStyles[i+1])</option> } } @ </select> |
︙ | ︙ | |||
949 950 951 952 953 954 955 | Stmt q; int showAll = P("all")!=0; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } style_header("Available Wiki Pages"); if( showAll ){ | | | | 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 | Stmt q; int showAll = P("all")!=0; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } style_header("Available Wiki Pages"); if( showAll ){ style_submenu_element("Active", "%s/wcontent", g.zTop); }else{ style_submenu_element("All", "%s/wcontent?all=1", g.zTop); } wiki_standard_submenu(W_ALL_BUT(W_LIST)); @ <ul> wiki_prepare_page_list(&q); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); int size = db_column_int(&q, 1); |
︙ | ︙ |
Changes to src/wikiformat.c.
︙ | ︙ | |||
143 144 145 146 147 148 149 | /* ** Use binary search to locate a tag in the aAttribute[] table. */ static int findAttr(const char *z){ int i, c, first, last; first = 1; | | | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 | /* ** Use binary search to locate a tag in the aAttribute[] table. */ static int findAttr(const char *z){ int i, c, first, last; first = 1; last = count(aAttribute) - 1; while( first<=last ){ i = (first+last)/2; c = fossil_strcmp(aAttribute[i].zName, z); if( c==0 ){ return i; }else if( c<0 ){ first = i+1; |
︙ | ︙ | |||
370 371 372 373 374 375 376 | { "var", MARKUP_VAR, MUTYPE_FONT, AMSK_STYLE }, { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, AMSK_ID|AMSK_TYPE }, }; void show_allowed_wiki_markup( void ){ int i; /* loop over allowedAttr */ | | | | 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 | { "var", MARKUP_VAR, MUTYPE_FONT, AMSK_STYLE }, { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, AMSK_ID|AMSK_TYPE }, }; void show_allowed_wiki_markup( void ){ int i; /* loop over allowedAttr */ for( i=1 ; i<=count(aMarkup) - 1 ; i++ ){ @ <%s(aMarkup[i].zName)> } } /* ** Use binary search to locate a tag in the aMarkup[] table. */ static int findTag(const char *z){ int i, c, first, last; first = 1; last = count(aMarkup) - 1; while( first<=last ){ i = (first+last)/2; c = fossil_strcmp(aMarkup[i].zName, z); if( c==0 ){ assert( aMarkup[i].iCode==i ); return i; }else if( c<0 ){ |
︙ | ︙ | |||
945 946 947 948 949 950 951 | while( z[i] && z[i]!='<' ){ i++; } if( fossil_strnicmp(&z[i], "</a>",4)!=0 ) return 0; for(j=*pN; fossil_isspace(z[j]); j++){} zTag = mprintf("%.*s", i-j, &z[j]); j = (int)strlen(zTag); while( j>0 && fossil_isspace(zTag[j-1]) ){ j--; } if( j==0 ) return 0; | | | 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 | while( z[i] && z[i]!='<' ){ i++; } if( fossil_strnicmp(&z[i], "</a>",4)!=0 ) return 0; for(j=*pN; fossil_isspace(z[j]); j++){} zTag = mprintf("%.*s", i-j, &z[j]); j = (int)strlen(zTag); while( j>0 && fossil_isspace(zTag[j-1]) ){ j--; } if( j==0 ) return 0; style_submenu_element(zTag, "%s", zHref); *pN = i+4; return 1; } /* ** Pop a single element off of the stack. As the element is popped, ** output its end tag if it is not a </div> tag. |
︙ | ︙ | |||
2185 2186 2187 2188 2189 2190 2191 | static const struct { int n; char c; char *z; } aEntity[] = { { 5, '&', "&" }, { 4, '<', "<" }, { 4, '>', ">" }, { 6, ' ', " " }, }; int jj; | | | 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 | static const struct { int n; char c; char *z; } aEntity[] = { { 5, '&', "&" }, { 4, '<', "<" }, { 4, '>', ">" }, { 6, ' ', " " }, }; int jj; for(jj=0; jj<count(aEntity); jj++){ if( aEntity[jj].n==n && strncmp(aEntity[jj].z,zIn,n)==0 ){ c = aEntity[jj].c; break; } } } if( fossil_isspace(c) ){ |
︙ | ︙ |
Changes to src/winhttp.c.
︙ | ︙ | |||
304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 | WSADATA wd; SOCKET s = INVALID_SOCKET; SOCKADDR_IN addr; int idCnt = 0; int iPort = mnPort; Blob options; wchar_t zTmpPath[MAX_PATH]; blob_zero(&options); if( zBaseUrl ){ blob_appendf(&options, " --baseurl %s", zBaseUrl); } if( zNotFound ){ blob_appendf(&options, " --notfound %s", zNotFound); } if( zFileGlob ){ blob_appendf(&options, " --files-urlenc %T", zFileGlob); } if( g.useLocalauth ){ blob_appendf(&options, " --localauth"); } if( g.thTrace ){ blob_appendf(&options, " --th-trace"); } if( flags & HTTP_SERVER_REPOLIST ){ blob_appendf(&options, " --repolist"); } if( WSAStartup(MAKEWORD(1,1), &wd) ){ fossil_fatal("unable to initialize winsock"); } while( iPort<=mxPort ){ s = socket(AF_INET, SOCK_STREAM, 0); if( s==INVALID_SOCKET ){ fossil_fatal("unable to create a socket"); | > > > > > > > > > > > > | 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 | WSADATA wd; SOCKET s = INVALID_SOCKET; SOCKADDR_IN addr; int idCnt = 0; int iPort = mnPort; Blob options; wchar_t zTmpPath[MAX_PATH]; #if USE_SEE const char *zSavedKey = 0; size_t savedKeySize = 0; #endif blob_zero(&options); if( zBaseUrl ){ blob_appendf(&options, " --baseurl %s", zBaseUrl); } if( zNotFound ){ blob_appendf(&options, " --notfound %s", zNotFound); } if( zFileGlob ){ blob_appendf(&options, " --files-urlenc %T", zFileGlob); } if( g.useLocalauth ){ blob_appendf(&options, " --localauth"); } if( g.thTrace ){ blob_appendf(&options, " --th-trace"); } if( flags & HTTP_SERVER_REPOLIST ){ blob_appendf(&options, " --repolist"); } #if USE_SEE zSavedKey = db_get_saved_encryption_key(); savedKeySize = db_get_saved_encryption_key_size(); if( zSavedKey!=0 && savedKeySize>0 ){ blob_appendf(&options, " --usepidkey %lu:%p:%u", GetCurrentProcessId(), zSavedKey, savedKeySize); } #endif if( WSAStartup(MAKEWORD(1,1), &wd) ){ fossil_fatal("unable to initialize winsock"); } while( iPort<=mxPort ){ s = socket(AF_INET, SOCK_STREAM, 0); if( s==INVALID_SOCKET ){ fossil_fatal("unable to create a socket"); |
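With SEE builds, the hunk above hands the saved database encryption key to the spawned server process as a "--usepidkey PID:ADDRESS:SIZE" triplet rather than placing the key text itself on the command line. How the receiving process consumes such a triplet is not shown in this diff; the fragment below is only a guess at what that side could look like on Windows (parse the triplet, then copy the bytes out of the parent's address space). The function and variable names here are invented for illustration.

<verbatim>
#ifdef _WIN32
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical consumer of a "PID:ADDRESS:SIZE" argument: returns a heap
** copy of the key read from the parent process, or NULL on failure.
** %p round-trips a pointer that was formatted with %p by the same CRT. */
static unsigned char *read_parent_key(const char *zArg, unsigned *pnKey){
  unsigned long pid = 0;
  void *addr = 0;
  unsigned size = 0;
  unsigned char *key = NULL;
  SIZE_T got = 0;
  HANDLE h;
  if( sscanf(zArg, "%lu:%p:%u", &pid, &addr, &size)!=3 || size==0 ) return NULL;
  h = OpenProcess(PROCESS_VM_READ, FALSE, (DWORD)pid);
  if( h==NULL ) return NULL;
  key = malloc(size);
  if( key && (!ReadProcessMemory(h, addr, key, size, &got) || got!=size) ){
    free(key);
    key = NULL;
  }
  CloseHandle(h);
  if( key ) *pnKey = size;
  return key;
}
#endif
</verbatim>

Keeping the key bytes out of the argument list means they never show up in process listings; only a process id, an address, and a length travel as text.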
︙ | ︙ | |||
652 653 654 655 656 657 658 | }else{ fossil_fatal("error from StartServiceCtrlDispatcher()"); } } return 0; } | | > > | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 | }else{ fossil_fatal("error from StartServiceCtrlDispatcher()"); } } return 0; } /* Duplicate #ifdef needed for mkindex */ #ifdef _WIN32 /* ** COMMAND: winsrv* ** ** Usage: %fossil winsrv METHOD ?SERVICE-NAME? ?OPTIONS? ** ** Where METHOD is one of: create delete show start stop. ** ** The winsrv command manages Fossil as a Windows service. This allows |
︙ | ︙ | |||
1101 1102 1103 1104 1105 1106 1107 | }else { fossil_fatal("METHOD should be one of:" " create delete show start stop"); } return; } | > | | 1115 1116 1117 1118 1119 1120 1121 1122 1123 | }else { fossil_fatal("METHOD should be one of:" " create delete show start stop"); } return; } #endif /* _WIN32 -- dupe needed for mkindex */ #endif /* _WIN32 -- This code is for win32 only */ |
Changes to src/xfer.c.
︙ | ︙ | |||
2138 2139 2140 2141 2142 2143 2144 | if( iStatus==4 ) iStatus = 2; if( iStatus==5 ) iStatus = 1; } if( syncFlags & (SYNC_UV_TRACE|SYNC_UV_DRYRUN) ){ const char *zMsg = 0; switch( iStatus ){ case 0: | | | 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 | if( iStatus==4 ) iStatus = 2; if( iStatus==5 ) iStatus = 1; } if( syncFlags & (SYNC_UV_TRACE|SYNC_UV_DRYRUN) ){ const char *zMsg = 0; switch( iStatus ){ case 0: case 1: zMsg = "UV-PULL"; break; case 2: zMsg = "UV-PULL-MTIME-ONLY"; break; case 4: zMsg = "UV-PUSH-MTIME-ONLY"; break; case 5: zMsg = "UV-PUSH"; break; } if( zMsg ) fossil_print("\r%s: %s\n", zMsg, zName); if( syncFlags & SYNC_UV_DRYRUN ){ iStatus = 99; /* Prevent any changes or reply messages */ |
︙ | ︙ |
Added test/diff.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests for the diff command. # require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] ################################### # Tests of binary file detection. # ################################### file mkdir .fossil-settings write_file [file join .fossil-settings binary-glob] "*" write_file file0.dat ""; # no content. write_file file1.dat "test file 1 (one line no term)." write_file file2.dat "test file 2 (NUL character).\0" write_file file3.dat "test file 3 (long line).[string repeat x 16384]" write_file file4.dat "test file 4 (long line).[string repeat y 16384]\ntwo" write_file file5.dat "[string repeat z 16384]\ntest file 5 (long line)." fossil add $rootDir fossil commit -m "c1" ############################################################################### fossil ls test diff-ls-1 {[normalize_result] eq \ "file0.dat\nfile1.dat\nfile2.dat\nfile3.dat\nfile4.dat\nfile5.dat"} ############################################################################### write_file file0.dat "\0" fossil diff file0.dat test diff-file0-1 {[normalize_result] eq {Index: file0.dat ================================================================== --- file0.dat +++ file0.dat cannot compute difference between binary files}} ############################################################################### write_file file1.dat [string repeat z 16384] fossil diff file1.dat test diff-file1-1 {[normalize_result] eq {Index: file1.dat ================================================================== --- file1.dat +++ file1.dat cannot compute difference between binary files}} ############################################################################### write_file file2.dat "test file 2 (no NUL character)." fossil diff file2.dat test diff-file2-1 {[normalize_result] eq {Index: file2.dat ================================================================== --- file2.dat +++ file2.dat cannot compute difference between binary files}} ############################################################################### write_file file3.dat "test file 3 (not a long line)." 
fossil diff file3.dat test diff-file3-1 {[normalize_result] eq {Index: file3.dat ================================================================== --- file3.dat +++ file3.dat cannot compute difference between binary files}} ############################################################################### write_file file4.dat "test file 4 (not a long line).\ntwo" fossil diff file4.dat test diff-file4-1 {[normalize_result] eq {Index: file4.dat ================================================================== --- file4.dat +++ file4.dat cannot compute difference between binary files}} ############################################################################### write_file file5.dat "[string repeat 0 16]\ntest file 5 (not a long line)." fossil diff file5.dat test diff-file5-1 {[normalize_result] eq {Index: file5.dat ================================================================== --- file5.dat +++ file5.dat cannot compute difference between binary files}} ############################################################################### test_cleanup |
Changes to test/graph-test-1.wiki.
︙ | ︙ | |||
64 65 66 67 68 69 70 71 72 73 74 75 76 77 | Merge on the same branch does not result in a leaf. </a> * <a href="../../../timeline?c=20015206bc" target="testwindow"> This timeline has a hidden commit.</a> Click Unhide to reveal. * <a href="../../../timeline?y=ci&n=15&b=2a4e4cf03e" target="testwindow">Isolated check-ins.</a> External: * <a href="http://www.sqlite.org/src/timeline?c=2010-09-29&nd" target="testwindow">Timewarp due to a mis-configured system clock.</a> * <a href="http://core.tcl.tk/tk/finfo?name=tests/id.test" target="testwindow">Show all three separate deletions of "id.test". | > > > | 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 | Merge on the same branch does not result in a leaf. </a> * <a href="../../../timeline?c=20015206bc" target="testwindow"> This timeline has a hidden commit.</a> Click Unhide to reveal. * <a href="../../../timeline?y=ci&n=15&b=2a4e4cf03e" target="testwindow">Isolated check-ins.</a> * <a href="../../../timeline?b=0fa60142&n=50" target="testwindow">Single branch raiser from bottom of page up to checkins 057e4b and d3cc6d</a> External: * <a href="http://www.sqlite.org/src/timeline?c=2010-09-29&nd" target="testwindow">Timewarp due to a mis-configured system clock.</a> * <a href="http://core.tcl.tk/tk/finfo?name=tests/id.test" target="testwindow">Show all three separate deletions of "id.test". |
︙ | ︙ |
Changes to test/stash.test.
︙ | ︙ | |||
183 184 185 186 187 188 189 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 | | < < | 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 RENAMED f3n } -addremove { DELETED f1 } -exists {f0 f2 f3n} -notexists {f1 f3} # Confirm there is no longer a stash saved fossil stash list test stash-2-list {[first_data_line] eq "empty stash"} |
︙ | ︙ | |||
309 310 311 312 313 314 315 | test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} stash-test 3-2-pop {pop} { UPDATE f1 UPDATE f2n } -changes { RENAMED f2n } -addremove { | < < | | 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 | test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} stash-test 3-2-pop {pop} { UPDATE f1 UPDATE f2n } -changes { RENAMED f2n } -addremove { } -exists {f1 f2n} -notexists {f2} ######## # fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? test_setup |
︙ | ︙ |
Changes to test/tester.tcl.
︙ | ︙ | |||
747 748 749 750 751 752 753 | } append out \n$line } return [string range $out 1 end] } # This procedure executes the "fossil server" command. The return value | > | | | > > > | > > > > > > | > | > | | | > > > | | > > > | 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 | } append out \n$line } return [string range $out 1 end] } # This procedure executes the "fossil server" command. The return value # is a list comprised of the new process identifier and the port on which # the server started. The varName argument refers to a variable # where the "stop argument" is to be stored. This value must eventually be # passed to the [test_stop_server] procedure. proc test_start_server { repository {varName ""} } { global fossilexe tempPath set command [list exec $fossilexe server --localhost] if {[string length $varName] > 0} { upvar 1 $varName stopArg } if {$::tcl_platform(platform) eq "windows"} { set stopArg [file join [getTemporaryPath] [appendArgs \ [string trim [clock seconds] -] _ [getSeqNo] .stopper]] lappend command --stopper $stopArg } set outFileName [file join $tempPath [appendArgs \ fossil_server_ [string trim [clock seconds] -] _ \ [getSeqNo]]].out lappend command $repository >&$outFileName & set pid [eval $command] if {$::tcl_platform(platform) ne "windows"} { set stopArg $pid } after 1000; # output might not be there yet set output [read_file $outFileName] if {![regexp {Listening.*TCP port (\d+)} $output dummy port]} { puts stdout "Could not detect Fossil server port, using default..." set port 8080; # return the default port just in case } return [list $pid $port $outFileName] } # This procedure stops a Fossil server instance that was previously started # by the [test_start_server] procedure. The value of the "stop argument" # will vary by platform as will the exact method used to stop the server. # The fileName argument is the name of a temporary output file to delete. proc test_stop_server { stopArg pid fileName } { if {$::tcl_platform(platform) eq "windows"} { # # NOTE: On Windows, the "stop argument" must be the name of a file # that does NOT already exist. # if {[string length $stopArg] > 0 && \ ![file exists $stopArg] && \ [catch {write_file $stopArg [clock seconds]}] == 0} { while {1} { if {[catch { # # NOTE: Using the TaskList utility requires Windows XP or # later. # exec tasklist.exe /FI "PID eq $pid" } result] != 0 || ![regexp -- " $pid " $result]} { break } after 1000; # wait a bit... } file delete $stopArg if {[string length $fileName] > 0} { file delete $fileName } return true } } else { # # NOTE: On Unix, the "stop argument" must be an integer identifier # that refers to an existing process. # if {[regexp {^(?:-)?\d+$} $stopArg] && \ [catch {exec kill -TERM $stopArg}] == 0} { while {1} { if {[catch { # # TODO: Is this portable to all the supported variants of # Unix? It should be, it's POSIX. # exec ps -p $pid } result] != 0 || ![regexp -- "(?:^$pid| $pid) " $result]} { break } after 1000; # wait a bit... } if {[string length $fileName] > 0} { file delete $fileName } return true } } return false } |
︙ | ︙ |
Changes to test/th1.test.
︙ | ︙ | |||
1036 1037 1038 1039 1040 1041 1042 | set base_commands {anoncap anycap array artifact break breakpoint catch\ checkout combobox continue date decorate dir enable_output encode64\ error expr for getParameter glob_match globalState hascap hasfeature\ html htmlize http httpize if info insertCsrf lindex linecount list\ llength lsearch markdown proc puts query randhex redirect regexp\ reinitialize rename render repository return searchable set\ setParameter setting stime string styleFooter styleHeader tclReady\ | | | 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 | set base_commands {anoncap anycap array artifact break breakpoint catch\ checkout combobox continue date decorate dir enable_output encode64\ error expr for getParameter glob_match globalState hascap hasfeature\ html htmlize http httpize if info insertCsrf lindex linecount list\ llength lsearch markdown proc puts query randhex redirect regexp\ reinitialize rename render repository return searchable set\ setParameter setting stime string styleFooter styleHeader tclReady\ trace unset unversioned uplevel upvar utime verifyCsrf wiki} set tcl_commands {tclEval tclExpr tclInvoke tclIsSafe tclMakeSafe} if {$th1Tcl} { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands $tcl_commands"]} } else { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands"]} } |
︙ | ︙ | |||
1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 | return [string trim $x] set y; # NOTE: Never hit. } fossil test-th-source $th1FileName test th1-source-1 {$RESULT eq {TH_RETURN: 0 1 2 3 4 5 6 7 8 9}} file delete $th1FileName ############################################################################### test_cleanup | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 | return [string trim $x] set y; # NOTE: Never hit. } fossil test-th-source $th1FileName test th1-source-1 {$RESULT eq {TH_RETURN: 0 1 2 3 4 5 6 7 8 9}} file delete $th1FileName ############################################################################### # # TODO: Modify the result of this test if the list of unversioned files # changes. # run_in_checkout { fossil test-th-eval --open-config "unversioned list" } test th1-unversioned-1 {[normalize_result] eq \ {build-icons/linux.gif build-icons/linux64.gif build-icons/mac.gif\ build-icons/openbsd.gif build-icons/src.gif build-icons/win32.gif\ download.html download/fossil-linux-x86-1.32.zip\ download/fossil-linux-x86-1.33.zip download/fossil-linux-x86-1.34.zip\ download/fossil-linux-x86-1.35.zip download/fossil-macosx-x86-1.32.zip\ download/fossil-macosx-x86-1.33.zip download/fossil-macosx-x86-1.34.zip\ download/fossil-macosx-x86-1.35.zip download/fossil-openbsd-x86-1.32.zip\ download/fossil-openbsd-x86-1.33.zip download/fossil-openbsd-x86-1.34.tar.gz\ download/fossil-openbsd-x86-1.35.tar.gz download/fossil-src-1.32.tar.gz\ download/fossil-src-1.33.tar.gz download/fossil-src-1.34.tar.gz\ download/fossil-src-1.35.tar.gz download/fossil-w32-1.32.zip\ download/fossil-w32-1.33.zip download/fossil-w32-1.34.zip\ download/fossil-w32-1.35.zip download/releasenotes-1.32.html\ download/releasenotes-1.33.html download/releasenotes-1.34.html\ download/releasenotes-1.35.html index.wiki}} ############################################################################### run_in_checkout { fossil test-th-eval --open-config \ {string length [unversioned content build-icons/src.gif]} } test th1-unversioned-2 {$RESULT eq {4592}} ############################################################################### test_cleanup |
Changes to test/unversioned.test.
︙ | ︙ | |||
303 304 305 306 307 308 309 | {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30 unversioned2\.txt [0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 28 28\ unversioned5\.txt$} [normalize_result]]} ############################################################################### set password [string trim [clock seconds] -] | < | | > | 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30 unversioned2\.txt [0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 28 28\ unversioned5\.txt$} [normalize_result]]} ############################################################################### set password [string trim [clock seconds] -] fossil user new uvtester "Unversioned Test User" $password fossil user capabilities uvtester oy ############################################################################### foreach {pid port outTmpFile} [test_start_server $repository stopArg] {} puts [appendArgs "Started Fossil server, pid \"" $pid \" ", port \"" $port \".] set remote [appendArgs http://uvtester: $password @localhost: $port /] ############################################################################### set clientDir [file join $tempPath [appendArgs \ uvtest_ [string trim [clock seconds] -] _ [getSeqNo]]] set savedPwd [pwd] |
︙ | ︙ | |||
425 426 427 428 429 430 431 | cd $savedPwd; unset savedPwd file delete -force $clientDir puts [appendArgs "Now in server directory \"" [pwd] \".] ############################################################################### | | | 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 | cd $savedPwd; unset savedPwd file delete -force $clientDir puts [appendArgs "Now in server directory \"" [pwd] \".] ############################################################################### set stopped [test_stop_server $stopArg $pid $outTmpFile] puts [appendArgs \ [expr {$stopped ? "Stopped" : "Could not stop"}] \ " Fossil server, pid \"" $pid "\", using argument \"" \ $stopArg \".] ############################################################################### |
︙ | ︙ |
Changes to www/aboutcgi.wiki.
︙ | ︙ | |||
64 65 66 67 68 69 70 | webpage that shows some of the CGI environment variables that Fossil pays attention to. <p> In addition to setting various CGI environment variables, if the HTTP request contains POST content, then the web server relays the POST content to standard input of the CGI script. <p> | | | 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 | webpage that shows some of the CGI environment variables that Fossil pays attention to. <p> In addition to setting various CGI environment variables, if the HTTP request contains POST content, then the web server relays the POST content to standard input of the CGI script. <p> In summary, the task of the CGI script is to read the various CGI environment variables and the POST content on standard input (if any), figure out an appropriate reply, then write that reply on standard output. The web server will read the output from the CGI script, reformat it into an appropriate HTTP reply, and relay the result back to the requesting application. The CGI script exits as soon as it generates a single reply. |
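The added paragraph above compresses the whole CGI contract into one sentence: read the environment, read any POST body from standard input, write one reply to standard output, then exit. As a hedged, minimal illustration of that contract (ordinary CGI boilerplate, not Fossil code):

<verbatim>
#include <stdio.h>
#include <stdlib.h>

int main(void){
  const char *zPath = getenv("PATH_INFO");       /* e.g. "/timeline"      */
  const char *zLen  = getenv("CONTENT_LENGTH");  /* present for POST      */
  long nPost = zLen ? atol(zLen) : 0;
  char *zPost = NULL;

  if( nPost>0 ){                                 /* slurp the POST body   */
    zPost = malloc(nPost+1);
    if( zPost && fread(zPost, 1, nPost, stdin)==(size_t)nPost ){
      zPost[nPost] = 0;
    }
  }

  /* One reply: header lines, a blank line, then the page body. */
  printf("Content-Type: text/html\r\n\r\n");
  printf("<html><body><p>You asked for: %s</p></body></html>\n",
         zPath ? zPath : "(no PATH_INFO)");

  free(zPost);
  return 0;                                      /* one request, one exit */
}
</verbatim>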
︙ | ︙ | |||
87 88 89 90 91 92 93 | <blockquote> An appropriate CGI script for running Fossil will look something like the following: <blockquote><pre> #!/usr/bin/fossil repository: /home/www/repos/project.fossil </pre></blockquote> | | | | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | <blockquote> An appropriate CGI script for running Fossil will look something like the following: <blockquote><pre> #!/usr/bin/fossil repository: /home/www/repos/project.fossil </pre></blockquote> The first line of the script is a "[https://en.wikipedia.org/wiki/Shebang_%28Unix%29|shebang]" that tells the operating system what program to use as the interpreter for this script. On unix, when you execute a script that starts with a shebang, the operating system runs the program identified by the shebang with a single argument that is the full pathname of the script itself. In our example, the interpreter is Fossil, and the argument might be something like "/var/www/cgi-bin/one/two" (depending on how your particular web server is configured). <p> The Fossil program that is run as the script interpreter is the same Fossil that runs when you type ordinary Fossil commands like "fossil sync" or "fossil commit". But in this case, as soon as it launches, the Fossil program recognizes that the GATEWAY_INTERFACE environment variable is set to "CGI/1.0" and it therefore knows that it is being used as CGI rather than as an ordinary command-line tool, and behaves accordingly. <p> When Fossil recognizes that it is being run as CGI, it opens and reads the file identified by its sole argument (the file named by <code>argv[1]</code>). In our example, the second line of that file tells Fossil the location of the repository it will be serving. Fossil then starts looking at the CGI environment variables to figure out what web page is being requested, generates that one web page, then exits. <p> Usually, the webpage being requested is the first term of the PATH_INFO environment variable. (Exceptions to this rule are noted in the sequel.) For our example, the first term of PATH_INFO is "timeline", which means that Fossil will generate the [/help?cmd=/timeline|/timeline] webpage. <p> With Fossil, terms of PATH_INFO beyond the webpage name are converted into the "name" query parameter. Hence, the following two URLs mean exactly the same thing to Fossil: <ol type='A'> <li> [https://www.fossil-scm.org/fossil/info/c14ecc43] <li> [https://www.fossil-scm.org/fossil/info?name=c14ecc43] </ol> In both cases, the CGI script is called "/fossil". For case (A), the PATH_INFO variable will be "info/c14ecc43" and so the "[/help?cmd=/info|/info]" webpage will be generated and the suffix of PATH_INFO will be converted into the "name" query parameter, which identifies the artifact about which information is requested. In case (B), the PATH_INFO is just "info", but the same "name" query parameter is set explicitly by the URL itself. </blockquote> <h2>Serving Multiple Fossil Repositories From One CGI Script</h2> |
︙ | ︙ | |||
162 163 164 165 166 167 168 | of the webserver document area) is "cgis/example2". Then to see the timeline for the "three.fossil" repository, the URL would be: <blockquote> <b>http://example.com/cgis/example2/subdir/three/timeline</b> </blockquote> Here is what happens: <ol> | | | | 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 | of the webserver document area) is "cgis/example2". Then to see the timeline for the "three.fossil" repository, the URL would be: <blockquote> <b>http://example.com/cgis/example2/subdir/three/timeline</b> </blockquote> Here is what happens: <ol> <li> The input URI on the HTTP request is <b>/cgis/example2/subdir/three/timeline</b> <li> The web server searches prefixes of the input URI until it finds the "cgis/example2" script. The web server then sets PATH_INFO to the "subdir/three/timeline" suffix and invokes the "cgis/example2" script. <li> Fossil runs and sees the "directory:" line pointing to "/home/www/repos". Fossil then starts pulling terms off the front of the PATH_INFO looking for a repository. It first looks at "/home/www/resps/subdir.fossil" but there is no such repository. So then it looks at "/home/www/repos/subdir/three.fossil" and finds a repository. The PATH_INFO is shortened by removing "subdir/three/" leaving it at just "timeline". <li> Fossil looks at the rest of PATH_INFO to see that the webpage requested is "timeline". </ol> </blockquote> <h2>Additional Observations</h2> <blockquote><ol type="I"> |
︙ | ︙ | |||
199 200 201 202 203 204 205 | Fossil does not care where the value of each property comes from (POST content, cookies, or query parameters) only that the property exists and has a value. <li><p> The "[/help?cmd=ui|fossil ui]" and "[/help?cmd=server|fossil server]" commands are implemented using a simple built-in web server that accepts incoming HTTP requests, translates each request into a CGI invocation, then creates a | | | | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 | Fossil does not care where the value of each property comes from (POST content, cookies, or query parameters) only that the property exists and has a value. <li><p> The "[/help?cmd=ui|fossil ui]" and "[/help?cmd=server|fossil server]" commands are implemented using a simple built-in web server that accepts incoming HTTP requests, translates each request into a CGI invocation, then creates a separate child Fossil process to handle each request. In other words, CGI is used internally to implement "fossil ui/server". <p> SCGI is processed using the same built-in web server, just modified to parse SCGI requests instead of HTTP requests. Each SCGI request is converted into CGI, then Fossil creates a separate child Fossil process to handle each CGI request. </ol> </blockquote> |
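To make the "directory:" walkthrough above concrete, here is a sketch of the prefix-pulling idea: append one PATH_INFO term at a time to the repository root and test for a matching "*.fossil" file; whatever is left of PATH_INFO names the page. The helper name, buffer sizes, and use of access() are assumptions made for this illustration, not Fossil's actual implementation, and on most machines the demo simply prints "not found" because the example files do not exist.

<verbatim>
#include <stdio.h>
#include <string.h>
#include <unistd.h>   /* access() */

/* Search for ROOT/term1.fossil, ROOT/term1/term2.fossil, ... and return
** the unconsumed remainder of PATH_INFO, or NULL if nothing matches. */
static const char *find_repo(const char *zRoot, const char *zPathInfo,
                             char *zRepo, size_t nRepo){
  char zTry[1024];
  const char *z = zPathInfo;
  snprintf(zTry, sizeof(zTry), "%s", zRoot);
  while( z && *z ){
    const char *zSlash;
    size_t n;
    if( *z=='/' ) z++;
    zSlash = strchr(z, '/');
    n = strlen(zTry);
    snprintf(zTry+n, sizeof(zTry)-n, "/%.*s",
             zSlash ? (int)(zSlash-z) : (int)strlen(z), z);
    snprintf(zRepo, nRepo, "%s.fossil", zTry);
    if( access(zRepo, R_OK)==0 ){
      return zSlash ? zSlash+1 : "";    /* rest of PATH_INFO is the page */
    }
    z = zSlash;
  }
  return NULL;
}

int main(void){
  char zRepo[1100];
  const char *zPage = find_repo("/home/www/repos", "subdir/three/timeline",
                                zRepo, sizeof(zRepo));
  if( zPage ){
    printf("repository: %s, page: %s\n", zRepo, zPage);
  }else{
    printf("not found\n");
  }
  return 0;
}
</verbatim>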
Changes to www/adding_code.wiki.
︙ | ︙ | |||
20 21 22 23 24 25 26 | for special comments that contain "help" text and which identify routines that implement specific commands or which generate particular web pages. 2. The <b>makeheaders</b> preprocessor generates all the ".h" files automatically. Fossil programmers write ".c" files only and let the makeheaders preprocessor create the ".h" files. | | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | for special comments that contain "help" text and which identify routines that implement specific commands or which generate particular web pages. 2. The <b>makeheaders</b> preprocessor generates all the ".h" files automatically. Fossil programmers write ".c" files only and let the makeheaders preprocessor create the ".h" files. 3. The <b>translate</b> preprocessor converts source code lines that begin with "@" into string literals, or into print statements that generate web page output, depending on context. The [./makefile.wiki|Makefile] for Fossil takes care of running these preprocessors with all the right arguments and in the right order. So it is not necessary to understand the details of how these preprocessors work. (Though, the sources for all three preprocessors are included in the source tree and are well commented, if you want to dig deeper.) It is only necessary to know that these preprocessors exist and hence will effect the way you write code. <h2>3.0 Adding New Source Code Files</h2> New source code files are added in the "src/" subdirectory of the Fossil source tree. Suppose one wants to add a new source code file named "xyzzy.c". The first step is to add this file to the various makefiles. Do so by editing the file src/makemake.tcl and adding "xyzzy" (without the final ".c") to the list of source modules at the top of that script. Save the result and then run the makemake.tcl script using a TCL interpreter. The command to run the makemake.tcl script is: <b>tclsh makemake.tcl</b> The working directory must be src/ when the command above is run. Note that TCL is not normally required to build Fossil, but it is required for this step. If you do not have a TCL interpreter on your system already, they are easy to install. A popular choice is the [http://www.activestate.com/activetcl|Active Tcl] installation from ActiveState. After the makefiles have been updated, create the xyzzy.c source file from the following template: <blockquote><verbatim> /* ** Copyright boilerplate goes here. ***************************************************** ** High-level description of what this module goes ** here. */ #include "config.h" #include "xyzzy.h" #if INTERFACE /* Exported object (structure) definitions or #defines |
︙ | ︙ | |||
81 82 83 84 85 86 87 | normal Fossil source file must have a #include at the top that imports its private header file. (Some source files, such as "sqlite3.c" are exceptions to this rule. Don't worry about those exceptions. The files you write will require this #include line.) The "#if INTERFACE ... #endif" section is optional and is only needed if there are structure definitions or typedefs or macros that need to | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | normal Fossil source file must have a #include at the top that imports its private header file. (Some source files, such as "sqlite3.c" are exceptions to this rule. Don't worry about those exceptions. The files you write will require this #include line.) The "#if INTERFACE ... #endif" section is optional and is only needed if there are structure definitions or typedefs or macros that need to be used by other source code files. The makeheaders preprocessor uses definitions in the INTERFACE section to help it generate header files. See [../src/makeheaders.html | makeheaders.html] for additional information. After creating a template file such as shown above, and after updating the makefiles, you should be able to recompile Fossil and have it include your new source file, even before you source file contains any code. |
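As a small, invented illustration of the convention described above: declarations placed inside "#if INTERFACE ... #endif" are copied by makeheaders into the generated header, so other modules get them simply by including their own generated header. Everything named here (the Xyzzy structure and xyzzy_invoke) is hypothetical, and the file only compiles as part of a Fossil-style build that runs makeheaders first.

<verbatim>
#include "config.h"
#include "xyzzy.h"

#if INTERFACE
/* Exported: makeheaders places this structure (and a matching typedef)
** in the generated xyzzy.h for use by other source files. */
struct Xyzzy {
  int nUses;          /* how many times the module has been invoked */
  const char *zWord;  /* the magic word */
};
#endif

/* Non-static, so makeheaders also exports this prototype automatically;
** callers in other .c files never write the declaration by hand. */
int xyzzy_invoke(Xyzzy *p){
  return ++p->nUses;
}
</verbatim>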
︙ | ︙ |
Changes to www/antibot.wiki.
1 2 3 4 | <title>Defense Against Spiders</title> The website presented by a Fossil server has many hyperlinks. Even a modest project can have millions of pages in its | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | <title>Defense Against Spiders</title> The website presented by a Fossil server has many hyperlinks. Even a modest project can have millions of pages in its tree, and many of those pages (for example diffs and annotations and ZIP archive of older check-ins) can be expensive to compute. If a spider or bot tries to walk a website implemented by Fossil, it can present a crippling bandwidth and CPU load. The website presented by a Fossil server is intended to be used interactively by humans, not walked by spiders. This article describes the techniques used by Fossil to try to welcome human users while keeping out spiders. <h2>The "hyperlink" user capability</h2> Every Fossil web session has a "user". For random passers-by on the internet (and for spiders) that user is "nobody". The "anonymous" user is also available for humans who do not wish to identify themselves. The difference is that "anonymous" requires a login (using a password supplied via a CAPTCHA) whereas "nobody" does not require a login. The site administrator can also create logins with passwords for specific individuals. The "h" or "hyperlink" capability is a permission that can be granted to users that enables the display of hyperlinks. Most of the hyperlinks generated by Fossil are suppressed if this capability is missing. So one simple defense against spiders is to disable the "h" permission for the "nobody" user. This means that users must log in (perhaps as "anonymous") before they can see any of the hyperlinks. Spiders do not normally attempt to log into websites and will therefore not see most of the hyperlinks and will not try to walk the millions of historical check-ins and diffs available on a Fossil-generated website. If the "h" capability is missing from user "nobody" but is present for user "anonymous", then a message automatically appears at the top of each page inviting the user to log in as anonymous in order to activate hyperlinks. Removing the "h" capability from user "nobody" is an effective means of preventing spiders from walking a Fossil-generated website. But it can also be annoying to humans, since it requires them to log in. Hence, Fossil provides other techniques for blocking spiders which are less cumbersome to humans. <h2>Automatic hyperlinks based on UserAgent</h2> Fossil has the ability to selectively enable hyperlinks for users that lack the "h" capability based on their UserAgent string in the HTTP request header and on the browsers ability to run Javascript. |
︙ | ︙ | |||
92 93 94 95 96 97 98 | UserAgent string. Most spiders do not bother to run javascript and so to the spider the empty anchor tag will be useless. But all modern web browsers implement javascript, so hyperlinks will appears normally for human users. <h2>Further defenses</h2> | | | | | | 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | UserAgent string. Most spiders do not bother to run javascript and so to the spider the empty anchor tag will be useless. But all modern web browsers implement javascript, so hyperlinks will appears normally for human users. <h2>Further defenses</h2> Recently (as of this writing, in the spring of 2013) the Fossil server on the SQLite website ([http://www.sqlite.org/src/]) has been hit repeatedly by Chinese spiders that use forged UserAgent strings to make them look like normal web browsers and which interpret javascript. We do not believe these attacks to be nefarious since SQLite is public domain and the attackers could obtain all information they ever wanted to know about SQLite simply by cloning the repository. Instead, we believe these "attacks" are coming from "script kiddies". But regardless of whether or not malice is involved, these attacks do present an unnecessary load on the server which reduces the responsiveness of the SQLite website for well-behaved and socially responsible users. For this reason, additional defenses against spiders have been put in place. On the Admin/Access page of Fossil, just below the "<b>Enable hyperlinks for "nobody" based on User-Agent and Javascript</b>" setting, there are now two additional subsettings that can be optionally enabled to control hyperlinks. The first subsetting waits to run the javascript that sets the "href=" attributes on anchor tags until after at least one "mouseover" event has been detected on the <body> |
︙ | ︙ | |||
135 136 137 138 139 140 141 | <h2>The ongoing struggle</h2> Fossil currently does a very good job of providing easy access to humans while keeping out troublesome robots and spiders. However, spiders and bots continue to grow more sophisticated, requiring ever more advanced defenses. This "arms race" is unlikely to ever end. The developers of Fossil will continue to try improve the spider defenses of Fossil so | | | | 135 136 137 138 139 140 141 142 143 144 145 146 147 | <h2>The ongoing struggle</h2> Fossil currently does a very good job of providing easy access to humans while keeping out troublesome robots and spiders. However, spiders and bots continue to grow more sophisticated, requiring ever more advanced defenses. This "arms race" is unlikely to ever end. The developers of Fossil will continue to try improve the spider defenses of Fossil so check back from time to time for the latest releases and updates. Readers of this page who have suggestions on how to improve the spider defenses in Fossil are invited to submit your ideas to the Fossil Users mailing list: [mailto:fossil-users@lists.fossil-scm.org | fossil-users@lists.fossil-scm.org]. |
Changes to www/blame.wiki.
︙ | ︙ | |||
16 17 18 19 20 21 22 | <ol type='1'> <li>Locate the check-in that contains the file that is to be annotated. Call this check-in C0. <li>Find all direct ancestors of C0. A direct ancestor is the closure of the primary parent of C0. Merged in branches are not part of the direct ancestors of C0. | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | <ol type='1'> <li>Locate the check-in that contains the file that is to be annotated. Call this check-in C0. <li>Find all direct ancestors of C0. A direct ancestor is the closure of the primary parent of C0. Merged in branches are not part of the direct ancestors of C0. <li>Prune the list of ancestors of C0 so that it contains only check-in in which the file to be annotated was modified. <li>Load the complete text of the file to be annotated from check-in C0. Call this version of the file F0. <li>Parse F0 into lines. Mark each line as "unchanged". <li>For each ancestor of C0 on the pruned list (call the ancestor CX), beginning with the most recent ancestor and moving toward the oldest ancestor, do the following steps: <ol type='a'> <li>Load the text for the file to be annotated as it existed in check-in CX. Call this text FX. <li>Compute a diff going from FX to F0. <li>For each line of F0 that is changed in the diff and which was previously marked "unchanged", update the mark to indicated that line was modified by CX. </ol> <li>Show each line of F0 together with its change mark, appropriately formatted. </ol> <h2>3.0 Discussion and Notes</h2> The time-consuming part of this algorithm is step 6b - computing the diff from all historical versions of the file to the version of the file under analysis. For a large file that has many historical changes, this can take several seconds. For this reason, the default [/help?cmd=/annotate|/annotate] webpage only shows those lines that where changed by the 20 most recent modifications to the file. This allows the loop on step 6 to terminate after only 19 diffs instead of the hundreds or thousands of diffs that might be required for a frequently modified file. As currently implemented (as of 2015-12-12) the annotate algorithm does not follow files across name changes. File name change information is available in the database, and so the algorithm could be enhanced to follow files across name changes by modifications to step 3. Step 2 is interesting in that it is [/artifact/6cb824a0417?ln=196-201 | implemented] using a [https://www.sqlite.org/lang_with.html#recursivecte|recursive common table expression]. |
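The numbered steps above reduce to a small marking loop. The sketch below is a deliberately simplified, self-contained illustration: in place of a real line diff (step 6b) it calls a line of F0 "changed relative to FX" when that line does not occur verbatim in FX, which is enough to show how processing ancestors newest-first lets the most recent modification claim each line. It is not Fossil's implementation, and the sample file versions are invented.

<verbatim>
#include <stdio.h>
#include <string.h>

#define N_LINES 4
#define N_VERS  3  /* aVers[0] is F0; aVers[1..] are successively older ancestors */

/* Crude stand-in for step 6b: a line is "changed" if absent from FX. */
static int line_changed(const char *zLine, const char *const *azFX, int nFX){
  int i;
  for(i=0; i<nFX; i++){
    if( strcmp(zLine, azFX[i])==0 ) return 0;
  }
  return 1;
}

int main(void){
  static const char *const aVers[N_VERS][N_LINES] = {
    { "alpha", "beta2", "gamma", "delta3" },   /* F0: version being annotated */
    { "alpha", "beta2", "gamma", "delta"  },   /* most recent ancestor        */
    { "alpha", "beta",  "gamma", "delta"  },   /* oldest ancestor             */
  };
  int aMark[N_LINES] = {0};  /* step 5: 0 = "unchanged", else ancestor index */
  int i, x;

  /* Step 6: newest ancestor first; a line already marked is never remarked,
  ** so each line keeps the mark of the most recent ancestor that differs. */
  for(x=1; x<N_VERS; x++){
    for(i=0; i<N_LINES; i++){
      if( aMark[i]==0 && line_changed(aVers[0][i], aVers[x], N_LINES) ){
        aMark[i] = x;
      }
    }
  }

  /* Step 7: show each line of F0 with its mark. */
  for(i=0; i<N_LINES; i++){
    if( aMark[i] ){
      printf("modified by ancestor %d:  %s\n", aMark[i], aVers[0][i]);
    }else{
      printf("unchanged:               %s\n", aVers[0][i]);
    }
  }
  return 0;
}
</verbatim>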
Changes to www/bugtheory.wiki.
︙ | ︙ | |||
25 26 27 28 29 30 31 | be permitted to create tickets. Recall that a fossil repository consists of an unordered collection of <i>artifacts</i>. (See the <a href="fileformat.wiki">file format document</a> for details.) Some artifacts have a special format, and among those are <a href="fileformat.wiki#tktchng">Ticket Change Artifacts</a>. | | | 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 | be permitted to create tickets. Recall that a fossil repository consists of an unordered collection of <i>artifacts</i>. (See the <a href="fileformat.wiki">file format document</a> for details.) Some artifacts have a special format, and among those are <a href="fileformat.wiki#tktchng">Ticket Change Artifacts</a>. One or more ticket change artifacts are associated with each ticket. A ticket is created by a ticket change artifact. Each subsequent modification of the ticket is a separate artifact. The "push", "pull", and "sync" algorithms share ticket change artifacts between repositories in the same way as every other artifact. In fact, the sync algorithm has no knowledge of the meaning of the artifacts it is syncing. As far as the sync algorithm is concerned, all artifacts are |
︙ | ︙ |
Changes to www/changes.wiki.
1 2 | <title>Change Log</title> | > > > > > > > > > > > > > > > > > > > | | | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | <title>Change Log</title> <a name='v1_37'></a> <h2>Changes for Version 1.37 (2017-XX-YY)</h2> * Fix a C99-ism that prevents the 1.36 release from building with MSVC. * Fix [/help?cmd=ticket|ticket set] when using the "+" prefix with fields from the "ticketchng" table. * Enhance the "brlist" page to make use of branch colors. * Remove the "fusefs" command from builds that do not have the underlying support enabled. * Fixes for incremental git import/export. * Minor security enhancements to [./encryptedrepos.wiki|encrypted repositories]. * TH1 enhancements: <ul><li>Add <nowiki>[unversioned content]</nowiki> command.</li> <li>Add <nowiki>[unversioned list]</nowiki> command.</li> <li>Add project_description variable.</li> </ul> <a name='v1_36'></a> <h2>Changes for Version 1.36 (2016-10-24)</h2> * Add support for [./unvers.wiki|unversioned content], the [/help?cmd=unversioned|fossil unversioned] command and the [/help?cmd=/uv|/uv] and [/uvlist] web pages. * The [/uv/download.html|download page] is moved into [./unvers.wiki|unversioned content] so that the self-hosting Fossil websites no longer uses any external content. * Added the "Search" button to the graphical diff generated by the --tk option on the [/help?cmd=diff|diff] command. * Added the "--checkin VERSION" option to the [/help?cmd=diff|diff] command. * Various performance enhancements to the [/help?cmd=diff|diff] command. * Update internal Unicode character tables, used in regular expression handling, from version 8.0 to 9.0. * Update the built-in SQLite to version 3.15. Fossil now requires the SQLITE_DBCONFIG_MAINDBNAME interface of SQLite which is only available in SQLite version 3.15 and later and so Fossil will not work with earlier SQLite versions. * Fix [https://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg23618.html|multi-line timeline bug] * Enhance the [/help?cmd=purge|fossil purge] command. * New command [/help?cmd=shell|fossil shell]. * SQL parameters whose names are all lower-case in Ticket Report SQL queries are filled in using HTTP query parameter values. * Added support for [./childprojects.wiki|child projects] that are able to pull from their parent but not push. * Added the -nocomplain option to the TH1 "query" command. * Added support for the chng=GLOBLIST query parameter on the [/help?cmd=/timeline|/timeline] webpage. <a name='v1_35'></a> <h2>Changes for Version 1.35 (2016-06-14)</h2> * Enable symlinks by default on all non-Windows platforms. * Enhance the [/md_rules|Markdown formatting] so that hyperlinks that begin with "/" are relative to the root of the Fossil repository. * Rework the [/help?cmd=/setup_ulist|/setup_list page] (the User List page) to display all users in a click-to-sort table. |
︙ | ︙ | |||
69 70 71 72 73 74 75 76 77 78 79 80 81 82 | * Add support for [./encryptedrepos.wiki|encrypted Fossil repositories]. * If the FOSSIL_PWREADER environment variable is set, then use the program it names in place of getpass() to read passwords and passphrases * Option --baseurl now works on Windows. * Numerous documentation improvements. * Update the built-in SQLite to version 3.13.0. <h2>Changes for Version 1.34 (2015-11-02)</h2> * Make the [/help?cmd=clean|fossil clean] command undoable for files less than 10MiB. * Update internal Unicode character tables, used in regular expression handling, from version 7.0 to 8.0. * Add the new [/help?cmd=amend|amend] command which is used to modify | > | 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 | * Add support for [./encryptedrepos.wiki|encrypted Fossil repositories]. * If the FOSSIL_PWREADER environment variable is set, then use the program it names in place of getpass() to read passwords and passphrases * Option --baseurl now works on Windows. * Numerous documentation improvements. * Update the built-in SQLite to version 3.13.0. <a name='v1_34'></a> <h2>Changes for Version 1.34 (2015-11-02)</h2> * Make the [/help?cmd=clean|fossil clean] command undoable for files less than 10MiB. * Update internal Unicode character tables, used in regular expression handling, from version 7.0 to 8.0. * Add the new [/help?cmd=amend|amend] command which is used to modify
︙ | ︙ | |||
104 105 106 107 108 109 110 111 112 113 114 115 116 117 | * Fix --hard option to [/help?cmd=mv|fossil mv] and [/help?cmd=rm|fossil rm] to enable them to work properly with certain relative paths. * Change the mimetype for ".n" and ".man" files to text/plain. * Display improvements in the [/help?cmd=bisect|fossil bisect chart] command. * Updated the built-in SQLite to version 3.9.1 and activated JSON1 and FTS5 support (both currently unused within Fossil). <h2>Changes for Version 1.33 (2015-05-23)</h2> * Improved fork detection on [/help?cmd=update|fossil update], [/help?cmd=status|fossil status] and related commands. * Change the default skin to what used to be called "San Francisco Modern". * Add the [/repo-tabsize] web page * Add [/help?cmd=import|fossil import --svn], for importing a subversion repository into fossil which was exported using "svnadmin dump". | > | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 | * Fix --hard option to [/help?cmd=mv|fossil mv] and [/help?cmd=rm|fossil rm] to enable them to work properly with certain relative paths. * Change the mimetype for ".n" and ".man" files to text/plain. * Display improvements in the [/help?cmd=bisect|fossil bisect chart] command. * Updated the built-in SQLite to version 3.9.1 and activated JSON1 and FTS5 support (both currently unused within Fossil). <a name='v1_33'></a> <h2>Changes for Version 1.33 (2015-05-23)</h2> * Improved fork detection on [/help?cmd=update|fossil update], [/help?cmd=status|fossil status] and related commands. * Change the default skin to what used to be called "San Francisco Modern". * Add the [/repo-tabsize] web page * Add [/help?cmd=import|fossil import --svn], for importing a subversion repository into fossil which was exported using "svnadmin dump". |
︙ | ︙ | |||
153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 | * Permit filtering weekday and file [/help?cmd=/reports|reports] by user. Also ensure the user parameter is preserved when changing types. Add a field for direct entry of the user name to each applicable report. * Create parent directories of [/help?cmd=settings|empty-dirs] if they don't already exist. * Inhibit timeline links to wiki pages that have been deleted. <h2>Changes for Version 1.32 (2015-03-14)</h2> * When creating a new repository using [/help?cmd=init|fossil init], ensure that the new repository is fully compatible with historical versions of Fossil by having a valid manifest as RID 1. * Anti-aliased rendering of arrowheads on timeline graphs. * Added vi/less-style key bindings to the --tk diff GUI. * Documentation updates to fix spellings and changes all "checkins" to "check-ins". * Add the --repolist option to server commands such as [/help?cmd=server|fossil server] or [/help?cmd=http|fossil http]. * Added the "Xekri" skin. * Enhance the "ln=" query parameter on artifact displays to accept multiple ranges, separated by spaces (or "+" when URL-encoded). * Added [/help?cmd=forget|fossil forget] as an alias for [/help?cmd=rm|fossil rm]. <h2>Changes For Version 1.31 (2015-02-23)</h2> * Change the auxiliary schema by adding the MLINK.ISAUX and MLINK.PMID columns to the schema, to support better drawing of file change graphs. A [/help?cmd=rebuild|fossil rebuild] is recommended, but is not required, so that the new graph drawing logic can work effectively. * Added [/search|search] over Check-in comments, Documents, Tickets and Wiki. Disabled by default. The search can be either a full-scan or it | > > | 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 | * Permit filtering weekday and file [/help?cmd=/reports|reports] by user. Also ensure the user parameter is preserved when changing types. Add a field for direct entry of the user name to each applicable report. * Create parent directories of [/help?cmd=settings|empty-dirs] if they don't already exist. * Inhibit timeline links to wiki pages that have been deleted. <a name='v1_32'></a> <h2>Changes for Version 1.32 (2015-03-14)</h2> * When creating a new repository using [/help?cmd=init|fossil init], ensure that the new repository is fully compatible with historical versions of Fossil by having a valid manifest as RID 1. * Anti-aliased rendering of arrowheads on timeline graphs. * Added vi/less-style key bindings to the --tk diff GUI. * Documentation updates to fix spellings and changes all "checkins" to "check-ins". * Add the --repolist option to server commands such as [/help?cmd=server|fossil server] or [/help?cmd=http|fossil http]. * Added the "Xekri" skin. * Enhance the "ln=" query parameter on artifact displays to accept multiple ranges, separated by spaces (or "+" when URL-encoded). * Added [/help?cmd=forget|fossil forget] as an alias for [/help?cmd=rm|fossil rm]. <a name='v1_31'></a> <h2>Changes For Version 1.31 (2015-02-23)</h2> * Change the auxiliary schema by adding the MLINK.ISAUX and MLINK.PMID columns to the schema, to support better drawing of file change graphs. A [/help?cmd=rebuild|fossil rebuild] is recommended, but is not required, so that the new graph drawing logic can work effectively. * Added [/search|search] over Check-in comments, Documents, Tickets and Wiki. Disabled by default.
The search can be either a full-scan or it |
︙ | ︙ | |||
220 221 222 223 224 225 226 227 228 229 230 231 232 233 | * Added the [/mimetype_list] page. * Added the [/hash-collisions] page. * Allow the use of Common Table Expressions in the SQL that defines ticket reports. * Break out the components (css, footer, and header) for the various built-in skins into separate files in the source tree. <h2>Changes For Version 1.30 (2015-01-19)</h2> * Added the [/help?cmd=bundle|fossil bundle] command. * Added the [/help?cmd=purge|fossil purge] command. * Added the [/help?cmd=publish|fossil publish] command. * Added the [/help?cmd=unpublished|fossil unpublished] command. * Enhance the [/tree] webpage to show the age of each file with the option to sort by age. | > | 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 | * Added the [/mimetype_list] page. * Added the [/hash-collisions] page. * Allow the use of Common Table Expressions in the SQL that defines ticket reports. * Break out the components (css, footer, and header) for the various built-in skins into separate files in the source tree. <a name='v1_30'></a> <h2>Changes For Version 1.30 (2015-01-19)</h2> * Added the [/help?cmd=bundle|fossil bundle] command. * Added the [/help?cmd=purge|fossil purge] command. * Added the [/help?cmd=publish|fossil publish] command. * Added the [/help?cmd=unpublished|fossil unpublished] command. * Enhance the [/tree] webpage to show the age of each file with the option to sort by age.
︙ | ︙ | |||
290 291 292 293 294 295 296 297 298 299 300 301 302 303 | diff option in a separate file for easier editing. * (Internal:) Implement a system of compile-time checks to help ensure the correctness of printf-style formatting strings. * Fix CVE-2014-3566, also known as the POODLE SSL 3.0 vulnerability. * Numerous documentation fixes and improvements. * Other obscure and minor bug fixes - see the timeline for details. <h2>Changes For Version 1.29 (2014-06-12)</h2> * Add the ability to display content, diffs and annotations for UTF16 text files in the web interface. * Add the "SaveAs..." and "Invert" buttons to the graphical diff display that results from using the --tk option with the [/help/diff | fossil diff] command. * The [/reports] page now requires Read ("o") permissions. The "byweek" | > | 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 | diff option in a separate file for easier editing. * (Internal:) Implement a system of compile-time checks to help ensure the correctness of printf-style formatting strings. * Fix CVE-2014-3566, also known as the POODLE SSL 3.0 vulnerability. * Numerous documentation fixes and improvements. * Other obscure and minor bug fixes - see the timeline for details. <a name='v1_29'></a> <h2>Changes For Version 1.29 (2014-06-12)</h2> * Add the ability to display content, diffs and annotations for UTF16 text files in the web interface. * Add the "SaveAs..." and "Invert" buttons to the graphical diff display that results from using the --tk option with the [/help/diff | fossil diff] command. * The [/reports] page now requires Read ("o") permissions. The "byweek" |
︙ | ︙ |
Changes to www/checkin_names.wiki.
︙ | ︙ | |||
53 54 55 56 57 58 59 | <blockquote><pre> fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9 </pre></blockquote> The full 40-character SHA1 hash is unwieldy to remember and type, though, so Fossil also accepts a unique prefix of the hash, using any combination of upper and lower case letters, as long as the prefix is at least 4 | | | | 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | <blockquote><pre> fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9 </pre></blockquote> The full 40-character SHA1 hash is unwieldy to remember and type, though, so Fossil also accepts a unique prefix of the hash, using any combination of upper and lower case letters, as long as the prefix is at least 4 characters long. Hence the following commands all accomplish the same thing as the above: <blockquote><pre> fossil info e5a734a19a9 fossil info E5a734A fossil info e5a7 </pre></blockquote> Many web-interface screens identify check-ins by a 10- or 16-character prefix of the canonical name. <h2>Tags And Branch Names</h2> Using a tag or branch name where a check-in name is expected causes Fossil to choose the most recent check-in with that tag or branch name. So, for example, as of this writing the most recent check-in that
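The prefix rule described above (case-insensitive, at least four characters, and unique within the repository) is easy to restate in code. This is a rough sketch, not Fossil's implementation, and the list of artifact IDs is just an example:

<blockquote><pre>
def resolve_prefix(prefix, artifact_ids):
    """Resolve a check-in name prefix: case-insensitive, at least 4
    characters, and unique among all known artifact IDs."""
    p = prefix.lower()
    if len(p) < 4:
        raise ValueError("prefix must be at least 4 characters long")
    matches = [a for a in artifact_ids if a.lower().startswith(p)]
    if len(matches) != 1:
        raise ValueError("ambiguous or unknown prefix: " + prefix)
    return matches[0]

# All of these resolve to the same full 40-character hash:
ids = ["e5a734a19a9826973e1d073b49dc2a16aa2308f9"]
for p in ("e5a734a19a9", "E5a734A", "e5a7"):
    assert resolve_prefix(p, ids) == ids[0]
</pre></blockquote>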
︙ | ︙ | |||
110 111 112 113 114 115 116 | name? In such cases, you can prefix the tag name with "tag:". For example: <blockquote><tt> fossil info tag:deed2 </tt></blockquote> | | | 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | name? In such cases, you can prefix the tag name with "tag:". For example: <blockquote><tt> fossil info tag:deed2 </tt></blockquote> The "tag:deed2" name will refer to the most recent check-in tagged with "deed2" not to the check-in whose canonical name begins with "deed2". <h2>Whole Branches</h2> Usually when a branch name is specified, it means the latest check-in on that branch. But for some commands (ex: [/help/purge|purge]) a branch name |
︙ | ︙ | |||
145 146 147 148 149 150 151 | check-in that occurs no later than the timestamp given: * <i>YYYY-MM-DD</i> * <i>YYYY-MM-DD HH:MM</i> * <i>YYYY-MM-DD HH:MM:SS</i> * <i>YYYY-MM-DD HH:MM:SS.SSS</i> The space between the day and the time can optionally be | | | | | 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | check-in that occurs no later than the timestamp given: * <i>YYYY-MM-DD</i> * <i>YYYY-MM-DD HH:MM</i> * <i>YYYY-MM-DD HH:MM:SS</i> * <i>YYYY-MM-DD HH:MM:SS.SSS</i> The space between the day and the time can optionally be replaced by an uppercase <b>T</b> and the entire timestamp can optionally be followed by "<b>z</b>" or "<b>Z</b>". In the fourth form with fractional seconds, any number of digits may follow the decimal point, though due to precision limits only the first three digits will be significant. In its default configuration, Fossil interprets and displays all dates in Universal Coordinated Time (UTC). This tends to work the best for distributed projects where participants are scattered around the globe. But there is an option on the Admin/Timeline page of the web-interface to switch to local time. The "<b>Z</b>" suffix on a timestamp check-in name is meaningless if Fossil is in the default mode of using UTC for everything, but if Fossil has been switched to local time mode, then the "<b>Z</b>" suffix means to interpret that particular timestamp using UTC instead of local time. For an example of how timestamps are useful, consider the homepage for the Fossil website itself: <blockquote> http://www.fossil-scm.org/fossil/doc/<b>trunk</b>/www/index.wiki </blockquote> The bold component of that URL is a check-in name. To see what the
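All four timestamp forms, the optional "T" separator, and the optional "Z"/"z" suffix can be handled with ordinary date parsing. The sketch below is an illustrative approximation rather than Fossil's own parser; it keeps at most three fractional-second digits, matching the precision note above.

<blockquote><pre>
from datetime import datetime

def parse_checkin_timestamp(name):
    """Parse a timestamp-style check-in name.  Returns (datetime, utc_forced),
    where utc_forced is True when a trailing "Z"/"z" was present."""
    s = name.strip()
    utc_forced = s[-1:] in ("z", "Z")
    if utc_forced:
        s = s[:-1]
    s = s.replace("T", " ", 1)              # "T" may stand in for the space
    if "." in s:                            # fractional seconds: keep 3 digits
        head, frac = s.split(".", 1)
        s = head + "." + (frac + "000")[:3]
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    elif s.count(":") == 2:
        fmt = "%Y-%m-%d %H:%M:%S"
    elif ":" in s:
        fmt = "%Y-%m-%d %H:%M"
    else:
        fmt = "%Y-%m-%d"
    return datetime.strptime(s, fmt), utc_forced

print(parse_checkin_timestamp("2010-07-01T14:30Z"))
</pre></blockquote>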
︙ | ︙ | |||
189 190 191 192 193 194 195 | the timestamp. So, for example: <blockquote> fossil update trunk:2010-07-01T14:30 </blockquote> Would cause Fossil to update the working check-out to be the most recent | | | 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | the timestamp. So, for example: <blockquote> fossil update trunk:2010-07-01T14:30 </blockquote> Would cause Fossil to update the working check-out to be the most recent check-in on the trunk that is not more recent than 14:30 (UTC) on July 1, 2010. <h2>Root Of A Branch</h2> A branch name that begins with the "<tt>root:</tt>" prefix refers to the last check-in in the parent branch prior to the beginning of the branch. Such a label is useful, for example, in computing all diffs for a single
︙ | ︙ | |||
218 219 220 221 222 223 224 | repository) then a few extra tags apply. The "current" tag means the current check-out. The "next" tag means the youngest child of the current check-out. And the "previous" or "prev" tag means the primary (non-merge) parent of the current check-out. For embedded documentation, the tag "ckout" means the version as present in the local source tree on disk, provided that the web server is started using | | | 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 | repository) then a few extra tags apply. The "current" tag means the current check-out. The "next" tag means the youngest child of the current check-out. And the "previous" or "prev" tag means the primary (non-merge) parent of the current check-out. For embedded documentation, the tag "ckout" means the version as present in the local source tree on disk, provided that the web server is started using "fossil ui" or "fossil server" from within the source tree. This tag can be used to preview local changes to documentation before committing them. It does not apply to CLI commands. <h2>Additional Examples</h2> To view the changes in the most recent check-in prior to the version currently checked out: |
︙ | ︙ |
Changes to www/childprojects.wiki.
︙ | ︙ | |||
35 36 37 38 39 40 41 | UPDATE config SET name='parent-project-name' WHERE name='project-name'; INSERT INTO config(name,value) VALUES('project-code',lower(hex(randomblob(20)))); INSERT INTO config(name,value) VALUES('project-name','CHILD-PROJECT-NAME'); </verbatim></blockquote> | | | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 | UPDATE config SET name='parent-project-name' WHERE name='project-name'; INSERT INTO config(name,value) VALUES('project-code',lower(hex(randomblob(20)))); INSERT INTO config(name,value) VALUES('project-name','CHILD-PROJECT-NAME'); </verbatim></blockquote> Modify the CHILD-PROJECT-NAME in the last statement to be the name of the child project, of course. The repository is now a separate project, independent from its parent. Clone the new project to the developers as needed. The child project and the parent project will not normally be able to sync with one another, since they are now separate projects with distinct project codes. However, if the "--from-parent-project" command-line option is provided to the "[/help?cmd=pull|fossil pull]" command in the child, and the URL of the parent repository is also provided on the command-line, then updates to the parent project that occurred after the child was created will be added to the child repository. Thus, by periodically doing a pull --from-parent-project, the child project is able to stay up to date with all the latest changes in the parent.
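Since a Fossil repository is an ordinary SQLite database file, the three statements above can be applied with any SQLite client. Below is one hypothetical way to script that step with Python's sqlite3 module; the file name child.fossil and the project name are placeholders for your own clone.

<blockquote><pre>
import sqlite3

# Run this against the *clone* that is to become the child project.
db = sqlite3.connect("child.fossil")   # hypothetical path to the cloned repository
db.executescript("""
UPDATE config SET name='parent-project-name' WHERE name='project-name';
INSERT INTO config(name,value)
  VALUES('project-code',lower(hex(randomblob(20))));
INSERT INTO config(name,value)
  VALUES('project-name','CHILD-PROJECT-NAME');
""")
db.commit()
db.close()
</pre></blockquote>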
Changes to www/concepts.wiki.
︙ | ︙ | |||
129 130 131 132 133 134 135 | a software project. <h3>2.2 Manifests</h3> Associated with every check-in is a special file called the [./fileformat.wiki#manifest| "manifest"]. The manifest is a listing of all other files in | | | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | a software project. <h3>2.2 Manifests</h3> Associated with every check-in is a special file called the [./fileformat.wiki#manifest| "manifest"]. The manifest is a listing of all other files in that source tree. The manifest contains the (complete) artifact ID of the file and the name of the file as it appears on disk, and thus serves as a mapping from artifact ID to disk name. The artifact ID of the manifest is the identifier for the entire check-in. When you look at a "timeline" of changes in Fossil, the ID associated with each check-in or commit is really just the artifact ID of the manifest for that check-in. <p>The manifest file is not normally a real file on disk. Instead, the manifest is computed in memory by Fossil whenever it needs it. However, the "fossil setting manifest on" command will cause the manifest file to be materialized to disk, if desired. Both Fossil itself and SQLite cause the manifest file to be materialized to disk so that the makefiles for these projects can read the manifest and embed version information in generated binaries. <p>Fossil automatically generates a manifest whenever you "commit" a new check-in. So this is not something that you, the developer, need to worry with. The format of a manifest is intentionally designed to be simple to parse, so that if you want to read and interpret a manifest, either by hand or with a script, that is easy to do. But you will probably never need to do so.</p>
︙ | ︙ | |||
198 199 200 201 202 203 204 | CVS, gzip, diff, rsync, Python, Perl, Tcl, Java, apache, PostgreSQL, MySQL, SQLite, patch, or any similar software on your system in order to use Fossil effectively. You will want to have some kind of text editor for entering check-in comments. Fossil will use whatever text editor is identified by your VISUAL environment variable. Fossil will also use GPG to clearsign your manifests if you happen to have it installed, but Fossil will skip that step if GPG is missing from your system. | | | | 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | CVS, gzip, diff, rsync, Python, Perl, Tcl, Java, apache, PostgreSQL, MySQL, SQLite, patch, or any similar software on your system in order to use Fossil effectively. You will want to have some kind of text editor for entering check-in comments. Fossil will use whatever text editor is identified by your VISUAL environment variable. Fossil will also use GPG to clearsign your manifests if you happen to have it installed, but Fossil will skip that step if GPG is missing from your system. You can optionally set up Fossil to use external "diff" programs, though Fossil has an excellent built-in "diff" algorithm that works fine for most people. If you happen to have Tcl/Tk installed on your system, Fossil will use it to generate a graphical "diff" display when you use the --tk option to the "diff" command, but this too is entirely optional. To uninstall Fossil, simply delete the executable. To upgrade an older version of Fossil to a newer version, just replace the old executable with the new one. You might need to run "<b>fossil all rebuild</b>" to restructure your repositories after an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary. To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example:
︙ | ︙ | |||
264 265 266 267 268 269 270 | <h3>4.1 Autosync Workflow</h3> <ol> <li> Establish a local repository using either the <b>new</b> command to start a new project, or the <b>clone</b> command to make a clone | | | | 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 | <h3>4.1 Autosync Workflow</h3> <ol> <li> Establish a local repository using either the <b>new</b> command to start a new project, or the <b>clone</b> command to make a clone of a repository for an existing project. </li> <li> Establish one or more source trees using the <b>open</b> command with the name of the repository file as its argument. </li> <li> The <b>open</b> command in the previous step populates your local source tree with a copy of the latest check-in. Usually this is what you want. In the rare cases where it is not, use the <b>update</b> command to switch to a different check-in. Use the <b>timeline</b> or <b>leaves</b> commands to identify alternative check-ins to switch to. </li> <li> Edit the code. Add new files to the source tree using the <b>add</b> command. Omit files from future check-ins using the <b>rm</b> command. |
︙ | ︙ | |||
300 301 302 303 304 305 306 | tree into your local repository. After your commit completes, Fossil will automatically <b>push</b> your changes back to the server you cloned from or whatever server you most recently synced with. </li> <li> When your coworkers make their own changes, you can merge those changes | | | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 | tree into your local repository. After your commit completes, Fossil will automatically <b>push</b> your changes back to the server you cloned from or whatever server you most recently synced with. </li> <li> When your coworkers make their own changes, you can merge those changes into your local source tree using the <b>update</b> command. In autosync mode, <b>update</b> will first go back to the server you cloned from or with which you most recently synced, and pull down all recent changes into your local repository. Then it will merge recent changes into your local source tree. If you do an <b>update</b> and find that it messes something up in your source tree (perhaps a co-worker checked in incompatible changes), you can use the <b>undo</b> command to back out the changes. </li> <li> Repeat all of the above until you have generated great software. </li> </ol>
︙ | ︙ |
Changes to www/contribute.wiki.
︙ | ︙ | |||
8 9 10 11 12 13 14 | In order to accept your contributions, we <u>must</u> have a [./copyright-release.pdf | Contributor Agreement (PDF)] (or [./copyright-release.html | as HTML]) on file for you. We require this in order to maintain clear title to the Fossil code and prevent the introduction of code with incompatible licenses or other entanglements that might cause legal problems for Fossil users. Many larger companies | | | | | | 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 | In order to accept your contributions, we <u>must</u> have a [./copyright-release.pdf | Contributor Agreement (PDF)] (or [./copyright-release.html | as HTML]) on file for you. We require this in order to maintain clear title to the Fossil code and prevent the introduction of code with incompatible licenses or other entanglements that might cause legal problems for Fossil users. Many larger companies and other lawyer-rich organizations require this as a precondition to using Fossil. If you do not wish to submit a Contributor Agreement, we would still welcome your suggestions and example code, but we will not use your code directly - we will be forced to re-implement your changes from scratch which might take longer. <h2>2.0 Submitting Patches</h2> Suggested changes or bug fixes can be submitted by creating a patch against the current source tree. Email patches to <a href="mailto:drh@sqlite.org">drh@sqlite.org</a>. Be sure to describe in detail what the patch does and which version of Fossil it is written against. A contributor agreement is not strictly necessary to submit a patch. However, without a contributor agreement on file, your patch will be used for reference only - it will not be applied to the code. This may delay acceptance of your patch. Your patches or changes might not be accepted even if you do have |
︙ | ︙ | |||
51 52 53 54 55 56 57 | Contributors are asked to make all non-trivial changes on a branch. The Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p> Contributors are required to follow the [./checkin.wiki | pre-checkin checklist] prior to every check-in to the Fossil self-hosting repository. This checklist is short and succinct | | | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 | Contributors are asked to make all non-trivial changes on a branch. The Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p> Contributors are required to follow the [./checkin.wiki | pre-checkin checklist] prior to every check-in to the Fossil self-hosting repository. This checklist is short and succinct and should only require a few seconds to follow. Contributors should print out a copy of the pre-checkin checklist and keep it on a notecard beside their workstations, for quick reference. Contributors should review the [./style.wiki | Coding Style Guidelines] and mimic the coding style used throughout the rest of the Fossil source code. Your code should blend in. A third-party reader should be unable to distinguish your code from any other code in the source corpus. <h2>4.0 Testing</h2> Fossil has the beginnings of a [../test/release-checklist.wiki | release checklist] but this is an area that needs further work. (Your contributions here are welcomed!) Contributors with check-in privileges are expected to run the release checklist on any major changes they contribute, and if appropriate expand the checklist and/or the automated test scripts to cover their additions. <h2>5.0 See Also</h2> * [./build.wiki | How To Compile And Install Fossil] * [./makefile.wiki | The Fossil Build Process] * [./tech_overview.wiki | A Technical Overview of Fossil] * [./adding_code.wiki | Adding Features To Fossil]
Changes to www/custom_ticket.wiki.
︙ | ︙ | |||
63 64 65 66 67 68 69 | <td><u>Not publicly visible</u>. Used by developers to contact you with questions.</td> </tr> <th1>enable_output 1</th1> </pre> This bit of code will get rid of the "email" field entry for logged-in users. Since we know the user's information, we don't have to ask for it. NOTE: it | | | | 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | <td><u>Not publicly visible</u>. Used by developers to contact you with questions.</td> </tr> <th1>enable_output 1</th1> </pre> This bit of code will get rid of the "email" field entry for logged-in users. Since we know the user's information, we don't have to ask for it. NOTE: it might be good to automatically scoop up the user's email and put it here. </p> </blockquote> <h2>Modify the 'view ticket' page</h2><blockquote> <p> Look for the text "Contact:" (about halfway through). Then insert these lines after the closing tr tag and before the "enable_output" line: <pre> <tr> <td align="right">Assigned to:</td><td bgcolor="#d0d0d0"> $<assigned_to> </td> <td align="right">Opened by:</td><td bgcolor="#d0d0d0"> $<opened_by> </td> </pre> This will add a row which displays these two fields, in the event the user has "edit" capability. </p> </blockquote> <h2>Modify the 'edit ticket' page</h2><blockquote> <p> Before the "Severity:" line, add this: <pre> |
︙ | ︙ |
Changes to www/customskin.md.
︙ | ︙ | |||
142 143 144 145 146 147 148 149 150 151 152 153 154 155 | Before expanding the TH1 within the header and footer, Fossil first initializes a number of TH1 variables to values that depend on repository settings and the specific page being generated. * **project_name** - The project_name variable is filled with the name of the project as configured under the Admin/Configuration menu. * **title** - The title variable holds the title of the page being generated. The title variable is special in that it is deleted after the header script runs and before the footer script. This is necessary to avoid a conflict with a variable by the same name used | > > > > | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | Before expanding the TH1 within the header and footer, Fossil first initializes a number of TH1 variables to values that depend on repository settings and the specific page being generated. * **project_name** - The project_name variable is filled with the name of the project as configured under the Admin/Configuration menu. * **project_description** - The project_description variable is filled with the description of the project as configured under the Admin/Configuration menu. * **title** - The title variable holds the title of the page being generated. The title variable is special in that it is deleted after the header script runs and before the footer script. This is necessary to avoid a conflict with a variable by the same name used
︙ | ︙ |
Changes to www/delta_encoder_algorithm.wiki.
︙ | ︙ | |||
151 152 153 154 155 156 157 | needed more bytes to encode the range than there were bytes in the range, then no instructions are emitted and the window is moved one byte forward. The "base" is left unchanged in that case.</p> <p>The processing loop stops at one of two conditions: <ol> <li>The encoder decided to move the window forward, but the end of the | | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 | needed more bytes to encode the range than there were bytes in the range, then no instructions are emitted and the window is moved one byte forward. The "base" is left unchanged in that case.</p> <p>The processing loop stops at one of two conditions: <ol> <li>The encoder decided to move the window forward, but the end of the window reached the end of the "target". </li> <li>After the emission of instructions the new "base" location is within NHASH bytes of the end of the "target", i.e. there are at most NHASH bytes left. </li> </ol> </p>
︙ | ︙ |
Changes to www/delta_format.wiki.
︙ | ︙ | |||
159 160 161 162 163 164 165 | <tr><td> </td> <td>~E@Y0, </td><td>Copy </td><td> 4046 @ 2176 </td></tr> <tr><td>Trailer</td><td>2zMM3E </td><td>Checksum</td><td> -1101438770 </td></tr> </table> <p>The unified diff behind the above delta is</p> <table border=1><tr><td><pre> | | | | | | | | | | | 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 | <tr><td> </td> <td>~E@Y0, </td><td>Copy </td><td> 4046 @ 2176 </td></tr> <tr><td>Trailer</td><td>2zMM3E </td><td>Checksum</td><td> -1101438770 </td></tr> </table> <p>The unified diff behind the above delta is</p> <table border=1><tr><td><pre> bluepeak:(761) ~/Projects/Tcl/Fossil/Devel/devel > diff -u ../DELTA/old ../DELTA/new --- ../DELTA/old 2007-08-23 21:14:40.000000000 -0700 +++ ../DELTA/new 2007-08-23 21:14:33.000000000 -0700 @@ -5,7 +5,7 @@ * If the server does not have write permission on the database file, or on the directory containing the database file (and - it is thus unable to update database because it cannot create + it is thus unable to update the database because it cannot create a rollback journal) then it currently fails silently on a push. It needs to return a helpful error. @@ -27,8 +27,8 @@ * Additional information displayed for the "vinfo" page: + All leaves of this version that are not included in the - descendant list. With date, user, comment, and hyperlink. - Leaves in the descendant table should be marked as such. + descendant list. With date, user, comment, and hyperlink. + Leaves in the descendant table should be marked as such. See the compute_leaves() function to see how to find all leaves. + Add file diff links to the file change list. @@ -37,7 +37,7 @@ * The /xfer handler (for push, pull, and clone) does not do delta compression. This results in excess bandwidth usage. - There are some code in xfer.c that are sketches of ideas on + There are some pieces in xfer.c that are sketches of ideas on how to do delta compression, but nothing has been implemented. * Enhancements to the diff and tkdiff commands in the cli. @@ -45,7 +45,7 @@ single file. Allow diffs against any two arbitrary versions, not just diffs against the current check-out. Allow configuration options to replace tkdiff with some other - visual differ of the users choice. + visual differ of the users choice. Example: eskil. * Ticketing interface (expand this bullet) </pre></td></tr></table> <a name="notes"></a><h2>Notes</h2> |
︙ | ︙ |
Changes to www/embeddeddoc.wiki.
︙ | ︙ | |||
40 41 42 43 44 45 46 | For example, the <i><baseurl></i> for the fossil project itself is either <b>http://www.fossil-scm.org/fossil</b> or <b>http://www.hwaci.com/cgi-bin/fossil</b>. If you launch the web server using the "<b>fossil server</b>" command line, then the <i><baseurl></i> is usually <b>http://localhost:8080/</b>. The <i><version></i> is any unique prefix of the check-in ID for | | | | | | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | For example, the <i><baseurl></i> for the fossil project itself is either <b>http://www.fossil-scm.org/fossil</b> or <b>http://www.hwaci.com/cgi-bin/fossil</b>. If you launch the web server using the "<b>fossil server</b>" command line, then the <i><baseurl></i> is usually <b>http://localhost:8080/</b>. The <i><version></i> is any unique prefix of the check-in ID for the check-in containing the documentation you want to access. Or <i><version></i> can be the name of a [./branching.wiki | branch] in order to show the documentation for the latest version of that branch. Or <i><version></i> can be one of the keywords "<b>tip</b>" or "<b>ckout</b>". The "<b>tip</b>" keyword means to use the most recent check-in. This is useful if you want to see the very latest version of the documentation. The "<b>ckout</b>" keyword means to pull the documentation file from the local source tree on disk, not from any check-in. The "<b>ckout</b>" keyword normally only works when you start your server using the "<b>fossil server</b>" or "<b>fossil ui</b>" command line and is intended to show what the documentation you are currently editing looks like before you check it in. Finally, the <i><filename></i> element of the URL is the pathname of the documentation file relative to the root of the source tree. The mimetype (and thus the rendering) of documentation files is determined by the file suffix. Fossil currently understands [/mimetype_list|many different file suffixes], including all the popular ones such as ".css", ".gif", ".htm", ".html", ".jpg", ".jpeg", ".png", and ".txt". Documentation files whose names end in ".wiki" use the [/wiki_rules | fossil wiki markup] - a safe subset of HTML together with some wiki rules for paragraph breaks, lists, and hyperlinks. Documentation files ending in ".md" or ".markdown" use the [/md_rules | Markdown markup language]. Documentation files ending in ".txt" are plain text. Wiki, markdown, and plain text documentation files are rendered with the standard fossil header and footer added. Most other mimetypes are delivered directly to the requesting web browser without interpretation, additions, or changes. Files with the mimetype "text/html" (the .html or .htm suffix) are usually rendered directly to the browser without interpretation. However, if the file begins with a <div> element like this: <b><div class='fossil-doc' data-title='<i>Title Text</i>'></b> Then the standard Fossil header and footer are added to the document prior to being displayed. The "class='fossil-doc'" attribute is required for this to occur. The "data-title='...'" attribute is
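The special handling of "text/html" files turns entirely on whether the file begins with the fossil-doc <div> shown above. The sketch below is a rough approximation of that rule, useful for checking documentation before committing it; it is not Fossil's own detection code, and the real logic may differ in detail.

<blockquote><pre>
import re

# Accept a leading div whose class is 'fossil-doc', optionally with a
# data-title attribute, mirroring the example tag shown above.
FOSSIL_DOC_RE = re.compile(
    r"""^\s*<div\s+class=["']fossil-doc["'](\s+data-title=["'][^"']*["'])?\s*>""",
    re.IGNORECASE)

def gets_fossil_header(html_text):
    """Return True if the HTML document would be wrapped with the standard
    Fossil header and footer according to the rule described above."""
    return bool(FOSSIL_DOC_RE.match(html_text))

print(gets_fossil_header("<div class='fossil-doc' data-title='Test Page'>..."))  # True
</pre></blockquote>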
︙ | ︙ | |||
115 116 117 118 119 120 121 | CGI mode. The "index.html" CGI script looks like this: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> | | | | 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | CGI mode. The "index.html" CGI script looks like this: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> This is one of four ways to set up a <a href="./server.wiki">fossil web server</a>. The "<b>/trunk/</b>" part of the URL tells fossil to use the documentation files from the most recent trunk check-in. If you wanted to see an historical version of this document, you could substitute the name of a check-in for "<b>/trunk/</b>". For example, to see the version of this document associated with check-in [9be1b00392], simply replace the "<b>/trunk/</b>" with "<b>/9be1b00392/</b>". You can also substitute the symbolic name for a particular version or branch. For example, you might replace "<b>/trunk/</b>" with "<b>/experimental/</b>" to get the latest version of this document in the "experimental" branch. The symbolic name can also be a date and time string in any of the following formats:</p> <ul> <li> <i>YYYY-MM-DD</i> <li> <i>YYYY-MM-DD</i><b>T</b><i>HH:MM</i> <li> <i>YYYY-MM-DD</i><b>T</b><i>HH:MM:SS</i> </ul> When the symbolic name is a date and time, fossil shows the version of the document that was most recently checked in as of the date and time specified. So, for example, to see what the fossil website looked like at the beginning of 2010, enter: <blockquote> <a href="http://www.fossil-scm.org/index.html/doc/2010-01-01/www/index.wiki"> http://www.fossil-scm.org/index.html/doc/<b>2010-01-01</b>/www/index.wiki |
︙ | ︙ |
Changes to www/encryptedrepos.wiki.
1 2 3 4 5 6 7 | <title>How To Use Encrypted Repositories</title> <h2>Introduction</h2><blockquote> Fossil can be compiled so that it works with encrypted repositories using the [https://www.sqlite.org/see/doc/trunk/www/readme.wiki|SQLite Encryption Extension]. This technical note explains the process. </blockquote> <h2>Building An Encryption-Enabled Fossil</h2><blockquote> | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | <title>How To Use Encrypted Repositories</title> <h2>Introduction</h2><blockquote> Fossil can be compiled so that it works with encrypted repositories using the [https://www.sqlite.org/see/doc/trunk/www/readme.wiki|SQLite Encryption Extension]. This technical note explains the process. </blockquote> <h2>Building An Encryption-Enabled Fossil</h2><blockquote> The SQLite Encryption Extension (SEE) is proprietary software and requires [http://www.hwaci.com/cgi-bin/see-step1|purchasing a license]. <p> Assuming you have an SEE license, the first step of compiling Fossil to use SEE is to create an SEE-enabled version of the SQLite database source code. This alternative SQLite database source file should be called "sqlite3-see.c" and should be placed in the src/ subfolder of the Fossil sources, right beside the public-domain "sqlite3.c" source file. Also make a copy of the SEE-enabled |
︙ | ︙ |
Changes to www/env-opts.md.
︙ | ︙ | |||
86 87 88 89 90 91 92 | processed. `--sqlstats`: (Sets `g.fSqlStats`.) Print a number of performance statistics about each SQLite database used when it is closed. `--sshtrace`: (Sets `g.fSshTrace`.) | | > > > > | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 | processed. `--sqlstats`: (Sets `g.fSqlStats`.) Print a number of performance statistics about each SQLite database used when it is closed. `--sshtrace`: (Sets `g.fSshTrace`.) `--ssl-identity`: The fully qualified name of the file containing the client certificate and private key to use, in PEM format. It can be created by concatenating the client certificate and private key files. This identity will be presented to SSL servers to authenticate the client, in addition to the normal password authentication. `--systemtrace`: (Sets `g.fSystemTrace`.) Trace all commands launched as sub processes. `--user LOGIN`: (Sets `g.zLogin`) Also `-U LOGIN`. Set the user name used with the repository. |
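As the `--ssl-identity` description above notes, the identity file is simply the client certificate and the private key concatenated into a single PEM file. A hypothetical helper (all file names are placeholders):

<blockquote><pre>
def make_ssl_identity(cert_path="client-cert.pem", key_path="client-key.pem",
                      out_path="ssl-identity.pem"):
    """Concatenate the client certificate and the private key into one PEM
    file suitable for use with --ssl-identity, as described above."""
    with open(out_path, "wb") as out:
        for part in (cert_path, key_path):
            with open(part, "rb") as f:
                out.write(f.read())
</pre></blockquote>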
︙ | ︙ |
Changes to www/event.wiki.
︙ | ︙ | |||
21 22 23 24 25 26 27 | * <b>Milestones</b>. Project milestones, such as releases or beta-test cycles, can be recorded as technotes. The timeline entry for the technote can be something simple like "Version 1.2.3" perhaps with a bright color background to draw attention to the entry and the wiki content can contain release notes, for example. * <b>Blog Entries</b>. Blog entries from developers describing the current | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | * <b>Milestones</b>. Project milestones, such as releases or beta-test cycles, can be recorded as technotes. The timeline entry for the technote can be something simple like "Version 1.2.3" perhaps with a bright color background to draw attention to the entry and the wiki content can contain release notes, for example. * <b>Blog Entries</b>. Blog entries from developers describing the current state of a project, or rationale for various design decisions, or roadmaps for future development, can be entered as technotes. * <b>Process Checkpoints</b>. For projects that have a formal process, technotes can be used to record the completion or the initiation of various process steps. For example, a technote can be used to record the successful completion of a long-running test, perhaps with performance results and details of where the test was run and who
︙ | ︙ | |||
47 48 49 50 51 52 53 | No project is required to use technotes. But technotes can help many projects stay better organized and provide a better historical record of the development progress. <h2>Viewing Technotes</h2> Because technotes are considered a special kind of wiki, users must | | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 | No project is required to use technotes. But technotes can help many projects stay better organized and provide a better historical record of the development progress. <h2>Viewing Technotes</h2> Because technotes are considered a special kind of wiki, users must have permission to read wiki in order to read technotes. Enable the "j" permission under the /Setup/Users menu in order to give specific users or user classes the ability to view wiki and technotes. Technotes show up on the timeline. Click on the hyperlink beside the technote title to see the complete text. <h2>Creating And Editing Technotes</h2> There is a hyperlink under the /wikihelp menu that can be used to create new technotes. And there is a submenu hyperlink on technote displays for editing existing technotes. Users must have check-in privileges (permission "i") in order to create or edit technotes. In addition, users must have create-wiki privilege (permission "f") to create new technotes and edit-wiki privilege (permission "k") in order to edit existing technotes. Technote content may be formatted as [/wiki_rules | Fossil wiki], [/md_rules | Markdown], or plain text.
Changes to www/faq.wiki.
︙ | ︙ | |||
60 61 62 63 64 65 66 | off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. Then find the "<b>edit</b>" | | | | 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 | off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. On the "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button.</blockquote></li> <a name="q4"></a> <p><b>(4) How do I tag a check-in?</b></p> <blockquote>There are several ways: |
︙ | ︙ | |||
87 88 89 90 91 92 93 | <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </blockquote> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </blockquote> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the [./webui.wiki | web interface]. First locate the check-in that you want to tag on the timeline, then click on the link to go to the detailed information page for that check-in. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. There are controls on the edit page that allow new tags to be added and existing tags to be removed.</blockquote></li> <a name="q5"></a> <p><b>(5) How do I create a private branch that won't get pushed back to the main repository?</b></p> <blockquote>Use the <b>--private</b> command-line option on the <b>commit</b> command. The result will be a check-in which exists on your local repository only and is never pushed to other repositories. All descendants of a private check-in are also private. Unless you specify something different using the <b>--branch</b> and/or <b>--bgcolor</b> options, the new private check-in will be put on a branch named "private" with an orange background color. You can merge from the trunk into your private branch in order to keep
︙ | ︙ |
Changes to www/fileformat.wiki.
1 2 3 4 5 6 | <title>Fossil File Formats</title> <h1 align="center"> Fossil File Formats </h1> The global state of a fossil repository is kept simple so that it can | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | <title>Fossil File Formats</title> <h1 align="center"> Fossil File Formats </h1> The global state of a fossil repository is kept simple so that it can endure in useful form for decades or centuries. A fossil repository is intended to be readable, searchable, and extensible by people not yet born. The global state of a fossil repository is an unordered set of <i>artifacts</i>. An artifact might be a source code file, the text of a wiki page, part of a trouble ticket, or one of several special control artifacts used to show the relationships between other artifacts within the project. Each artifact is normally represented on disk as a separate file. Artifacts can be text or binary. In addition to the global state, each fossil repository also contains local state. The local state consists of web-page formatting preferences, authorized users, ticket display and reporting formats, and so forth. The global state is shared in common among all repositories for the same project, whereas the local state is often different in separate repositories. The local state is not versioned and is not synchronized with the global state. The local state is not composed of artifacts and is not intended to be enduring. This document is concerned with global state only. Local state is only mentioned here in order to distinguish it from global state. Each artifact in the repository is named by its SHA1 hash. No prefixes or meta information is added to an artifact before its hash is computed. The name of an artifact in the repository is exactly the same SHA1 hash that is computed by sha1sum on the file as it exists in your source tree.</p> Some artifacts have a particular format which gives them special meaning to fossil. Fossil recognizes: <ul> <li> [#manifest | Manifests] </li> |
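The naming rule above means that an artifact's name can be reproduced with nothing more than a standard SHA1 implementation. A minimal sketch of that computation:

<blockquote><pre>
import hashlib

def artifact_name(path):
    """Return the Fossil artifact name of a file: the plain SHA1 of its exact
    byte content, with no prefix or meta information added (as described above).
    This is the same value that "sha1sum" reports for the file."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()
</pre></blockquote>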
︙ | ︙ | |||
82 83 84 85 86 87 88 | A manifest is a text file. Newline characters (ASCII 0x0a) separate the file into "cards". Each card begins with a single character "card type". Zero or more arguments may follow the card type. All arguments are separated from each other and from the card-type character by a single space character. There is no surplus white space between arguments | | | 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 | A manifest is a text file. Newline characters (ASCII 0x0a) separate the file into "cards". Each card begins with a single character "card type". Zero or more arguments may follow the card type. All arguments are separated from each other and from the card-type character by a single space character. There is no surplus white space between arguments and no leading or trailing whitespace except for the newline character that acts as the card separator. All cards of the manifest occur in strict sorted lexicographical order. No card may be duplicated. The entire manifest may be PGP clear-signed, but otherwise it may contain no additional text or data beyond what is described here. |
︙ | ︙ | |||
112 113 114 115 116 117 118 | A manifest may optionally have a single B-card. The B-card specifies another manifest that serves as the "baseline" for this manifest. A manifest that has a B-card is called a delta-manifest and a manifest that omits the B-card is a baseline-manifest. The other manifest identified by the argument of the B-card must be a baseline-manifest. A baseline-manifest records the complete contents of a check-in. | | | | 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 | A manifest may optionally have a single B-card. The B-card specifies another manifest that serves as the "baseline" for this manifest. A manifest that has a B-card is called a delta-manifest and a manifest that omits the B-card is a baseline-manifest. The other manifest identified by the argument of the B-card must be a baseline-manifest. A baseline-manifest records the complete contents of a check-in. A delta-manifest records only changes from its baseline. A manifest must have exactly one C-card. The sole argument to the C-card is a check-in comment that describes the check-in that the manifest defines. The check-in comment is text. The following escape sequences are applied to the text: A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73). A newline (ASCII 0x0a) is "\n" (ASCII 0x5C, 0x6E). A backslash (ASCII 0x5C) is represented as two backslashes "\\". Apart from space and newline, no other whitespace characters are allowed in the check-in comment. Nor are any unprintable characters allowed in the comment. A manifest must have exactly one D-card. The sole argument to the D-card is a date-time stamp in the ISO8601 format. The
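The C-card escaping rules just described are small enough to restate in code. The sketch below is only an illustration of those three escape sequences, not Fossil's own implementation:

<blockquote><pre>
def escape_comment(text):
    """Escape a check-in comment for a C-card: backslash first, then space
    and newline, per the rules described above."""
    return (text.replace("\\", "\\\\")
                .replace(" ", "\\s")
                .replace("\n", "\\n"))

def unescape_comment(card_arg):
    """Reverse the escaping when reading the argument of a C-card."""
    out, i = [], 0
    while i < len(card_arg):
        if card_arg[i] == "\\" and i + 1 < len(card_arg):
            out.append({"s": " ", "n": "\n", "\\": "\\"}.get(card_arg[i+1], card_arg[i+1]))
            i += 2
        else:
            out.append(card_arg[i])
            i += 1
    return "".join(out)

assert unescape_comment(escape_comment("Fix the\nparser bug")) == "Fix the\nparser bug"
</pre></blockquote>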
︙ | ︙ | |||
165 166 167 168 169 170 171 | A manifest has zero or one N-cards. The N-card specifies the mimetype for the text in the comment of the C-card. If the N-card is omitted, a default mimetype is used. A manifest has zero or one P-cards. Most manifests have one P-card. The P-card has a varying number of arguments that define other manifests from which the current manifest | | | | | | | | 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | A manifest has zero or one N-cards. The N-card specifies the mimetype for the text in the comment of the C-card. If the N-card is omitted, a default mimetype is used. A manifest has zero or one P-cards. Most manifests have one P-card. The P-card has a varying number of arguments that define other manifests from which the current manifest is derived. Each argument is a 40-character lowercase hexadecimal SHA1 of a predecessor manifest. All arguments to the P-card must be unique within that card. The first argument is the SHA1 of the direct ancestor of the manifest. Other arguments define manifests with which the first was merged to yield the current manifest. Most manifests have a P-card with a single argument. The first manifest in the project has no ancestors and thus has no P-card or (depending on the Fossil version) an empty P-card (no arguments). A manifest has zero or more Q-cards. A Q-card is similar to a P-card in that it defines a predecessor to the current check-in. But whereas a P-card defines the immediate ancestor or a merge ancestor, the Q-card is used to identify a single check-in or a small range of check-ins which were cherry-picked for inclusion in or exclusion from the current manifest. The first argument of the Q-card is the artifact ID of another manifest (the "target") which has had its changes included or excluded in the current manifest. The target is preceded by "+" or "-" to show inclusion or exclusion, respectively. The optional second argument to the Q-card is another manifest artifact ID which is the "baseline" for the cherry-pick. If omitted, the baseline is the primary parent of the target. The changes included or excluded consist of all changes moving from the baseline to the target. The Q-card was added to the interface specification on 2011-02-26. Older versions of Fossil will reject manifests that contain Q-cards. A manifest may optionally have a single R-card. The R-card has a single argument which is the MD5 checksum of all files in the check-in except the manifest itself. The checksum is expressed as 32 characters of lowercase hexadecimal. The checksum is computed as follows: For each file in the check-in (except for the manifest itself) in strict sorted lexicographical order, take the pathname of the file relative to the root of the repository, append a single space (ASCII 0x20), the size of the file in ASCII decimal, a single newline character (ASCII 0x0A), and the complete text of the file. Compute the MD5 checksum of the result. A manifest might contain one or more T-cards used to set |
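The R-card computation described above translates almost directly into code. This is an illustrative sketch under the assumption that each file's content is available as bytes; it is not the code Fossil itself uses.

<blockquote><pre>
import hashlib

def r_card(files):
    """files: dict mapping repository-relative pathname -> content (bytes).
    Returns the 32-character lowercase hex MD5 for the R card: for each file
    in strict sorted order, the pathname, a space, the size in ASCII decimal,
    a newline, and then the complete file content (as described above)."""
    md5 = hashlib.md5()
    for path in sorted(files):
        content = files[path]
        md5.update(path.encode("utf-8"))
        md5.update(b" %d\n" % len(content))
        md5.update(content)
    return md5.hexdigest()
</pre></blockquote>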
︙ | ︙ | |||
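The R-card checksum described above is simple enough to restate as a short script. The following Tcl sketch is illustrative only: it assumes the tcllib md5 package is available and that it is handed the repository-relative pathnames of every file in the check-in except the manifest itself.
<blockquote><pre>
package require md5   ;# assumes the tcllib md5 package is installed

proc rcard {filelist} {
    set blob ""
    # visit the files in strict sorted lexicographical order
    foreach path [lsort $filelist] {
        set fd [open $path rb]
        set data [read $fd]
        close $fd
        # pathname, one space, size in ASCII decimal, one newline, file content
        append blob $path " " [string length $data] "\n" $data
    }
    # report the MD5 as 32 characters of lowercase hexadecimal
    return [string tolower [md5::md5 -hex $blob]]
}
</pre></blockquote>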
226 227 228 229 230 231 232 | Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. A manifest must have a single Z-card as its last line. The argument to the Z-card is a 32-character lowercase hexadecimal MD5 hash | | | | | | | | 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 | Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. A manifest must have a single Z-card as its last line. The argument to the Z-card is a 32-character lowercase hexadecimal MD5 hash of all prior lines of the manifest up to and including the newline character that immediately precedes the "Z". The Z-card is a sanity check to prove that the manifest is well-formed and consistent. A sample manifest from Fossil itself can be seen [/artifact/28987096ac | here]. <a name="cluster"></a> <h2>2.0 Clusters</h2> A cluster is an artifact that declares the existence of other artifacts. Clusters are used during repository synchronization to help reduce network traffic. As such, clusters are an optimization and may be removed from a repository without loss or damage to the underlying project code. Clusters follow a syntax that is very similar to manifests. A cluster is a line-oriented text file. Newline characters (ASCII 0x0a) separate the artifact into cards. Each card begins with a single character "card type". Zero or more arguments may follow the card type. All arguments are separated from each other and from the card-type character by a single space character. There is no surplus white space between arguments and no leading or trailing whitespace except for the newline character that acts as the card separator. All cards of a cluster occur in strict sorted lexicographical order. No card may be duplicated. The cluster may not contain additional text or data beyond what is described here. Unlike manifests, clusters are never PGP signed. Allowed cards in the cluster are as follows: <blockquote> <b>M</b> <i>artifact-id</i><br /> <b>Z</b> <i>checksum</i> </blockquote> A cluster contains one or more "M" cards followed by a single "Z" card. Each M card has a single argument which is the artifact ID of another artifact in the repository. The Z card works exactly like the Z card of a manifest. The argument to the Z card is the lower-case hexadecimal representation of the MD5 checksum of all prior cards in the cluster. The Z-card is required. An example cluster from Fossil can be seen [/artifact/d03dbdd73a2a8 | here]. |
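For illustration, a minimal cluster has the shape shown below; the angle-bracket values are placeholders standing in for 40-character lowercase artifact IDs and for the MD5 checksum of the preceding cards, not references to real artifacts.
<blockquote><pre>
M <artifact-id-1>
M <artifact-id-2>
Z <md5-of-the-M-cards-above>
</pre></blockquote>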
︙ | ︙ | |||
313 314 315 316 317 318 319 | second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag is either "+", "-", or "*". The "+" means the tag should be added to the artifact. The "-" means the tag should be removed. The "*" character means the tag should be added to the artifact and all direct descendants (but not descendants through a merge) down | | | | 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag is either "+", "-", or "*". The "+" means the tag should be added to the artifact. The "-" means the tag should be removed. The "*" character means the tag should be added to the artifact and all direct descendants (but not descendants through a merge) down to but not including the first descendant that contains a more recent "-", "*", or "+" tag with the same name. The optional third argument is the value of the tag. A tag without a value is a Boolean. When two or more tags with the same name are applied to the same artifact, the tag with the latest (most recent) date is used. Some tags have special meaning. The "comment" tag when applied to a check-in will override the check-in comment of that check-in for display purposes. The "user" tag overrides the name of the check-in user. The "date" tag overrides the check-in date. The "branch" tag sets the name of the branch that the check-in belongs to. Symbolic tags begin with the "sym-" prefix. The U card is the name of the user that created the control artifact. The Z card is the usual required artifact checksum. An example control artifact can be seen [/info/9d302ccda8 | here]. <a name="wikichng"></a> <h2>4.0 Wiki Pages</h2>
︙ | ︙ | |||
358 359 360 361 362 363 364 | <b>Z</b> <i>checksum</i> </blockquote> The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card gives the name of the wiki page. The optional N card specifies the mimetype of the wiki text. If the N card is omitted, the | | | 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 | <b>Z</b> <i>checksum</i> </blockquote> The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card gives the name of the wiki page. The optional N card specifies the mimetype of the wiki text. If the N card is omitted, the mimetype is assumed to be text/x-fossil-wiki. The U card specifies the login of the user who made this edit to the wiki page. The Z card is the usual checksum over the entire artifact and is required. The W card is used to specify the text of the wiki page. The argument to the W card is an integer which is the number of bytes of text in the wiki page. That text follows the newline character |
︙ | ︙ | |||
403 404 405 406 407 408 409 | J cards specify changes to the "value" of "fields" in the ticket. If the <i>value</i> parameter of the J card is omitted, then the field is set to an empty string. Each fossil server has a ticket configuration which specifies the fields it understands. The ticket configuration is part of the local state for the repository and thus can vary from one repository to another. | | | 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 | J cards specify changes to the "value" of "fields" in the ticket. If the <i>value</i> parameter of the J card is omitted, then the field is set to an empty string. Each fossil server has a ticket configuration which specifies the fields it understands. The ticket configuration is part of the local state for the repository and thus can vary from one repository to another. Hence a J card might specify a <i>field</i> that does not exist in the local ticket configuration. If a J card specifies a <i>field</i> that is not in the local configuration, then that J card is simply ignored. The first argument of the J card is the field name. The second argument is the field value. If the field name begins with "+" then the value is appended to the prior value. Otherwise, the value on the J card replaces any previous value of the field. The field name and value are both encoded using the character escapes defined for the C card of a manifest. An example ticket-change artifact can be seen [/artifact/91f1ec6af053 | here]. <a name="attachment"></a> <h2>6.0 Attachments</h2> An attachment artifact associates some other artifact that is the attachment (the source artifact) with a ticket or wiki page or technical note to which the attachment is connected (the target artifact). The following cards are allowed on an attachment artifact: <blockquote> <b>A</b> <i>filename target</i> ?<i>source</i>?<br /> <b>C</b> <i>comment</i><br /> <b>D</b> <i>time-and-date-stamp</i><br /> <b>N</b> <i>mimetype</i><br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i> </blockquote> The A card specifies a filename for the attachment in its first argument. The second argument to the A card is the name of the wiki page or ticket or technical note to which the attachment is connected. The third argument is either missing or else it is the 40-character artifact ID of the attachment itself. A missing third argument means that the attachment should be deleted. The C card is an optional comment describing what the attachment is about. The C card is optional, but there can only be one. A single D card is required to give the date and time when the attachment
︙ | ︙ | |||
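Returning to the J cards of a ticket-change artifact described above, the two hypothetical cards below first append text to a "comment" field and then overwrite a "status" field; the field names are only examples, since the fields that actually exist depend on each repository's ticket configuration.
<blockquote><pre>
J +comment Also\sfails\son\s64-bit\sbuilds
J status Fixed
</pre></blockquote>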
485 486 487 488 489 490 491 | <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </blockquote> The C card contains text that is displayed on the timeline for the technote. The C card is optional, but there can only be one. | | | 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 | <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </blockquote> The C card contains text that is displayed on the timeline for the technote. The C card is optional, but there can only be one. A single D card is required to give the date and time when the technote artifact was created. This is different from the time at which the technote appears on the timeline. A single E card gives the time of the technote (the point on the timeline where the technote is displayed) and a unique identifier for the technote. When there are multiple artifacts with the same technote-id, the one with the most recent D card is the only one used. The technote-id must be a |
︙ | ︙ | |||
523 524 525 526 527 528 529 | name means that tags can only be added and they can only be non-propagating tags. In a technote, T cards are normally used to set the background display color for timelines. The optional U card gives the name of the user who entered the technote. A single W card provides wiki text for the document associated with the | | 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 | name means that tags can only be added and they can only be non-propagating tags. In a technote, T cards are normally used to set the background display color for timelines. The optional U card gives the name of the user who entered the technote. A single W card provides wiki text for the document associated with the technote. The format of the W card is exactly the same as for a [#wikichng | wiki artifact]. The Z card is the required checksum over the rest of the artifact. <a name="summary"></a> <h2>8.0 Card Summary</h2>
︙ | ︙ |
Changes to www/fiveminutes.wiki.
1 2 3 4 5 6 7 8 | <title>Up and running in 5 minutes as a single user</title> <p align="center"><b><i> The following document was contributed by Gilles Ganault on 2013-01-08. </i></b> </p><hr> <h1>Up and running in 5 minutes as a single user</h1> | | | | | | | | | | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | <title>Up and running in 5 minutes as a single user</title> <p align="center"><b><i> The following document was contributed by Gilles Ganault on 2013-01-08. </i></b> </p><hr> <h1>Up and running in 5 minutes as a single user</h1> <p>This short document explains the main basic Fossil commands for a single user, i.e. with no additional users, with no need to synchronize with some remote repository, and no need for branching/forking.</p> <h2>Create a new repository</h2> <p>fossil new c:\test.repo</p> <p>This will create the new SQLite binary file that holds the repository, i.e. files, tickets, wiki, etc. It can be located anywhere, although it's considered best practice to keep it outside the work directory where you will work on files after they've been checked out of the repository.</p> <h2>Open the repository</h2> <p>cd c:\temp\test.fossil</p> <p>fossil open c:\test.repo</p> <p>This will check out the last revision of all the files in the repository, if any, into the current work directory. In addition, it will create a binary file _FOSSIL_ to keep track of changes (on non-Windows systems it is called <tt>.fslckout</tt>).</p> <h2>Add new files</h2> <p>fossil add .</p> <p>To tell Fossil to add new files to the repository. The files aren't actually added until you run "commit". When using ".", it tells Fossil to add all the files in the current directory recursively, i.e. including all the files in all the subdirectories.</p> <p>Note: To tell Fossil to ignore some extensions:</p> <p>fossil settings ignore-glob "*.o,*.obj,*.exe" --global</p> <h2>Remove files that haven't been committed yet</h2> <p>fossil delete myfile.c</p> <p>This will simply remove the item from the list of files that were previously added through "fossil add".</p> <h2>Check current status</h2> <p>fossil changes</p> <p>This shows the list of changes that have been done and will be committed the next time you run "fossil commit". It's a useful command to run before running "fossil commit" just to check that things are OK before proceeding.</p> <h2>Commit changes</h2> <p>To actually apply the pending changes to the repository, e.g. new files marked for addition, checked-out files that have been edited and must be checked-in, etc.</p> <p>fossil commit -m "Added stuff"</p> If no file names are provided on the command-line then all changes will be checked in, otherwise just the listed file(s) will be checked in. <h2>Compare two revisions of a file</h2> <p>If you wish to compare the last revision of a file and its checked out version in your work directory:</p> <p>fossil gdiff myfile.c</p> <p>If you wish to compare two different revisions of a file in the repository:</p> <p>fossil finfo myfile: Note the first hash, which is the UUID of the commit when the file was committed</p> <p>fossil gdiff --from UUID#1 --to UUID#2 myfile.c</p> <h2>Cancel changes and go back to previous revision</h2> <p>fossil revert myfile.c</p> <p>Fossil does not prompt when reverting a file. 
It simply reminds the user about the "undo" command, just in case the revert was a mistake.</p> <h2>Close the repository</h2> <p>fossil close</p> <p>This will simply remove the _FOSSIL_ file at the root of the work directory but will not delete the files in the work directory. From then on, any use of "fossil" will trigger an error since the directory is no longer connected to a repository.</p>
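Putting the commands of this page together, a complete first session might look like the following; the paths match the examples above and should be adjusted to your own setup.
<blockquote><pre>
fossil new c:\test.repo
mkdir c:\temp\test.fossil
cd c:\temp\test.fossil
fossil open c:\test.repo
fossil add .
fossil changes
fossil commit -m "Added stuff"
</pre></blockquote>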
Changes to www/foss-cklist.wiki.
︙ | ︙ | |||
87 88 89 90 91 92 93 | <li><p>The project has a bug tracker. <li><p>The project has a website. <li><p>Release version numbers are in the traditional X.Y or X.Y.Z format. | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | <li><p>The project has a bug tracker. <li><p>The project has a website. <li><p>Release version numbers are in the traditional X.Y or X.Y.Z format. <li><p>Releases can be downloaded as tarball using gzip or bzip2 compression. <li><p>Releases unpack into a versioned top-level directory. (ex: "projectname-1.2.3/"). <li><p>A statement of license appears at the top of every source code file and the complete text of the license is included in the source code tarball. <li><p>There are no incompatible licenses in the code. <li><p>The project has not been blithely proclaimed "public domain" without having gone through the tedious and exacting legal steps to actually put it in the public domain. <li><p>There is an accurate change log in the code and on the website. <li><p>There is documentation in the code and on the website. </ol> |
Changes to www/fossil-from-msvc.wiki.
︙ | ︙ | |||
9 10 11 12 13 14 15 | <ol type="1"> <li>Tools > Settings > Expert Settings</li> <li>Tools > External Tools, where the items in this list map to "External Tool X" that we'll add to our own Fossil menu later: </li> <ol type="1"> <li>Rename the default "[New Tool 1]" to eg. | | | | | | | | | | | | | 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | <ol type="1"> <li>Tools > Settings > Expert Settings</li> <li>Tools > External Tools, where the items in this list map to "External Tool X" that we'll add to our own Fossil menu later: </li> <ol type="1"> <li>Rename the default "[New Tool 1]" to eg. "Commit" 2. </li> <li>Change Command to where Fossil is located eg. "c:\fossil.exe"</li> <li>Change Arguments to the required command, eg. "commit -m". The user will be prompted to type the comment that Commit expects</li> <li>Set "Initial Directory" to point it to the work directory where the source files are currently checked out by Fossil (eg. c:\Workspace). It's also possible to use system variables such as "$(ProjectDir)" instead of hard-coding the path</li> <li>Check "Prompt for arguments", since Commit requires typing a comment. Useless for commands like Changes that don't require arguments</li> <li>Uncheck "Close on Exit", so we can see what Fossil says before closing the DOS box. Note that "Use Output Window" will display the output in a child window within the IDE instead of opening a DOS box</li> <li>Click on OK</li> </ol> <li>Tools > Customize > Commands</li> <ol type="1"> <li>With "Menu bar = Menu Bar" selected, click on "Add New Menu". A new "Fossil" menu is displayed in the IDE's menu bar</li> <li>Click on "Modify Selection" to rename it "Fossil", and...</li> <li>Use the "Move Down" button to move it lower in the list</li> </ol> <li>Still in Customize dialog: In the "Menu bar" combo, select the new Fossil menu you just created, and Click on "Add Command...": From Categories, select Tools, and select "External Command 1". Click on Close. It's unfortunate that the IDE doesn't say which command maps to "External Command X".</li> </ol> |
Changes to www/index.wiki.
1 2 3 4 5 6 | <title>Home</title> <h3>What Is Fossil?</h3> <div style='width:200px;float:right;border:2px solid #446979;padding:10px;margin:0px 10px;'> <ul> | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 | <title>Home</title> <h3>What Is Fossil?</h3> <div style='width:200px;float:right;border:2px solid #446979;padding:10px;margin:0px 10px;'> <ul> <li> [/uv/download.html | Download] <li> [./quickstart.wiki | Quick Start] <li> [./build.wiki | Install] <li> [../COPYRIGHT-BSD2.txt | License] <li> [./faq.wiki | FAQ] <li> [./changes.wiki | Change Log] <li> [./hacker-howto.wiki | Hacker How-To] <li> [./hints.wiki | Tip & Hints] |
︙ | ︙ | |||
36 37 38 39 40 41 42 | Fossil has a built-in and intuitive [./webui.wiki | web interface] with a rich assortment of information pages ([./webpage-ex.md|examples]) designed to promote situational awareness. This entire website is just a running instance of Fossil. The pages you see here are all [./wikitheory.wiki | wiki] or [./embeddeddoc.wiki | embedded documentation] or (in the case of | | | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | Fossil has a built-in and intuitive [./webui.wiki | web interface] with a rich assortment of information pages ([./webpage-ex.md|examples]) designed to promote situational awareness. This entire website is just a running instance of Fossil. The pages you see here are all [./wikitheory.wiki | wiki] or [./embeddeddoc.wiki | embedded documentation] or (in the case of the [/uv/download.html|download] page) [./unvers.wiki | unversioned files]. When you clone Fossil from one of its [./selfhost.wiki | self-hosting repositories], you get more than just source code - you get this entire website. 3. <b>Self-Contained</b> - Fossil is a single self-contained stand-alone executable. |
︙ | ︙ |
Changes to www/inout.wiki.
1 2 | <title>Import And Export</title> | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 | <title>Import And Export</title> Fossil has the ability to import and export repositories from and to [http://git-scm.com/ | Git]. And since most other version control systems will also import/export from Git, that means that you can import/export a Fossil repository to most version control systems using Git as an intermediary. <h2>Git → Fossil</h2> To import a Git repository into Fossil, run commands like this: <blockquote><pre> cd git-repo git fast-export --all | fossil import --git new-repo.fossil </pre></blockquote> In other words, simply pipe the output of the "git fast-export" command into the "fossil import --git" command. The 3rd argument to the "fossil import" command is the name of a new Fossil repository that is created to hold the Git content. The --git option is not actually required. The git-fast-export file format is currently the only VCS interchange format that Fossil understands. But future versions of Fossil might be enhanced to understand other VCS interchange formats, and so for compatibility, use of the --git option is recommended. <h2>Fossil → Git</h2> To convert a Fossil repository into a Git repository, run commands like this: |
︙ | ︙ | |||
41 42 43 44 45 46 47 | "fossil export --git" command into the "git fast-import" command. Note that the "fossil export --git" command only exports the versioned files. Tickets and wiki and events are not exported, since Git does not understand those concepts. As with the "import" command, the --git option is not required | | | > > > > > | > > > | > > > | > > > | > > > | > > > | > > > > > > > > > > | > > | > > > > > > > | | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 | "fossil export --git" command into the "git fast-import" command. Note that the "fossil export --git" command only exports the versioned files. Tickets and wiki and events are not exported, since Git does not understand those concepts. As with the "import" command, the --git option is not required since the git-fast-export file format is currently the only VCS interchange format that Fossil will generate. However, future versions of Fossil might add the ability to generate other VCS interchange formats, and so for compatibility, the use of the --git option recommended. <h2>Bidirectional Synchronization</h2> Fossil also has the ability to synchronize with a Git repository via repeated imports and/or exports. To do this, it uses marks files to store a record of artifacts which are known by both Git and Fossil to exist at a given point in time. To illustrate, consider the example of a remote Fossil repository that a user wants to import into a local Git repository. First, the user would clone the remote repository and import it into a new Git repository: <blockquote><pre> fossil clone /path/to/remote/repo.fossil repo.fossil mkdir repo cd repo fossil open ../repo.fossil mkdir ../repo.git cd ../repo.git git init . fossil export --git --export-marks ../repo/fossil.marks \ ../repo.fossil | git fast-import \ --export-marks=../repo/git.marks </pre></blockquote> Once the import has completed, the user would need to <tt>git checkout trunk</tt>. At any point after this, new changes can be imported from the remote Fossil repository: <blockquote><pre> cd ../repo fossil pull cd ../repo.git fossil export --git --import-marks ../repo/fossil.marks \ --export-marks ../repo/fossil.marks \ ../repo.fossil | git fast-import \ --import-marks=../repo/git.marks \ --export-marks=../repo/git.marks </pre></blockquote> Changes in the Git repository can be exported to the Fossil repository and then pushed to the remote: <blockquote><pre> git fast-export --import-marks=../repo/git.marks \ --export-marks=../repo/git.marks --all | fossil import --git \ --incremental --import-marks ../repo/fossil.marks \ --export-marks ../repo/fossil.marks ../repo.fossil cd ../repo fossil push </pre></blockquote> |
Changes to www/makefile.wiki.
1 2 3 4 5 6 | <title>The Fossil Build Process</title> <h1>1.0 Introduction</h1> The build process for Fossil is tricky in that the source code needs to be processed by three different preprocessor programs | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | <title>The Fossil Build Process</title> <h1>1.0 Introduction</h1> The build process for Fossil is tricky in that the source code needs to be processed by three different preprocessor programs before it is compiled. Most users will download a [http://www.fossil-scm.org/download.html | precompiled binary] so this is of no consequence to them, and even those who want to compile the code themselves can use one of the [./build.wiki | existing makefiles]. So most people do not need to be concerned with the build complexities of Fossil. But hard-core developers who desire a deep understanding of how Fossil is put together can benefit from reviewing this article. <a name="srctour"></a> <h1>2.0 Source Code Tour</h1> The source code for Fossil is found in the [/dir?ci=trunk&name=src | src/] subdirectory of the source tree. The src/ subdirectory contains all code, including the code for the separate preprocessor programs. Each preprocessor program is a separate C program implemented in a single file of C source code. The three preprocessor programs are:
︙ | ︙ | |||
48 49 50 51 52 53 54 | shell.c file from the SQLite release. The TH1 script engine is implemented using files: 7. th.c 8. th.h | | | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | shell.c file from the SQLite release. The TH1 script engine is implemented using files: 7. th.c 8. th.h These two files are imports like the SQLite source files, and so are not preprocessed. The VERSION.h header file is generated from other information sources using a small program called: 9. mkversion.c |
︙ | ︙ | |||
84 85 86 87 88 89 90 | Finally, there is one of the makefiles generated by makemake.tcl: 13. main.mk The main.mk makefile is invoked from the Makefile in the top-level directory. The main.mk is generated by makemake.tcl and should not | | | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | Finally, there is one of the makefiles generated by makemake.tcl: 13. main.mk The main.mk makefile is invoked from the Makefile in the top-level directory. The main.mk is generated by makemake.tcl and should not be hand edited. Other makefiles generated by makemake.tcl are in other subdirectories (currently all in the win/ subdirectory). All the other files in the src/ subdirectory (79 files at the time of this writing) are C source code files that are subject to the preprocessing steps described below. In the sequel, we will call these other files "src.c" in order to have a convenient name. The reader should understand that whenever "src.c" or "src.h" is used in the text |
︙ | ︙ | |||
107 108 109 110 111 112 113 | "manifest.uuid", and "VERSION" source files in the root directory of the source tree. (The "manifest" and "manifest.uuid" files are automatically generated and updated by Fossil itself. See the [/help/setting | fossil set manifest] command for additional information.) The VERSION.h header file is generated by | | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 | "manifest.uuid", and "VERSION" source files in the root directory of the source tree. (The "manifest" and "manifest.uuid" files are automatically generated and updated by Fossil itself. See the [/help/setting | fossil set manifest] command for additional information.) The VERSION.h header file is generated by a C program: src/mkversion.c. To run the VERSION.h generator, first compile the src/mkversion.c source file into a command-line program (named "mkversion.exe") then run: <blockquote><pre> mkversion.exe manifest.uuid manifest VERSION >VERSION.h </pre></blockquote> The pathnames in the above command might need to be adjusted to get the directories right. The point is that the manifest.uuid, manifest, and VERSION files in the root of the source tree are the three arguments and the generated VERSION.h file appears on standard output. The builtin_data.h header file is generated by a C program: src/mkbuiltin.c. The builtin_data.h file contains C-language byte-array definitions for the content of resource files used by Fossil. To generate the builtin_data.h file, first compile the mkbuiltin.c program, then run: <blockquote><pre> mkbuiltin.exe diff.tcl <i>OtherFiles...</i> >builtin_data.h </pre></blockquote> At the time of this writing, the "diff.tcl" script (a Tcl/Tk script used
︙ | ︙ | |||
161 162 163 164 165 166 167 | </pre></blockquote> Note that "src.c" in the above is a stand-in for the (79) regular source files of Fossil - all source files except for the exceptions described in section 2.0 above. The output of the mkindex program is a header file that is #include-ed by | | | | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 | </pre></blockquote> Note that "src.c" in the above is a stand-in for the (79) regular source files of Fossil - all source files except for the exceptions described in section 2.0 above. The output of the mkindex program is a header file that is #include-ed by the main.c source file during the final compilation step. <h2>4.2 The translate preprocessor</h2> The translate preprocessor looks for lines of source code that begin with "@" and converts those lines into string constants or (depending on context) into special "printf" operations for generating the output of an HTTP request. The translate preprocessor is a simple C program whose sources are in the translate.c source file. The translate preprocess is run on each of the other ordinary source files separately, like this: <blockquote><pre> ./translate src.c >src_.c </pre></blockquote> In this case, the "src.c" file represents any single source file from the set of ordinary source files as described in section 2.0 above. Note that each source file is translated separately. By convention, the names of the translated source files are the names of the input sources with a single "_" character at the end. But a new makefile can use any naming convention it wants - the "_" is not critical to the build process. After being translated, the output files (the "src_.c" files) should be used for all subsequent preprocessing and compilation steps. <h2>4.3 The makeheaders preprocessor</h2> |
︙ | ︙ | |||
207 208 209 210 211 212 213 | is like this: <blockquote><pre> makeheaders src_.c:src.h sqlite3.h th.h VERSION.h </pre></blockquote> In the example above the "src_.c" and "src.h" names represent all of the | | | | 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 | is like this: <blockquote><pre> makeheaders src_.c:src.h sqlite3.h th.h VERSION.h </pre></blockquote> In the example above the "src_.c" and "src.h" names represent all of the (79) ordinary C source files, each as a separate argument. <h1>5.0 Compilation</h1> After all generated files have been created and all ordinary source files have been preprocessed, the generated and preprocessed files can be combined into a single executable using a C compiler. This can be done all at once, or each preprocessed source file can be compiled into a separate object code file and the resulting object code files linked together in a final step. Some files require special C-preprocessor macro definitions. When compiling sqlite.c, the following macros are recommended: |
︙ | ︙ | |||
245 246 247 248 249 250 251 | When compiling the shell.c source file, these macros are required: * -Dmain=sqlite3_main * -DSQLITE_OMIT_LOAD_EXTENSION=1 The "main()" routine in the shell must be changed into sqlite3_main() to prevent it from colliding with the real main() in Fossil, and to give | | | 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 | When compiling the shell.c source file, these macros are required: * -Dmain=sqlite3_main * -DSQLITE_OMIT_LOAD_EXTENSION=1 The "main()" routine in the shell must be changed into sqlite3_main() to prevent it from colliding with the real main() in Fossil, and to give Fossil an entry point to jump to when the [/help/sqlite3 | fossil sql] command is invoked. All the other source code files can be compiled without any special options. <h1>6.0 Linkage</h1> |
︙ | ︙ |
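As a rough, hand-driven sketch of the preprocessing and compilation steps described in this document, the sequence for one ordinary source file might look like the lines below. The compiler invocation and the choice of add.c are assumptions made for the sketch; in a real build the generated makefiles drive all of this, and makeheaders is given every translated source file in a single invocation (abbreviated here with "...").
<blockquote><pre>
./translate src/add.c >add_.c
makeheaders add_.c:add.h ... sqlite3.h th.h VERSION.h
cc -c add_.c -o add.o
cc -c -Dmain=sqlite3_main -DSQLITE_OMIT_LOAD_EXTENSION=1 src/shell.c -o shell.o
</pre></blockquote>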
Changes to www/mkdownload.tcl.
1 2 | #!/usr/bin/tclsh # | | > > | < < < < < < < < < < | < < < < < < < < < < < < < < < < < < | | > > > | > > > > > > | > > | > > > > > | | < < | < | | > | | | | | | | < > | < < < < | > > > | > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | #!/usr/bin/tclsh # # Run this script to build and install the "download.html" page of # unversioned comment. # # Also generate the fossil_download_checksums.html page. # # set out [open download.html w] fconfigure $out -encoding utf-8 -translation lf puts $out \ {<div class='fossil-doc' data-title='Download Page'> <center><font size=4>} puts $out \ "<b>To install Fossil →</b> download the stand-alone executable" puts $out \ {and put it on your $PATH. </font><p><small> RPMs available <a href="http://download.opensuse.org/repositories/home:/rmax:/fossil/"> here.</a> Cryptographic checksums for download files are <a href="http://www.hwaci.com/fossil_download_checksums.html">here</a>. </small></p> <table cellpadding="10"> } # Find all unique timestamps. # set in [open {|fossil uv list} rb] while {[gets $in line]>0} { set fn [lindex $line 5] set filesize($fn) [lindex $line 3] if {[regexp -- {-(\d\.\d+)\.(tar\.gz|zip)$} $fn all version]} { set filehash($fn) [lindex $line 1] set avers($version) 1 } } close $in set vdate(1.36) 2016-10-24 set vdate(1.35) 2016-06-14 set vdate(1.34) 2016-11-02 # Do all versions from newest to oldest # foreach vers [lsort -decr -real [array names avers]] { # set hr "../timeline?c=version-$vers;y=ci" set v2 v[string map {. 
_} $vers] set hr "../doc/trunk/www/changes.wiki#$v2" puts $out "<tr><td colspan=6 align=left><hr>" puts $out "<center><b><a href=\"$hr\">Version $vers</a>" if {[info exists vdate($vers)]} { set hr2 "../timeline?c=version-$vers&y=ci" puts $out " (<a href='$hr2'>$vdate($vers)</a>)" } puts $out "</b></center>" puts $out "</td></tr>" puts $out "<tr>" foreach {prefix img desc} { fossil-linux-x86 linux.gif {Linux 3.x x86} fossil-macosx mac.gif {Mac 10.x x86} fossil-openbsd-x86 openbsd.gif {OpenBSD 5.x x86} fossil-w32 win32.gif {Windows} fossil-src src.gif {Source Tarball} } { set glob download/$prefix*-$vers* set filename [array names filesize $glob] if {[info exists filesize($filename)]} { set size [set filesize($filename)] set units bytes if {$size>1024*1024} { set size [format %.2f [expr {$size/(1024.0*1024.0)}]] set units MiB } elseif {$size>1024} { set size [format %.2f [expr {$size/(1024.0)}]] set units KiB } puts $out "<td align=center valign=bottom><a href=\"$filename\">" puts $out "<img src=\"build-icons/$img\" border=0><br>$desc</a><br>" puts $out "$size $units</td>" } else { puts $out "<td> </td>" } } puts $out "</tr>" # # if {[info exists filesize(download/releasenotes-$vers.html)]} { # puts $out "<tr><td colspan=6 align=left>" # set rn [|open uv cat download/releasenotes-$vers.html] # fconfigure $rn -encoding utf-8 # puts $out "[read $rn]" # close $rn # puts $out "</td></tr>" # } } puts $out "<tr><td colspan=5><hr></td></tr>" puts $out {</table></center></div>} close $out # Generate the checksum page # set out [open fossil_download_checksums.html w] fconfigure $out -encoding utf-8 -translation lf puts $out {<html> <title>Fossil Download Checksums</title> <body> <h1 align="center">Checksums For Fossil Downloads</h1> <p>The following table shows the SHA1 checksums for the precompiled binaries available on the <a href="/download.html">Fossil website</a>.</p> <pre>} foreach {line} [split [exec fossil sql "SELECT hash, name FROM unversioned\ WHERE name GLOB '*.tar.gz' OR\ name GLOB '*.zip'"] \n] { set x [split $line |] set hash [lindex $x 0] set nm [file tail [lindex $x 1]] puts $out "$hash $nm" } puts $out {</pre></body></html>} close $out |
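For context, this script shells out to "fossil uv list" and "fossil sql", so it needs to be run from within a checkout whose repository already carries the release files as unversioned content. A run might look like the lines below; publishing the generated page back as unversioned content is an assumed follow-up step, not something the script does itself.
<blockquote><pre>
cd fossil-checkout
tclsh www/mkdownload.tcl
fossil uv add download.html
</pre></blockquote>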
Changes to www/mkindex.tcl.
|
| | | 1 2 3 4 5 6 7 8 | #!/usr/bin/env tclsh # # Run this TCL script to generate a WIKI page that contains a # permuted index of the various documentation files. # # tclsh mkindex.tcl # |
︙ | ︙ | |||
76 77 78 79 80 81 82 | whyusefossil.wiki {Why You Should Use Fossil} whyusefossil.wiki {Benefits Of Version Control} wikitheory.wiki {Wiki In Fossil} /wiki_rules {Wiki Formatting Rules} } set permindex {} | | > > > | | | 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 | whyusefossil.wiki {Why You Should Use Fossil} whyusefossil.wiki {Benefits Of Version Control} wikitheory.wiki {Wiki In Fossil} /wiki_rules {Wiki Formatting Rules} } set permindex {} set stopwords { a about against and are as by for fossil from in of on or should the to use used with } foreach {file title} $doclist { set n [llength $title] regsub -all {\s+} $title { } title lappend permindex [list $title $file 1] for {set i 0} {$i<$n-1} {incr i} { set prefix [lrange $title 0 $i] set suffix [lrange $title [expr {$i+1}] end] set firstword [string tolower [lindex $suffix 0]] if {[lsearch $stopwords $firstword]<0} { lappend permindex [list "$suffix — $prefix" $file 0] } } } set permindex [lsort -dict -index 0 $permindex] set out [open permutedindex.html w] fconfigure $out -encoding utf-8 -translation lf puts $out \ |
︙ | ︙ | |||
116 117 118 119 120 121 122 | book</a> <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul>} foreach entry $permindex { | | > | 119 120 121 122 123 124 125 126 127 128 129 130 131 | book</a> <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul>} foreach entry $permindex { foreach {title file bold} $entry break if {$bold} {set title <b>$title</b>} if {[string match /* $file]} {set file ../../..$file} puts $out "<li><a href=\"$file\">$title</a></li>" } puts $out "</ul></div>" |
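To see what the loops above produce, take one title that appears in the generated index, "Fossil Quick Start Guide" for quickstart.wiki: the script emits the bolded full-title entry plus one rotated entry for every suffix whose first word is not a stopword, so the output contains lines roughly like these (shown in emission order rather than their final sorted positions).
<blockquote><pre>
<li><a href="quickstart.wiki"><b>Fossil Quick Start Guide</b></a></li>
<li><a href="quickstart.wiki">Quick Start Guide — Fossil</a></li>
<li><a href="quickstart.wiki">Start Guide — Fossil Quick</a></li>
<li><a href="quickstart.wiki">Guide — Fossil Quick Start</a></li>
</pre></blockquote>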
Changes to www/newrepo.wiki.
︙ | ︙ | |||
51 52 53 54 55 56 57 | The next thing we need to do is <em>open</em> the repository. To do so we create a working directory and then <tt>cd</tt> to it: <verbatim> stephan@ludo:~/fossil$ mkdir demo stephan@ludo:~/fossil$ cd demo stephan@ludo:~/fossil/demo$ fossil open ../demo.fossil | | | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 | The next thing we need to do is <em>open</em> the repository. To do so we create a working directory and then <tt>cd</tt> to it: <verbatim> stephan@ludo:~/fossil$ mkdir demo stephan@ludo:~/fossil$ cd demo stephan@ludo:~/fossil/demo$ fossil open ../demo.fossil stephan@ludo:~/fossil/demo$ </verbatim> That creates a file called <tt>_FOSSIL_</tt> in the current directory, and this file contains all kinds of fossil-related information about your local repository. You can ignore it for all purposes, but be sure not to accidentally remove it or otherwise damage it - it belongs to fossil, not you. |
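If you want to double-check which repository the new checkout is bound to, "fossil status" reports it; the output below is abridged and illustrative, and the exact fields may vary between Fossil versions.
<blockquote><pre>
stephan@ludo:~/fossil/demo$ fossil status
repository:   /home/stephan/fossil/demo.fossil
local-root:   /home/stephan/fossil/demo/
...
</pre></blockquote>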
︙ | ︙ |
Changes to www/permutedindex.html.
︙ | ︙ | |||
17 18 19 20 21 22 23 | <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul> <li><a href="fiveminutes.wiki">5 Minutes as a Single User — Update and Running in</a></li> <li><a href="fossil-from-msvc.wiki">2010 IDE — Integrating Fossil in the Microsoft Express</a></li> | | | < < | | | | | | | | | | | | | | | | | | | | | | | | | | | | < < | | | | | | | | | | | | | | | | | | < | | | | | | | | | | | | | | | < < | | | | | < | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 | <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul> <li><a href="fiveminutes.wiki">5 Minutes as a Single User — Update and Running in</a></li> <li><a href="fossil-from-msvc.wiki">2010 IDE — Integrating Fossil in the Microsoft Express</a></li> <li><a href="tech_overview.wiki"><b>A Technical Overview Of The Design And Implementation Of Fossil</b></a></li> <li><a href="adding_code.wiki"><b>Adding New Features To Fossil</b></a></li> <li><a href="copyright-release.html">Agreement — Contributor License</a></li> <li><a href="delta_encoder_algorithm.wiki">Algorithm — Fossil Delta Encoding</a></li> <li><a href="blame.wiki">Algorithm Of Fossil — The Annotate/Blame</a></li> <li><a href="blame.wiki">Annotate/Blame Algorithm Of Fossil — The</a></li> <li><a href="customskin.md">Appearance of Web Pages — Theming: Customizing The</a></li> <li><a href="faq.wiki">Asked Questions — Frequently</a></li> <li><a href="password.wiki">Authentication — Password Management And</a></li> <li><a href="whyusefossil.wiki"><b>Benefits Of Version Control</b></a></li> <li><a href="antibot.wiki">Bots — Defense against Spiders and</a></li> <li><a href="private.wiki">Branches — Creating, Syncing, and Deleting Private</a></li> <li><a href="branching.wiki"><b>Branching, Forking, Merging, and Tagging</b></a></li> <li><a href="bugtheory.wiki"><b>Bug Tracking In Fossil</b></a></li> <li><a href="makefile.wiki">Build Process — The Fossil</a></li> <li><a href="aboutcgi.wiki">CGI Works In Fossil — How</a></li> <li><a href="changes.wiki">Changelog — Fossil</a></li> <li><a href="checkin_names.wiki"><b>Check-in And Version Names</b></a></li> <li><a href="checkin.wiki"><b>Check-in Checklist</b></a></li> <li><a href="checkin.wiki">Checklist — Check-in</a></li> <li><a href="../test/release-checklist.wiki">Checklist — Pre-Release Testing</a></li> <li><a href="foss-cklist.wiki"><b>Checklist For Successful Open-Source Projects</b></a></li> <li><a href="selfcheck.wiki">Checks — Fossil Repository Integrity Self</a></li> <li><a href="childprojects.wiki"><b>Child Projects</b></a></li> <li><a href="contribute.wiki">Code or Documentation To The Fossil Project — Contributing</a></li> <li><a href="style.wiki">Code Style Guidelines — Source</a></li> <li><a 
href="../../../help">Commands and Webpages — Lists of</a></li> <li><a href="build.wiki"><b>Compiling and Installing Fossil</b></a></li> <li><a href="concepts.wiki">Concepts — Fossil Core</a></li> <li><a href="server.wiki">Configure A Fossil Server — How To</a></li> <li><a href="shunning.wiki">Content From Fossil — Shunning: Deleting</a></li> <li><a href="contribute.wiki"><b>Contributing Code or Documentation To The Fossil Project</b></a></li> <li><a href="copyright-release.html"><b>Contributor License Agreement</b></a></li> <li><a href="whyusefossil.wiki">Control — Benefits Of Version</a></li> <li><a href="concepts.wiki">Core Concepts — Fossil</a></li> <li><a href="newrepo.wiki">Create A New Fossil Repository — How To</a></li> <li><a href="private.wiki"><b>Creating, Syncing, and Deleting Private Branches</b></a></li> <li><a href="qandc.wiki">Criticisms — Questions And</a></li> <li><a href="customskin.md">Customizing The Appearance of Web Pages — Theming:</a></li> <li><a href="custom_ticket.wiki"><b>Customizing The Ticket System</b></a></li> <li><a href="customgraph.md">Customizing the Timeline Graph — Theming:</a></li> <li><a href="tech_overview.wiki">Databases Used By Fossil — SQLite</a></li> <li><a href="antibot.wiki"><b>Defense against Spiders and Bots</b></a></li> <li><a href="shunning.wiki">Deleting Content From Fossil — Shunning:</a></li> <li><a href="private.wiki">Deleting Private Branches — Creating, Syncing, and</a></li> <li><a href="delta_encoder_algorithm.wiki">Delta Encoding Algorithm — Fossil</a></li> <li><a href="delta_format.wiki">Delta Format — Fossil</a></li> <li><a href="tech_overview.wiki">Design And Implementation Of Fossil — A Technical Overview Of The</a></li> <li><a href="theory1.wiki">Design Of The Fossil DVCS — Thoughts On The</a></li> <li><a href="embeddeddoc.wiki">Documentation — Embedded Project</a></li> <li><a href="contribute.wiki">Documentation To The Fossil Project — Contributing Code or</a></li> <li><a href="theory1.wiki">DVCS — Thoughts On The Design Of The Fossil</a></li> <li><a href="quotes.wiki">DVCSes in General — Quotes: What People Are Saying About Fossil, Git, and</a></li> <li><a href="embeddeddoc.wiki"><b>Embedded Project Documentation</b></a></li> <li><a href="delta_encoder_algorithm.wiki">Encoding Algorithm — Fossil Delta</a></li> <li><a href="encryptedrepos.wiki">Encrypted Repositories — How To Use</a></li> <li><a href="env-opts.md"><b>Environment Variables and Global Options</b></a></li> <li><a href="event.wiki"><b>Events</b></a></li> <li><a href="webpage-ex.md">Examples — Webpage</a></li> <li><a href="inout.wiki">Export To And From Git — Import And</a></li> <li><a href="fossil-from-msvc.wiki">Express 2010 IDE — Integrating Fossil in the Microsoft</a></li> <li><a href="adding_code.wiki">Features To Fossil — Adding New</a></li> <li><a href="fileformat.wiki">File Format — Fossil</a></li> <li><a href="unvers.wiki">Files — Unversioned</a></li> <li><a href="branching.wiki">Forking, Merging, and Tagging — Branching,</a></li> <li><a href="delta_format.wiki">Format — Fossil Delta</a></li> <li><a href="fileformat.wiki">Format — Fossil File</a></li> <li><a href="../../../md_rules">Formatting Rules — Markdown</a></li> <li><a href="../../../wiki_rules">Formatting Rules — Wiki</a></li> <li><a href="changes.wiki"><b>Fossil Changelog</b></a></li> <li><a href="concepts.wiki"><b>Fossil Core Concepts</b></a></li> <li><a href="delta_encoder_algorithm.wiki"><b>Fossil Delta Encoding Algorithm</b></a></li> <li><a href="delta_format.wiki"><b>Fossil Delta 
Format</b></a></li> <li><a href="fileformat.wiki"><b>Fossil File Format</b></a></li> <li><a href="quickstart.wiki"><b>Fossil Quick Start Guide</b></a></li> <li><a href="selfcheck.wiki"><b>Fossil Repository Integrity Self Checks</b></a></li> <li><a href="selfhost.wiki"><b>Fossil Self Hosting Repositories</b></a></li> <li><a href="settings.wiki"><b>Fossil Settings</b></a></li> <li><a href="hints.wiki"><b>Fossil Tips And Usage Hints</b></a></li> <li><a href="fossil-v-git.wiki"><b>Fossil Versus Git</b></a></li> <li><a href="quotes.wiki">Fossil, Git, and DVCSes in General — Quotes: What People Are Saying About</a></li> <li><a href="faq.wiki"><b>Frequently Asked Questions</b></a></li> <li><a href="quotes.wiki">General — Quotes: What People Are Saying About Fossil, Git, and DVCSes in</a></li> <li><a href="fossil-v-git.wiki">Git — Fossil Versus</a></li> <li><a href="inout.wiki">Git — Import And Export To And From</a></li> <li><a href="quotes.wiki">Git, and DVCSes in General — Quotes: What People Are Saying About Fossil,</a></li> <li><a href="env-opts.md">Global Options — Environment Variables and</a></li> <li><a href="customgraph.md">Graph — Theming: Customizing the Timeline</a></li> <li><a href="quickstart.wiki">Guide — Fossil Quick Start</a></li> <li><a href="style.wiki">Guidelines — Source Code Style</a></li> <li><a href="hacker-howto.wiki"><b>Hacker How-To</b></a></li> <li><a href="adding_code.wiki"><b>Hacking Fossil</b></a></li> <li><a href="hints.wiki">Hints — Fossil Tips And Usage</a></li> <li><a href="index.wiki"><b>Home Page</b></a></li> <li><a href="selfhost.wiki">Hosting Repositories — Fossil Self</a></li> <li><a href="aboutcgi.wiki"><b>How CGI Works In Fossil</b></a></li> <li><a href="server.wiki"><b>How To Configure A Fossil Server</b></a></li> <li><a href="newrepo.wiki"><b>How To Create A New Fossil Repository</b></a></li> <li><a href="encryptedrepos.wiki"><b>How To Use Encrypted Repositories</b></a></li> <li><a href="hacker-howto.wiki">How-To — Hacker</a></li> <li><a href="fossil-from-msvc.wiki">IDE — Integrating Fossil in the Microsoft Express 2010</a></li> <li><a href="tech_overview.wiki">Implementation Of Fossil — A Technical Overview Of The Design And</a></li> <li><a href="inout.wiki"><b>Import And Export To And From Git</b></a></li> <li><a href="build.wiki">Installing Fossil — Compiling and</a></li> <li><a href="fossil-from-msvc.wiki"><b>Integrating Fossil in the Microsoft Express 2010 IDE</b></a></li> <li><a href="selfcheck.wiki">Integrity Self Checks — Fossil Repository</a></li> <li><a href="webui.wiki">Interface — The Fossil Web</a></li> <li><a href="th1.md">Language — The TH1 Scripting</a></li> <li><a href="copyright-release.html">License Agreement — Contributor</a></li> <li><a href="../../../help"><b>Lists of Commands and Webpages</b></a></li> <li><a href="password.wiki">Management And Authentication — Password</a></li> <li><a href="../../../sitemap">Map — Site</a></li> <li><a href="../../../md_rules"><b>Markdown Formatting Rules</b></a></li> <li><a href="branching.wiki">Merging, and Tagging — Branching, Forking,</a></li> <li><a href="fossil-from-msvc.wiki">Microsoft Express 2010 IDE — Integrating Fossil in the</a></li> <li><a href="fiveminutes.wiki">Minutes as a Single User — Update and Running in 5</a></li> <li><a href="checkin_names.wiki">Names — Check-in And Version</a></li> <li><a href="adding_code.wiki">New Features To Fossil — Adding</a></li> <li><a href="newrepo.wiki">New Fossil Repository — How To Create A</a></li> <li><a href="foss-cklist.wiki">Open-Source 
Projects — Checklist For Successful</a></li> <li><a href="pop.wiki">Operations — Principles Of</a></li> <li><a href="env-opts.md">Options — Environment Variables and Global</a></li> <li><a href="tech_overview.wiki">Overview Of The Design And Implementation Of Fossil — A Technical</a></li> <li><a href="index.wiki">Page — Home</a></li> <li><a href="customskin.md">Pages — Theming: Customizing The Appearance of Web</a></li> <li><a href="password.wiki"><b>Password Management And Authentication</b></a></li> <li><a href="quotes.wiki">People Are Saying About Fossil, Git, and DVCSes in General — Quotes: What</a></li> <li><a href="stats.wiki"><b>Performance Statistics</b></a></li> <li><a href="../test/release-checklist.wiki"><b>Pre-Release Testing Checklist</b></a></li> <li><a href="pop.wiki"><b>Principles Of Operations</b></a></li> <li><a href="private.wiki">Private Branches — Creating, Syncing, and Deleting</a></li> <li><a href="makefile.wiki">Process — The Fossil Build</a></li> <li><a href="contribute.wiki">Project — Contributing Code or Documentation To The Fossil</a></li> <li><a href="embeddeddoc.wiki">Project Documentation — Embedded</a></li> <li><a href="foss-cklist.wiki">Projects — Checklist For Successful Open-Source</a></li> <li><a href="childprojects.wiki">Projects — Child</a></li> <li><a href="sync.wiki">Protocol — The Fossil Sync</a></li> <li><a href="faq.wiki">Questions — Frequently Asked</a></li> <li><a href="qandc.wiki"><b>Questions And Criticisms</b></a></li> <li><a href="quickstart.wiki">Quick Start Guide — Fossil</a></li> <li><a href="quotes.wiki"><b>Quotes: What People Are Saying About Fossil, Git, and DVCSes in General</b></a></li> <li><a href="selfhost.wiki">Repositories — Fossil Self Hosting</a></li> <li><a href="encryptedrepos.wiki">Repositories — How To Use Encrypted</a></li> <li><a href="newrepo.wiki">Repository — How To Create A New Fossil</a></li> <li><a href="selfcheck.wiki">Repository Integrity Self Checks — Fossil</a></li> <li><a href="reviews.wiki"><b>Reviews</b></a></li> <li><a href="../../../md_rules">Rules — Markdown Formatting</a></li> <li><a href="../../../wiki_rules">Rules — Wiki Formatting</a></li> <li><a href="fiveminutes.wiki">Running in 5 Minutes as a Single User — Update and</a></li> <li><a href="quotes.wiki">Saying About Fossil, Git, and DVCSes in General — Quotes: What People Are</a></li> <li><a href="th1.md">Scripting Language — The TH1</a></li> <li><a href="selfcheck.wiki">Self Checks — Fossil Repository Integrity</a></li> <li><a href="selfhost.wiki">Self Hosting Repositories — Fossil</a></li> <li><a href="server.wiki">Server — How To Configure A Fossil</a></li> <li><a href="settings.wiki">Settings — Fossil</a></li> <li><a href="shunning.wiki"><b>Shunning: Deleting Content From Fossil</b></a></li> <li><a href="fiveminutes.wiki">Single User — Update and Running in 5 Minutes as a</a></li> <li><a href="../../../sitemap"><b>Site Map</b></a></li> <li><a href="style.wiki"><b>Source Code Style Guidelines</b></a></li> <li><a href="antibot.wiki">Spiders and Bots — Defense against</a></li> <li><a href="tech_overview.wiki"><b>SQLite Databases Used By Fossil</b></a></li> <li><a href="ssl.wiki">SSL with Fossil — Using</a></li> <li><a href="quickstart.wiki">Start Guide — Fossil Quick</a></li> <li><a href="stats.wiki">Statistics — Performance</a></li> <li><a href="style.wiki">Style Guidelines — Source Code</a></li> <li><a href="foss-cklist.wiki">Successful Open-Source Projects — Checklist For</a></li> <li><a href="sync.wiki">Sync Protocol — The Fossil</a></li> <li><a 
href="private.wiki">Syncing, and Deleting Private Branches — Creating,</a></li> <li><a href="custom_ticket.wiki">System — Customizing The Ticket</a></li> <li><a href="tickets.wiki">System — The Fossil Ticket</a></li> <li><a href="branching.wiki">Tagging — Branching, Forking, Merging, and</a></li> <li><a href="tech_overview.wiki">Technical Overview Of The Design And Implementation Of Fossil — A</a></li> <li><a href="../test/release-checklist.wiki">Testing Checklist — Pre-Release</a></li> <li><a href="th1.md">TH1 Scripting Language — The</a></li> <li><a href="blame.wiki"><b>The Annotate/Blame Algorithm Of Fossil</b></a></li> <li><a href="makefile.wiki"><b>The Fossil Build Process</b></a></li> <li><a href="sync.wiki"><b>The Fossil Sync Protocol</b></a></li> <li><a href="tickets.wiki"><b>The Fossil Ticket System</b></a></li> <li><a href="webui.wiki"><b>The Fossil Web Interface</b></a></li> <li><a href="th1.md"><b>The TH1 Scripting Language</b></a></li> <li><a href="customskin.md"><b>Theming: Customizing The Appearance of Web Pages</b></a></li> <li><a href="customgraph.md"><b>Theming: Customizing the Timeline Graph</b></a></li> <li><a href="theory1.wiki"><b>Thoughts On The Design Of The Fossil DVCS</b></a></li> <li><a href="custom_ticket.wiki">Ticket System — Customizing The</a></li> <li><a href="tickets.wiki">Ticket System — The Fossil</a></li> <li><a href="customgraph.md">Timeline Graph — Theming: Customizing the</a></li> <li><a href="hints.wiki">Tips And Usage Hints — Fossil</a></li> <li><a href="bugtheory.wiki">Tracking In Fossil — Bug</a></li> <li><a href="unvers.wiki"><b>Unversioned Files</b></a></li> <li><a href="fiveminutes.wiki"><b>Update and Running in 5 Minutes as a Single User</b></a></li> <li><a href="hints.wiki">Usage Hints — Fossil Tips And</a></li> <li><a href="fiveminutes.wiki">User — Update and Running in 5 Minutes as a Single</a></li> <li><a href="ssl.wiki"><b>Using SSL with Fossil</b></a></li> <li><a href="env-opts.md">Variables and Global Options — Environment</a></li> <li><a href="whyusefossil.wiki">Version Control — Benefits Of</a></li> <li><a href="checkin_names.wiki">Version Names — Check-in And</a></li> <li><a href="fossil-v-git.wiki">Versus Git — Fossil</a></li> <li><a href="webui.wiki">Web Interface — The Fossil</a></li> <li><a href="customskin.md">Web Pages — Theming: Customizing The Appearance of</a></li> <li><a href="webpage-ex.md"><b>Webpage Examples</b></a></li> <li><a href="../../../help">Webpages — Lists of Commands and</a></li> <li><a href="quotes.wiki">What People Are Saying About Fossil, Git, and DVCSes in General — Quotes:</a></li> <li><a href="whyusefossil.wiki"><b>Why You Should Use Fossil</b></a></li> <li><a href="../../../wiki_rules"><b>Wiki Formatting Rules</b></a></li> <li><a href="wikitheory.wiki"><b>Wiki In Fossil</b></a></li> <li><a href="aboutcgi.wiki">Works In Fossil — How CGI</a></li> <li><a href="whyusefossil.wiki">You Should Use Fossil — Why</a></li> </ul></div> |
Changes to www/pop.wiki.
1 2 3 4 5 6 7 8 | <h1>Principles Of Operation</h1> <p> This page attempts to define the foundational principles upon which Fossil is built. </p> <ul> | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | <h1>Principles Of Operation</h1> <p> This page attempts to define the foundational principles upon which Fossil is built. </p> <ul> <li><p>A project consists of source files, wiki pages, trouble tickets, and control files (collectively "artifacts"). All historical copies of all artifacts are saved. The project maintains an audit trail.</p></li> <li><p>A project resides in one or more repositories. Each repository is administered and operates independently of the others.</p></li> <li><p>Each repository has both global and local state. The global state is common to all repositories (or at least has the potential to be shared in common when the repositories are fully synchronized). The local state for each repository is private to that repository. The global state represents the content of the project. The local state identifies the authorized users and access policies for a particular repository.</p></li> <li><p>The global state of a repository is an unordered collection of artifacts. Each artifact is named by its SHA1 hash encoded in lowercase hexadecimal. In many contexts, the name can be abbreviated to a unique prefix. A five- or six-character prefix usually suffices to uniquely identify a file.</p></li> <li><p>Because artifacts are named by their SHA1 hash, all artifacts are immutable. Any change to the content of an artifact also changes the hash that forms the artifact's name, thus creating a new artifact. Both the original version of the artifact and the changed version are preserved under different names.</p></li> <li><p>It is theoretically possible for two artifacts with different content to share the same hash. But finding two such artifacts is so incredibly difficult and unlikely that we consider it to be an impossibility.</p></li> <li><p>The signature of an artifact is the SHA1 hash of the artifact itself, exactly as it would appear in a disk file. No prefix or meta-information about the artifact is added before computing the hash. So you can always find the SHA1 signature of a file by using the "sha1sum" command-line utility.</p></li> <li><p>The artifacts that comprise the global state of a repository
︙ | ︙ |
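Since an artifact's name is nothing more than the SHA1 hash of its exact byte content, the naming rule described above can be checked from an ordinary shell. A minimal sketch, using a hypothetical file name and hash prefix:

<blockquote><pre>
# The 40-character hex output is exactly the name Fossil would give
# an artifact with this file's content.
sha1sum src/main.c

# Inside a checkout, the same content can be retrieved by that name,
# abbreviated to any unique prefix.
fossil artifact a1b2c3d4
</pre></blockquote>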
Changes to www/private.wiki.
︙ | ︙ | |||
39 40 41 42 43 44 45 | visible to other users of the project. <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform | | | 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | visible to other users of the project. <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform application and have separate repositories on your Windows laptop, your Linux desktop, and your iMac. You can transfer private branches between these machines by using the --private option on the "sync", "push", "pull", and "clone" commands. For example, if you are running "fossil server" on your Linux box and you want to clone that repository to your Mac, including all private branches, use: <blockquote><pre> |
︙ | ︙ | |||
65 66 67 68 69 70 71 | you leave the "x" capability turned off on all repositories used for collaboration (repositories to which many people push and pull) and only enable "x" for local repositories when you need to share private branches. Private branch sync only works if you use the --private command-line option. Private branches are never synced via the auto-sync mechanism. Once | | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | you leave the "x" capability turned off on all repositories used for collaboration (repositories to which many people push and pull) and only enable "x" for local repositories when you need to share private branches. Private branch sync only works if you use the --private command-line option. Private branches are never synced via the auto-sync mechanism. Once again, this restriction is designed to make it hard to accidentally push private branches beyond their intended audience. <h2>Purging Private Branches</h2> You can remove all private branches from a repository using this command: <blockquote><pre> fossil scrub --private </pre></blockquote> Note that the above is a permanent and irreversible change. You will be asked to confirm before continuing. Once the private branches are removed, they cannot be retrieved (unless you have synced them to another repository). So be careful with the command. <h2>Additional Notes</h2> All of the features above apply to <u>all</u> private branches in a single repository at once. There is no mechanism in Fossil (currently) that allows you to push, pull, clone, sync, or scrub an individual private branch within a repository that contains multiple private branches.
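To make the commands discussed above concrete, the following sketch shows a private branch being carried to a second machine and later purged; the URL, user name, and file names are placeholders:

<blockquote><pre>
# Clone including private branches (needs the "x" capability remotely)
fossil clone --private http://dev@desktop.local:8080/ laptop.fossil

# Later, permanently remove every private branch from this copy
fossil scrub --private laptop.fossil
</pre></blockquote>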
Changes to www/qandc.wiki.
︙ | ︙ | |||
20 21 22 23 24 25 26 | <ol> <li> Integrated <a href="wikitheory.wiki">wiki</a>. </li> <li> Integrated <a href="bugtheory.wiki">bug tracking</a> </li> <li> Immutable artifacts </li> <li> Self-contained, stand-alone executable that can be run in a <a href="http://en.wikipedia.org/wiki/Chroot">chroot jail</a> </li> | | | | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | <ol> <li> Integrated <a href="wikitheory.wiki">wiki</a>. </li> <li> Integrated <a href="bugtheory.wiki">bug tracking</a> </li> <li> Immutable artifacts </li> <li> Self-contained, stand-alone executable that can be run in a <a href="http://en.wikipedia.org/wiki/Chroot">chroot jail</a> </li> <li> Simple, well-defined, <a href="fileformat.wiki">enduring file format</a> </li> <li> Integrated <a href="webui.wiki">web interface</a> </li> </ol> </blockquote> <b>Why should I use this rather than Trac?</b> <blockquote> <ol> <li> Fossil is distributed. You can view and/or edit tickets, wiki, and code while off network, then sync your changes later. With Trac, you can only view and edit tickets and wiki while you are connected to the server. </li> <li> Fossil is lightweight and fully self-contained. It is very easy to setup on a low-resource machine. Fossil does not require an administrator.</li> <li> Fossil integrates code versioning into the same repository with wiki and tickets. There is nothing extra to add or install. Fossil is an all-in-one turnkey solution. </li> </ol> </blockquote> <b>Love the concept here. Anyone using this for real work yet?</b> <blockquote> Fossil is <a href="http://www.fossil-scm.org/">self-hosting</a>. In fact, this page was probably delivered to your web-browser via a working fossil instance. The same virtual machine that hosts http://www.fossil-scm.org/ (a <a href="http://www.linode.com/">Linode 720</a>) also hosts 24 other fossil repositories for various small projects. The documentation files for <a href="http://www.sqlite.org/">SQLite</a> are hosted in a fossil repository <a href="http://www.sqlite.org/docsrc/">here</a>, for example. Other projects are also adopting fossil. But fossil does not yet have the massive user base of git or mercurial. </blockquote> <b>Fossil looks like the bug tracker that would be in your Linksys Router's administration screen.</b> <blockquote> <p>I take a pragmatic approach to software: form follows function. To me, it is more important to have a reliable, fast, efficient, enduring, and simple DVCS than one that looks pretty.</p> <p>On the other hand, if you have patches that improve the appearance of Fossil without seriously compromising its reliability, performance, and/or maintainability, I will be happy to accept them. Fossil is self-hosting. Send email to request a password that will let you push to the main fossil repository.</p> </blockquote> <b>It would be useful to have a separate application that keeps the bug-tracking database in a versioned file. That file can then be pushed and pulled along with the rest repository.</b> <blockquote> <p>Fossil already <u>does</u> push and pull bugs along with the files in your repository. But fossil does <u>not</u> track bugs as files in the source tree. That approach to bug tracking was rejected for three reasons:</p> <ol> <li> Check-ins in fossil are immutable. 
So if tickets were part of the check-in, then there would be no way to add new tickets to a check-in as new bugs are discovered. |
︙ | ︙ | |||
106 107 108 109 110 111 112 | be permitted to create tickets. </ol> <p>These points are reiterated in the opening paragraphs of the <a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document.</p> </blockquote> | | | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | be permitted to create tickets. </ol> <p>These points are reiterated in the opening paragraphs of the <a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document.</p> </blockquote> <b>Fossil is already the name of a plan9 versioned append-only filesystem.</b> <blockquote> I did not know that. Perhaps they selected the name for the same reason that I did: because a repository with immutable artifacts preserves an excellent fossil record of a long-running project. </blockquote> |
︙ | ︙ | |||
135 136 137 138 139 140 141 | directly in the VCS - either they are under-featured compared to full software like Trac, or the VCS is massively bloated compared to Subversion or Bazaar.</b> <blockquote> <p>I have no doubt that Trac has many features that fossil lacks. But that is not the point. Fossil has several key features that Trac lacks and that | | | | | 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | directly in the VCS - either they are under-featured compared to full software like Trac, or the VCS is massively bloated compared to Subversion or Bazaar.</b> <blockquote> <p>I have no doubt that Trac has many features that fossil lacks. But that is not the point. Fossil has several key features that Trac lacks and that I need: most notably the fact that fossil supports disconnected operation.</p> <p>As for bloat: Fossil is a single self-contained executable. You do not need any other packages (diff, patch, merge, cvs, svn, rcs, git, python, perl, tcl, apache, sqlite, and so forth) in order to run fossil. Fossil runs just fine in a chroot jail all by itself. And the self-contained fossil executable is much less than 1MB in size. (Update 2015-01-12: Fossil has grown in the years since the previous sentence was written but is still much less than 2MB according to "size" when compiled using -Os on x64 Linux.) Fossil is the very opposite of bloat.</p> </blockquote> </nowiki> |
Changes to www/quickstart.wiki.
1 2 3 4 5 6 7 8 9 | <title>Fossil Quick Start Guide</title> <h1 align="center">Fossil Quick Start</h1> <p>This is a guide to get you started using fossil quickly and painlessly.</p> <h2>Installing</h2> <p>Fossil is a single self-contained C program. You need to | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | <title>Fossil Quick Start Guide</title> <h1 align="center">Fossil Quick Start</h1> <p>This is a guide to get you started using fossil quickly and painlessly.</p> <h2>Installing</h2> <p>Fossil is a single self-contained C program. You need to either download a <a href="http://www.fossil-scm.org/download.html">precompiled binary</a> or <a href="build.wiki">compile it yourself</a> from sources. Install fossil by putting the fossil binary someplace on your $PATH.</p> <a name="fslclone"></a> <h2>General Work Flow</h2> <p>Fossil works with repository files (a database with the project's complete history) and with checked-out local trees (the working directory you use to do your work). The workflow looks like this:</p> |
︙ | ︙ | |||
32 33 34 35 36 37 38 | <p>The following sections will give you a brief overview of these operations.</p> <h2>Starting A New Project</h2> <p>To start a new project with fossil, create a new empty repository this way: ([/help/init | more info]) </p> | | | | | | | | | | | | | | | | | | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | <p>The following sections will give you a brief overview of these operations.</p> <h2>Starting A New Project</h2> <p>To start a new project with fossil, create a new empty repository this way: ([/help/init | more info]) </p> <blockquote> <b>fossil init </b><i> repository-filename</i> </blockquote> <h2>Cloning An Existing Repository</h2> <p>Most fossil operations interact with a repository that is on the local disk drive, not on a remote system. Hence, before accessing a remote repository it is necessary to make a local copy of that repository. Making a local copy of a remote repository is called "cloning".</p> <p>Clone a remote repository as follows: ([/help/clone | more info])</p> <blockquote> <b>fossil clone</b> <i>URL repository-filename</i> </blockquote> <p>The <i>URL</i> specifies the fossil repository you want to clone. The <i>repository-filename</i> is the new local filename into which the cloned repository will be written. For example: <blockquote> <b>fossil clone http://www.fossil-scm.org/ myclone.fossil</b> </blockquote> <p>If the remote repository requires a login, include a userid in the URL like this: <blockquote> <b>fossil clone http://</b><i>userid</i><b>@www.fossil-scm.org/ myclone.fossil</b> </blockquote> <p>You will be prompted separately for the password. Use "%HH" escapes for special characters in the userid. Examples: "%40" in place of "@" and "%2F" in place of "/". <p>If you are behind a restrictive firewall, you might need to <a href="#proxy">specify an HTTP proxy</a>.</p> <p>A Fossil repository is a single disk file. Instead of cloning, you can just make a copy of the repository file (for example, using "scp"). Note, however, that the repository file contains auxiliary information above and beyond the versioned files, including some sensitive information such as password hashes and email addresses. If you want to share Fossil repositories directly, consider running the [/help/scrub|fossil scrub] command to remove sensitive information before transmitting the file. <h2>Importing From Another Version Control System</h2> <p>Rather than start a new project, or clone an existing Fossil project, you might prefer to <a href="./inout.wiki">import an existing Git project</a> into Fossil using the [/help/import | fossil import] command. <h2>Checking Out A Local Tree</h2> <p>To work on a project in fossil, you need to check out a local copy of the source tree. Create the directory you want to be the root of your tree and cd into that directory. Then do this: ([/help/open | more info])</p> <blockquote> <b>fossil open </b><i> repository-filename</i> </blockquote> <p>This leaves you with the newest version of the tree checked out. 
From anywhere underneath the root of your local tree, you can type commands like the following to find out the status of your local tree:</p> <blockquote> <b>[/help/info | fossil info]</b><br> <b>[/help/status | fossil status]</b><br> <b>[/help/changes | fossil changes]</b><br> <b>[/help/diff | fossil diff]</b><br> <b>[/help/timeline | fossil timeline]</b><br> <b>[/help/ls | fossil ls]</b><br> <b>[/help/branch | fossil branch]</b><br> </blockquote> <p>Note that Fossil allows you to make multiple check-outs in separate directories from the same repository. This enables you, for example, to do builds from multiple branches or versions at the same time without having to generate extra clones.</p> <p>To switch a checkout between different versions and branches, use:</p> <blockquote> <b>[/help/update | fossil update]</b><br> <b>[/help/checkout | fossil checkout]</b><br> </blockquote> <p>[/help/update | update] honors the "autosync" option and does a "soft" switch, merging any local changes into the target version, whereas [/help/checkout | checkout] does not automatically sync and does a "hard" switch, overwriting local changes if told to do so.</p> <h2>Configuring Your Local Repository</h2> <p>When you create a new repository, either by cloning an existing project or create a new project of your own, you usually want to do some local configuration. This is easily accomplished using the web-server that is built into fossil. Start the fossil webserver like this: ([/help/ui | more info])</p> <blockquote> <b>fossil ui </b><i> repository-filename</i> </blockquote> <p>You can omit the <i>repository-filename</i> from the command above if you are inside a checked-out local tree.</p> <p>This starts a web server then automatically launches your web browser and makes it point to this web server. If your system has an unusual configuration, fossil might not be able to figure out how to start your web browser. In that case, first tell fossil where to find your web browser using a command like this:</p> <blockquote> <b>fossil setting web-browser </b><i> path-to-web-browser</i> </blockquote> <p>By default, fossil does not require a login for HTTP connections coming in from the IP loopback address 127.0.0.1. You can, and perhaps should, change this after you create a few users.</p> <p>When you are finished configuring, just press Control-C or use the <b>kill</b> command to shut down the mini-server.</p> <h2>Making Changes</h2> <p>To add new files to your project, or remove old files, use these commands:</p> |
︙ | ︙ | |||
192 193 194 195 196 197 198 | </blockquote> <p>You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable.</p> In the default configuration, the [/help/commit|commit] command will also automatically [/help/push|push] your changes, but that | | | | | 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | </blockquote> <p>You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable.</p> In the default configuration, the [/help/commit|commit] command will also automatically [/help/push|push] your changes, but that feature can be disabled. (More information about [./concepts.wiki#workflow|autosync] and how to disable it.) Remember that your coworkers can not see your changes until you commit and push them.</p> <h2>Sharing Changes</h2> <p>When [./concepts.wiki#workflow|autosync] is turned off, the changes you [/help/commit | commit] are only on your local repository. To share those changes with other repositories, do:</p> <blockquote> <b>[/help/push | fossil push]</b> <i>URL</i> </blockquote> |
︙ | ︙ | |||
239 240 241 242 243 244 245 | date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on.</p> <p>The default behavior is for [./concepts.wiki#workflow|autosync] to be turned on. That means that a [/help/pull|pull] automatically occurs | | | 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on.</p> <p>The default behavior is for [./concepts.wiki#workflow|autosync] to be turned on. That means that a [/help/pull|pull] automatically occurs when you run [/help/update|update] and a [/help/push|push] happens automatically after you [/help/commit|commit]. So in normal practice, the push, pull, and sync commands are rarely used. But it is important to know about them, all the same.</p> <blockquote> <b>[/help/checkout | fossil checkout]</b> <i>VERSION</i> </blockquote>
︙ | ︙ | |||
340 341 342 343 344 345 346 | <ul> <li>[./server.wiki#inetd|inetd/xinetd] <li>[./server.wiki#cgi|CGI] <li>[./server.wiki#scgi|SCGI] </ul> <p>The [./selfhost.wiki | self-hosting fossil repositories] use | | | 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 | <ul> <li>[./server.wiki#inetd|inetd/xinetd] <li>[./server.wiki#cgi|CGI] <li>[./server.wiki#scgi|SCGI] </ul> <p>The [./selfhost.wiki | self-hosting fossil repositories] use CGI. <a name="proxy"></a> <h2>HTTP Proxies</h2> <p>If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using |
︙ | ︙ | |||
380 381 382 383 384 385 386 | </blockquote> <p>Or unset the environment variable. The fossil setting for the HTTP proxy takes precedence over the environment variable and the command-line option overrides both. If you have a persistent proxy setting that you want to override for a one-time sync, that is easily done on the command-line. For example, to sync with | | | 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 | </blockquote> <p>Or unset the environment variable. The fossil setting for the HTTP proxy takes precedence over the environment variable and the command-line option overrides both. If you have a persistent proxy setting that you want to override for a one-time sync, that is easily done on the command-line. For example, to sync with a co-worker's repository on your LAN, you might type:</p> <blockquote> <b>fossil sync http://192.168.1.36:8080/ --proxy off</b> </blockquote> <h2>More Hints</h2> <p>A [/help | complete list of commands] is available, as is the [./hints.wiki|helpful hints] document. See the [./permutedindex.html#pindex|permuted index] for additional documentation. <p>Explore and have fun!</p>
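As a compact recap of the quick-start material above, a first session against an existing project might look like the following; the URL and file names are placeholders:

<blockquote><pre>
fossil clone https://example.com/project project.fossil
mkdir project && cd project
fossil open ../project.fossil
# ...edit files...
fossil add notes.txt
fossil commit -m "Add design notes"   # pushes automatically when autosync is on
</pre></blockquote>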
Changes to www/quotes.wiki.
1 2 3 4 5 6 7 8 9 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git:</h2> <ol> | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git:</h2> <ol> <li>Git approaches the usability of iptables, which is to say, utterly unusable unless you have the manpage tattooed on you arm. <blockquote> <i>by mml at [http://news.ycombinator.com/item?id=1433387]</i> </blockquote> <li><nowiki>It's simplest to think of the state of your [git] repository as a point in a high-dimensional "code-space", in which branches are represented as n-dimensional membranes, mapping the spatial loci of successive commits onto the projected manifold of each cloned repository.</nowiki> <blockquote> <i>At [http://tartley.com/?p=1267]</i> </blockquote> <li>Git is not a Prius. Git is a Model T. Its plumbing and wiring sticks out all over the place. You have to be a mechanic to operate it successfully or you'll be stuck on the side of the road when it breaks down. And it <b>will</b> break down. <blockquote> <i>Nick Farina at [http://nfarina.com/post/9868516270/git-is-simpler]</i> </blockquote> <li>Initial revision of "git", The information manager from hell <blockquote> <i>Linus Torvalds - 2005-04-07 22:13:13<br> Commit comment on the very first source-code check-in for git </blockquote> <li>I've been experimenting a lot with git at work. Damn, it's complicated. It has things to trip you up with that sane people just wouldn't ever both with including the ability to allow you to commit stuff in such a way that you can't find it again afterwards (!!!) Demented workflow complexity on acid? <p>* dkf really wishes he could use fossil instead</p> <blockquote> |
︙ | ︙ | |||
102 103 104 105 106 107 108 | I'm glad to be able to replace Git in every place that I possibly can with Fossil. <blockquote> <i>Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] </blockquote> | | | | | | | | | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | I'm glad to be able to replace Git in every place that I possibly can with Fossil. <blockquote> <i>Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] </blockquote> <li>Fossil is awesome!!! I have never seen an app like that before, such simplicity and flexibility!!! <blockquote> <i>zengr at [http://stackoverflow.com/questions/138621/best-version-control-for-lone-developer]</i> </blockquote> <li>This is my favourite VCS. I can carry it on a USB. And it's a complete system, with it's own server, ticketing system, Wiki pages, and a very, very helpful timeline visualization. And the entire program in a single file! <blockquote> <i>thunderbong commenting on hacker news: [https://news.ycombinator.com/item?id=9131619]</i> </blockquote> </ol> <h2>On Git Versus Fossil</h2> <ol> <li value=15> Just want to say thanks for fossil making my life easier.... Also <nowiki>[for]</nowiki> not having a misanthropic command line interface. <blockquote> <i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i> </blockquote> <li>We use it at a large university to manage code that small teams write. The runs everywhere, ease of installation and portability is something that seems to be a good fit with the environment we have (highly ditrobuted, sometimes very restrictive firewalls, OSX/Win/Linux). We are happy with it and teaching a Msc/Phd student (read complete novice) fossil has just been a smoother ride than Git was. <blockquote> <i>viablepanic at [http://www.reddit.com/r/programming/comments/bxcto/why_not_fossil_scm/]</i> </blockquote> <li>In the fossil community - and hence in fossil itself - development history is pretty much sacrosanct. The very name "fossil" was to chosen to reflect the unchanging nature of things in that history. <p>In git (or rather, the git community), the development history is part of the published aspect of the project, so it provides tools for rearranging that history so you can present what you "should" have done rather than what you actually did. |
︙ | ︙ |
Changes to www/reviews.wiki.
1 2 | <title>Reviews</title> <b>External links:</b> | | | 1 2 3 4 5 6 7 8 9 10 | <title>Reviews</title> <b>External links:</b> * [http://nixtu.blogspot.com/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] * [http://blog.mired.org/2011/02/fossil-sweet-spot-in-vcs-space.html | Fossil - a sweet spot in the VCS space] by Mike Meyer. * [http://blog.s11n.net/?p=72|Four reasons to take a closer look at the Fossil SCM] by Stephan Beal <b>See Also:</b> |
︙ | ︙ | |||
20 21 22 23 24 25 26 | single .exe applications! </blockquote> <b>Joshua Paine on 2010-10-22:</b> <blockquote> | | | | | | | | | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | single .exe applications! </blockquote> <b>Joshua Paine on 2010-10-22:</b> <blockquote> With one of my several hats on, I'm in a small team using git. Another team member just checked some stuff into trunk that should have been on a branch. Nothing else had happened since, so in fossil I would have just edited that commit and put it on a new branch. In git that can't actually be done without danger once other people have pulled, so I had to create a new commit rolling back the changes, then branch and cherry pick the earlier changes, then figure out how to make my new branch shared instead of private. Just want to say thanks for fossil making my life easier on most of my projects, and being able to move commits to another branch after the fact and shared-by-default branches are good features. Also not having a misanthropic command line interface. </blockquote> <b>Stephan Beal writes on 2009-01-11:</b> <blockquote> Sometime in late 2007 I came across a link to fossil on <a href="http://www.sqlite.org/">sqlite.org</a>. It was a good thing I bookmarked it, because I was never able to find the link again (it might have been in a bug report or something). The reasons I first took a close look at it were (A) it stemmed from the sqlite project, which I've held in high regards for years (e.g. I wrote JavaScript bindings for it: <a href="http://spiderape.sourceforge.net/plugins/sqlite/"> |
︙ | ︙ |
Changes to www/selfcheck.wiki.
︙ | ︙ | |||
12 13 14 15 16 17 18 | years now. Many bugs have been encountered. But, thanks in large part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.</p> <h2>Atomic Check-ins With Rollback</h2> The fossil repository is stored in an | | | 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | years now. Many bugs have been encountered. But, thanks in large part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.</p> <h2>Atomic Check-ins With Rollback</h2> The fossil repository is stored in an <a href="http://www.sqlite.org/">SQLite</a> database file. ([./tech_overview.wiki | Additional information] about the repository file format.) SQLite is very mature and stable and has been in wide-spread use for many years, so we are confident it will not cause repository corruption. SQLite databases do not corrupt even if a program or system crash or power failure occurs in the middle of the update. If some kind of crash
︙ | ︙ | |||
59 60 61 62 63 64 65 | the SHA1 checksum again, and verifies that the checksums match. If anything does not match up, an error message is printed and the transaction rolls back. So, in other words, fossil always checks to make sure it can re-extract a file before it commits a change to that file. Hence bugs in fossil are unlikely to corrupt the repository in | | | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | the SHA1 checksum again, and verifies that the checksums match. If anything does not match up, an error message is printed and the transaction rolls back. So, in other words, fossil always checks to make sure it can re-extract a file before it commits a change to that file. Hence bugs in fossil are unlikely to corrupt the repository in a way that prevents us from extracting historical versions of files. <h2>Checksum Over All Files In A Check-in</h2> Manifest artifacts that define a check-in have two fields (the R-card and Z-card) that record MD5 hashes of the manifest itself and of all other files in the manifest. Prior to any check-in |
︙ | ︙ | |||
100 101 102 103 104 105 106 | doing all of the checksumming and verification outlined above. Fossil takes the philosophy of the <a href="http://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare">tortoise</a>: reliability is more important than raw speed. The developers of fossil see no merit in getting the wrong answer quickly. Fossil may not be the fastest versioning system, but it is "fast enough". | | | 100 101 102 103 104 105 106 107 108 | doing all of the checksumming and verification outlined above. Fossil takes the philosophy of the <a href="http://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare">tortoise</a>: reliability is more important than raw speed. The developers of fossil see no merit in getting the wrong answer quickly. Fossil may not be the fastest versioning system, but it is "fast enough". Fossil runs quickly enough to stay out of the developers' way. Most operations complete in under a second.
Changes to www/selfhost.wiki.
1 2 3 4 5 6 7 8 9 10 11 | <title>Fossil Self-Hosting Repositories</title> Fossil has self-hosted since 2007-07-21. As of this writing (2009-08-24) there are three publicly accessible repositories for the Fossil source code: 1. [http://www.fossil-scm.org/] 2. [http://www2.fossil-scm.org/] 3. [http://www3.fossil-scm.org/site.cgi] The canonical repository is (1). Repositories (2) and (3) automatically | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | <title>Fossil Self-Hosting Repositories</title> Fossil has self-hosted since 2007-07-21. As of this writing (2009-08-24) there are three publicly accessible repositories for the Fossil source code: 1. [http://www.fossil-scm.org/] 2. [http://www2.fossil-scm.org/] 3. [http://www3.fossil-scm.org/site.cgi] The canonical repository is (1). Repositories (2) and (3) automatically stay in synchronization with (1) via a <a href="http://en.wikipedia.org/wiki/Cron">cron job</a> that invokes "fossil sync" at regular intervals. Note that the two secondary repositories are more than just read-only mirrors. All three servers support full read/write capabilities. Changes (such as new tickets or wiki or check-ins) can be implemented on any of the three servers and those changes automatically propagate to the other two servers. Server (1) runs as a CGI script on a <a href="http://www.linode.com/">Linode 1024</a> located in Dallas, TX - on the same virtual machine that hosts <a href="http://www.sqlite.org/">SQLite</a> and over a dozen other smaller projects. This demonstrates that Fossil can run on a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> Server (3) runs as a CGI script on a shared hosting account at <a href="http://www.he.net/">Hurricane Electric</a> in Fremont, CA. This server demonstrates the ability of Fossil to run on an economical shared-host web account with no privileges beyond port 80 HTTP access and CGI. It is not necessary to have a dedicated computer with administrator privileges to run Fossil. As far as we are aware, Fossil is the only full-featured configuration management system that can run in such a restricted environment. The CGI script that runs on the Hurricane Electric server is the same as the CGI script shown above, except that the pathnames are modified to suit the environment: <blockquote><pre> #!/home/hwaci/bin/fossil |
︙ | ︙ |
Changes to www/server.wiki.
1 2 3 4 5 6 | <title>How To Configure A Fossil Server</title> <h2>Introduction</h2><blockquote> <p>A server is not necessary to use Fossil, but a server does help in collaborating with peers. A Fossil server also works well as a complete website for a project. For example, the complete [https://www.fossil-scm.org/] website, including the page you are now reading, | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | <title>How To Configure A Fossil Server</title> <h2>Introduction</h2><blockquote> <p>A server is not necessary to use Fossil, but a server does help in collaborating with peers. A Fossil server also works well as a complete website for a project. For example, the complete [https://www.fossil-scm.org/] website, including the page you are now reading, is just a Fossil server displaying the content of the self-hosting repository for Fossil.</p> <p>This article is a guide for setting up your own Fossil server. <p>See "[./aboutcgi.wiki|How CGI Works In Fossil]" for background information on the underlying CGI technology. See "[./sync.wiki|The Fossil Sync Protocol]" for information on the wire protocol used for client/server communication.</p> </blockquote> <h2>Overview</h2><blockquote> There are basically four ways to set up a Fossil server: <ol> <li>A stand-alone server <li>Using inetd or xinetd or stunnel <li>CGI <li>SCGI (a.k.a. SimpleCGI) </ol> Each of these can serve either a single repository, or a directory hierarchy containing many repositories with names ending in ".fossil". </blockquote> <a name="standalone"></a> <h2>Standalone server</h2><blockquote> The easiest way to set up a Fossil server is to use either the [/help/server|server] or the [/help/ui|ui] commands: <ul> <li><b>fossil server</b> <i>REPOSITORY</i> <li><b>fossil ui</b> <i>REPOSITORY</i> </ul> <p> The <i>REPOSITORY</i> argument is either the name of the repository file, or a directory containing many repositories. Both of these commands start a Fossil server, usually on TCP port 8080, though a higher numbered port might also be used if 8080 is already occupied. You can access these using URLs of the form <b>http://localhost:8080/</b>, or if <i>REPOSITORY</i> is a directory, URLs of the form <b>http://localhost:8080/</b><i>repo</i><b>/</b> where <i>repo</i> is the base name of the repository file without the ".fossil" suffix. The difference between "ui" and "server" is that "ui" will also start a web browser and point it to the URL mentioned above, and the "ui" command binds to the loopback IP address (127.0.0.1) only so that the "ui" command cannot be |
︙ | ︙ | |||
73 74 75 76 77 78 79 | program with the arguments shown. Obviously you will need to modify the pathnames for your particular setup. The final argument is either the name of the fossil repository to be served, or a directory containing multiple repositories. </p> <p> | | | | | | | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | program with the arguments shown. Obviously you will need to modify the pathnames for your particular setup. The final argument is either the name of the fossil repository to be served, or a directory containing multiple repositories. </p> <p> If you use a non-standard TCP port on systems where the port-specification must be a symbolic name and cannot be numeric, add the desired name and port to /etc/services. For example, if you want your Fossil server running on TCP port 12345 instead of 80, you will need to add: <blockquote> <pre> fossil 12345/tcp #fossil server </pre> </blockquote> and use the symbolic name ('fossil' in this example) instead of the numeral ('12345') in inetd.conf. For details, see the relevant section in your system's documentation, e.g. the [https://www.freebsd.org/doc/en/books/handbook/network-inetd.html|FreeBSD Handbook] in case you use FreeBSD. </p> <p> If your system is running xinetd, then the configuration is likely to be in the file "/etc/xinetd.conf" or in a subfile of "/etc/xinetd.d". An xinetd configuration file will appear like this:</p> <blockquote> |
︙ | ︙ | |||
117 118 119 120 121 122 123 | In both cases notice that Fossil was launched as root. This is not required, but if it is done, then Fossil will automatically put itself into a chroot jail for the user who owns the fossil repository before reading any information off of the wire. </p> <p> Inetd or xinetd must be enabled, and must be (re)started whenever their configuration | | | | | | 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 | In both cases notice that Fossil was launched as root. This is not required, but if it is done, then Fossil will automatically put itself into a chroot jail for the user who owns the fossil repository before reading any information off of the wire. </p> <p> Inetd or xinetd must be enabled, and must be (re)started whenever their configuration changes - consult your system's documentation for details. </p> <p> [https://www.stunnel.org/ | Stunnel version 5] is an inetd-like process that accepts and decodes SSL-encrypted connections. Fossil can be run directly from stunnel in a manner similar to inetd and xinetd. This can be used to provide a secure link to a Fossil project. The configuration needed to get stunnel5 to invoke Fossil is very similar to the inetd and xinetd examples shown above. The relevant parts of an stunnel configuration might look something like the following: <blockquote><pre><nowiki> [https] accept = www.ubercool-project.org:443 TIMEOUTclose = 0 exec = /usr/bin/fossil execargs = /usr/bin/fossil http /home/fossil/ubercool.fossil --https </nowiki></pre></blockquote> See the stunnel5 documentation for further details about the /etc/stunnel/stunnel.conf configuration file. Note that the [/help/http|fossil http] command should include the --https option to let Fossil know to use "https" instead of "http" as the scheme on generated hyperlinks. <p> Using inetd or xinetd or stunnel is a more complex setup than the "standalone" server, but it has the advantage of only using system resources when an actual connection is attempted. If no-one ever connects to that port, a Fossil server will not (automatically) run. It has the disadvantage of requiring "root" access and therefore may not normally be available to lower-priced "shared" servers on the internet. </p> </blockquote> <a name="cgi"></a> <h2>Fossil as CGI</h2><blockquote> <p> A Fossil server can also be run from an ordinary web server as a CGI program. This feature allows Fossil to be seamlessly integrated into a larger website. CGI is how the [./selfhost.wiki | self-hosting fossil repositories] are implemented. </p> <p> To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory of your web server and having content like this: <blockquote><pre> #!/usr/bin/fossil |
︙ | ︙ | |||
182 183 184 185 186 187 188 | must be readable by the process which executes the CGI.</li> <li>ALL directories leading to the CGI script must also be readable and the CGI script itself must be executable for the user under which it will run (which often differs from the one running the web server - consult your site's documentation or administrator).</li> <li>The repository file AND the directory containing it must be writable by the same account which executes the Fossil binary (again, this might differ from the WWW user). The directory needs to be writable so that sqlite can write its journal files.</li> | | | | 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | must be readable by the process which executes the CGI.</li> <li>ALL directories leading to the CGI script must also be readable and the CGI script itself must be executable for the user under which it will run (which often differs from the one running the web server - consult your site's documentation or administrator).</li> <li>The repository file AND the directory containing it must be writable by the same account which executes the Fossil binary (again, this might differ from the WWW user). The directory needs to be writable so that sqlite can write its journal files.</li> <li>Fossil must be able to create temporary files, the default directory for which depends on the OS. When the CGI process is operating within a chroot, ensure that this directory exists and is readable/writeable by the user who executes the Fossil binary.</li> </ul> </p> <p> Once the script is set up correctly, and assuming your server is also set |
︙ | ︙ | |||
217 218 219 220 221 222 223 | </p> </blockquote> <a name="scgi"></a> <h2>Fossil as SCGI</h2><blockquote> <p> | | | 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 | </p> </blockquote> <a name="scgi"></a> <h2>Fossil as SCGI</h2><blockquote> <p> The [/help/server|fossil server] command, described above as a way of starting a stand-alone web server, can also be used for SCGI. Simply add the --scgi command-line option and the stand-alone server will interpret and respond to the SimpleCGI or SCGI protocol rather than raw HTTP. This can be used in combination with a webserver (such as [http://nginx.org|Nginx]) that does not support CGI. A typical Nginx configuration to support SCGI with Fossil would look something like this: <blockquote><pre> |
︙ | ︙ | |||
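As a sketch of the Fossil half of the SCGI arrangement described above, the server can be told to speak SCGI on a local port that the front-end web server forwards to; the port number and directory below are arbitrary examples:

<blockquote><pre>
# Answer SCGI requests from a front-end (such as Nginx) on port 9000,
# accepting connections from this machine only
fossil server /home/fossil/repos --scgi --localhost --port 9000
</pre></blockquote>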
282 283 284 285 286 287 288 | For more information, see <a href="./ssl.wiki">Using SSL with Fossil</a>. </p> </blockquote> <a name="loadmgmt"></a> <h2>Managing Server Load</h2><blockquote> <p> | | | | | | | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 | For more information, see <a href="./ssl.wiki">Using SSL with Fossil</a>. </p> </blockquote> <a name="loadmgmt"></a> <h2>Managing Server Load</h2><blockquote> <p> A Fossil server is very efficient and normally presents a very light load on the server. The Fossil [./selfhost.wiki | self-hosting server] is a 1/24th slice VM at [http://www.linode.com | Linode.com] hosting 65 other repositories in addition to Fossil (and including some very high-traffic sites such as [http://www.sqlite.org] and [http://system.data.sqlite.org]) and it has a typical load of 0.05 to 0.1. A single HTTP request to Fossil normally takes less than 10 milliseconds of CPU time to complete. So requests can be arriving at a continuous rate of 20 or more per second and the CPU can still be mostly idle. <p> However, there are some Fossil web pages that can consume large amounts of CPU time, especially on repositories with a large number of files or with long revision histories. High CPU usage pages include [/help?cmd=/zip | /zip], [/help?cmd=/tarball | /tarball], [/help?cmd=/annotate | /annotate] and others. On very large repositories, these commands can take 15 seconds or more of CPU time. If these kinds of requests arrive too quickly, the load average on the server can grow dramatically, making the server unresponsive. <p> Fossil provides two capabilities to help avoid server overload problems due to excessive requests to expensive pages: <ol> <li><p>An optional cache is available that remembers the 10 most recently requested /zip or /tarball pages and returns the precomputed answer if the same page is requested again. <li><p>Page requests can be configured to fail with a [http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.3 | "503 Server Overload"] HTTP error if an expensive request is received while the host load average is too high. </ol> Both of these load-control mechanisms are turned off by default, but they are recommended for high-traffic sites. <p> The webpage cache is activated using the [/help?cmd=cache|fossil cache init] command-line on the server. Add a -R option to specify the specific repository |
︙ | ︙ |
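For reference, the web-page cache described above is created and inspected from the command line; the repository path is a placeholder:

<blockquote><pre>
# Create the cache database alongside the repository file
fossil cache init -R /home/fossil/project.fossil

# Check how the cache is being used
fossil cache status -R /home/fossil/project.fossil
fossil cache list -R /home/fossil/project.fossil
</pre></blockquote>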
Changes to www/settings.wiki.
1 2 3 4 5 6 7 8 | <title>Fossil Settings</title> <h2>Using Fossil Settings</h2> Settings control the behaviour of fossil. They are set with the <tt>fossil settings</tt> command, or through the web interface in the Settings page in the Admin section. | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 | <title>Fossil Settings</title> <h2>Using Fossil Settings</h2> Settings control the behaviour of fossil. They are set with the <tt>fossil settings</tt> command, or through the web interface in the Settings page in the Admin section. For a list of all settings, view the Settings page, or type <tt>fossil help settings</tt> from the command line. <h3>Repository settings</h3> Settings are set on a per-repository basis. When you clone a repository, a subset of settings are copied to your local repository. If you make a change to a setting on your local repository, it is not synced back to the server when you <tt>push</tt> or <tt>sync</tt>. If you make a change on the server, you need to manually make the change on all repositories which are cloned from this repository. You can also set a setting globally on your local machine. The value will be used for all repositories cloned to your machine, unless overridden explicitly in a particular repository. Global settings can be set by using the <tt>-global</tt> option on the <tt>fossil settings</tt> command. <h3>"Versionable" settings</h3> Most of the settings control the behaviour of fossil on your local machine, largely acting to reflect your preference on how you want to use Fossil, how you communicate with the server, or options for hosting a repository on the web. |
︙ | ︙ |
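The following sketch shows the difference between a per-repository and a global setting on the command line; "autosync" and "editor" are ordinary settings, and any name listed by <tt>fossil help settings</tt> works the same way:

<blockquote><pre>
fossil settings                       # list every setting and its value
fossil settings autosync off          # applies to this repository only
fossil settings editor "vim" -global  # machine-wide default
fossil unset editor -global           # remove the global value again
</pre></blockquote>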
Changes to www/shunning.wiki.
1 2 3 4 5 6 | <title>Deleting Content From Fossil</title> <h1 align="center">Deleting Content From Fossil</h1> Fossil is designed to keep all historical content forever. Users of Fossil are discouraged from "deleting" content simply because it has become obsolete. Old content is part of the historical record | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 | <title>Deleting Content From Fossil</title> <h1 align="center">Deleting Content From Fossil</h1> Fossil is designed to keep all historical content forever. Users of Fossil are discouraged from "deleting" content simply because it has become obsolete. Old content is part of the historical record (part of the "fossil record") and should be maintained indefinitely. Such is the design intent of Fossil. Nevertheless, there may occasionally arise legitimate reasons for deleting content. Such reasons might include: * Spammers have inserted inappropriate content into a wiki page or ticket that needs to be removed. * A file that contains trade secrets or that is under copyright may have been accidentally committed and needs to be backed out. * A malformed control artifact may have been inserted and is disrupting the operation of Fossil. <h2>Shunning</h2> Fossil provides a mechanism called "shunning" for removing content from a repository. Every Fossil repository maintains a list of the SHA1 hash names of "shunned" artifacts. Fossil will refuse to push or pull any shunned artifact. Furthermore, all shunned artifacts (but not the shunning list itself) are removed from the repository whenever the repository is reconstructed using the "rebuild" command. <h3>Shunning lists are local state</h3> The shunning list is part of the local state of a Fossil repository. In other words, shunning does not propagate to a remote repository using the normal "sync" mechanism. An artifact can be shunned from one repository but be allowed to exist in another. The fact that the shunning list does not propagate is a security feature. If the shunning list propagated then a malicious user (or a bug in the fossil code) might introduce a shun record that would propagate through all repositories in a network and permanently destroy vital information. By refusing to propagate the shunning list, Fossil ensures that no remote user will ever be able to remove information from your personal repositories without your permission. The shunning list does not propagate to a remote repository by the normal "sync" mechanism, but it is still possible to copy shuns from one repository to another using the "configuration" command: <b>fossil configuration pull shun</b> <i>remote-url</i><br> <b>fossil configuration push shun</b> <i>remote-url</i> The two command above will pull or push shunning lists from or to the <i>remote-url</i> indicated and merge the lists on the receiving end. "Admin" privilege on the remote server is required in order to push a shun list. In contrast, the shunning list will be automatically received by default as part of a normal client "pull" operation unless disabled by the "<tt>auto-shun</tt>" setting. Note that the shunning list remains in the repository even after the shunned artifact has been removed. 
This is to prevent the artifact from being reintroduced into the repository the next time it syncs with another repository that has not shunned the artifact. <h3>Managing the shunning list</h3> The complete shunning list for a repository can be viewed by a user with "admin" privilege on the "/shun" URL of the web interface to Fossil. That URL is accessible under the "Admin" button on the default menu bar. Items can be added to or removed from the shunning list. "Sync" operations are inhibited as soon as the artifact is added to the shunning list, but the content of the artifact is not actually removed from the repository until the next time the repository is rebuilt. When viewing individual artifacts with the web interface, "admin" users will usually see a "Shun" option in the submenu that will take them directly to the shunning page and enable that artifact to be shunned with a single additional mouse click. |
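Because shunned content is only physically deleted when the repository is reconstructed, the usual follow-up after adding an artifact to the shun list on the /shun page is a rebuild; the repository name below is a placeholder:

<blockquote><pre>
# Drop the shunned artifacts from the database
fossil rebuild project.fossil

# Optionally also reclaim the freed space
fossil rebuild --vacuum project.fossil
</pre></blockquote>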
Changes to www/stats.wiki.
1 2 3 | <title>Fossil Performance</title> <h1 align="center">Performance Statistics</h1> | | | 1 2 3 4 5 6 7 8 9 10 11 | <title>Fossil Performance</title> <h1 align="center">Performance Statistics</h1> The questions will inevitably arise: How does Fossil perform? Does it use a lot of disk space or bandwidth? Is it scalable? In an attempt to answer these questions, this report looks at several projects that use fossil for configuration management and examines how well they are working. The following table is a summary of the results. (Last updated on 2015-02-28.) Explanation and analysis follows the table.
︙ | ︙ | |||
94 95 96 97 98 99 100 | In Fossil, every version of every file, every wiki page, every change to every ticket, and every check-in is a separate "artifact". One way to think of a Fossil project is as a bag of artifacts. Of course, there is a lot more than this going on in Fossil. Many of the artifacts have meaning and are related to other artifacts. But at a low level (for example when synchronizing two instances of the same project) the only thing that matters | | | 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | In Fossil, every version of every file, every wiki page, every change to every ticket, and every check-in is a separate "artifact". One way to think of a Fossil project is as a bag of artifacts. Of course, there is a lot more than this going on in Fossil. Many of the artifacts have meaning and are related to other artifacts. But at a low level (for example when synchronizing two instances of the same project) the only thing that matters is the unordered collection of artifacts. In fact, one of the key characteristics of Fossil is that the entire project history can be reconstructed simply by scanning the artifacts in an arbitrary order. The number of check-ins is the number of times that the "commit" command has been run. A single check-in might change 3 or 4 files, or it might change dozens or hundreds of files. Regardless of the number of files changed, it still only counts as one check-in. The "Uncompressed Size" is the total size of all the artifacts within the repository assuming they were all uncompressed and stored separately on the disk. Fossil makes use of delta compression between related versions of the same file, and then uses zlib compression on the resulting deltas. The total resulting repository size is shown after the uncompressed size. On the right end of the table, we show the "Clone Bandwidth". This is the total number of bytes sent from server back to the client. The number of
︙ | ︙ |
Changes to www/sync.wiki.
1 2 | <title>The Fossil Sync Protocol</title> | | | 1 2 3 4 5 6 7 8 9 10 | <title>The Fossil Sync Protocol</title> <p>This document describes the wire protocol used to synchronize content between two Fossil repositories.</p> <h2>1.0 Overview</h2> <p>The global state of a fossil repository consists of an unordered collection of artifacts. Each artifact is identified by its SHA1 hash expressed as a 40-character lower-case hexadecimal string. |
︙ | ︙ | |||
20 21 22 23 24 25 26 | SHA1 hashes for this many artifacts can be large. So optimizations are employed that usually reduce the number of SHA1 hashes that need to be shared to a few hundred.</p> <p>Each repository also has local state. The local state determines the web-page formatting preferences, authorized users, ticket formats, and similar information that varies from one repository to another. | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 | SHA1 hashes for this many artifacts can be large. So optimizations are employed that usually reduce the number of SHA1 hashes that need to be shared to a few hundred.</p> <p>Each repository also has local state. The local state determines the web-page formatting preferences, authorized users, ticket formats, and similar information that varies from one repository to another. The local state is not transferred during a sync. The exception is that some local state is transferred during a [/help?cmd=clone|clone] in order to initialize the local state of the new repository. And the [/help?cmd=configuration|config push] and [/help?cmd=configuration|config pull] commands can be used by an administrator to sync local state.</p> <h2>2.0 Transport</h2> <p>All communication between client and server is via HTTP requests. The server is listening for incoming HTTP requests. The client issues one or more HTTP requests and receives replies for each request.</p> <p>The server might be running as an independent server using the <b>server</b> command, or it might be launched from inetd or xinetd using the <b>http</b> command. Or the server might be launched from CGI. (See "[./server.wiki|How To Configure A Fossil Server]" for details.) The specifics of how the server listens for incoming HTTP requests are immaterial to this protocol. The important point is that the server is listening for requests and the client is the issuer of the requests.</p> <p>A single push, pull, or sync might involve multiple HTTP requests. The client maintains state between all requests. But on the server side, each request is independent. The server does not preserve any information about the client from one request to the next.</p> <h4>2.0.1 Encrypted Transport</h4> <p>In the current implementation of Fossil, the server only understands HTTP requests. The client can send either clear-text HTTP requests or encrypted HTTPS requests. But when HTTPS requests are sent, they first must be decrypted by a webserver or proxy before being passed to the Fossil server. This limitation may be relaxed in a future release.</p> <h3>2.1 Server Identification</h3>
︙ | ︙ | |||
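As an illustration of moving local state by hand, the configuration command copies selected configuration areas between repositories; the URL below is a placeholder, and "skin" and "shun" are two of the supported area names:

<blockquote><pre>
fossil configuration pull skin https://example.com/repo
fossil configuration push shun https://example.com/repo
</pre></blockquote>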
406 407 408 409 410 411 412 | <blockquote> <b>uvigot</b> <i>name mtime hash size</i> </blockquote> <p>The <i>name</i> argument is the name of an unversioned file. The <i>mtime</i> is the last modification time of the unversioned file | | | 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | <blockquote> <b>uvigot</b> <i>name mtime hash size</i> </blockquote> <p>The <i>name</i> argument is the name of an unversioned file. The <i>mtime</i> is the last modification time of the unversioned file in seconds since 1970. The <i>hash</i> is the SHA1 hash of the unversioned file content, or "<b>-</b>" if the file has been deleted. The <i>size</i> is the uncompressed size of the file in bytes. <p>When the server sees a "pragma uv-hash" card for which the hash does not match, it sends uvigot cards for every unversioned file that it holds. The client will use this information to figure out which |
︙ | ︙ |
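As an illustration of the uvigot card described above, the following Tcl sketch derives each of the four fields for one local file exactly as the text defines them. It is not part of Fossil; the file name is hypothetical and the tcllib sha1 package is assumed to be available.

```tcl
# Sketch: compute the four fields of a "uvigot" card for a local file.
package require sha1                       ;# tcllib, assumed available

set name  "unversioned/logo.png"           ;# hypothetical unversioned file name
set mtime [file mtime $name]               ;# last modification time, seconds since 1970
set hash  [sha1::sha1 -hex -file $name]    ;# SHA1 of the uncompressed content
set size  [file size $name]                ;# uncompressed size in bytes

puts "uvigot $name $mtime $hash $size"     ;# a deleted file would carry "-" as its hash
```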
Changes to www/th1.md.
︙ | ︙ | |||
172 173 174 175 176 177 178 179 180 181 182 183 184 185 | * tclEval * tclExpr * tclInvoke * tclIsSafe * tclMakeSafe * tclReady * trace * utime * verifyCsrf * wiki Each of the commands above is documented by a block comment above their implementation in the th\_main.c or th\_tcl.c source files. | > > | 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 | * tclEval * tclExpr * tclInvoke * tclIsSafe * tclMakeSafe * tclReady * trace * unversioned content * unversioned list * utime * verifyCsrf * wiki Each of the commands above is documented by a block comment above their implementation in the th\_main.c or th\_tcl.c source files. |
︙ | ︙ | |||
608 609 610 611 612 613 614 615 616 617 618 619 620 621 | <a name="trace"></a>TH1 trace Command ------------------------------------- * trace STRING Generates a TH1 trace message if TH1 tracing is enabled. <a name="utime"></a>TH1 utime Command ------------------------------------- * utime Returns the number of microseconds of CPU time consumed by the current | > > > > > > > > > > > > > > > > > | 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 | <a name="trace"></a>TH1 trace Command ------------------------------------- * trace STRING Generates a TH1 trace message if TH1 tracing is enabled. <a name="unversioned_content"></a>TH1 unversioned content Command ----------------------------------------------------------------- * unversioned content FILENAME Attempts to locate the specified unversioned file and return its contents. An error is generated if the repository is not open or the unversioned file cannot be found. <a name="unversioned_list"></a>TH1 unversioned list Command ----------------------------------------------------------- * unversioned list Returns a list of the names of all unversioned files held in the local repository. An error is generated if the repository is not open. <a name="utime"></a>TH1 utime Command ------------------------------------- * utime Returns the number of microseconds of CPU time consumed by the current |
︙ | ︙ |
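A short, hypothetical TH1 fragment may help show how the two new subcommands are used together. TH1 shares Tcl's syntax; the fragment assumes an open repository that holds an unversioned file named download.html, and it relies on the core foreach and html commands documented earlier in this file.

```tcl
# Hypothetical TH1 fragment: list unversioned files, then fetch one of them.
foreach f [unversioned list] {
  html "<li>$f</li>\n"
}
# Raises an error if the repository is not open or the file does not exist.
set page [unversioned content "download.html"]
```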
Changes to www/theory1.wiki.
︙ | ︙ | |||
14 15 16 17 18 19 20 | because Fossil is a distributed NoSQL database. And, Fossil does use a modern high-level language for its implementation, namely SQL. <h2>Fossil Is A NoSQL Database</h2> We begin with the first question: Fossil is not based on a distributed NoSQL database because Fossil <u><i>is</i></u> a distributed NoSQL database. | | | | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | because Fossil is a distributed NoSQL database. And, Fossil does use a modern high-level language for its implementation, namely SQL. <h2>Fossil Is A NoSQL Database</h2> We begin with the first question: Fossil is not based on a distributed NoSQL database because Fossil <u><i>is</i></u> a distributed NoSQL database. Fossil is <u>not</u> based on SQLite. The current implementation of Fossil uses SQLite as a local store for the content of the distributed database and as a cache for meta-information about the distributed database that is precomputed for quick and easy presentation. But the use of SQLite in this role is an implementation detail and is not fundamental to the design. Some future version of Fossil might do away with SQLite and substitute a pile-of-files or a key/value database in place of SQLite. (Actually, that is very unlikely to happen since SQLite works amazingly well in its current role, but the point is that omitting SQLite from Fossil is a theoretical possibility.) The underlying database that Fossil implements has nothing to do with SQLite, or SQL, or even relational database theory. The underlying database is very simple: it is an unordered collection of "artifacts". |
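As a conceptual sketch only (this is emphatically not how Fossil is implemented), the "bag of artifacts" can be pictured as nothing more than a content-addressed mapping from an artifact's SHA1 name to its bytes, with no ordering and no schema. The Tcl below assumes the tcllib sha1 package and exists purely to illustrate the idea.

```tcl
# Conceptual sketch: an unordered, content-addressed bag of artifacts.
package require sha1                      ;# tcllib, assumed available

proc bag-insert {bagVar content} {
    upvar 1 $bagVar bag
    set id [sha1::sha1 -hex $content]     ;# an artifact's name is the hash of its content
    dict set bag $id $content
    return $id
}

set bag [dict create]
set id  [bag-insert bag "a file, a manifest, a wiki edit, a ticket change ..."]
puts "artifact $id is now in the bag"
```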
︙ | ︙ | |||
62 63 64 65 66 67 68 | So really, Fossil works with two separate databases. There is the bag-of-artifacts database which is non-relational and distributed (like a NoSQL database) and there is the local relational database. The bag-of-artifacts database has a fixed format and is what defines a Fossil repository. Fossil will never modify the file format of the bag-of-artifacts database in an incompatible way because to do so would be to make something that is no longer "Fossil". The local relational database, on the other hand, | | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | So really, Fossil works with two separate databases. There is the bag-of-artifacts database which is non-relational and distributed (like a NoSQL database) and there is the local relational database. The bag-of-artifacts database has a fixed format and is what defines a Fossil repository. Fossil will never modify the file format of the bag-of-artifacts database in an incompatible way because to do so would be to make something that is no longer "Fossil". The local relational database, on the other hand, is a cache that contains information derived from the bag-of-artifacts. The schema of the local relational database changes from time to time as the Fossil implementation is enhanced, and the content is recomputed from the unchanging bag of artifacts. The local relational database is an implementation detail which currently happens to use SQLite. Another way to think of the relational tables in a Fossil repository is as an index for the artifacts. Without the relational tables, |
︙ | ︙ | |||
87 88 89 90 91 92 93 | And Fossil doesn't use a distributed NoSQL database because Fossil is a distributed NoSQL database. That answers the first question. <h2>SQL Is A High-Level Scripting Language</h2> The second concern states that Fossil does not use a high-level scripting | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 | And Fossil doesn't use a distributed NoSQL database because Fossil is a distributed NoSQL database. That answers the first question. <h2>SQL Is A High-Level Scripting Language</h2> The second concern states that Fossil does not use a high-level scripting language. But that is not true. Fossil uses SQL (as implemented by SQLite) as its scripting language. This misunderstanding likely arises because people fail to appreciate that SQL is a programming language. People are taught that SQL is a "query language" as if that were somehow different from a "programming language". But they really are two different flavors of the same thing. I find that people do better with SQL if they think of |
︙ | ︙ | |||
123 124 125 126 127 128 129 | Much of the "heavy lifting" within the Fossil implementation is carried out using SQL statements. It is true that these SQL statements are glued together with C code, but it turns out that C works surprisingly well in that role. Several early prototypes of Fossil were written in a scripting language (TCL). We normally find that TCL programs are shorter than the equivalent C code by a factor of 10 or more. But in the case of Fossil, | | 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | Much of the "heavy lifting" within the Fossil implementation is carried out using SQL statements. It is true that these SQL statements are glued together with C code, but it turns out that C works surprisingly well in that role. Several early prototypes of Fossil were written in a scripting language (TCL). We normally find that TCL programs are shorter than the equivalent C code by a factor of 10 or more. But in the case of Fossil, the use of TCL was actually making the code longer and more difficult to understand. And so in the final design, we switched from TCL to C in order to make the code easier to implement and debug. Without the advantages of having SQLite built in, the design might well have followed a different path. Most reports generated by Fossil involve a complex set of queries against the relational tables of the repository database. These queries are normally implemented in only a few dozen lines of SQL code. But if those queries had been implemented procedurally using a key/value or pile-of-files database, it may well have been the case that a high-level scripting language such as Tcl, Python, or Ruby would have worked out better than C.
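To give a feel for what "a few dozen lines of SQL" buys, here is a small Tcl-plus-SQLite sketch of the flavor of query a timeline-style report involves. The table and column names (event, mtime, type, user, comment) approximate Fossil's repository schema but should be read as assumptions made for illustration, not as its exact definition.

```tcl
# Sketch of a timeline-style report query; schema names are assumptions.
package require sqlite3

sqlite3 db repo.fossil                    ;# hypothetical repository file
db eval {
    SELECT datetime(mtime) AS date, user, comment
      FROM event                          -- one row per timeline entry (assumed)
     WHERE type = 'ci'                    -- check-ins only (assumed)
     ORDER BY mtime DESC
     LIMIT 20
} {
    puts "$date  $user  $comment"
}
db close
```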
Changes to www/webui.wiki.
︙ | ︙ | |||
23 24 25 26 27 28 29 | You get all of this, and more, for free when you use Fossil. There are no extra programs to install or set up. Everything you need is already pre-configured and built into the self-contained, stand-alone Fossil executable. As an example of how useful this web interface can be, | | < | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | You get all of this, and more, for free when you use Fossil. There are no extra programs to install or set up. Everything you need is already pre-configured and built into the self-contained, stand-alone Fossil executable. As an example of how useful this web interface can be, the entire [./index.wiki | Fossil website], including the document you are now reading, is rendered using the Fossil web interface, with no enhancements, and little customization. <blockquote> <b>Key point:</b> <i>The Fossil website is just a running instance of Fossil!
︙ | ︙ |
Changes to www/wikitheory.wiki.
︙ | ︙ | |||
32 33 34 35 36 37 38 | In other words, if two users make unrelated changes to the same wiki page on separate repositories and those repositories are synced, the wiki page will fork. The web interface will display whichever edit was checked in last. The other edit can be found in the history. The file format will support merging the branches back together, but there is no mechanism in the user interface (yet) to perform the merge. | | | | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | In other words, if two users make unrelated changes to the same wiki page on separate repositories and those repositories are synced, the wiki page will fork. The web interface will display whichever edit was checked in last. The other edit can be found in the history. The file format will support merging the branches back together, but there is no mechanism in the user interface (yet) to perform the merge. Every change to a wiki page is a separate [./fileformat.wiki | control artifact] of type [./fileformat.wiki#wikichng | "Wiki Page"]. <h2>Embedded Documentation</h2> Files in the source tree that use the ".wiki", ".md", or ".markdown" suffixes can be accessed and displayed using special URLs to the fossil server. This allows project documentation to be stored in the source tree and accessed online. (Details are described [./embeddeddoc.wiki | separately].) Some projects prefer to store their documentation in wiki. There is nothing wrong with that. But other projects prefer to keep documentation as part of the source tree, so that it is versioned along with the source tree and so that only developers with check-in privileges can change it. Embedded documentation serves this latter purpose. Both forms of documentation use the exact same markup. Some projects may choose to use both forms of documentation at the same time. Because the same format is used, it is trivial to move a file from wiki to embedded documentation or back again as the project evolves. <h2>Bug-reports and check-in comments</h2> |
︙ | ︙ |