Overview
Comment:      Make update -verbose print even when there's no update required
Downloads:    Tarball | ZIP archive
Timelines:    family | ancestors | descendants | both | venks-emacs
Files:        files | file ages | folders
SHA1:         acfbfe0dd36a5fce3077cdbf18eb8fda
User & Date:  venkat 2011-05-19 00:03:57.734
Context
2011-06-05
  00:04  Add ability to append to ticket fields from the fossil ticket command ... (check-in: 35d6029a user: venkat tags: venks-emacs)
2011-05-19
  00:03  Make update -verbose print even when there's no update required ... (check-in: acfbfe0d user: venkat tags: venks-emacs)
2011-05-18
  15:01  Update the built-in SQLite to the latest 3.7.7 alpha version. This adds no new capabilities - it is merely a beta-test of the SQLite version 3.7.7. ... (check-in: dcfa88bd user: drh tags: trunk)
2010-11-09
  17:37  Merge with trunk ... (check-in: 96c0c68a user: venkat tags: venks-emacs)
Changes
Changes to BUILD.txt.
(lines 1-26 of the updated file:)

All of the source code for fossil is contained in the src/ subdirectory.
But there is a lot of generated code, so you will probably want to
use the Makefile.  To do a complete build on unix, just type:

   make

On a windows box, use one of the Makefiles in the win/ subdirectory,
according to your compiler and environment.  For example:

   make -f win/Makefile.w32

If you have trouble, or you want to do something fancy, just look at
top level makefile.  There are 6 configuration options that are all well
commented.  Instead of editing the Makefile, consider copying the Makefile
to an alternative name such as "GNUMakefile", "BSDMakefile", or "makefile"
and editing the copy.

BUILDING OUTSIDE THE SOURCE TREE

An out of source build is pretty easy:

  1.  Make a new directory to do the builds in.

  2.  Copy "Makefile" from the source into the build directory and
      modify the SRCDIR macro along the lines of:
︙ | ︙ |
Changes to Makefile.
(lines 1-33 of the updated Makefile:)

#!/usr/bin/make
#
# This is the top-level makefile for Fossil when the build is occurring
# on a unix platform.  This works out-of-the-box on most unix platforms.
# But you are free to vary some of the definitions if desired.
#
#### The toplevel directory of the source tree.  Fossil can be built
#    in a directory that is separate from the source tree.  Just change
#    the following to point from the build directory to the src/ folder.
#
SRCDIR = ./src

#### The directory into which object code files should be written.
#
#
OBJDIR = ./bld

#### C Compiler and options for use in building executables that
#    will run on the platform that is doing the build.  This is used
#    to compile code-generator programs as part of the build process.
#    See TCC below for the C compiler for building the finished binary.
#
BCC = gcc

#### The suffix to add to final executable file.  When cross-compiling
#    to windows, make this ".exe".  Otherwise leave it blank.
#
E =

#### C Compile and options for use in building executables that
#    will run on the target platform.  This is usually the same
#    as BCC, unless you are cross-compiling.  This C compiler builds
#    the finished binary for fossil.  The BCC compiler above is used
︙ | ︙ | |||
(lines 47-62 of the updated Makefile:)

# chroot jail.
#
LIB = -lz $(LDFLAGS)

# If using HTTPS:
LIB += -lcrypto -lssl

#### Tcl shell for use in running the fossil testsuite.  If you do not
#    care about testing the end result, this can be blank.
#
TCLSH = tclsh

# You should not need to change anything below this line
###############################################################################
#
# Automatic platform-specific options.
︙ | ︙ |
Deleted Makefile.w32.
Changes to fossil.nsi.
(lines 1-38 of the updated script:)

; example2.nsi
;
; This script is based on example1.nsi, but adds uninstall support
; and (optionally) start menu shortcuts.
;
; It will install notepad.exe into a directory that the user selects,
;

; The name of the installer
Name "Fossil"

; The file to write
OutFile "fossil-setup.exe"

; The default installation directory
InstallDir $PROGRAMFILES\Fossil

; Registry key to check for directory (so if you install again, it will
; overwrite the old one automatically)
InstallDirRegKey HKLM SOFTWARE\Fossil "Install_Dir"

; The text to prompt the user to enter a directory
ComponentText "This will install fossil on your computer."

; The text to prompt the user to enter a directory
DirText "Choose a directory to install in to:"

; The stuff to install
Section "Fossil (required)"
  ; Set output path to the installation directory.
  SetOutPath $INSTDIR
  ; Put file there
  File ".\fossil.exe"
  ; Write the installation path into the registry
  WriteRegStr HKLM SOFTWARE\Fossil "Install_Dir" "$INSTDIR"
  ; Write the uninstall keys for Windows
  WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Fossil" "DisplayName" "Fossil (remove only)"
  WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Fossil" "UninstallString" '"$INSTDIR\uninstall.exe"'
  WriteUninstaller "uninstall.exe"
SectionEnd
︙ | ︙ |
Changes to src/add.c.
︙ | ︙ | |||
26 27 28 29 30 31 32 | /* ** Set to true if files whose names begin with "." should be ** included when processing a recursive "add" command. */ static int includeDotFiles = 0; /* | > > > | > > | > > > > > > > > > > > > > | > > > | > > > > > > > > | > > | > > > > > > > > > | > | < | < > | > | > | > > > > > > | < > | < < | | | | | | | | | | | | > | | | > > | | < | < > > > > | < | < < | < < < < < < | < > | < < | > | | < < < < | < < < < | | < < < < < < < | | | > > > > > | | < > > | | | | | | > > > > > > > > | | > | > > > > > < < < > > > > > > | | < < < < < < | < < < < < < < < | < < < < < < < < < < < < < < < < | < < < < < | > | < < < < | | | | | < | | | | | | > > > > > > | | < | < | > | | | < | | < < | < | | | > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 | /* ** Set to true if files whose names begin with "." should be ** included when processing a recursive "add" command. */ static int includeDotFiles = 0; /* ** This routine returns the names of files in a working checkout that ** are created by Fossil itself, and hence should not be added, deleted, ** or merge, and should be omitted from "clean" and "extra" lists. ** ** Return the N-th name. The first name has N==0. When all names have ** been used, return 0. */ const char *fossil_reserved_name(int N){ /* Possible names of the local per-checkout database file and ** its associated journals */ static const char *azName[] = { "_FOSSIL_", "_FOSSIL_-journal", "_FOSSIL_-wal", "_FOSSIL_-shm", ".fos", ".fos-journal", ".fos-wal", ".fos-shm", }; /* Names of auxiliary files generated by SQLite when the "manifest" ** properity is enabled */ static const char *azManifest[] = { "manifest", "manifest.uuid", }; if( N>=0 && N<count(azName) ) return azName[N]; if( N>=count(azName) && N<count(azName)+count(azManifest) && db_get_boolean("manifest",0) ){ return azManifest[N-count(azName)]; } return 0; } /* ** Return a list of all reserved filenames as an SQL list. 
*/ const char *fossil_all_reserved_names(void){ static char *zAll = 0; if( zAll==0 ){ Blob x; int i; const char *z; blob_zero(&x); for(i=0; (z = fossil_reserved_name(i))!=0; i++){ if( i>0 ) blob_append(&x, ",", 1); blob_appendf(&x, "'%s'", z); } zAll = blob_str(&x); } return zAll; } /* ** Add a single file named zName to the VFILE table with vid. ** ** Omit any file whose name is pOmit. */ static int add_one_file( const char *zPath, /* Tree-name of file to add. */ int vid /* Add to this VFILE */ ){ if( !file_is_simple_pathname(zPath) ){ fossil_fatal("filename contains illegal characters: %s", zPath); } #if defined(_WIN32) if( db_exists("SELECT 1 FROM vfile" " WHERE pathname=%Q COLLATE nocase", zPath) ){ db_multi_exec("UPDATE vfile SET deleted=0" " WHERE pathname=%Q COLLATE nocase", zPath); } #else if( db_exists("SELECT 1 FROM vfile WHERE pathname=%Q", zPath) ){ db_multi_exec("UPDATE vfile SET deleted=0 WHERE pathname=%Q", zPath); } #endif else{ char *zFullname = mprintf("%s%s", g.zLocalRoot, zPath); db_multi_exec( "INSERT INTO vfile(vid,deleted,rid,mrid,pathname,isexe)" "VALUES(%d,0,0,0,%Q,%d)", vid, zPath, file_isexe(zFullname)); fossil_free(zFullname); } printf("ADDED %s\n", zPath); return 1; } /* ** Add all files in the sfile temp table. ** ** Automatically exclude the repository file. */ static int add_files_in_sfile(int vid){ const char *zRepo; /* Name of the repository database file */ int nAdd = 0; /* Number of files added */ int i; /* Loop counter */ const char *zReserved; /* Name of a reserved file */ Blob repoName; /* Treename of the repository */ Stmt loop; /* SQL to loop over all files to add */ if( !file_tree_name(g.zRepositoryName, &repoName, 0) ){ blob_zero(&repoName); zRepo = ""; }else{ zRepo = blob_str(&repoName); } db_prepare(&loop, "SELECT x FROM sfile ORDER BY x"); while( db_step(&loop)==SQLITE_ROW ){ const char *zToAdd = db_column_text(&loop, 0); if( fossil_strcmp(zToAdd, zRepo)==0 ) continue; for(i=0; (zReserved = fossil_reserved_name(i))!=0; i++){ if( fossil_strcmp(zToAdd, zReserved)==0 ) break; } if( zReserved ) continue; nAdd += add_one_file(zToAdd, vid); } db_finalize(&loop); blob_reset(&repoName); return nAdd; } /* ** COMMAND: add ** ** Usage: %fossil add ?OPTIONS? FILE1 ?FILE2 ...? ** ** Make arrangements to add one or more files or directories to the ** current checkout at the next commit. ** ** When adding files or directories recursively, filenames that begin ** with "." are excluded by default. To include such files, add ** the "--dotfiles" option to the command-line. ** ** The --ignore option is a comma-separate list of glob patterns for files ** to be excluded. Example: '*.o,*.obj,*.exe' If the --ignore option ** does not appear on the command line then the "ignore-glob" setting is ** used. ** ** SUMMARY: fossil add ?OPTIONS? FILE1 ?FILE2 ...? 
** Options: --dotfiles, --ignore */ void add_cmd(void){ int i; /* Loop counter */ int vid; /* Currently checked out version */ int nRoot; /* Full path characters in g.zLocalRoot */ const char *zIgnoreFlag; /* The --ignore option or ignore-glob setting */ Glob *pIgnore; /* Ignore everything matching this glob pattern */ zIgnoreFlag = find_option("ignore",0,1); includeDotFiles = find_option("dotfiles",0,0)!=0; db_must_be_within_tree(); if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } vid = db_lget_int("checkout",0); if( vid==0 ){ fossil_panic("no checkout to add to"); } db_begin_transaction(); db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); #if defined(_WIN32) db_multi_exec( "CREATE INDEX IF NOT EXISTS vfile_pathname " " ON vfile(pathname COLLATE nocase)" ); #endif pIgnore = glob_create(zIgnoreFlag); nRoot = strlen(g.zLocalRoot); /* Load the names of all files that are to be added into sfile temp table */ for(i=2; i<g.argc; i++){ char *zName; int isDir; Blob fullName; file_canonical_name(g.argv[i], &fullName); zName = blob_str(&fullName); isDir = file_isdir(zName); if( isDir==1 ){ vfile_scan(&fullName, nRoot-1, includeDotFiles, pIgnore); }else if( isDir==0 ){ fossil_fatal("not found: %s", zName); }else if( access(zName, R_OK) ){ fossil_fatal("cannot open %s", zName); }else{ char *zTreeName = &zName[nRoot]; db_multi_exec( "INSERT OR IGNORE INTO sfile(x)" " SELECT %Q WHERE NOT EXISTS(SELECT 1 FROM vfile WHERE pathname=%Q)", zTreeName, zTreeName ); } blob_reset(&fullName); } glob_free(pIgnore); add_files_in_sfile(vid); db_end_transaction(0); } /* ** COMMAND: rm ** COMMAND: delete ** ** Usage: %fossil rm FILE1 ?FILE2 ...? ** or: %fossil delete FILE1 ?FILE2 ...? ** ** Remove one or more files or directories from the repository. ** ** This command does NOT remove the files from disk. It just marks the ** files as no longer being part of the project. In other words, future ** changes to the named files will not be versioned. ** ** SUMMARY: fossil rm FILE1 ?FILE2 ...? ** or: fossil delete FILE1 ?FILE2 ...? */ void delete_cmd(void){ int i; int vid; Stmt loop; db_must_be_within_tree(); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_panic("no checkout to remove from"); } db_begin_transaction(); db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); for(i=2; i<g.argc; i++){ Blob treeName; char *zTreeName; file_tree_name(g.argv[i], &treeName, 1); zTreeName = blob_str(&treeName); db_multi_exec( "INSERT OR IGNORE INTO sfile" " SELECT pathname FROM vfile" " WHERE (pathname=%Q" " OR (pathname>'%q/' AND pathname<'%q0'))" " AND NOT deleted", zTreeName, zTreeName, zTreeName ); blob_reset(&treeName); } db_prepare(&loop, "SELECT x FROM sfile"); while( db_step(&loop)==SQLITE_ROW ){ printf("DELETED %s\n", db_column_text(&loop, 0)); } db_finalize(&loop); db_multi_exec( "UPDATE vfile SET deleted=1 WHERE pathname IN sfile;" "DELETE FROM vfile WHERE rid=0 AND deleted;" ); db_end_transaction(0); } /* ** COMMAND: addremove ** ** Usage: %fossil addremove ?--dotfiles? ?--ignore GLOBPATTERN? ?--test? ** ** Do all necessary "add" and "rm" commands to synchronize the repository ** with the content of the working checkout: ** ** * All files in the checkout but not in the repository (that is, ** all files displayed using the "extra" command) are added as ** if by the "add" command. ** ** * All files in the repository but missing from the checkout (that is, ** all files that show as MISSING with the "status" command) are ** removed as if by the "rm" command. 
** ** The command does not "commit". You must run the "commit" separately ** as a separate step. ** ** Files and directories whose names begin with "." are ignored unless ** the --dotfiles option is used. ** ** The --ignore option overrides the "ignore-glob" setting. See ** documentation on the "settings" command for further information. ** ** The --test option shows what would happen without actually doing anything. ** ** This command can be used to track third party software. ** ** ** SUMMARY: fossil addremove ** Options: ?--dotfiles? ?--ignore GLOBPATTERN? ?--test? */ void import_cmd(void){ Blob path; const char *zIgnoreFlag = find_option("ignore",0,1); int allFlag = find_option("dotfiles",0,0)!=0; int isTest = find_option("test",0,0)!=0; int n; Stmt q; int vid; int nAdd = 0; int nDelete = 0; Glob *pIgnore; db_must_be_within_tree(); if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } vid = db_lget_int("checkout",0); if( vid==0 ){ fossil_panic("no checkout to add to"); } db_begin_transaction(); /* step 1: ** Populate the temp table "sfile" with the names of all unmanged ** files currently in the check-out, except for files that match the ** --ignore or ignore-glob patterns and dot-files. Then add all of ** the files in the sfile temp table to the set of managed files. */ db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); n = strlen(g.zLocalRoot); blob_init(&path, g.zLocalRoot, n-1); /* now we read the complete file structure into a temp table */ pIgnore = glob_create(zIgnoreFlag); vfile_scan(&path, blob_size(&path), allFlag, pIgnore); glob_free(pIgnore); nAdd = add_files_in_sfile(vid); /* step 2: search for missing files */ db_prepare(&q, "SELECT pathname, %Q || pathname, deleted FROM vfile" " WHERE NOT deleted" " ORDER BY 1", g.zLocalRoot ); while( db_step(&q)==SQLITE_ROW ){ const char * zFile; const char * zPath; zFile = db_column_text(&q, 0); zPath = db_column_text(&q, 1); if( !file_isfile(zPath) ){ if( !isTest ){ db_multi_exec("UPDATE vfile SET deleted=1 WHERE pathname=%Q", zFile); } printf("DELETED %s\n", zFile); nDelete++; } } db_finalize(&q); /* show cmmand summary */ printf("added %d files, deleted %d files\n", nAdd, nDelete); db_end_transaction(isTest); } /* ** Rename a single file. ** ** The original name of the file is zOrig. The new filename is zNew. */ static void mv_one_file(int vid, const char *zOrig, const char *zNew){ |
︙ | ︙ | |||
290 291 292 293 294 295 296 | /* ** COMMAND: mv ** COMMAND: rename ** ** Usage: %fossil mv|rename OLDNAME NEWNAME ** or: %fossil mv|rename OLDNAME... DIR ** | | > | > > > > | 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 | /* ** COMMAND: mv ** COMMAND: rename ** ** Usage: %fossil mv|rename OLDNAME NEWNAME ** or: %fossil mv|rename OLDNAME... DIR ** ** Move or rename one or more files or directories within the repository tree. ** You can either rename a file or directory or move it to another subdirectory. ** ** This command does NOT rename or move the files on disk. This command merely ** records the fact that filenames have changed so that appropriate notations ** can be made at the next commit/checkin. ** ** ** SUMMARY: fossil mv|rename OLDNAME NEWNAME ** or: fossil mv|rename OLDNAME... DIR */ void mv_cmd(void){ int i; int vid; char *zDest; Blob dest; Stmt q; |
︙ | ︙ | |||
345 346 347 348 349 350 351 | int nOrig; file_tree_name(g.argv[i], &orig, 1); zOrig = blob_str(&orig); nOrig = blob_size(&orig); db_prepare(&q, "SELECT pathname FROM vfile" " WHERE vid=%d" | | | | 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 | int nOrig; file_tree_name(g.argv[i], &orig, 1); zOrig = blob_str(&orig); nOrig = blob_size(&orig); db_prepare(&q, "SELECT pathname FROM vfile" " WHERE vid=%d" " AND (pathname='%q' OR (pathname>'%q/' AND pathname<'%q0'))" " ORDER BY 1", vid, zOrig, zOrig, zOrig ); while( db_step(&q)==SQLITE_ROW ){ const char *zPath = db_column_text(&q, 0); int nPath = db_column_bytes(&q, 0); const char *zTail; if( nPath==nOrig ){ zTail = file_tail(zPath); |
︙ | ︙ |
Changes to src/allrepo.c.
︙ | ︙ | |||
84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | int n; Stmt q; const char *zCmd; char *zSyscmd; char *zFossil; char *zQFilename; int nMissing; if( g.argc<3 ){ usage("list|ls|pull|push|rebuild|sync"); } n = strlen(g.argv[2]); db_open_config(1); zCmd = g.argv[2]; if( strncmp(zCmd, "list", n)==0 || strncmp(zCmd,"ls",n)==0 ){ zCmd = "list"; }else if( strncmp(zCmd, "push", n)==0 ){ zCmd = "push -autourl -R"; }else if( strncmp(zCmd, "pull", n)==0 ){ zCmd = "pull -autourl -R"; }else if( strncmp(zCmd, "rebuild", n)==0 ){ zCmd = "rebuild"; }else if( strncmp(zCmd, "sync", n)==0 ){ zCmd = "sync -autourl -R"; }else if( strncmp(zCmd, "ignore", n)==0 ){ int j; db_begin_transaction(); for(j=3; j<g.argc; j++){ db_multi_exec("DELETE FROM global_config WHERE name GLOB 'repo:%q'", g.argv[j]); } db_end_transaction(0); return; }else{ fossil_fatal("\"all\" subcommand should be one of: " "ignore list ls push pull rebuild sync"); } | > > > > | | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 | int n; Stmt q; const char *zCmd; char *zSyscmd; char *zFossil; char *zQFilename; int nMissing; int stopOnError = find_option("dontstop",0,0)==0; int rc; if( g.argc<3 ){ usage("list|ls|pull|push|rebuild|sync"); } n = strlen(g.argv[2]); db_open_config(1); zCmd = g.argv[2]; if( strncmp(zCmd, "list", n)==0 || strncmp(zCmd,"ls",n)==0 ){ zCmd = "list"; }else if( strncmp(zCmd, "push", n)==0 ){ zCmd = "push -autourl -R"; }else if( strncmp(zCmd, "pull", n)==0 ){ zCmd = "pull -autourl -R"; }else if( strncmp(zCmd, "rebuild", n)==0 ){ zCmd = "rebuild"; }else if( strncmp(zCmd, "sync", n)==0 ){ zCmd = "sync -autourl -R"; }else if( strncmp(zCmd, "test-integrity", n)==0 ){ zCmd = "test-integrity"; }else if( strncmp(zCmd, "ignore", n)==0 ){ int j; db_begin_transaction(); for(j=3; j<g.argc; j++){ db_multi_exec("DELETE FROM global_config WHERE name GLOB 'repo:%q'", g.argv[j]); } db_end_transaction(0); return; }else{ fossil_fatal("\"all\" subcommand should be one of: " "ignore list ls push pull rebuild sync"); } zFossil = quoteFilename(fossil_nameofexe()); nMissing = 0; db_prepare(&q, "SELECT DISTINCT substr(name, 6) COLLATE nocase" " FROM global_config" " WHERE substr(name, 1, 5)=='repo:' ORDER BY 1" ); while( db_step(&q)==SQLITE_ROW ){ |
︙ | ︙ | |||
136 137 138 139 140 141 142 | printf("%s\n", zFilename); continue; } zQFilename = quoteFilename(zFilename); zSyscmd = mprintf("%s %s %s", zFossil, zCmd, zQFilename); printf("%s\n", zSyscmd); fflush(stdout); | | > > > > | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 | printf("%s\n", zFilename); continue; } zQFilename = quoteFilename(zFilename); zSyscmd = mprintf("%s %s %s", zFossil, zCmd, zQFilename); printf("%s\n", zSyscmd); fflush(stdout); rc = fossil_system(zSyscmd); free(zSyscmd); free(zQFilename); if( stopOnError && rc ){ nMissing = 0; break; } } /* If any repositories whose names appear in the ~/.fossil file could not ** be found, remove those names from the ~/.fossil file. */ if( nMissing ){ db_begin_transaction(); |
︙ | ︙ |
Changes to src/attach.c.
︙ | ︙ | |||
(lines 172-186 as updated:)

    style_header("Missing");
    @ Attachment has been deleted
    style_footer();
    return;
  }
  g.okRead = 1;
  cgi_replace_parameter("name",zUUID);
  if( fossil_strcmp(g.zPath,"attachview")==0 ){
    artifact_page();
  }else{
    cgi_replace_parameter("m", mimetype_from_name(zFile));
    rawartifact_page();
  }
}
︙ | ︙ | |||
239 240 241 242 243 244 245 | const char *zComment; char *zDate; int rid; int i, n; db_begin_transaction(); blob_init(&content, aContent, szContent); | | | < | > | | 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 | const char *zComment; char *zDate; int rid; int i, n; db_begin_transaction(); blob_init(&content, aContent, szContent); rid = content_put(&content); zUUID = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_zero(&manifest); for(i=n=0; zName[i]; i++){ if( zName[i]=='/' || zName[i]=='\\' ) n = i; } zName += n; if( zName[0]==0 ) zName = "unknown"; blob_appendf(&manifest, "A %F %F %s\n", zName, zTarget, zUUID); zComment = PD("comment", ""); while( fossil_isspace(zComment[0]) ) zComment++; n = strlen(zComment); while( n>0 && fossil_isspace(zComment[n-1]) ){ n--; } if( n>0 ){ blob_appendf(&manifest, "C %F\n", zComment); } zDate = date_in_standard_format("now"); blob_appendf(&manifest, "D %s\n", zDate); blob_appendf(&manifest, "U %F\n", g.zLogin ? g.zLogin : "nobody"); md5sum_blob(&manifest, &cksum); blob_appendf(&manifest, "Z %b\n", &cksum); rid = content_put(&manifest); manifest_crosslink(rid, &manifest); assert( blob_is_reset(&manifest) ); db_end_transaction(0); cgi_redirect(zFrom); } style_header("Add Attachment"); @ <h2>Add Attachment To %s(zTargetType)</h2> @ <form action="%s(g.zTop)/attachadd" method="post" @ enctype="multipart/form-data"><div> @ File to Attach: @ <input type="file" name="f" size="60" /><br /> @ Description:<br /> @ <textarea name="comment" cols="80" rows="5" wrap="virtual"></textarea><br /> if( zTkt ){ @ <input type="hidden" name="tkt" value="%h(zTkt)" /> |
︙ | ︙ | |||
335 336 337 338 339 340 341 | blob_zero(&manifest); for(i=n=0; zFile[i]; i++){ if( zFile[i]=='/' || zFile[i]=='\\' ) n = i; } zFile += n; if( zFile[0]==0 ) zFile = "unknown"; blob_appendf(&manifest, "A %F %F\n", zFile, zTarget); | | < | | | 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 | blob_zero(&manifest); for(i=n=0; zFile[i]; i++){ if( zFile[i]=='/' || zFile[i]=='\\' ) n = i; } zFile += n; if( zFile[0]==0 ) zFile = "unknown"; blob_appendf(&manifest, "A %F %F\n", zFile, zTarget); zDate = date_in_standard_format("now"); blob_appendf(&manifest, "D %s\n", zDate); blob_appendf(&manifest, "U %F\n", g.zLogin ? g.zLogin : "nobody"); md5sum_blob(&manifest, &cksum); blob_appendf(&manifest, "Z %b\n", &cksum); rid = content_put(&manifest); manifest_crosslink(rid, &manifest); db_end_transaction(0); cgi_redirect(zFrom); } style_header("Delete Attachment"); @ <form action="%s(g.zTop)/attachdelete" method="post"><div> @ <p>Confirm that you want to delete the attachment named @ "%h(zFile)" on %s(zTkt?"ticket":"wiki page") %h(zTarget):<br /></p> if( zTkt ){ @ <input type="hidden" name="tkt" value="%h(zTkt)" /> }else{ @ <input type="hidden" name="page" value="%h(zPage)" /> } |
︙ | ︙ |
Added src/bisect.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 | /* ** Copyright (c) 2010 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@sqlite.org ** ******************************************************************************* ** ** This file contains code used to implement the "bisect" command. ** ** This file also contains logic used to compute the closure of filename ** changes that have occurred across multiple check-ins. */ #include "config.h" #include "bisect.h" #include <assert.h> /* ** Local variables for this module */ static struct { int bad; /* The bad version */ int good; /* The good version */ } bisect; /* ** Find the shortest path between bad and good. */ void bisect_path(void){ PathNode *p; bisect.bad = db_lget_int("bisect-bad", 0); if( bisect.bad==0 ){ fossil_fatal("no \"bad\" version has been identified"); } bisect.good = db_lget_int("bisect-good", 0); if( bisect.good==0 ){ fossil_fatal("no \"good\" version has been identified"); } p = path_shortest(bisect.good, bisect.bad, bisect_option("direct-only")); if( p==0 ){ char *zBad = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", bisect.bad); char *zGood = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", bisect.good); fossil_fatal("no path from good ([%S]) to bad ([%S]) or back", zGood, zBad); } } /* ** The set of all bisect options. */ static const struct { const char *zName; const char *zDefault; const char *zDesc; } aBisectOption[] = { { "auto-next", "on", "Automatically run \"bisect next\" after each " "\"bisect good\" or \"bisect bad\"" }, { "direct-only", "on", "Follow only primary parent-child links, not " "merges\n" }, }; /* ** Return the value of a boolean bisect option. 
*/ int bisect_option(const char *zName){ unsigned int i; int r = -1; for(i=0; i<sizeof(aBisectOption)/sizeof(aBisectOption[0]); i++){ if( strcmp(zName, aBisectOption[i].zName)==0 ){ char *zLabel = mprintf("bisect-%s", zName); char *z = db_lget(zLabel, (char*)aBisectOption[i].zDefault); if( is_truth(z) ) r = 1; if( is_false(z) ) r = 0; if( r<0 ) r = is_truth(aBisectOption[i].zDefault); free(zLabel); break; } } assert( r>=0 ); return r; } /* ** COMMAND: bisect ** ** Usage: %fossil bisect SUBCOMMAND ... ** ** Run various subcommands useful for searching for bugs. ** ** fossil bisect bad ?VERSION? ** ** Identify version VERSION as non-working. If VERSION is omitted, ** the current checkout is marked as non-working. ** ** fossil bisect good ?VERSION? ** ** Identify version VERSION as working. If VERSION is omitted, ** the current checkout is marked as working. ** ** fossil bisect next ** ** Update to the next version that is halfway between the working and ** non-working versions. ** ** fossil bisect options ?NAME? ?VALUE? ** ** List all bisect options, or the value of a single option, or set the ** value of a bisect option. ** ** fossil bisect reset ** ** Reinitialize a bisect session. This cancels prior bisect history ** and allows a bisect session to start over from the beginning. ** ** fossil bisect vlist ** ** List the versions in between "bad" and "good". */ void bisect_cmd(void){ int n; const char *zCmd; db_must_be_within_tree(); if( g.argc<3 ){ usage("bisect SUBCOMMAND ARGS..."); } zCmd = g.argv[2]; n = strlen(zCmd); if( n==0 ) zCmd = "-"; if( memcmp(zCmd, "bad", n)==0 ){ int ridBad; if( g.argc==3 ){ ridBad = db_lget_int("checkout",0); }else{ ridBad = name_to_rid(g.argv[3]); } db_lset_int("bisect-bad", ridBad); if( ridBad>0 && bisect_option("auto-next") && db_lget_int("bisect-good",0)>0 ){ zCmd = "next"; n = 4; }else{ return; } }else if( memcmp(zCmd, "good", n)==0 ){ int ridGood; if( g.argc==3 ){ ridGood = db_lget_int("checkout",0); }else{ ridGood = name_to_rid(g.argv[3]); } db_lset_int("bisect-good", ridGood); if( ridGood>0 && bisect_option("auto-next") && db_lget_int("bisect-bad",0)>0 ){ zCmd = "next"; n = 4; }else{ return; } } if( memcmp(zCmd, "next", n)==0 ){ PathNode *pMid; bisect_path(); pMid = path_midpoint(); if( pMid==0 ){ fossil_fatal("bisect is done - there are no more intermediate versions"); } g.argv[1] = "update"; g.argv[2] = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", pMid->rid); g.argc = 3; g.fNoSync = 1; update_cmd(); }else if( memcmp(zCmd, "options", n)==0 ){ if( g.argc==3 ){ unsigned int i; for(i=0; i<sizeof(aBisectOption)/sizeof(aBisectOption[0]); i++){ char *z = mprintf("bisect-%s", aBisectOption[i].zName); printf(" %-15s %-6s ", aBisectOption[i].zName, db_lget(z, (char*)aBisectOption[i].zDefault)); fossil_free(z); comment_print(aBisectOption[i].zDesc, 27, 79); } }else if( g.argc==4 || g.argc==5 ){ unsigned int i; n = strlen(g.argv[3]); for(i=0; i<sizeof(aBisectOption)/sizeof(aBisectOption[0]); i++){ if( memcmp(g.argv[3], aBisectOption[i].zName, n)==0 ){ char *z = mprintf("bisect-%s", aBisectOption[i].zName); if( g.argc==5 ){ db_lset(z, g.argv[4]); } printf("%s\n", db_lget(z, (char*)aBisectOption[i].zDefault)); fossil_free(z); break; } } if( i>=sizeof(aBisectOption)/sizeof(aBisectOption[0]) ){ fossil_fatal("no such bisect option: %s", g.argv[3]); } }else{ usage("bisect option ?NAME? 
?VALUE?"); } }else if( memcmp(zCmd, "reset", n)==0 ){ db_multi_exec( "DELETE FROM vvar WHERE name IN ('bisect-good', 'bisect-bad');" ); }else if( memcmp(zCmd, "vlist", n)==0 ){ PathNode *p; int vid = db_lget_int("checkout", 0); int n; Stmt s; int nStep; bisect_path(); db_prepare(&s, "SELECT substr(blob.uuid,1,20) || ' ' || " " datetime(event.mtime) FROM blob, event" " WHERE blob.rid=:rid AND event.objid=:rid" " AND event.type='ci'"); nStep = path_length(); for(p=path_last(), n=0; p; p=p->pFrom, n++){ const char *z; db_bind_int(&s, ":rid", p->rid); if( db_step(&s)==SQLITE_ROW ){ z = db_column_text(&s, 0); printf("%s", z); if( p->rid==bisect.good ) printf(" GOOD"); if( p->rid==bisect.bad ) printf(" BAD"); if( p->rid==vid ) printf(" CURRENT"); if( nStep>1 && n==nStep/2 ) printf(" NEXT"); printf("\n"); } db_reset(&s); } db_finalize(&s); }else{ usage("bad|good|next|reset|vlist ..."); } } |
Changes to src/blob.c.
︙ | ︙ | |||
(lines 58-83 as updated:)

** Make sure a blob is initialized
*/
#define blob_is_init(x) \
  assert((x)->xRealloc==blobReallocMalloc || (x)->xRealloc==blobReallocStatic)

/*
** Make sure a blob does not contain malloced memory.
**
** This might fail if we are unlucky and x is uninitialized.  For that
** reason it should only be used locally for debugging.  Leave it turned
** off for production.
*/
#if 0  /* Enable for debugging only */
#define assert_blob_is_reset(x) assert(blob_is_reset(x))
#else
#define assert_blob_is_reset(x)
#endif

/*
** We find that the built-in isspace() function does not work for
** some international character sets.  So here is a substitute.
*/
int fossil_isspace(char c){
  return c==' ' || (c<='\r' && c>='\t');
︙ | ︙ | |||
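The fossil_isspace() substitute shown in the hunk above deliberately recognizes only the six ASCII whitespace characters, so its result never depends on the current locale the way isspace() can. The following standalone sketch is not part of this check-in; the names ascii_isspace and probe are made up for the demonstration, but the test expression is the one from blob.c:

#include <stdio.h>

/* Same test as Fossil's fossil_isspace(): true only for space, \t, \n,
** \v, \f, and \r.  Unlike isspace(), the result is locale-independent. */
static int ascii_isspace(char c){
  return c==' ' || (c<='\r' && c>='\t');
}

int main(void){
  const char *probe = " \t\n\v\f\rA0.";
  int i;
  for(i=0; probe[i]; i++){
    printf("0x%02x -> %d\n", (unsigned char)probe[i], ascii_isspace(probe[i]));
  }
  return 0;  /* the first six characters report 1, the last three report 0 */
}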
176 177 178 179 180 181 182 183 184 185 186 187 | ** Reset a blob to be an empty container. */ void blob_reset(Blob *pBlob){ blob_is_init(pBlob); pBlob->xRealloc(pBlob, 0); } /* ** Initialize a blob to a string or byte-array constant of a specified length. ** Any prior data in the blob is discarded. */ void blob_init(Blob *pBlob, const char *zData, int size){ | > > > > > > > > > > > > > | | 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | ** Reset a blob to be an empty container. */ void blob_reset(Blob *pBlob){ blob_is_init(pBlob); pBlob->xRealloc(pBlob, 0); } /* ** Return true if the blob has been zeroed - in other words if it contains ** no malloced memory. This only works reliably if the blob has been ** initialized - it can return a false negative on an uninitialized blob. */ int blob_is_reset(Blob *pBlob){ if( pBlob==0 ) return 1; if( pBlob->nUsed ) return 0; if( pBlob->xRealloc==blobReallocMalloc && pBlob->nAlloc ) return 0; return 1; } /* ** Initialize a blob to a string or byte-array constant of a specified length. ** Any prior data in the blob is discarded. */ void blob_init(Blob *pBlob, const char *zData, int size){ assert_blob_is_reset(pBlob); if( zData==0 ){ *pBlob = empty_blob; }else{ if( size<=0 ) size = strlen(zData); pBlob->nUsed = pBlob->nAlloc = size; pBlob->aData = (char*)zData; pBlob->iCursor = 0; |
︙ | ︙ | |||
206 207 208 209 210 211 212 | } /* ** Initialize a blob to an empty string. */ void blob_zero(Blob *pBlob){ static const char zEmpty[] = ""; | | | 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 | } /* ** Initialize a blob to an empty string. */ void blob_zero(Blob *pBlob){ static const char zEmpty[] = ""; assert_blob_is_reset(pBlob); pBlob->nUsed = 0; pBlob->nAlloc = 1; pBlob->aData = (char*)zEmpty; pBlob->iCursor = 0; pBlob->xRealloc = blobReallocStatic; } |
︙ | ︙ | |||
277 278 279 280 281 282 283 | blob_is_init(p); if( p->nUsed==0 ) return ""; p->aData[p->nUsed] = 0; return p->aData; } /* | | > | 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 | blob_is_init(p); if( p->nUsed==0 ) return ""; p->aData[p->nUsed] = 0; return p->aData; } /* ** Compare two blobs. Return negative, zero, or positive if the first ** blob is less then, equal to, or greater than the second. */ int blob_compare(Blob *pA, Blob *pB){ int szA, szB, sz, rc; blob_is_init(pA); blob_is_init(pB); szA = blob_size(pA); szB = blob_size(pB); |
︙ | ︙ | |||
354 355 356 357 358 359 360 | ** Extract N bytes from blob pFrom and use it to initialize blob pTo. ** Return the actual number of bytes extracted. ** ** After this call completes, pTo will be an ephemeral blob. */ int blob_extract(Blob *pFrom, int N, Blob *pTo){ blob_is_init(pFrom); | | | 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 | ** Extract N bytes from blob pFrom and use it to initialize blob pTo. ** Return the actual number of bytes extracted. ** ** After this call completes, pTo will be an ephemeral blob. */ int blob_extract(Blob *pFrom, int N, Blob *pTo){ blob_is_init(pFrom); assert_blob_is_reset(pTo); if( pFrom->iCursor + N > pFrom->nUsed ){ N = pFrom->nUsed - pFrom->iCursor; if( N<=0 ){ blob_zero(pTo); return 0; } } |
︙ | ︙ | |||
468 469 470 471 472 473 474 475 476 477 478 479 480 481 | int blob_token(Blob *pFrom, Blob *pTo){ char *aData = pFrom->aData; int n = pFrom->nUsed; int i = pFrom->iCursor; while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; while( i<n && !fossil_isspace(aData[i]) ){ i++; } blob_extract(pFrom, i-pFrom->iCursor, pTo); while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; return pTo->nUsed; } /* | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 | int blob_token(Blob *pFrom, Blob *pTo){ char *aData = pFrom->aData; int n = pFrom->nUsed; int i = pFrom->iCursor; while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; while( i<n && !fossil_isspace(aData[i]) ){ i++; } blob_extract(pFrom, i-pFrom->iCursor, pTo); while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; return pTo->nUsed; } /* ** Extract a single SQL token from pFrom and use it to initialize pTo. ** Return the number of bytes in the token. If no token is found, ** return 0. ** ** An SQL token consists of one or more non-space characters. If the ** first character is ' then the token is terminated by a matching ' ** (ignoring double '') or by the end of the string ** ** The cursor of pFrom is left pointing at the first character past ** the end of the token. ** ** pTo will be an ephermeral blob. If pFrom changes, it might alter ** pTo as well. */ int blob_sqltoken(Blob *pFrom, Blob *pTo){ char *aData = pFrom->aData; int n = pFrom->nUsed; int i = pFrom->iCursor; while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; if( aData[i]=='\'' ){ i++; while( i<n ){ if( aData[i]=='\'' ){ if( aData[++i]!='\'' ) break; } i++; } }else{ while( i<n && !fossil_isspace(aData[i]) ){ i++; } } blob_extract(pFrom, i-pFrom->iCursor, pTo); while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; return pTo->nUsed; } /* |
︙ | ︙ | |||
667 668 669 670 671 672 673 | char *zName, zBuf[1000]; nName = strlen(zFilename); if( nName>=sizeof(zBuf) ){ zName = mprintf("%s", zFilename); }else{ zName = zBuf; | | | 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 | char *zName, zBuf[1000]; nName = strlen(zFilename); if( nName>=sizeof(zBuf) ){ zName = mprintf("%s", zFilename); }else{ zName = zBuf; memcpy(zName, zFilename, nName+1); } nName = file_simplify_name(zName, nName); for(i=1; i<nName; i++){ if( zName[i]=='/' ){ zName[i] = 0; #if defined(_WIN32) /* |
︙ | ︙ | |||
732 733 734 735 736 737 738 | outBuf[1] = nIn>>16 & 0xff; outBuf[2] = nIn>>8 & 0xff; outBuf[3] = nIn & 0xff; nOut2 = (long int)nOut; compress(&outBuf[4], &nOut2, (unsigned char*)blob_buffer(pIn), blob_size(pIn)); if( pOut==pIn ) blob_reset(pOut); | | | 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | outBuf[1] = nIn>>16 & 0xff; outBuf[2] = nIn>>8 & 0xff; outBuf[3] = nIn & 0xff; nOut2 = (long int)nOut; compress(&outBuf[4], &nOut2, (unsigned char*)blob_buffer(pIn), blob_size(pIn)); if( pOut==pIn ) blob_reset(pOut); assert_blob_is_reset(pOut); *pOut = temp; blob_resize(pOut, nOut2+4); } /* ** COMMAND: test-compress */ |
︙ | ︙ | |||
785 786 787 788 789 790 791 | stream.next_in = (unsigned char*)blob_buffer(pIn2); deflate(&stream, 0); deflate(&stream, Z_FINISH); blob_resize(&temp, stream.total_out + 4); deflateEnd(&stream); if( pOut==pIn1 ) blob_reset(pOut); if( pOut==pIn2 ) blob_reset(pOut); | | | 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 | stream.next_in = (unsigned char*)blob_buffer(pIn2); deflate(&stream, 0); deflate(&stream, Z_FINISH); blob_resize(&temp, stream.total_out + 4); deflateEnd(&stream); if( pOut==pIn1 ) blob_reset(pOut); if( pOut==pIn2 ) blob_reset(pOut); assert_blob_is_reset(pOut); *pOut = temp; } /* ** COMMAND: test-compress-2 */ void compress2_cmd(void){ |
︙ | ︙ | |||
823 824 825 826 827 828 829 | } inBuf = (unsigned char*)blob_buffer(pIn); nOut = (inBuf[0]<<24) + (inBuf[1]<<16) + (inBuf[2]<<8) + inBuf[3]; blob_zero(&temp); blob_resize(&temp, nOut+1); nOut2 = (long int)nOut; rc = uncompress((unsigned char*)blob_buffer(&temp), &nOut2, | | | | 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 | } inBuf = (unsigned char*)blob_buffer(pIn); nOut = (inBuf[0]<<24) + (inBuf[1]<<16) + (inBuf[2]<<8) + inBuf[3]; blob_zero(&temp); blob_resize(&temp, nOut+1); nOut2 = (long int)nOut; rc = uncompress((unsigned char*)blob_buffer(&temp), &nOut2, &inBuf[4], nIn - 4); if( rc!=Z_OK ){ blob_reset(&temp); return 1; } blob_resize(&temp, nOut2); if( pOut==pIn ) blob_reset(pOut); assert_blob_is_reset(pOut); *pOut = temp; return 0; } /* ** COMMAND: test-uncompress */ |
︙ | ︙ |
Changes to src/branch.c.
︙ | ︙ | |||
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | char *zComment; /* Check-in comment for the new branch */ const char *zColor; /* Color of the new branch */ Blob branch; /* manifest for the new branch */ Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ const char *zDateOvrd; /* Override date string */ const char *zUserOvrd; /* Override user name */ noSign = find_option("nosign","",0)!=0; zColor = find_option("bgcolor","c",1); zDateOvrd = find_option("date-override",0,1); zUserOvrd = find_option("user-override",0,1); verify_all_options(); if( g.argc<5 ){ usage("new BRANCH-NAME CHECK-IN ?-bgcolor COLOR?"); } | > > | | 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | char *zComment; /* Check-in comment for the new branch */ const char *zColor; /* Color of the new branch */ Blob branch; /* manifest for the new branch */ Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ const char *zDateOvrd; /* Override date string */ const char *zUserOvrd; /* Override user name */ int isPrivate = 0; /* True if the branch should be private */ noSign = find_option("nosign","",0)!=0; zColor = find_option("bgcolor","c",1); isPrivate = find_option("private",0,0)!=0; zDateOvrd = find_option("date-override",0,1); zUserOvrd = find_option("user-override",0,1); verify_all_options(); if( g.argc<5 ){ usage("new BRANCH-NAME CHECK-IN ?-bgcolor COLOR?"); } db_find_and_open_repository(0, 0); noSign = db_get_int("omitsign", 0)|noSign; /* fossil branch new name */ zBranch = g.argv[3]; if( zBranch==0 || zBranch[0]==0 ){ fossil_panic("branch name cannot be empty"); } |
︙ | ︙ | |||
82 83 84 85 86 87 88 | blob_zero(&branch); if( pParent->zBaseline ){ blob_appendf(&branch, "B %s\n", pParent->zBaseline); } zComment = mprintf("Create new branch named \"%h\"", zBranch); blob_appendf(&branch, "C %F\n", zComment); zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); | < > | > > > > > > > | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 | blob_zero(&branch); if( pParent->zBaseline ){ blob_appendf(&branch, "B %s\n", pParent->zBaseline); } zComment = mprintf("Create new branch named \"%h\"", zBranch); blob_appendf(&branch, "C %F\n", zComment); zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); blob_appendf(&branch, "D %s\n", zDate); /* Copy all of the content from the parent into the branch */ for(i=0; i<pParent->nFile; ++i){ blob_appendf(&branch, "F %F", pParent->aFile[i].zName); if( pParent->aFile[i].zUuid ){ blob_appendf(&branch, " %s", pParent->aFile[i].zUuid); if( pParent->aFile[i].zPerm && pParent->aFile[i].zPerm[0] ){ blob_appendf(&branch, " %s", pParent->aFile[i].zPerm); } } blob_append(&branch, "\n", 1); } zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rootid); blob_appendf(&branch, "P %s\n", zUuid); if( pParent->zRepoCksum ){ blob_appendf(&branch, "R %s\n", pParent->zRepoCksum); } manifest_destroy(pParent); /* Add the symbolic branch name and the "branch" tag to identify ** this as a new branch */ if( content_is_private(rootid) ) isPrivate = 1; if( isPrivate && zColor==0 ) zColor = "#fec084"; if( zColor!=0 ){ blob_appendf(&branch, "T *bgcolor * %F\n", zColor); } blob_appendf(&branch, "T *branch * %F\n", zBranch); blob_appendf(&branch, "T *sym-%F *\n", zBranch); if( isPrivate ){ blob_appendf(&branch, "T +private *\n"); noSign = 1; } /* Cancel all other symbolic tags */ db_prepare(&q, "SELECT tagname FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" " AND tagtype>0 AND tagname GLOB 'sym-*'" " ORDER BY tagname", |
︙ | ︙ | |||
135 136 137 138 139 140 141 | prompt_user("unable to sign manifest. continue (y/N)? ", &ans); if( blob_str(&ans)[0]!='y' ){ db_end_transaction(1); fossil_exit(1); } } | | > | | | > > > | | | > | | | > > > > > > > > < | | > > | | > | > > > | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 | prompt_user("unable to sign manifest. continue (y/N)? ", &ans); if( blob_str(&ans)[0]!='y' ){ db_end_transaction(1); fossil_exit(1); } } brid = content_put(&branch); if( brid==0 ){ fossil_panic("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch)==0 ){ fossil_panic("unable to install new manifest"); } assert( blob_is_reset(&branch) ); content_deltify(rootid, brid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", brid); printf("New branch: %s\n", zUuid); if( g.argc==3 ){ printf( "\n" "Note: the local check-out has not been updated to the new\n" " branch. To begin working on the new branch, do this:\n" "\n" " %s update %s\n", fossil_nameofexe(), zBranch ); } /* Commit */ db_end_transaction(0); /* Do an autosync push, if requested */ autosync(AUTOSYNC_PUSH); } /* ** COMMAND: branch ** ** Usage: %fossil branch SUBCOMMAND ... ?-R|--repository FILE? ** ** Run various subcommands to manage branches of the open repository or ** of the repository identified by the -R or --repository option. ** ** %fossil branch new BRANCH-NAME BASIS ?--bgcolor COLOR? ?--private? ** ** Create a new branch BRANCH-NAME off of check-in BASIS. ** You can optionally give the branch a default color. The ** --private option makes the branch private. ** ** %fossil branch list ** %fossil branch ls ** ** List all branches ** */ void branch_cmd(void){ int n; const char *zCmd = "list"; db_find_and_open_repository(0, 0); if( g.argc<2 ){ usage("new|list|ls ..."); } if( g.argc>=3 ) zCmd = g.argv[2]; n = strlen(zCmd); if( strncmp(zCmd,"new",n)==0 ){ branch_new(); }else if( (strncmp(zCmd,"list",n)==0)||(strncmp(zCmd, "ls", n)==0) ){ Stmt q; int vid; char *zCurrent = 0; if( g.localOpen ){ vid = db_lget_int("checkout", 0); zCurrent = db_text(0, "SELECT value FROM tagxref" " WHERE rid=%d AND tagid=%d", vid, TAG_BRANCH); } db_prepare(&q, "SELECT DISTINCT value FROM tagxref" " WHERE tagid=%d AND value NOT NULL" " AND rid IN leaf" " AND NOT %z" " ORDER BY value /*sort*/", TAG_BRANCH, leaf_is_closed_sql("tagxref.rid") ); while( db_step(&q)==SQLITE_ROW ){ const char *zBr = db_column_text(&q, 0); int isCur = zCurrent!=0 && fossil_strcmp(zCurrent,zBr)==0; printf("%s%s\n", (isCur ? "* " : " "), zBr); } db_finalize(&q); }else{ fossil_panic("branch subcommand should be one of: " "new list ls"); } } /* ** WEBPAGE: brlist ** ** Show a timeline of all branches |
︙ | ︙ | |||
230 231 232 233 234 235 236 | style_submenu_element("Timeline", "Timeline", "brtimeline"); if( showClosed ){ style_submenu_element("Open","Open","brlist"); }else{ style_submenu_element("Closed","Closed","brlist?closed"); } login_anonymous_available(); | < | | | > > > | > | | < < < | > | | | 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 | style_submenu_element("Timeline", "Timeline", "brtimeline"); if( showClosed ){ style_submenu_element("Open","Open","brlist"); }else{ style_submenu_element("Closed","Closed","brlist?closed"); } login_anonymous_available(); style_sidebox_begin("Nomenclature:", "33%"); @ <ol> @ <li> An <div class="sideboxDescribed"><a href="brlist"> @ open branch</a></div> is a branch that has one or @ more <a href="leaves">open leaves.</a> @ The presence of open leaves presumably means @ that the branch is still being extended with new check-ins.</li> @ <li> A <div class="sideboxDescribed"><a href="brlist?closed"> @ closed branch</a></div> is a branch with only @ <div class="sideboxDescribed"><a href="leaves?closed"> @ closed leaves</a></div>. @ Closed branches are fixed and do not change (unless they are first @ reopened)</li> @ </ol> style_sidebox_end(); cnt = 0; if( showClosed ){ db_prepare(&q, "SELECT value FROM tagxref" " WHERE tagid=%d AND value NOT NULL " "EXCEPT " "SELECT value FROM tagxref" " WHERE tagid=%d" " AND rid IN leaf" " AND NOT %z" " ORDER BY value /*sort*/", TAG_BRANCH, TAG_BRANCH, leaf_is_closed_sql("tagxref.rid") ); }else{ db_prepare(&q, "SELECT DISTINCT value FROM tagxref" " WHERE tagid=%d AND value NOT NULL" " AND rid IN leaf" " AND NOT %z" " ORDER BY value /*sort*/", TAG_BRANCH, leaf_is_closed_sql("tagxref.rid") ); } while( db_step(&q)==SQLITE_ROW ){ const char *zBr = db_column_text(&q, 0); if( cnt==0 ){ if( showClosed ){ @ <h2>Closed Branches:</h2> }else{ @ <h2>Open Branches:</h2> } @ <ul> cnt++; } if( g.okHistory ){ @ <li><a href="%s(g.zTop)/timeline?r=%T(zBr)">%h(zBr)</a></li> }else{ @ <li><b>%h(zBr)</b></li> } } if( cnt ){ @ </ul> } |
︙ | ︙ | |||
316 317 318 319 320 321 322 | " AND tagxref.tagid=tag.tagid" " AND tagxref.tagtype>0" " AND tag.tagname GLOB 'sym-*'", rid ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagName = db_column_text(&q, 0); | | | 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 | " AND tagxref.tagid=tag.tagid" " AND tagxref.tagtype>0" " AND tag.tagname GLOB 'sym-*'", rid ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagName = db_column_text(&q, 0); @ <a href="%s(g.zTop)/timeline?r=%T(zTagName)">[timeline]</a> } db_finalize(&q); } /* ** WEBPAGE: brtimeline ** |
︙ | ︙ | |||
342 343 344 345 346 347 348 | @ <h2>The initial check-in for each branch:</h2> db_prepare(&q, "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype>0 AND tagid=%d AND srcid!=0)" " ORDER BY event.mtime DESC", timeline_query_for_www(), TAG_BRANCH ); | | | 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 | @ <h2>The initial check-in for each branch:</h2> db_prepare(&q, "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype>0 AND tagid=%d AND srcid!=0)" " ORDER BY event.mtime DESC", timeline_query_for_www(), TAG_BRANCH ); www_print_timeline(&q, 0, 0, 0, brtimeline_extra); db_finalize(&q); @ <script type="text/JavaScript"> @ function xin(id){ @ } @ function xout(id){ @ } @ </script> style_footer(); } |
Changes to src/browse.c.
︙ | ︙ | |||
78 79 80 81 82 83 84 | char *zSep = ""; for(i=0; zPath[i]; i=j){ for(j=i; zPath[j] && zPath[j]!='/'; j++){} if( zPath[j] && g.okHistory ){ if( zCI ){ blob_appendf(pOut, "%s<a href=\"%s/dir?ci=%S&name=%#T\">%#h</a>", | | | | > | 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 | char *zSep = ""; for(i=0; zPath[i]; i=j){ for(j=i; zPath[j] && zPath[j]!='/'; j++){} if( zPath[j] && g.okHistory ){ if( zCI ){ blob_appendf(pOut, "%s<a href=\"%s/dir?ci=%S&name=%#T\">%#h</a>", zSep, g.zTop, zCI, j, zPath, j-i, &zPath[i]); }else{ blob_appendf(pOut, "%s<a href=\"%s/dir?name=%#T\">%#h</a>", zSep, g.zTop, j, zPath, j-i, &zPath[i]); } }else{ blob_appendf(pOut, "%s%#h", zSep, j-i, &zPath[i]); } zSep = "/"; while( zPath[j]=='/' ){ j++; } } } /* ** WEBPAGE: dir ** ** Query parameters: ** ** name=PATH Directory to display. Required. ** ci=LABEL Show only files in this check-in. Optional. */ void page_dir(void){ char *zD = fossil_strdup(P("name")); int nD = zD ? strlen(zD)+1 : 0; int mxLen; int nCol, nRow; int cnt, i; char *zPrefix; Stmt q; const char *zCI = P("ci"); int rid = 0; char *zUuid = 0; Blob dirname; Manifest *pM = 0; const char *zSubdirLink; login_check_credentials(); if( !g.okHistory ){ login_needed(); return; } while( nD>1 && zD[nD-2]=='/' ){ zD[(--nD)-1] = 0; } style_header("File List"); sqlite3_create_function(g.db, "pathelement", 2, SQLITE_UTF8, 0, pathelementFunc, 0, 0); /* If the name= parameter is an empty string, make it a NULL pointer */ if( zD && strlen(zD)==0 ){ zD = 0; } |
︙ | ︙ | |||
159 160 161 162 163 164 165 | @ <h2>Files of check-in [<a href="vinfo?name=%T(zUuid)">%s(zShort)</a>] @ %s(blob_str(&dirname))</h2> zSubdirLink = mprintf("%s/dir?ci=%S&name=%T", g.zTop, zUuid, zPrefix); if( zD ){ style_submenu_element("Top", "Top", "%s/dir?ci=%S", g.zTop, zUuid); style_submenu_element("All", "All", "%s/dir?name=%t", g.zTop, zD); }else{ | | | | | | | | | 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 | @ <h2>Files of check-in [<a href="vinfo?name=%T(zUuid)">%s(zShort)</a>] @ %s(blob_str(&dirname))</h2> zSubdirLink = mprintf("%s/dir?ci=%S&name=%T", g.zTop, zUuid, zPrefix); if( zD ){ style_submenu_element("Top", "Top", "%s/dir?ci=%S", g.zTop, zUuid); style_submenu_element("All", "All", "%s/dir?name=%t", g.zTop, zD); }else{ style_submenu_element("All", "All", "%s/dir", g.zTop); } }else{ int hasTrunk; @ <h2>The union of all files from all check-ins @ %s(blob_str(&dirname))</h2> hasTrunk = db_exists( "SELECT 1 FROM tagxref WHERE tagid=%d AND value='trunk'", TAG_BRANCH); zSubdirLink = mprintf("%s/dir?name=%T", g.zTop, zPrefix); if( zD ){ style_submenu_element("Top", "Top", "%s/dir", g.zTop); style_submenu_element("Tip", "Tip", "%s/dir?name=%t&ci=tip", g.zTop, zD); if( hasTrunk ){ style_submenu_element("Trunk", "Trunk", "%s/dir?name=%t&ci=trunk", g.zTop,zD); } }else{ style_submenu_element("Tip", "Tip", "%s/dir?ci=tip", g.zTop); if( hasTrunk ){ style_submenu_element("Trunk", "Trunk", "%s/dir?ci=trunk", g.zTop); } } } /* Compute the temporary table "localfiles" containing the names ** of all files and subdirectories in the zD[] directory. ** |
︙ | ︙ | |||
207 208 209 210 211 212 213 | int c; db_prepare(&ins, "INSERT OR IGNORE INTO localfiles VALUES(pathelement(:x,0), :u)" ); manifest_file_rewind(pM); while( (pFile = manifest_file_next(pM,0))!=0 ){ | > | > > > > | > > | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 | int c; db_prepare(&ins, "INSERT OR IGNORE INTO localfiles VALUES(pathelement(:x,0), :u)" ); manifest_file_rewind(pM); while( (pFile = manifest_file_next(pM,0))!=0 ){ if( nD>0 && (memcmp(pFile->zName, zD, nD-1)!=0 || pFile->zName[nD-1]!='/') ){ continue; } if( pPrev && memcmp(&pFile->zName[nD],&pPrev->zName[nD],nPrev)==0 && (pFile->zName[nD+nPrev]==0 || pFile->zName[nD+nPrev]=='/') ){ continue; } db_bind_text(&ins, ":x", &pFile->zName[nD]); db_bind_text(&ins, ":u", pFile->zUuid); db_step(&ins); db_reset(&ins); pPrev = pFile; |
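The two continue statements in this hunk implement the filter for the listing of a specific check-in: keep only manifest entries that live inside the requested directory zD, then report each distinct first path element once. A minimal restatement of the directory-prefix test, using the same convention that nD is strlen(zD)+1, so nD-1 bytes of prefix plus a '/' separator must match (sketch, not Fossil's code):

    #include <string.h>

    /* Return true if zName names a file somewhere inside directory zD. */
    static int in_directory(const char *zName, const char *zD){
      int nD = zD ? (int)strlen(zD)+1 : 0;
      if( nD==0 ) return 1;                /* no directory restriction */
      return memcmp(zName, zD, nD-1)==0 && zName[nD-1]=='/';
    }

So in_directory("src/main.c", "src") holds while in_directory("srclib/main.c", "src") does not, because the byte right after the matching prefix must be the '/' separator.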
︙ | ︙ | |||
257 258 259 260 261 262 263 | i++; zFN = db_column_text(&q, 0); if( zFN[0]=='/' ){ zFN++; @ <li><a href="%s(zSubdirLink)%T(zFN)">%h(zFN)/</a></li> }else if( zCI ){ const char *zUuid = db_column_text(&q, 1); | | | | 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 | i++; zFN = db_column_text(&q, 0); if( zFN[0]=='/' ){ zFN++; @ <li><a href="%s(zSubdirLink)%T(zFN)">%h(zFN)/</a></li> }else if( zCI ){ const char *zUuid = db_column_text(&q, 1); @ <li><a href="%s(g.zTop)/artifact?name=%s(zUuid)">%h(zFN)</a></li> }else{ @ <li><a href="%s(g.zTop)/finfo?name=%T(zPrefix)%T(zFN)">%h(zFN) @ </a></li> } } db_finalize(&q); manifest_destroy(pM); @ </ul></td></tr></table> style_footer(); } |
Changes to src/captcha.c.
︙ | ︙ | |||
388 389 390 391 392 393 394 | int i; unsigned int v; char *z; for(i=2; i<g.argc; i++){ char zHex[30]; v = (unsigned int)atoi(g.argv[i]); | | | 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | int i; unsigned int v; char *z; for(i=2; i<g.argc; i++){ char zHex[30]; v = (unsigned int)atoi(g.argv[i]); sqlite3_snprintf(sizeof(zHex), zHex, "%x", v); z = captcha_render(zHex); printf("%s:\n%s", zHex, z); free(z); } } /* |
︙ | ︙ |
Changes to src/cgi.c.
︙ | ︙ | |||
53 54 55 56 57 58 59 | /* ** Shortcuts for cgi_parameter. P("x") returns the value of query parameter ** or cookie "x", or NULL if there is no such parameter or cookie. PD("x","y") ** does the same except "y" is returned in place of NULL if there is no match. */ #define P(x) cgi_parameter((x),0) #define PD(x,y) cgi_parameter((x),(y)) | | | 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | /* ** Shortcuts for cgi_parameter. P("x") returns the value of query parameter ** or cookie "x", or NULL if there is no such parameter or cookie. PD("x","y") ** does the same except "y" is returned in place of NULL if there is no match. */ #define P(x) cgi_parameter((x),0) #define PD(x,y) cgi_parameter((x),(y)) #define PT(x) cgi_parameter_trimmed((x),0) #define PDT(x,y) cgi_parameter_trimmed((x),(y)) /* ** Destinations for output text. */ #define CGI_HEADER 0 #define CGI_BODY 1 |
︙ | ︙ | |||
219 220 221 222 223 224 225 | MD5Init(&ctx); MD5Update(&ctx,zTxt,nLen); MD5Final(digest,&ctx); for(j=i=0; i<16; i++,j+=2){ bprintf(&zETag[j],sizeof(zETag)-j,"%02x",(int)digest[i]); } blob_appendf(&extraHeader, "ETag: %s\r\n", zETag); | | | | | | 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | MD5Init(&ctx); MD5Update(&ctx,zTxt,nLen); MD5Final(digest,&ctx); for(j=i=0; i<16; i++,j+=2){ bprintf(&zETag[j],sizeof(zETag)-j,"%02x",(int)digest[i]); } blob_appendf(&extraHeader, "ETag: %s\r\n", zETag); return fossil_strdup(zETag); } /* ** Do some cache control stuff. First, we generate an ETag and include it in ** the response headers. Second, we do whatever is necessary to determine if ** the request was asking about caching and whether we need to send back the ** response body. If we shouldn't send a body, return non-zero. ** ** Currently, we just check the ETag against any If-None-Match header. ** ** FIXME: In some cases (attachments, file contents) we could check ** If-Modified-Since headers and always include Last-Modified in responses. */ static int check_cache_control(void){ /* FIXME: there's some gotchas wth cookies and some headers. */ char *zETag = cgi_add_etag(blob_buffer(&cgiContent),blob_size(&cgiContent)); char *zMatch = P("HTTP_IF_NONE_MATCH"); if( zETag!=0 && zMatch!=0 ) { char *zBuf = fossil_strdup(zMatch); if( zBuf!=0 ){ char *zTok = 0; char *zPos; for( zTok = strtok_r(zBuf, ",\"",&zPos); zTok && fossil_stricmp(zTok,zETag); zTok = strtok_r(0, ",\"",&zPos)){} fossil_free(zBuf); if(zTok) return 1; } } return 0; } #endif |
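The caching logic in this hunk boils down to: hash the response body with MD5, advertise the hash as an ETag response header, and skip sending the body when the client's If-None-Match header already lists that tag. A self-contained sketch of just the header comparison, assuming a POSIX strtok_r() (illustrative; the code above is the actual Fossil routine):

    #include <stdlib.h>
    #include <string.h>

    /* Return 1 when the comma/quote-separated If-None-Match value
    ** contains zETag, i.e. a 304 could be sent instead of the body. */
    static int etag_matches(const char *zIfNoneMatch, const char *zETag){
      char *zCopy = strdup(zIfNoneMatch);
      char *zTok, *zPos;
      int match = 0;
      if( zCopy==0 ) return 0;
      for(zTok = strtok_r(zCopy, ",\" ", &zPos);
          zTok && !match;
          zTok = strtok_r(0, ",\" ", &zPos)){
        match = strcmp(zTok, zETag)==0;
      }
      free(zCopy);
      return match;
    }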
︙ | ︙ | |||
340 341 342 343 344 345 346 | ** Do a redirect request to the URL given in the argument. ** ** The URL must be relative to the base of the fossil server. */ void cgi_redirect(const char *zURL){ char *zLocation; CGIDEBUG(("redirect to %s\n", zURL)); | | > > > | | 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 | ** Do a redirect request to the URL given in the argument. ** ** The URL must be relative to the base of the fossil server. */ void cgi_redirect(const char *zURL){ char *zLocation; CGIDEBUG(("redirect to %s\n", zURL)); if( strncmp(zURL,"http:",5)==0 || strncmp(zURL,"https:",6)==0 ){ zLocation = mprintf("Location: %s\r\n", zURL); }else if( *zURL=='/' ){ zLocation = mprintf("Location: %.*s%s\r\n", strlen(g.zBaseURL)-strlen(g.zTop), g.zBaseURL, zURL); }else{ zLocation = mprintf("Location: %s/%s\r\n", g.zBaseURL, zURL); } cgi_append_header(zLocation); cgi_reset_content(); cgi_printf("<html>\n<p>Redirect to %h</p>\n</html>\n", zLocation); cgi_set_status(302, "Moved Temporarily"); free(zLocation); cgi_reply(); fossil_exit(0); } void cgi_redirectf(const char *zFormat, ...){ va_list ap; |
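The rewritten cgi_redirect() now recognizes three shapes of target: a fully qualified http:/https: URL is passed through untouched, a root-relative URL (leading '/') is glued onto just the scheme-and-host portion of g.zBaseURL, and anything else is treated as relative to the base URL. A worked illustration with made-up values, assuming g.zBaseURL is "http://example.org/cgi-bin/fossil" and g.zTop is "/cgi-bin/fossil":

    /* Hypothetical calls and the Location header each would produce:
    **
    **   cgi_redirect("timeline");                -> http://example.org/cgi-bin/fossil/timeline
    **   cgi_redirect("/skins/default");          -> http://example.org/skins/default
    **   cgi_redirect("https://other.example/x"); -> https://other.example/x
    **
    ** The middle case works because strlen(g.zBaseURL)-strlen(g.zTop) is
    ** exactly the length of the scheme-and-host prefix of the base URL. */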
︙ | ︙ | |||
691 692 693 694 695 696 697 698 699 700 701 702 703 704 | process_multipart_form_data(z, len); } }else if( strcmp(zType, "application/x-fossil")==0 ){ blob_read_from_channel(&g.cgiIn, g.httpIn, len); blob_uncompress(&g.cgiIn, &g.cgiIn); }else if( strcmp(zType, "application/x-fossil-debug")==0 ){ blob_read_from_channel(&g.cgiIn, g.httpIn, len); } } z = (char*)P("HTTP_COOKIE"); if( z ){ z = mprintf("%s",z); add_param_list(z, ';'); | > > | 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 | process_multipart_form_data(z, len); } }else if( strcmp(zType, "application/x-fossil")==0 ){ blob_read_from_channel(&g.cgiIn, g.httpIn, len); blob_uncompress(&g.cgiIn, &g.cgiIn); }else if( strcmp(zType, "application/x-fossil-debug")==0 ){ blob_read_from_channel(&g.cgiIn, g.httpIn, len); }else if( strcmp(zType, "application/x-fossil-uncompressed")==0 ){ blob_read_from_channel(&g.cgiIn, g.httpIn, len); } } z = (char*)P("HTTP_COOKIE"); if( z ){ z = mprintf("%s",z); add_param_list(z, ';'); |
︙ | ︙ | |||
780 781 782 783 784 785 786 787 788 789 790 791 792 793 | CGIDEBUG(("env-match [%s] = [%s]\n", zName, zValue)); return zValue; } } CGIDEBUG(("no-match [%s]\n", zName)); return zDefault; } /* ** Return the name of the i-th CGI parameter. Return NULL if there ** are fewer than i registered CGI parmaeters. */ const char *cgi_parameter_name(int i){ if( i>=0 && i<nUsedQP ){ | > > > > > > > > > > > > > > > > > | 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 | CGIDEBUG(("env-match [%s] = [%s]\n", zName, zValue)); return zValue; } } CGIDEBUG(("no-match [%s]\n", zName)); return zDefault; } /* ** Return the value of a CGI parameter with leading and trailing ** spaces removed. */ char *cgi_parameter_trimmed(const char *zName, const char *zDefault){ const char *zIn; char *zOut; int i; zIn = cgi_parameter(zName, 0); if( zIn==0 ) zIn = zDefault; while( fossil_isspace(zIn[0]) ) zIn++; zOut = fossil_strdup(zIn); for(i=0; zOut[i]; i++){} while( i>0 && fossil_isspace(zOut[i-1]) ) zOut[--i] = 0; return zOut; } /* ** Return the name of the i-th CGI parameter. Return NULL if there ** are fewer than i registered CGI parmaeters. */ const char *cgi_parameter_name(int i){ if( i>=0 && i<nUsedQP ){ |
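The new cgi_parameter_trimmed() (reachable through the PT and PDT macros added earlier in this file) returns a freshly allocated copy of a parameter with surrounding whitespace stripped. A standalone sketch of the same trimming, using the C library's isspace() in place of fossil_isspace() (illustrative only):

    #include <ctype.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'd copy of zIn without leading/trailing whitespace. */
    static char *trimmed_copy(const char *zIn){
      char *zOut;
      size_t n;
      while( isspace((unsigned char)zIn[0]) ) zIn++;
      zOut = strdup(zIn);
      if( zOut==0 ) return 0;
      n = strlen(zOut);
      while( n>0 && isspace((unsigned char)zOut[n-1]) ) zOut[--n] = 0;
      return zOut;        /* trimmed_copy("  v1.2 \r\n") yields "v1.2" */
    }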
︙ | ︙ | |||
1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 | while( waitpid(0, 0, WNOHANG)>0 ){ nchildren--; } } /* NOT REACHED */ fossil_exit(1); #endif } /* ** Name of days and months. */ static const char *azDays[] = | > > | 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 | while( waitpid(0, 0, WNOHANG)>0 ){ nchildren--; } } /* NOT REACHED */ fossil_exit(1); #endif /* NOT REACHED */ return 0; } /* ** Name of days and months. */ static const char *azDays[] = |
︙ | ︙ | |||
1192 1193 1194 1195 1196 1197 1198 | memset(&t, 0, sizeof(t)); if( 7==sscanf(zDate, "%12[A-Za-z,] %d %12[A-Za-z] %d %d:%d:%d", zIgnore, &t.tm_mday, zMonth, &t.tm_year, &t.tm_hour, &t.tm_min, &t.tm_sec)){ if( t.tm_year > 1900 ) t.tm_year -= 1900; for(t.tm_mon=0; azMonths[t.tm_mon]; t.tm_mon++){ | | | 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 | memset(&t, 0, sizeof(t)); if( 7==sscanf(zDate, "%12[A-Za-z,] %d %12[A-Za-z] %d %d:%d:%d", zIgnore, &t.tm_mday, zMonth, &t.tm_year, &t.tm_hour, &t.tm_min, &t.tm_sec)){ if( t.tm_year > 1900 ) t.tm_year -= 1900; for(t.tm_mon=0; azMonths[t.tm_mon]; t.tm_mon++){ if( !fossil_strnicmp( azMonths[t.tm_mon], zMonth, 3 )){ return mkgmtime(&t); } } } return 0; } |
︙ | ︙ |
Changes to src/checkin.c.
︙ | ︙ | |||
47 48 49 50 51 52 53 | ); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3)==0; int isRenamed = db_column_int(&q,4); | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | ); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3)==0; int isRenamed = db_column_int(&q,4); char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); blob_append(report, zPrefix, nPrefix); if( isDeleted ){ blob_appendf(report, "DELETED %s\n", zPathname); }else if( !file_isfile(zFullName) ){ if( access(zFullName, 0)==0 ){ blob_appendf(report, "NOT_A_FILE %s\n", zPathname); if( missingIsFatal ){ |
︙ | ︙ | |||
100 101 102 103 104 105 106 107 108 109 110 111 112 113 | /* ** COMMAND: changes ** ** Usage: %fossil changes ** ** Report on the edit status of all files in the current checkout. ** See also the "status" and "extra" commands. */ void changes_cmd(void){ Blob report; int vid; db_must_be_within_tree(); blob_zero(&report); vid = db_lget_int("checkout", 0); | > > > > > > | > > > > > | | | > > < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < > > > > | > > > | > | < < < < < < < > | > > > > > > > > > > > > | > | < < < | > > | 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 | /* ** COMMAND: changes ** ** Usage: %fossil changes ** ** Report on the edit status of all files in the current checkout. ** See also the "status" and "extra" commands. ** ** Options: ** ** --sha1sum Verify file status using SHA1 hashing rather ** than relying on file mtimes. */ void changes_cmd(void){ Blob report; int vid; int useSha1sum = find_option("sha1sum", 0, 0)!=0; db_must_be_within_tree(); blob_zero(&report); vid = db_lget_int("checkout", 0); vfile_check_signature(vid, 0, useSha1sum); status_report(&report, "", 0); blob_write_to_file(&report, "-"); } /* ** COMMAND: status ** ** Usage: %fossil status ** ** Report on the status of the current checkout. ** ** Options: ** ** --sha1sum Verify file status using SHA1 hashing rather ** than relying on file mtimes. */ void status_cmd(void){ int vid; db_must_be_within_tree(); /* 012345678901234 */ printf("repository: %s\n", db_lget("repository","")); printf("local-root: %s\n", g.zLocalRoot); printf("server-code: %s\n", db_get("server-code", "")); vid = db_lget_int("checkout", 0); if( vid ){ show_common_info(vid, "checkout:", 1, 1); } changes_cmd(); } /* ** COMMAND: ls ** ** Usage: %fossil ls [-l] ** ** Show the names of all files in the current checkout. The -l provides ** extra information about each file. 
*/ void ls_cmd(void){ int vid; Stmt q; int isBrief; isBrief = find_option("l","l", 0)==0; db_must_be_within_tree(); vid = db_lget_int("checkout", 0); vfile_check_signature(vid, 0, 0); db_prepare(&q, "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0)" " FROM vfile" " ORDER BY 1" ); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isNew = db_column_int(&q,2)==0; int chnged = db_column_int(&q,3); int renamed = db_column_int(&q,4); char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); if( isBrief ){ printf("%s\n", zPathname); }else if( isNew ){ printf("ADDED %s\n", zPathname); }else if( isDeleted ){ printf("DELETED %s\n", zPathname); }else if( !file_isfile(zFullName) ){ if( access(zFullName, 0)==0 ){ printf("NOT_A_FILE %s\n", zPathname); }else{ printf("MISSING %s\n", zPathname); } }else if( chnged ){ printf("EDITED %s\n", zPathname); }else if( renamed ){ printf("RENAMED %s\n", zPathname); }else{ printf("UNCHANGED %s\n", zPathname); } free(zFullName); } db_finalize(&q); } /* ** COMMAND: extras ** Usage: %fossil extras ?--dotfiles? ?--ignore GLOBPATTERN? ** ** Print a list of all files in the source tree that are not part of ** the current checkout. See also the "clean" command. ** ** Files and subdirectories whose names begin with "." are normally ** ignored but can be included by adding the --dotfiles option. ** ** The GLOBPATTERN is a comma-separated list of GLOB expressions for ** files that are ignored. The GLOBPATTERN specified by the "ignore-glob" ** is used if the --ignore option is omitted. */ void extra_cmd(void){ Blob path; Blob repo; Stmt q; int n; const char *zIgnoreFlag = find_option("ignore",0,1); int allFlag = find_option("dotfiles",0,0)!=0; int outputManifest; Glob *pIgnore; db_must_be_within_tree(); outputManifest = db_get_boolean("manifest",0); db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); n = strlen(g.zLocalRoot); blob_init(&path, g.zLocalRoot, n-1); if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } pIgnore = glob_create(zIgnoreFlag); vfile_scan(&path, blob_size(&path), allFlag, pIgnore); glob_free(pIgnore); db_prepare(&q, "SELECT x FROM sfile" " WHERE x NOT IN (%s)" " ORDER BY 1", fossil_all_reserved_names() ); if( file_tree_name(g.zRepositoryName, &repo, 0) ){ db_multi_exec("DELETE FROM sfile WHERE x=%B", &repo); } while( db_step(&q)==SQLITE_ROW ){ printf("%s\n", db_column_text(&q, 0)); } db_finalize(&q); } /* ** COMMAND: clean ** Usage: %fossil clean ?--force? ?--dotfiles? ?--ignore GLOBPATTERN? ** ** Delete all "extra" files in the source tree. "Extra" files are ** files that are not officially part of the checkout. See also ** the "extra" command. This operation cannot be undone. ** ** You will be prompted before removing each file. If you are ** sure you wish to remove all "extra" files you can specify the ** optional --force flag and no prompts will be issued. ** ** Files and subdirectories whose names begin with "." are ** normally ignored. They are included if the "--dotfiles" option ** is used. ** ** The GLOBPATTERN is a comma-separated list of GLOB expressions for ** files that are ignored. The GLOBPATTERN specified by the "ignore-glob" ** is used if the --ignore option is omitted. 
*/ void clean_cmd(void){ int allFlag; int dotfilesFlag; const char *zIgnoreFlag; Blob path, repo; Stmt q; int n; Glob *pIgnore; allFlag = find_option("force","f",0)!=0; dotfilesFlag = find_option("dotfiles",0,0)!=0; zIgnoreFlag = find_option("ignore",0,1); db_must_be_within_tree(); if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); n = strlen(g.zLocalRoot); blob_init(&path, g.zLocalRoot, n-1); pIgnore = glob_create(zIgnoreFlag); vfile_scan(&path, blob_size(&path), dotfilesFlag, pIgnore); glob_free(pIgnore); db_prepare(&q, "SELECT %Q || x FROM sfile" " WHERE x NOT IN (%s)" " ORDER BY 1", g.zLocalRoot, fossil_all_reserved_names() ); if( file_tree_name(g.zRepositoryName, &repo, 0) ){ db_multi_exec("DELETE FROM sfile WHERE x=%B", &repo); } while( db_step(&q)==SQLITE_ROW ){ if( allFlag ){ unlink(db_column_text(&q, 0)); }else{ |
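Both the extras and clean commands in this hunk resolve their ignore list the same way: the --ignore argument, or the ignore-glob setting when that is absent, is a comma-separated list of GLOB patterns handed to glob_create() and applied during vfile_scan(). A rough standalone equivalent of that matching, written against POSIX fnmatch() rather than Fossil's own glob code (sketch only; patterns and file names are made up):

    #include <fnmatch.h>
    #include <stdio.h>
    #include <string.h>

    /* Return 1 if zName matches any pattern in the comma-separated list. */
    static int matches_any(const char *zList, const char *zName){
      char zPat[256];
      while( *zList ){
        size_t n = strcspn(zList, ",");
        if( n>0 && n<sizeof(zPat) ){
          memcpy(zPat, zList, n);
          zPat[n] = 0;
          if( fnmatch(zPat, zName, 0)==0 ) return 1;
        }
        zList += n;
        if( *zList==',' ) zList++;
      }
      return 0;
    }

    int main(void){
      printf("%d\n", matches_any("*.o,*.obj,bld/*", "bld/main.o"));  /* 1 */
      printf("%d\n", matches_any("*.o,*.obj,bld/*", "src/main.c"));  /* 0 */
      return 0;
    }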
︙ | ︙ | |||
380 381 382 383 384 385 386 387 388 389 390 391 392 393 | blob_init(&text, zInit, -1); blob_append(&text, "\n" "# Enter comments on this check-in. Lines beginning with # are ignored.\n" "# The check-in comment follows wiki formatting rules.\n" "#\n", -1 ); if( zBranch && zBranch[0] ){ blob_appendf(&text, "# tags: %s\n#\n", zBranch); }else{ char *zTags = info_tags_of_checkin(parent_rid, 1); if( zTags ) blob_appendf(&text, "# tags: %z\n#\n", zTags); } if( g.markPrivate ){ | > | 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 | blob_init(&text, zInit, -1); blob_append(&text, "\n" "# Enter comments on this check-in. Lines beginning with # are ignored.\n" "# The check-in comment follows wiki formatting rules.\n" "#\n", -1 ); blob_appendf(&text, "# user: %s\n", g.zLogin); if( zBranch && zBranch[0] ){ blob_appendf(&text, "# tags: %s\n#\n", zBranch); }else{ char *zTags = info_tags_of_checkin(parent_rid, 1); if( zTags ) blob_appendf(&text, "# tags: %z\n#\n", zTags); } if( g.markPrivate ){ |
︙ | ︙ | |||
402 403 404 405 406 407 408 | if( zEditor==0 ){ zEditor = getenv("VISUAL"); } if( zEditor==0 ){ zEditor = getenv("EDITOR"); } if( zEditor==0 ){ | | > > > > > > > | | < < < | | > > | | | | | | | > > > > > > > > | 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | if( zEditor==0 ){ zEditor = getenv("VISUAL"); } if( zEditor==0 ){ zEditor = getenv("EDITOR"); } if( zEditor==0 ){ blob_append(&text, "#\n" "# Since no default text editor is set using EDITOR or VISUAL\n" "# environment variables or the \"fossil set editor\" command,\n" "# and because no check-in comment was specified using the \"-m\"\n" "# or \"-M\" command-line options, you will need to enter the\n" "# check-in comment below. Type \".\" on a line by itself when\n" "# you are done:\n", -1); zFile = mprintf("-"); }else{ zFile = db_text(0, "SELECT '%qci-comment-' || hex(randomblob(6)) || '.txt'", g.zLocalRoot); } #if defined(_WIN32) blob_add_cr(&text); #endif blob_write_to_file(&text, zFile); if( zEditor ){ zCmd = mprintf("%s \"%s\"", zEditor, zFile); printf("%s\n", zCmd); if( fossil_system(zCmd) ){ fossil_panic("editor aborted"); } blob_reset(&text); blob_read_from_file(&text, zFile); }else{ char zIn[300]; blob_reset(&text); while( fgets(zIn, sizeof(zIn), stdin)!=0 ){ if( zIn[0]=='.' && (zIn[1]==0 || zIn[1]=='\r' || zIn[1]=='\n') ) break; blob_append(&text, zIn, -1); } } blob_remove_cr(&text); unlink(zFile); free(zFile); blob_zero(pComment); while( blob_line(&text, &line) ){ int i, n; char *z; |
︙ | ︙ | |||
478 479 480 481 482 483 484 | g.aCommitFile[ii-2] = iId; blob_reset(&b); } g.aCommitFile[ii-2] = 0; } } | < < < < < < < < < < < < < < < < < < | 464 465 466 467 468 469 470 471 472 473 474 475 476 477 | g.aCommitFile[ii-2] = iId; blob_reset(&b); } g.aCommitFile[ii-2] = 0; } } /* ** Make sure the current check-in with timestamp zDate is younger than its ** ancestor identified rid and zUuid. Throw a fatal error if not. */ static void checkin_verify_younger( int rid, /* The record ID of the ancestor */ const char *zUuid, /* The artifact ID of the ancestor */ |
︙ | ︙ | |||
526 527 528 529 530 531 532 | /* ** zDate should be a valid date string. Convert this string into the ** format YYYY-MM-DDTHH:MM:SS. If the string is not a valid date, ** print a fatal error and quit. */ char *date_in_standard_format(const char *zInputDate){ | | > > > | | > < < < > | 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 | /* ** zDate should be a valid date string. Convert this string into the ** format YYYY-MM-DDTHH:MM:SS. If the string is not a valid date, ** print a fatal error and quit. */ char *date_in_standard_format(const char *zInputDate){ char *zDate; zDate = db_text(0, "SELECT strftime('%%Y-%%m-%%dT%%H:%%M:%%f',%Q)", zInputDate); if( zDate[0]==0 ){ fossil_fatal( "unrecognized date format (%s): use \"YYYY-MM-DD HH:MM:SS.SSS\"", zInputDate ); } return zDate; } /* ** Create a manifest. */ static void create_manifest( Blob *pOut, /* Write the manifest here */ const char *zBaselineUuid, /* UUID of baseline, or zero */ Manifest *pBaseline, /* Make it a delta manifest if not zero */ Blob *pComment, /* Check-in comment text */ int vid, /* blob-id of the parent manifest */ int verifyDate, /* Verify that child is younger */ Blob *pCksum, /* Repository checksum. May be 0 */ const char *zDateOvrd, /* Date override. If 0 then use 'now' */ const char *zUserOvrd, /* User override. If 0 then use g.zLogin */ const char *zBranch, /* Branch name. May be 0 */ const char *zBgColor, /* Background color. May be 0 */ const char *zTag, /* Tag to apply to this check-in */ int *pnFBcard /* Number of generated B- and F-cards */ ){ char *zDate; /* Date of the check-in */ char *zParentUuid; /* UUID of parent check-in */ Blob filename; /* A single filename */ int nBasename; /* Size of base filename */ Stmt q; /* Query of files changed */ |
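date_in_standard_format() in this hunk delegates all of the parsing to SQLite: whatever strftime('%Y-%m-%dT%H:%M:%f', ...) can interpret is accepted, and an empty result is turned into a fatal error with a hint about the expected format. For example (hypothetical call):

    char *z = date_in_standard_format("2011-05-19 00:03:57");
    /* z is now "2011-05-19T00:03:57.000"; an input string that SQLite
    ** cannot parse would instead stop the commit with the fatal error. */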
︙ | ︙ | |||
581 582 583 584 585 586 587 | pFile = 0; } blob_appendf(pOut, "C %F\n", blob_str(pComment)); zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); blob_appendf(pOut, "D %s\n", zDate); zDate[10] = ' '; db_prepare(&q, | | > > | > < | > > > | | | | | | 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 | pFile = 0; } blob_appendf(pOut, "C %F\n", blob_str(pComment)); zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); blob_appendf(pOut, "D %s\n", zDate); zDate[10] = ' '; db_prepare(&q, "SELECT pathname, uuid, origname, blob.rid, isexe," " file_is_selected(vfile.id)" " FROM vfile JOIN blob ON vfile.mrid=blob.rid" " WHERE (NOT deleted OR NOT file_is_selected(vfile.id))" " AND vfile.vid=%d" " ORDER BY 1", vid); blob_zero(&filename); blob_appendf(&filename, "%s", g.zLocalRoot); nBasename = blob_size(&filename); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zUuid = db_column_text(&q, 1); const char *zOrig = db_column_text(&q, 2); int frid = db_column_int(&q, 3); int isexe = db_column_int(&q, 4); int isSelected = db_column_int(&q, 5); const char *zPerm; int cmp; #if !defined(_WIN32) /* For unix, extract the "executable" permission bit directly from ** the filesystem. On windows, the "executable" bit is retained ** unchanged from the original. */ blob_resize(&filename, nBasename); blob_append(&filename, zName, -1); isexe = file_isexe(blob_str(&filename)); #endif if( isexe ){ zPerm = " x"; }else{ zPerm = ""; } if( !g.markPrivate ) content_make_public(frid); while( pFile && fossil_strcmp(pFile->zName,zName)<0 ){ blob_appendf(pOut, "F %F\n", pFile->zName); pFile = manifest_file_next(pBaseline, 0); nFBcard++; } cmp = 1; if( pFile==0 || (cmp = fossil_strcmp(pFile->zName,zName))!=0 || fossil_strcmp(pFile->zUuid, zUuid)!=0 ){ if( zOrig && !isSelected ){ zName = zOrig; zOrig = 0; } if( zOrig==0 || fossil_strcmp(zOrig,zName)==0 ){ blob_appendf(pOut, "F %F %s%s\n", zName, zUuid, zPerm); }else{ if( zPerm[0]==0 ){ zPerm = " w"; } blob_appendf(pOut, "F %F %s%s %F\n", zName, zUuid, zPerm, zOrig); } nFBcard++; } |
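The query loop in this hunk is what writes the F (file) cards of the new manifest: a file name, an artifact ID, an optional permission column and, for a rename, the previous name. For orientation, the card shapes it can produce look like this (names and abbreviated artifact IDs are made up):

    F src/main.c 8f2e4c...
    F tools/build.sh 91ab07... x
    F src/new-name.c 77d0aa... w src/old-name.c

The last form shows why the " w" placeholder exists: when a renamed file has no real permission flag, something still has to occupy the permission field so that the old name can follow it.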
︙ | ︙ | |||
659 660 661 662 663 664 665 | } db_finalize(&q2); free(zDate); blob_appendf(pOut, "\n"); if( pCksum ) blob_appendf(pOut, "R %b\n", pCksum); if( zBranch && zBranch[0] ){ | | | > > > > > | > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 | } db_finalize(&q2); free(zDate); blob_appendf(pOut, "\n"); if( pCksum ) blob_appendf(pOut, "R %b\n", pCksum); if( zBranch && zBranch[0] ){ /* Set tags for the new branch */ if( zBgColor && zBgColor[0] ){ blob_appendf(pOut, "T *bgcolor * %F\n", zBgColor); } blob_appendf(pOut, "T *branch * %F\n", zBranch); blob_appendf(pOut, "T *sym-%F *\n", zBranch); } if( g.markPrivate ){ /* If this manifest is private, mark it as such */ blob_appendf(pOut, "T +private *\n"); } if( zTag && zTag[0] ){ /* Add a symbolic tag to this check-in */ blob_appendf(pOut, "T +sym-%F *\n", zTag); } if( zBranch && zBranch[0] ){ /* For a new branch, cancel all prior propagating tags */ Stmt q; db_prepare(&q, "SELECT tagname FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" " AND tagtype==2 AND tagname GLOB 'sym-*'" " AND tagname!='sym-'||%Q" " ORDER BY tagname", vid, zBranch); while( db_step(&q)==SQLITE_ROW ){ const char *zTag = db_column_text(&q, 0); blob_appendf(pOut, "T -%F *\n", zTag); } db_finalize(&q); } blob_appendf(pOut, "U %F\n", zUserOvrd ? zUserOvrd : g.zLogin); md5sum_blob(pOut, &mcksum); blob_appendf(pOut, "Z %b\n", &mcksum); if( pnFBcard ) *pnFBcard = nFBcard; } /* ** Issue a warning and give the user an opportunity to abandon out ** if a \r\n line ending is seen in a text file. */ static void cr_warning(const Blob *p, const char *zFilename){ int nCrNl = 0; /* Number of \r\n line endings seen */ const unsigned char *z; /* File text */ int n; /* Size of the file in bytes */ int lastNl = 0; /* Characters since last \n */ int i; /* Loop counter */ char *zMsg; /* Warning message */ Blob fname; /* Relative pathname of the file */ Blob ans; /* Answer to continue prompt */ static int allOk = 0; /* Set to true to disable this routine */ if( allOk ) return; z = (unsigned char*)blob_buffer(p); n = blob_size(p); for(i=0; i<n-1; i++){ unsigned char c = z[i]; if( c==0 ) return; /* It's binary */ if( c=='\n' ){ if( i>0 && z[i-1]=='\r' ){ nCrNl++; if( i>10000 ) break; } lastNl = 0; }else{ lastNl++; if( lastNl>1000 ) return; /* Binary if any line longer than 1000 */ } } if( nCrNl ){ char c; file_relative_name(zFilename, &fname); blob_zero(&ans); zMsg = mprintf( "%s contains CR/NL line endings; commit anyhow (yes/no/all)?", blob_str(&fname)); prompt_user(zMsg, &ans); fossil_free(zMsg); c = blob_str(&ans)[0]; if( c=='a' ){ allOk = 1; }else if( c!='y' ){ fossil_fatal("Abandoning commit due to CR+NL line endings in %s", blob_str(&fname)); } blob_reset(&ans); blob_reset(&fname); } } /* ** COMMAND: ci ** COMMAND: commit ** ** Usage: %fossil commit ?OPTIONS? ?FILE...? ** |
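The new cr_warning() routine decides whether to prompt by scanning the file once: an embedded NUL byte or any line longer than 1000 characters makes it bail out and treat the file as binary, otherwise it counts \r\n endings and asks for confirmation when it finds any. The core of that scan, restated as a standalone helper (sketch, not the exact Fossil code):

    /* Return the number of CR/NL line endings in z[0..n-1], or -1 when
    ** the buffer looks binary (embedded NUL or an overlong line). */
    static int count_crnl(const unsigned char *z, int n){
      int i, nCrNl = 0, lastNl = 0;
      for(i=0; i<n-1; i++){
        unsigned char c = z[i];
        if( c==0 ) return -1;
        if( c=='\n' ){
          if( i>0 && z[i-1]=='\r' ) nCrNl++;
          lastNl = 0;
        }else if( ++lastNl>1000 ){
          return -1;
        }
      }
      return nCrNl;
    }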
︙ | ︙ | |||
715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 | ** ** A check-in is not permitted to fork unless the --force or -f ** option appears. A check-in is not allowed against a closed check-in. ** ** The --private option creates a private check-in that is never synced. ** Children of private check-ins are automatically private. ** ** Options: ** ** --comment|-m COMMENT-TEXT ** --message-file|-M COMMENT-FILE ** --branch NEW-BRANCH-NAME ** --bgcolor COLOR ** --nosign ** --force|-f ** --private ** --baseline ** --delta ** */ void commit_cmd(void){ int hasChanges; /* True if unsaved changes exist */ int vid; /* blob-id of parent version */ int nrid; /* blob-id of a modified file */ int nvid; /* Blob-id of the new check-in */ | > > > | 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 | ** ** A check-in is not permitted to fork unless the --force or -f ** option appears. A check-in is not allowed against a closed check-in. ** ** The --private option creates a private check-in that is never synced. ** Children of private check-ins are automatically private. ** ** the --tag option applies the symbolic tag name to the check-in. ** ** Options: ** ** --comment|-m COMMENT-TEXT ** --message-file|-M COMMENT-FILE ** --branch NEW-BRANCH-NAME ** --bgcolor COLOR ** --nosign ** --force|-f ** --private ** --baseline ** --delta ** --tag TAG-NAME ** */ void commit_cmd(void){ int hasChanges; /* True if unsaved changes exist */ int vid; /* blob-id of parent version */ int nrid; /* blob-id of a modified file */ int nvid; /* Blob-id of the new check-in */ |
︙ | ︙ | |||
751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 | int outputManifest; /* True to output "manifest" and "manifest.uuid" */ int testRun; /* True for a test run. Debugging only */ const char *zBranch; /* Create a new branch with this name */ const char *zBgColor; /* Set background color when branching */ const char *zDateOvrd; /* Override date string */ const char *zUserOvrd; /* Override user name */ const char *zComFile; /* Read commit message from this file */ Blob manifest; /* Manifest in baseline form */ Blob muuid; /* Manifest uuid */ Blob cksum1, cksum2; /* Before and after commit checksums */ Blob cksum1b; /* Checksum recorded in the manifest */ int szD; /* Size of the delta manifest */ int szB; /* Size of the baseline manifest */ url_proxy_options(); noSign = find_option("nosign",0,0)!=0; forceDelta = find_option("delta",0,0)!=0; forceBaseline = find_option("baseline",0,0)!=0; if( forceDelta && forceBaseline ){ fossil_fatal("cannot use --delta and --baseline together"); } testRun = find_option("test",0,0)!=0; zComment = find_option("comment","m",1); forceFlag = find_option("force", "f", 0)!=0; zBranch = find_option("branch","b",1); zBgColor = find_option("bgcolor",0,1); zComFile = find_option("message-file", "M", 1); if( find_option("private",0,0) ){ g.markPrivate = 1; if( zBranch==0 ) zBranch = "private"; if( zBgColor==0 ) zBgColor = "#fec084"; /* Orange */ } zDateOvrd = find_option("date-override",0,1); | > > | 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 | int outputManifest; /* True to output "manifest" and "manifest.uuid" */ int testRun; /* True for a test run. Debugging only */ const char *zBranch; /* Create a new branch with this name */ const char *zBgColor; /* Set background color when branching */ const char *zDateOvrd; /* Override date string */ const char *zUserOvrd; /* Override user name */ const char *zComFile; /* Read commit message from this file */ const char *zTag; /* Symbolic tag to apply to this check-in */ Blob manifest; /* Manifest in baseline form */ Blob muuid; /* Manifest uuid */ Blob cksum1, cksum2; /* Before and after commit checksums */ Blob cksum1b; /* Checksum recorded in the manifest */ int szD; /* Size of the delta manifest */ int szB; /* Size of the baseline manifest */ url_proxy_options(); noSign = find_option("nosign",0,0)!=0; forceDelta = find_option("delta",0,0)!=0; forceBaseline = find_option("baseline",0,0)!=0; if( forceDelta && forceBaseline ){ fossil_fatal("cannot use --delta and --baseline together"); } testRun = find_option("test",0,0)!=0; zComment = find_option("comment","m",1); forceFlag = find_option("force", "f", 0)!=0; zBranch = find_option("branch","b",1); zBgColor = find_option("bgcolor",0,1); zTag = find_option("tag",0,1); zComFile = find_option("message-file", "M", 1); if( find_option("private",0,0) ){ g.markPrivate = 1; if( zBranch==0 ) zBranch = "private"; if( zBgColor==0 ) zBgColor = "#fec084"; /* Orange */ } zDateOvrd = find_option("date-override",0,1); |
︙ | ︙ | |||
805 806 807 808 809 810 811 | g.markPrivate = 1; } /* ** Autosync if autosync is enabled and this is not a private check-in. */ if( !g.markPrivate ){ | | > > > > > > > | 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 | g.markPrivate = 1; } /* ** Autosync if autosync is enabled and this is not a private check-in. */ if( !g.markPrivate ){ if( autosync(AUTOSYNC_PULL) ){ Blob ans; blob_zero(&ans); prompt_user("continue in spite of sync failure (y/N)? ", &ans); if( blob_str(&ans)[0]!='y' ){ fossil_exit(1); } } } /* Require confirmation to continue with the check-in if there is ** clock skew */ if( g.clockSkewSeen ){ Blob ans; |
︙ | ︙ | |||
859 860 861 862 863 864 865 | ** been modified, bail out now. */ if( g.aCommitFile ){ Blob unmodified; memset(&unmodified, 0, sizeof(Blob)); blob_init(&unmodified, 0, 0); db_blob(&unmodified, | | > | | 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 | ** been modified, bail out now. */ if( g.aCommitFile ){ Blob unmodified; memset(&unmodified, 0, sizeof(Blob)); blob_init(&unmodified, 0, 0); db_blob(&unmodified, "SELECT pathname FROM vfile" " WHERE chnged = 0 AND origname IS NULL AND file_is_selected(id)" ); if( strlen(blob_str(&unmodified)) ){ fossil_fatal("file %s has not changed", blob_str(&unmodified)); } } /* ** Do not allow a commit that will cause a fork unless the --force flag ** is used or unless this is a private check-in. */ |
︙ | ︙ | |||
913 914 915 916 917 918 919 | } /* Step 1: Insert records for all modified files into the blob ** table. If there were arguments passed to this command, only ** the identified fils are inserted (if they have been modified). */ db_prepare(&q, | | | | > > > | | | | 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 | } /* Step 1: Insert records for all modified files into the blob ** table. If there were arguments passed to this command, only ** the identified fils are inserted (if they have been modified). */ db_prepare(&q, "SELECT id, %Q || pathname, mrid, %s FROM vfile " "WHERE chnged==1 AND NOT deleted AND file_is_selected(id)", g.zLocalRoot, glob_expr("pathname", db_get("crnl-glob","")) ); while( db_step(&q)==SQLITE_ROW ){ int id, rid; const char *zFullname; Blob content; int crnlOk; id = db_column_int(&q, 0); zFullname = db_column_text(&q, 1); rid = db_column_int(&q, 2); crnlOk = db_column_int(&q, 3); blob_zero(&content); blob_read_from_file(&content, zFullname); if( !crnlOk ) cr_warning(&content, zFullname); nrid = content_put(&content); blob_reset(&content); if( rid>0 ){ content_deltify(rid, nrid, 0); } db_multi_exec("UPDATE vfile SET mrid=%d, rid=%d WHERE id=%d", nrid,nrid,id); db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); } db_finalize(&q); /* Create the new manifest */ if( blob_size(&comment)==0 ){ blob_append(&comment, "(no comment)", -1); } if( forceDelta ){ blob_zero(&manifest); }else{ create_manifest(&manifest, 0, 0, &comment, vid, !forceFlag, useCksum ? &cksum1 : 0, zDateOvrd, zUserOvrd, zBranch, zBgColor, zTag, &szB); } /* See if a delta-manifest would be more appropriate */ if( !forceBaseline ){ const char *zBaselineUuid; Manifest *pParent; Manifest *pBaseline; pParent = manifest_get(vid, CFTYPE_MANIFEST); if( pParent && pParent->zBaseline ){ zBaselineUuid = pParent->zBaseline; pBaseline = manifest_get_by_name(zBaselineUuid, 0); }else{ zBaselineUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); pBaseline = pParent; } if( pBaseline ){ Blob delta; create_manifest(&delta, zBaselineUuid, pBaseline, &comment, vid, !forceFlag, useCksum ? &cksum1 : 0, zDateOvrd, zUserOvrd, zBranch, zBgColor, zTag, &szD); /* ** At this point, two manifests have been constructed, either of ** which would work for this checkin. The first manifest (held ** in the "manifest" variable) is a baseline manifest and the second ** (held in variable named "delta") is a delta manifest. The ** question now is: which manifest should we use? ** |
︙ | ︙ | |||
1020 1021 1022 1023 1024 1025 1026 | if( outputManifest ){ zManifestFile = mprintf("%smanifest", g.zLocalRoot); blob_write_to_file(&manifest, zManifestFile); blob_reset(&manifest); blob_read_from_file(&manifest, zManifestFile); free(zManifestFile); } | | > | 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 | if( outputManifest ){ zManifestFile = mprintf("%smanifest", g.zLocalRoot); blob_write_to_file(&manifest, zManifestFile); blob_reset(&manifest); blob_read_from_file(&manifest, zManifestFile); free(zManifestFile); } nvid = content_put(&manifest); if( nvid==0 ){ fossil_panic("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nvid); manifest_crosslink(nvid, &manifest); assert( blob_is_reset(&manifest) ); content_deltify(vid, nvid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid); printf("New_Version: %s\n", zUuid); if( outputManifest ){ zManifestFile = mprintf("%smanifest.uuid", g.zLocalRoot); blob_zero(&muuid); blob_appendf(&muuid, "%s\n", zUuid); |
︙ | ︙ | |||
1057 1058 1059 1060 1061 1062 1063 | if( useCksum ){ /* Verify that the repository checksum matches the expected checksum ** calculated before the checkin started (and stored as the R record ** of the manifest file). */ vfile_aggregate_checksum_repository(nvid, &cksum2); if( blob_compare(&cksum1, &cksum2) ){ | > | > > | > | | | | 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 | if( useCksum ){ /* Verify that the repository checksum matches the expected checksum ** calculated before the checkin started (and stored as the R record ** of the manifest file). */ vfile_aggregate_checksum_repository(nvid, &cksum2); if( blob_compare(&cksum1, &cksum2) ){ vfile_compare_repository_to_disk(nvid); fossil_fatal("working checkout does not match what would have ended " "up in the repository: %b versus %b", &cksum1, &cksum2); } /* Verify that the manifest checksum matches the expected checksum */ vfile_aggregate_checksum_manifest(nvid, &cksum2, &cksum1b); if( blob_compare(&cksum1, &cksum1b) ){ fossil_fatal("manifest checksum self-test failed: " "%b versus %b", &cksum1, &cksum1b); } if( blob_compare(&cksum1, &cksum2) ){ fossil_fatal( "working checkout does not match manifest after commit: " "%b versus %b", &cksum1, &cksum2); } /* Verify that the commit did not modify any disk images. */ vfile_aggregate_checksum_disk(nvid, &cksum2); if( blob_compare(&cksum1, &cksum2) ){ fossil_fatal("working checkout before and after commit does not match"); } } /* Clear the undo/redo stack */ undo_reset(); /* Commit */ |
︙ | ︙ |
Changes to src/checkout.c.
︙ | ︙ | |||
31 32 33 34 35 36 37 | ** 2: There is no existing checkout */ int unsaved_changes(void){ int vid; db_must_be_within_tree(); vid = db_lget_int("checkout",0); if( vid==0 ) return 2; | | | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 | ** 2: There is no existing checkout */ int unsaved_changes(void){ int vid; db_must_be_within_tree(); vid = db_lget_int("checkout",0); if( vid==0 ) return 2; vfile_check_signature(vid, 1, 0); return db_exists("SELECT 1 FROM vfile WHERE chnged" " OR coalesce(origname!=pathname,0)"); } /* ** Undo the current check-out. Unlink all files from the disk. ** Clear the VFILE table. |
︙ | ︙ | |||
198 199 200 201 202 203 204 | Blob cksum1, cksum1b, cksum2; db_must_be_within_tree(); db_begin_transaction(); forceFlag = find_option("force","f",0)!=0; keepFlag = find_option("keep",0,0)!=0; latestFlag = find_option("latest",0,0)!=0; | | | 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 | Blob cksum1, cksum1b, cksum2; db_must_be_within_tree(); db_begin_transaction(); forceFlag = find_option("force","f",0)!=0; keepFlag = find_option("keep",0,0)!=0; latestFlag = find_option("latest",0,0)!=0; promptFlag = find_option("prompt",0,0)!=0 || forceFlag==0; if( (latestFlag!=0 && g.argc!=2) || (latestFlag==0 && g.argc!=3) ){ usage("VERSION|--latest ?--force? ?--keep?"); } if( !forceFlag && unsaved_changes()==1 ){ fossil_fatal("there are unsaved changes in the current checkout"); } if( forceFlag ){ |
︙ | ︙ | |||
243 244 245 246 247 248 249 | vfile_to_disk(vid, 0, 1, promptFlag); } checkout_set_all_exe(vid); manifest_to_disk(vid); db_lset_int("checkout", vid); undo_reset(); db_multi_exec("DELETE FROM vmerge"); | | | | < < < < < < < < < | > | > | | > > | | | 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 | vfile_to_disk(vid, 0, 1, promptFlag); } checkout_set_all_exe(vid); manifest_to_disk(vid); db_lset_int("checkout", vid); undo_reset(); db_multi_exec("DELETE FROM vmerge"); if( !keepFlag && db_get_boolean("repo-cksum",1) ){ vfile_aggregate_checksum_manifest(vid, &cksum1, &cksum1b); vfile_aggregate_checksum_disk(vid, &cksum2); if( blob_compare(&cksum1, &cksum2) ){ printf("WARNING: manifest checksum does not agree with disk\n"); } if( blob_size(&cksum1b) && blob_compare(&cksum1, &cksum1b) ){ printf("WARNING: manifest checksum does not agree with manifest\n"); } } db_end_transaction(0); } /* ** Unlink the local database file */ static void unlink_local_database(int manifestOnly){ const char *zReserved; int i; for(i=0; (zReserved = fossil_reserved_name(i))!=0; i++){ if( manifestOnly==0 || zReserved[0]=='m' ){ char *z; z = mprintf("%s%s", g.zLocalRoot, zReserved); unlink(z); free(z); } } } /* ** COMMAND: close ** ** Usage: %fossil close ?-f|--force? ** ** The opposite of "open". Close the current database connection. ** Require a -f or --force flag if there are unsaved changed in the ** current check-out. */ void close_cmd(void){ int forceFlag = find_option("force","f",0)!=0; db_must_be_within_tree(); if( !forceFlag && unsaved_changes()==1 ){ fossil_fatal("there are unsaved changes in the current checkout"); } unlink_local_database(1); db_close(1); unlink_local_database(0); } |
Changes to src/clone.c.
︙ | ︙ | |||
33 34 35 36 37 38 39 | ** ** By default, your current login name is used to create the default ** admin user. This can be overridden using the -A|--admin-user ** parameter. ** ** Options: ** | | > > > > | | | | | | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | ** ** By default, your current login name is used to create the default ** admin user. This can be overridden using the -A|--admin-user ** parameter. ** ** Options: ** ** --admin-user|-A USERNAME Make USERNAME the administrator ** --private Also clone private branches ** */ void clone_cmd(void){ char *zPassword; const char *zDefaultUser; /* Optional name of the default user */ int nErr = 0; int bPrivate; /* Also clone private branches */ bPrivate = find_option("private",0,0)!=0; url_proxy_options(); if( g.argc < 4 ){ usage("?OPTIONS? FILE-OR-URL NEW-REPOSITORY"); } db_open_config(0); if( file_size(g.argv[3])>0 ){ fossil_panic("file already exists: %s", g.argv[3]); } zDefaultUser = find_option("admin-user","A",1); url_parse(g.argv[2]); if( g.urlIsFile ){ file_copy(g.urlName, g.argv[3]); db_close(1); db_open_repository(g.argv[3]); db_record_repository_filename(g.argv[3]); db_multi_exec( "REPLACE INTO config(name,value,mtime)" " VALUES('server-code', lower(hex(randomblob(20))),now());" "REPLACE INTO config(name,value,mtime)" " VALUES('last-sync-url', '%q',now());", g.urlCanonical ); db_multi_exec( "DELETE FROM blob WHERE rid IN private;" "DELETE FROM delta wHERE rid IN private;" "DELETE FROM private;" ); |
︙ | ︙ | |||
86 87 88 89 90 91 92 | db_record_repository_filename(g.argv[3]); db_initial_setup(0, zDefaultUser, 0); user_select(); db_set("content-schema", CONTENT_SCHEMA, 0); db_set("aux-schema", AUX_SCHEMA, 0); db_set("last-sync-url", g.argv[2], 0); db_multi_exec( | | | > | | > > > > | | 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | db_record_repository_filename(g.argv[3]); db_initial_setup(0, zDefaultUser, 0); user_select(); db_set("content-schema", CONTENT_SCHEMA, 0); db_set("aux-schema", AUX_SCHEMA, 0); db_set("last-sync-url", g.argv[2], 0); db_multi_exec( "REPLACE INTO config(name,value,mtime)" " VALUES('server-code', lower(hex(randomblob(20))), now());" ); url_enable_proxy(0); url_get_password_if_needed(); g.xlinkClusterOnly = 1; nErr = client_sync(0,0,1,bPrivate,CONFIGSET_ALL,0); g.xlinkClusterOnly = 0; verify_cancel(); db_end_transaction(0); db_close(1); if( nErr ){ unlink(g.argv[3]); fossil_fatal("server returned an error - clone aborted"); } db_open_repository(g.argv[3]); } db_begin_transaction(); printf("Rebuilding repository meta-data...\n"); rebuild_db(0, 1, 0); printf("project-id: %s\n", db_get("project-code", 0)); printf("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); printf("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword); db_end_transaction(0); } |
Changes to src/config.h.
︙ | ︙ | |||
87 88 89 90 91 92 93 | /* ** Typedef for a 64-bit integer */ typedef sqlite3_int64 i64; typedef sqlite3_uint64 u64; /* | | > | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 | /* ** Typedef for a 64-bit integer */ typedef sqlite3_int64 i64; typedef sqlite3_uint64 u64; /* ** 8-bit types */ typedef unsigned char u8; typedef signed char i8; /* In the timeline, check-in messages are truncated at the first space ** that is more than MX_CKIN_MSG from the beginning, or at the first ** paragraph break that is more than MN_CKIN_MSG from the beginning. */ #define MN_CKIN_MSG 100 #define MX_CKIN_MSG 300 |
︙ | ︙ | |||
120 121 122 123 124 125 126 | # define FOSSIL_INT_TO_PTR(X) ((void*)&((char*)0)[X]) # define FOSSIL_PTR_TO_INT(X) ((int)(((char*)X)-(char*)0)) #else /* Generates a warning - but it always works */ # define FOSSIL_INT_TO_PTR(X) ((void*)(X)) # define FOSSIL_PTR_TO_INT(X) ((int)(X)) #endif | < < < < < < < < < < < < < < < | 121 122 123 124 125 126 127 128 | # define FOSSIL_INT_TO_PTR(X) ((void*)&((char*)0)[X]) # define FOSSIL_PTR_TO_INT(X) ((int)(((char*)X)-(char*)0)) #else /* Generates a warning - but it always works */ # define FOSSIL_INT_TO_PTR(X) ((void*)(X)) # define FOSSIL_PTR_TO_INT(X) ((int)(X)) #endif #endif /* _RC_COMPILE_ */ |
Changes to src/configure.c.
1 2 3 4 5 6 | /* ** Copyright (c) 2008 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) | | > | | | | | | | > > > | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 | /* ** Copyright (c) 2008 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to manage repository configurations. ** ** By "responsitory configure" we mean the local state of a repository ** distinct from the versioned files. */ #include "config.h" #include "configure.h" #include <assert.h> #if INTERFACE /* ** Configuration transfers occur in groups. These are the allowed ** groupings: */ #define CONFIGSET_SKIN 0x000001 /* WWW interface appearance */ #define CONFIGSET_TKT 0x000002 /* Ticket configuration */ #define CONFIGSET_PROJ 0x000004 /* Project name */ #define CONFIGSET_SHUN 0x000008 /* Shun settings */ #define CONFIGSET_USER 0x000010 /* The USER table */ #define CONFIGSET_ADDR 0x000020 /* The CONCEALED table */ #define CONFIGSET_ALL 0x0000ff /* Everything */ #define CONFIGSET_OVERWRITE 0x100000 /* Causes overwrite instead of merge */ #define CONFIGSET_OLDFORMAT 0x200000 /* Use the legacy format */ #endif /* INTERFACE */ /* ** Names of the configuration sets */ static struct { const char *zName; /* Name of the configuration set */ int groupMask; /* Mask for that configuration set */ const char *zHelp; /* What it does */ } aGroupName[] = { { "/email", CONFIGSET_ADDR, "Concealed email addresses in tickets" }, { "/project", CONFIGSET_PROJ, "Project name and description" }, { "/skin", CONFIGSET_SKIN, "Web interface apparance settings" }, { "/shun", CONFIGSET_SHUN, "List of shunned artifacts" }, { "/ticket", CONFIGSET_TKT, "Ticket setup", }, { "/user", CONFIGSET_USER, "Users and privilege settings" }, { "/all", CONFIGSET_ALL, "All of the above" }, }; /* ** The following is a list of settings that we are willing to ** transfer. ** |
︙ | ︙ | |||
73 74 75 76 77 78 79 80 81 82 83 84 85 86 | { "header", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, { "logo-mimetype", CONFIGSET_SKIN }, { "logo-image", CONFIGSET_SKIN }, { "project-name", CONFIGSET_PROJ }, { "project-description", CONFIGSET_PROJ }, { "manifest", CONFIGSET_PROJ }, { "index-page", CONFIGSET_SKIN }, { "timeline-block-markup", CONFIGSET_SKIN }, { "timeline-max-comment", CONFIGSET_SKIN }, { "ticket-table", CONFIGSET_TKT }, { "ticket-common", CONFIGSET_TKT }, { "ticket-newpage", CONFIGSET_TKT }, { "ticket-viewpage", CONFIGSET_TKT }, | > > | 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | { "header", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, { "logo-mimetype", CONFIGSET_SKIN }, { "logo-image", CONFIGSET_SKIN }, { "project-name", CONFIGSET_PROJ }, { "project-description", CONFIGSET_PROJ }, { "manifest", CONFIGSET_PROJ }, { "ignore-glob", CONFIGSET_PROJ }, { "crnl-glob", CONFIGSET_PROJ }, { "index-page", CONFIGSET_SKIN }, { "timeline-block-markup", CONFIGSET_SKIN }, { "timeline-max-comment", CONFIGSET_SKIN }, { "ticket-table", CONFIGSET_TKT }, { "ticket-common", CONFIGSET_TKT }, { "ticket-newpage", CONFIGSET_TKT }, { "ticket-viewpage", CONFIGSET_TKT }, |
︙ | ︙ | |||
101 102 103 104 105 106 107 | ** Return name of first configuration property matching the given mask. */ const char *configure_first_name(int iMask){ iConfig = 0; return configure_next_name(iMask); } const char *configure_next_name(int iMask){ | > | | | | | > > > > > > > > > > > > > > > > > > > > > > > | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 | ** Return name of first configuration property matching the given mask. */ const char *configure_first_name(int iMask){ iConfig = 0; return configure_next_name(iMask); } const char *configure_next_name(int iMask){ if( iMask & CONFIGSET_OLDFORMAT ){ while( iConfig<count(aConfig) ){ if( aConfig[iConfig].groupMask & iMask ){ return aConfig[iConfig++].zName; }else{ iConfig++; } } }else{ if( iConfig==0 && (iMask & CONFIGSET_ALL)==CONFIGSET_ALL ){ iConfig = count(aGroupName); return "/all"; } while( iConfig<count(aGroupName)-1 ){ if( aGroupName[iConfig].groupMask & iMask ){ return aGroupName[iConfig++].zName; }else{ iConfig++; } } } return 0; } /* ** Return the mask for the named configuration parameter if it can be ** safely exported. Return 0 if the parameter is not safe to export. ** ** "Safe" in the previous paragraph means the permission is created to ** export the property. In other words, the requesting side has presented ** login credentials and has sufficient capabilities to access the requested ** information. */ int configure_is_exportable(const char *zName){ int i; int n = strlen(zName); if( n>2 && zName[0]=='\'' && zName[n-1]=='\'' ){ zName++; n -= 2; } for(i=0; i<count(aConfig); i++){ if( memcmp(zName, aConfig[i].zName, n)==0 && aConfig[i].zName[n]==0 ){ int m = aConfig[i].groupMask; if( !g.okAdmin ){ m &= ~CONFIGSET_USER; } if( !g.okRdAddr ){ m &= ~CONFIGSET_ADDR; } |
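configure_first_name() and configure_next_name() together form a small iterator over whatever the mask selects: with the new format they walk the group names from aGroupName[] (returning just "/all" when every group bit is set), and when CONFIGSET_OLDFORMAT is included they walk the individual setting names from aConfig[] instead. A typical driving loop (sketch built on the functions above; the helper name is made up):

    #include <stdio.h>

    /* List the configuration areas selected by a mask. */
    static void list_config_areas(int mask){
      const char *zName;
      for(zName = configure_first_name(mask);
          zName!=0;
          zName = configure_next_name(mask)){
        printf("%s\n", zName);
      }
    }

    /* list_config_areas(CONFIGSET_SKIN|CONFIGSET_TKT) prints "/skin" and
    ** "/ticket"; OR in CONFIGSET_OLDFORMAT and it prints setting names
    ** such as "css", "header" and "ticket-table" instead. */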
︙ | ︙ | |||
140 141 142 143 144 145 146 | ** zName is one of the special configuration names that refers to an entire ** table rather than a single entry in CONFIG. Special names are "@reportfmt" ** and "@shun" and "@user". This routine writes SQL text into pOut that when ** evaluated will populate the corresponding table with data. */ void configure_render_special_name(const char *zName, Blob *pOut){ Stmt q; | | | | | | | | | | > | | | | | | | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 | ** zName is one of the special configuration names that refers to an entire ** table rather than a single entry in CONFIG. Special names are "@reportfmt" ** and "@shun" and "@user". This routine writes SQL text into pOut that when ** evaluated will populate the corresponding table with data. */ void configure_render_special_name(const char *zName, Blob *pOut){ Stmt q; if( fossil_strcmp(zName, "@shun")==0 ){ db_prepare(&q, "SELECT uuid FROM shun"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pOut, "INSERT OR IGNORE INTO shun VALUES('%s');\n", db_column_text(&q, 0) ); } db_finalize(&q); }else if( fossil_strcmp(zName, "@reportfmt")==0 ){ db_prepare(&q, "SELECT title, cols, sqlcode FROM reportfmt"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pOut, "INSERT INTO _xfer_reportfmt(title,cols,sqlcode)" " VALUES(%Q,%Q,%Q);\n", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2) ); } db_finalize(&q); }else if( fossil_strcmp(zName, "@user")==0 ){ db_prepare(&q, "SELECT login, CASE WHEN length(pw)==40 THEN pw END," " cap, info, quote(photo) FROM user"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pOut, "INSERT INTO _xfer_user(login,pw,cap,info,photo)" " VALUES(%Q,%Q,%Q,%Q,%s);\n", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2), db_column_text(&q, 3), db_column_text(&q, 4) ); } db_finalize(&q); }else if( fossil_strcmp(zName, "@concealed")==0 ){ db_prepare(&q, "SELECT hash, content FROM concealed"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pOut, "INSERT OR IGNORE INTO concealed(hash,content)" " VALUES(%Q,%Q);\n", db_column_text(&q, 0), db_column_text(&q, 1) ); } db_finalize(&q); } } /* ** Two SQL functions: ** ** config_is_reset(int) ** config_reset(int) ** ** The config_is_reset() function takes the integer valued argument and ** ANDs it against the static variable "configHasBeenReset" below. The ** function returns TRUE or FALSE depending on the result depending on ** whether or not the corresponding configuration table has been reset. The ** config_reset() function adds the bits to "configHasBeenReset" that ** are given in the argument. ** ** These functions are used below in the WHEN clause of a trigger to ** get the trigger to fire exactly once. 
*/ static int configHasBeenReset = 0; static void config_is_reset_function( sqlite3_context *context, int argc, sqlite3_value **argv ){ int m = sqlite3_value_int(argv[0]); sqlite3_result_int(context, (configHasBeenReset&m)!=0 ); } static void config_reset_function( sqlite3_context *context, int argc, sqlite3_value **argv ){ int m = sqlite3_value_int(argv[0]); configHasBeenReset |= m; } /* ** Create the temporary _xfer_reportfmt and _xfer_user tables that are ** necessary in order to evalute the SQL text generated by the ** configure_render_special_name() routine. ** |
︙ | ︙ |
251 252 253 254 255 256 257 | @ cap TEXT, -- Capabilities of this user @ cookie TEXT, -- WWW login cookie @ ipaddr TEXT, -- IP address for which cookie is valid @ cexpire DATETIME, -- Time when cookie expires @ info TEXT, -- contact information @ photo BLOB -- JPEG image of this user @ ); | | > | > | | | | | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | > | | | | | > | > < | | | < < < < < < < < < < < < | < < < | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 | @ cap TEXT, -- Capabilities of this user @ cookie TEXT, -- WWW login cookie @ ipaddr TEXT, -- IP address for which cookie is valid @ cexpire DATETIME, -- Time when cookie expires @ info TEXT, -- contact information @ photo BLOB -- JPEG image of this user @ ); @ INSERT INTO _xfer_reportfmt @ SELECT rn,owner,title,cols,sqlcode FROM reportfmt; @ INSERT INTO _xfer_user @ SELECT uid,login,pw,cap,cookie,ipaddr,cexpire,info,photo FROM user; ; db_multi_exec(zSQL1); /* When the replace flag is set, add triggers that run the first time ** that new data is seen. The triggers run only once and delete all the ** existing data. 
*/ if( replaceFlag ){ static const char zSQL2[] = @ CREATE TRIGGER _xfer_r1 BEFORE INSERT ON _xfer_reportfmt @ WHEN NOT config_is_reset(2) BEGIN @ DELETE FROM _xfer_reportfmt; @ SELECT config_reset(2); @ END; @ CREATE TRIGGER _xfer_r2 BEFORE INSERT ON _xfer_user @ WHEN NOT config_is_reset(16) BEGIN @ DELETE FROM _xfer_user; @ SELECT config_reset(16); @ END; @ CREATE TEMP TRIGGER _xfer_r3 BEFORE INSERT ON shun @ WHEN NOT config_is_reset(8) BEGIN @ DELETE FROM shun; @ SELECT config_reset(8); @ END; ; sqlite3_create_function(g.db, "config_is_reset", 1, SQLITE_UTF8, 0, config_is_reset_function, 0, 0); sqlite3_create_function(g.db, "config_reset", 1, SQLITE_UTF8, 0, config_reset_function, 0, 0); configHasBeenReset = 0; db_multi_exec(zSQL2); } } /* ** After receiving configuration data, call this routine to transfer ** the results into the main database. */ void configure_finalize_receive(void){ static const char zSQL[] = @ DELETE FROM user; @ INSERT INTO user SELECT * FROM _xfer_user; @ DELETE FROM reportfmt; @ INSERT INTO reportfmt SELECT * FROM _xfer_reportfmt; @ DROP TABLE _xfer_user; @ DROP TABLE _xfer_reportfmt; ; db_multi_exec(zSQL); } /* ** Return true if z[] is not a "safe" SQL token. A safe token is one of: ** ** * A string literal ** * A blob literal ** * An integer literal (no floating point) ** * NULL */ static int safeSql(const char *z){ int i; if( z==0 || z[0]==0 ) return 0; if( (z[0]=='x' || z[0]=='X') && z[1]=='\'' ) z++; if( z[0]=='\'' ){ for(i=1; z[i]; i++){ if( z[i]=='\'' ){ i++; if( z[i]=='\'' ){ continue; } return z[i]==0; } } return 0; }else{ char c; for(i=0; (c = z[i])!=0; i++){ if( !fossil_isalnum(c) ) return 0; } } return 1; } /* ** Return true if z[] consists of nothing but digits */ static int safeInt(const char *z){ int i; if( z==0 || z[0]==0 ) return 0; for(i=0; fossil_isdigit(z[i]); i++){} return z[i]==0; } /* ** Process a single "config" card received from the other side of a ** sync session. ** ** Mask consists of one or more CONFIGSET_* values ORed together, to ** designate what types of configuration we are allowed to receive. ** ** NEW FORMAT: ** ** zName is one of "/config", "/user", "/shun", "/reportfmt", or "/concealed". ** zName indicates the table that holds the configuration information being ** transferred. pContent is a string that consist of alternating Fossil ** and SQL tokens. The First token is a timestamp in seconds since 1970. ** The second token is a primary key for the table identified by zName. If ** The entry with the corresponding primary key exists and has a more recent ** mtime, then nothing happens. If the entry does not exist or if it has ** an older mtime, then the content described by subsequent token pairs is ** inserted. The first element of each token pair is a column name and ** the second is its value. ** ** In overview, we have: ** ** NAME CONTENT ** ------- ----------------------------------------------------------- ** /config $MTIME $NAME value $VALUE ** /user $MTIME $LOGIN pw $VALUE cap $VALUE info $VALUE photo $VALUE ** /shun $MTIME $UUID scom $VALUE ** /reportfmt $MTIME $TITLE owner $VALUE cols $VALUE sqlcode $VALUE ** /concealed $MTIME $HASH content $VALUE ** ** OLD FORMAT: ** ** The old format is retained for backwards compatiblity, but is deprecated. ** The cutover from old format to new was on 2011-04-25. After sufficient ** time has passed, support for the old format will be removed. 
** ** zName is either the NAME of an element of the CONFIG table, or else ** one of the special names "@shun", "@reportfmt", "@user", or "@concealed". ** If zName is a CONFIG table name, then CONTENT replaces (overwrites) the ** element in the CONFIG table. For one of the @-labels, CONTENT is raw ** SQL that is evaluated. Note that the raw SQL in CONTENT might not ** insert directly into the target table but might instead use a proxy ** table like _fer_reportfmt or _xfer_user. Such tables must be created ** ahead of time using configure_prepare_to_receive(). Then after multiple ** calls to this routine, configure_finalize_receive() to transfer the ** information received into the true target table. */ void configure_receive(const char *zName, Blob *pContent, int groupMask){ if( zName[0]=='/' ){ /* The new format */ char *azToken[12]; int nToken = 0; int ii, jj; int thisMask; Blob name, value, sql; static const struct receiveType { const char *zName; const char *zPrimKey; int nField; const char *azField[4]; } aType[] = { { "/config", "name", 1, { "value", 0, 0, 0 } }, { "@user", "login", 4, { "pw", "cap", "info", "photo" } }, { "@shun", "uuid", 1, { "scom", 0, 0, 0 } }, { "@reportfmt", "title", 3, { "owner", "cols", "sqlcode", 0 } }, { "@concealed", "hash", 1, { "content", 0, 0, 0 } }, }; for(ii=0; ii<count(aType); ii++){ if( fossil_strcmp(&aType[ii].zName[1],&zName[1])==0 ) break; } if( ii>=count(aType) ) return; while( blob_token(pContent, &name) && blob_sqltoken(pContent, &value) ){ char *z = blob_terminate(&name); if( !safeSql(z) ) return; if( nToken>0 ){ for(jj=0; jj<aType[ii].nField; jj++){ if( fossil_strcmp(aType[ii].azField[jj], z)==0 ) break; } if( jj>=aType[ii].nField ) continue; }else{ if( !safeInt(z) ) return; } azToken[nToken++] = z; azToken[nToken++] = z = blob_terminate(&value); if( !safeSql(z) ) return; if( nToken>=count(azToken) ) break; } if( nToken<2 ) return; if( aType[ii].zName[0]=='/' ){ thisMask = configure_is_exportable(azToken[1]); }else{ thisMask = configure_is_exportable(aType[ii].zName); } if( (thisMask & groupMask)==0 ) return; blob_zero(&sql); if( groupMask & CONFIGSET_OVERWRITE ){ if( (thisMask & configHasBeenReset)==0 && aType[ii].zName[0]!='/' ){ db_multi_exec("DELETE FROM %s", &aType[ii].zName[1]); configHasBeenReset |= thisMask; } blob_append(&sql, "REPLACE INTO ", -1); }else{ blob_append(&sql, "INSERT OR IGNORE INTO ", -1); } blob_appendf(&sql, "%s(%s, mtime", &zName[1], aType[ii].zPrimKey); for(jj=2; jj<nToken; jj+=2){ blob_appendf(&sql, ",%s", azToken[jj]); } blob_appendf(&sql,") VALUES(%s,%s", azToken[1], azToken[0]); for(jj=2; jj<nToken; jj+=2){ blob_appendf(&sql, ",%s", azToken[jj+1]); } db_multi_exec("%s)", blob_str(&sql)); if( db_changes()==0 ){ blob_reset(&sql); blob_appendf(&sql, "UPDATE %s SET mtime=%s", &zName[1], azToken[0]); for(jj=2; jj<nToken; jj+=2){ blob_appendf(&sql, ", %s=%s", azToken[jj], azToken[jj+1]); } blob_appendf(&sql, " WHERE %s=%s AND mtime<%s", aType[ii].zPrimKey, azToken[1], azToken[0]); db_multi_exec("%s", blob_str(&sql)); } blob_reset(&sql); }else{ /* Otherwise, the old format */ if( (configure_is_exportable(zName) & groupMask)==0 ) return; if( strcmp(zName, "logo-image")==0 ){ Stmt ins; db_prepare(&ins, "REPLACE INTO config(name, value, mtime) VALUES(:name, :value, now())" ); db_bind_text(&ins, ":name", zName); db_bind_blob(&ins, ":value", pContent); db_step(&ins); db_finalize(&ins); }else if( zName[0]=='@' ){ /* Notice that we are evaluating arbitrary SQL received from the ** client. 
But this can only happen if the client has authenticated ** as an administrator, so presumably we trust the client at this ** point. */ db_multi_exec("%s", blob_str(pContent)); }else{ db_multi_exec( "REPLACE INTO config(name,value,mtime) VALUES(%Q,%Q,now())", zName, blob_str(pContent) ); } } } /* ** Process a file full of "config" cards. */ void configure_receive_all(Blob *pIn, int groupMask){ Blob line; int nToken; int size; Blob aToken[4]; configHasBeenReset = 0; while( blob_line(pIn, &line) ){ if( blob_buffer(&line)[0]=='#' ) continue; nToken = blob_tokenize(&line, aToken, count(aToken)); if( blob_eq(&aToken[0],"config") && nToken==3 && blob_is_int(&aToken[2], &size) ){ const char *zName = blob_str(&aToken[1]); Blob content; blob_zero(&content); blob_extract(pIn, size, &content); g.okAdmin = g.okRdAddr = 1; configure_receive(zName, &content, groupMask); blob_reset(&content); blob_seek(pIn, 1, BLOB_SEEK_CUR); } } } /* ** Send "config" cards using the new format for all elements of a group ** that have recently changed. ** ** Output goes into pOut. The groupMask identifies the group(s) to be sent. ** Send only entries whose timestamp is later than or equal to iStart. ** ** Return the number of cards sent. */ int configure_send_group( Blob *pOut, /* Write output here */ int groupMask, /* Mask of groups to be send */ sqlite3_int64 iStart /* Only write values changed since this time */ ){ Stmt q; Blob rec; int ii; int nCard = 0; blob_zero(&rec); if( groupMask & CONFIGSET_SHUN ){ db_prepare(&q, "SELECT mtime, quote(uuid), quote(scom) FROM shun" " WHERE mtime>=%lld", iStart); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&rec,"%s %s scom %s", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2) ); blob_appendf(pOut, "config /shun %d\n%s\n", blob_size(&rec), blob_str(&rec)); nCard++; blob_reset(&rec); } db_finalize(&q); } if( groupMask & CONFIGSET_USER ){ db_prepare(&q, "SELECT mtime, quote(login), quote(pw), quote(cap)," " quote(info), quote(photo) FROM user" " WHERE mtime>=%lld", iStart); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&rec,"%s %s pw %s cap %s info %s photo %s", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2), db_column_text(&q, 3), db_column_text(&q, 4), db_column_text(&q, 5) ); blob_appendf(pOut, "config /user %d\n%s\n", blob_size(&rec), blob_str(&rec)); nCard++; blob_reset(&rec); } db_finalize(&q); } if( groupMask & CONFIGSET_TKT ){ db_prepare(&q, "SELECT mtime, quote(title), quote(owner), quote(cols)," " quote(sqlcode) FROM reportfmt" " WHERE mtime>=%lld", iStart); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&rec,"%s %s owner %s cols %s sqlcode %s", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2), db_column_text(&q, 3), db_column_text(&q, 4) ); blob_appendf(pOut, "config /reportfmt %d\n%s\n", blob_size(&rec), blob_str(&rec)); nCard++; blob_reset(&rec); } db_finalize(&q); } if( groupMask & CONFIGSET_ADDR ){ db_prepare(&q, "SELECT mtime, quote(hash), quote(content) FROM concealed" " WHERE mtime>=%lld", iStart); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&rec,"%s %s content %s", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2) ); blob_appendf(pOut, "config /concealed %d\n%s\n", blob_size(&rec), blob_str(&rec)); nCard++; blob_reset(&rec); } db_finalize(&q); } db_prepare(&q, "SELECT mtime, quote(name), quote(value) FROM config" " WHERE name=:name AND mtime>=%lld", iStart); for(ii=0; ii<count(aConfig); ii++){ if( (aConfig[ii].groupMask & groupMask)!=0 && aConfig[ii].zName[0]!='@' ){ 
db_bind_text(&q, ":name", aConfig[ii].zName); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&rec,"%s %s value %s", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2) ); blob_appendf(pOut, "config /config %d\n%s\n", blob_size(&rec), blob_str(&rec)); nCard++; blob_reset(&rec); } db_reset(&q); } } db_finalize(&q); return nCard; } /* ** Identify a configuration group by name. Return its mask. ** Throw an error if no match. */ int configure_name_to_mask(const char *z, int notFoundIsFatal){ int i; int n = strlen(z); for(i=0; i<count(aGroupName); i++){ if( strncmp(z, &aGroupName[i].zName[1], n)==0 ){ return aGroupName[i].groupMask; } } if( notFoundIsFatal ){ printf("Available configuration areas:\n"); for(i=0; i<count(aGroupName); i++){ printf(" %-10s %s\n", &aGroupName[i].zName[1], aGroupName[i].zHelp); } fossil_fatal("no such configuration area: \"%s\"", z); } return 0; } /* ** Write SQL text into file zFilename that will restore the configuration ** area identified by mask to its current state from any other state. */ static void export_config( int groupMask, /* Mask indicating which configuration to export */ const char *zMask, /* Name of the configuration */ sqlite3_int64 iStart, /* Start date */ const char *zFilename /* Write into this file */ ){ Blob out; blob_zero(&out); blob_appendf(&out, "# The \"%s\" configuration exported from\n" "# repository \"%s\"\n" "# on %s\n", zMask, g.zRepositoryName, db_text(0, "SELECT datetime('now')") ); configure_send_group(&out, groupMask, iStart); blob_write_to_file(&out, zFilename); blob_reset(&out); } /* ** COMMAND: configuration |
︙ | ︙ |
391 392 393 394 395 396 397 | ** the current configuration. Existing values take priority over ** values read from FILENAME. ** ** %fossil configuration pull AREA ?URL? ** ** Pull and install the configuration from a different server ** identified by URL. If no URL is specified, then the default | | > > > | > > | > | < > | > > | > > > > > > > > | > | | > > > | | > > > > > > > > > | > > | > > | | | | | | | | | 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 | ** the current configuration. Existing values take priority over ** values read from FILENAME. ** ** %fossil configuration pull AREA ?URL? ** ** Pull and install the configuration from a different server ** identified by URL. If no URL is specified, then the default ** server is used. Use the --legacy option for the older protocol ** (when talking to servers compiled prior to 2011-04-27.) Use ** the --overwrite flag to completely replace local settings with ** content received from URL. ** ** %fossil configuration push AREA ?URL? ** ** Push the local configuration into the remote server identified ** by URL. Admin privilege is required on the remote server for ** this to work. When the same record exists both locally and on ** the remote end, the one that was most recently changed wins. ** Use the --legacy flag when talking to holder servers. ** ** %fossil configuration reset AREA ** ** Restore the configuration to the default. AREA as above. ** ** %fossil configuration sync AREA ?URL? ** ** Synchronize configuration changes in the local repository with ** the remote repository at URL. 
*/ void configuration_cmd(void){ int n; const char *zMethod; if( g.argc<3 ){ usage("export|import|merge|pull|reset ..."); } db_find_and_open_repository(0, 0); zMethod = g.argv[2]; n = strlen(zMethod); if( strncmp(zMethod, "export", n)==0 ){ int mask; const char *zSince = find_option("since",0,1); sqlite3_int64 iStart; if( g.argc!=5 ){ usage("export AREA FILENAME"); } mask = configure_name_to_mask(g.argv[3], 1); if( zSince ){ iStart = db_multi_exec( "SELECT coalesce(strftime('%%s',%Q),strftime('%%s','now',%Q))+0", zSince, zSince ); }else{ iStart = 0; } export_config(mask, g.argv[3], iStart, g.argv[4]); }else if( strncmp(zMethod, "import", n)==0 || strncmp(zMethod, "merge", n)==0 ){ Blob in; int groupMask; if( g.argc!=4 ) usage(mprintf("%s FILENAME",zMethod)); blob_read_from_file(&in, g.argv[3]); db_begin_transaction(); if( zMethod[0]=='i' ){ groupMask = CONFIGSET_ALL | CONFIGSET_OVERWRITE; }else{ groupMask = CONFIGSET_ALL; } configure_receive_all(&in, groupMask); db_end_transaction(0); }else if( strncmp(zMethod, "pull", n)==0 || strncmp(zMethod, "push", n)==0 || strncmp(zMethod, "sync", n)==0 ){ int mask; const char *zServer; const char *zPw; int legacyFlag = 0; int overwriteFlag = 0; if( zMethod[0]!='s' ) legacyFlag = find_option("legacy",0,0)!=0; if( strncmp(zMethod,"pull",n)==0 ){ overwriteFlag = find_option("overwrite",0,0)!=0; } url_proxy_options(); if( g.argc!=4 && g.argc!=5 ){ usage("pull AREA ?URL?"); } mask = configure_name_to_mask(g.argv[3], 1); if( g.argc==5 ){ zServer = g.argv[4]; zPw = 0; g.dontKeepUrl = 1; }else{ zServer = db_get("last-sync-url", 0); if( zServer==0 ){ fossil_fatal("no server specified"); } zPw = unobscure(db_get("last-sync-pw", 0)); } url_parse(zServer); if( g.urlPasswd==0 && zPw ) g.urlPasswd = mprintf("%s", zPw); user_select(); url_enable_proxy("via proxy: "); if( legacyFlag ) mask |= CONFIGSET_OLDFORMAT; if( overwriteFlag ) mask |= CONFIGSET_OVERWRITE; if( strncmp(zMethod, "push", n)==0 ){ client_sync(0,0,0,0,0,mask); }else if( strncmp(zMethod, "pull", n)==0 ){ client_sync(0,0,0,0,mask,0); }else{ client_sync(0,0,0,0,mask,mask); } }else if( strncmp(zMethod, "reset", n)==0 ){ int mask, i; char *zBackup; if( g.argc!=4 ) usage("reset AREA"); mask = configure_name_to_mask(g.argv[3], 1); zBackup = db_text(0, "SELECT strftime('config-backup-%%Y%%m%%d%%H%%M%%f','now')"); db_begin_transaction(); export_config(mask, g.argv[3], 0, zBackup); for(i=0; i<count(aConfig); i++){ const char *zName = aConfig[i].zName; if( (aConfig[i].groupMask & mask)==0 ) continue; if( zName[0]!='@' ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); }else if( fossil_strcmp(zName,"@user")==0 ){ db_multi_exec("DELETE FROM user"); db_create_default_users(0, 0); }else if( fossil_strcmp(zName,"@concealed")==0 ){ db_multi_exec("DELETE FROM concealed"); }else if( fossil_strcmp(zName,"@shun")==0 ){ db_multi_exec("DELETE FROM shun"); }else if( fossil_strcmp(zName,"@reportfmt")==0 ){ db_multi_exec("DELETE FROM reportfmt"); } } db_end_transaction(0); printf("Configuration reset to factory defaults.\n"); printf("To recover, use: %s %s import %s\n", fossil_nameofexe(), g.argv[1], zBackup); }else { fossil_fatal("METHOD should be one of:" " export import merge pull push reset"); } } |
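Note on the new card layout described above: each new-format card pairs a payload byte count with a single line holding the mtime, the primary key, and alternating column/value tokens. A rough sketch of what configure_send_group() emits for one shunned artifact, with the size, UUID and comment invented purely for illustration (the real size is the byte length of the payload line, and the quoted values come from SQL quote()):

    config /shun NNN
    MTIME 'UUID' scom 'COMMENT'

On the receiving side, configure_receive() drops any card whose tokens fail safeSql()/safeInt() and silently skips column names that are not listed in its aType[] table, so stray or malformed fields never reach the generated SQL.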
Changes to src/content.c.
︙ | ︙ |
69 70 71 72 73 74 75 | blob_reset(&contentCache.a[mn].content); contentCache.n--; contentCache.a[mn] = contentCache.a[contentCache.n]; } } /* | | > > > > > > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | blob_reset(&contentCache.a[mn].content); contentCache.n--; contentCache.a[mn] = contentCache.a[contentCache.n]; } } /* ** Add an entry to the content cache. ** ** This routines hands responsibility for the artifact over to the cache. ** The cache will deallocate memory when it has finished with it. */ void content_cache_insert(int rid, Blob *pBlob){ struct cacheLine *p; if( contentCache.n>500 || contentCache.szTotal>50000000 ){ i64 szBefore; do{ szBefore = contentCache.szTotal; content_cache_expire_oldest(); }while( contentCache.szTotal>50000000 && contentCache.szTotal<szBefore ); } if( contentCache.n>=contentCache.nAlloc ){ contentCache.nAlloc = contentCache.nAlloc*2 + 10; contentCache.a = fossil_realloc(contentCache.a, contentCache.nAlloc*sizeof(contentCache.a[0])); } p = &contentCache.a[contentCache.n++]; |
︙ | ︙ |
122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 | srcid = db_column_int(&q, 0); }else{ srcid = 0; } db_reset(&q); return srcid; } /* ** Check to see if content is available for artifact "rid". Return ** true if it is. Return false if rid is a phantom or depends on ** a phantom. */ int content_is_available(int rid){ int srcid; int depth = 0; /* Limit to recursion depth */ while( depth++ < 10000000 ){ if( bag_find(&contentCache.missing, rid) ){ return 0; } if( bag_find(&contentCache.available, rid) ){ return 1; } | > > > > > > > > > > > > > > > | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 | srcid = db_column_int(&q, 0); }else{ srcid = 0; } db_reset(&q); return srcid; } /* ** Return the blob.size field given blob.rid */ int content_size(int rid, int dflt){ static Stmt q; int sz = dflt; db_static_prepare(&q, "SELECT size FROM blob WHERE rid=:r"); db_bind_int(&q, ":r", rid); if( db_step(&q)==SQLITE_ROW ){ sz = db_column_int(&q, 0); } db_reset(&q); return sz; } /* ** Check to see if content is available for artifact "rid". Return ** true if it is. Return false if rid is a phantom or depends on ** a phantom. */ int content_is_available(int rid){ int srcid; int depth = 0; /* Limit to recursion depth */ while( depth++ < 10000000 ){ if( bag_find(&contentCache.missing, rid) ){ return 0; } if( bag_find(&contentCache.available, rid) ){ return 1; } if( content_size(rid, -1)<0 ){ bag_insert(&contentCache.missing, rid); return 0; } srcid = findSrcid(rid); if( srcid==0 ){ bag_insert(&contentCache.available, rid); return 1; |
︙ | ︙ |
295 296 297 298 299 300 301 | ** ** -R|--repository FILE Extract artifacts from repository FILE */ void artifact_cmd(void){ int rid; Blob content; const char *zFile; | | | 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | ** ** -R|--repository FILE Extract artifacts from repository FILE */ void artifact_cmd(void){ int rid; Blob content; const char *zFile; db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); if( g.argc!=4 && g.argc!=3 ) usage("ARTIFACT-ID ?FILENAME? ?OPTIONS?"); zFile = g.argc==4 ? g.argv[3] : "-"; rid = name_to_rid(g.argv[2]); content_get(rid, &content); blob_write_to_file(&content, zFile); } |
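Since artifact_cmd() now opens the repository with OPEN_ANY_SCHEMA, the schema version check is skipped, so artifacts can still be extracted from a repository whose schema is newer or older than this binary expects. A minimal invocation sketch, with the artifact ID and paths left as placeholders rather than real values:

    fossil artifact ARTIFACT-ID output.txt -R /path/to/repo.fossil

When the FILENAME argument is omitted, zFile defaults to "-" and the extracted content is written to standard output.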
︙ | ︙ |
359 360 361 362 363 364 365 | int nChildUsed = 0; int i; /* Parse the object rid itself */ if( linkFlag ){ content_get(rid, &content); manifest_crosslink(rid, &content); | | | | > | > > > > > | 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 | int nChildUsed = 0; int i; /* Parse the object rid itself */ if( linkFlag ){ content_get(rid, &content); manifest_crosslink(rid, &content); assert( blob_is_reset(&content) ); } /* Parse all delta-manifests that depend on baseline-manifest rid */ db_prepare(&q, "SELECT rid FROM orphan WHERE baseline=%d", rid); while( db_step(&q)==SQLITE_ROW ){ int child = db_column_int(&q, 0); if( nChildUsed>=nChildAlloc ){ nChildAlloc = nChildAlloc*2 + 10; aChild = fossil_realloc(aChild, nChildAlloc*sizeof(aChild)); } aChild[nChildUsed++] = child; } db_finalize(&q); for(i=0; i<nChildUsed; i++){ content_get(aChild[i], &content); manifest_crosslink(aChild[i], &content); assert( blob_is_reset(&content) ); } if( nChildUsed ){ db_multi_exec("DELETE FROM orphan WHERE baseline=%d", rid); } /* Recursively dephantomize all artifacts that are derived by ** delta from artifact rid and which have not already been ** cross-linked. */ nChildUsed = 0; db_prepare(&q, "SELECT rid FROM delta" " WHERE srcid=%d" " AND NOT EXISTS(SELECT 1 FROM mlink WHERE mid=delta.rid)", rid ); while( db_step(&q)==SQLITE_ROW ){ int child = db_column_int(&q, 0); if( nChildUsed>=nChildAlloc ){ nChildAlloc = nChildAlloc*2 + 10; aChild = fossil_realloc(aChild, nChildAlloc*sizeof(aChild)); } aChild[nChildUsed++] = child; |
︙ | ︙ |
419 420 421 422 423 424 425 | } /* ** Write content into the database. Return the record ID. If the ** content is already in the database, just return the record ID. ** ** If srcId is specified, then pBlob is delta content from | | > > > > > > > > | > > > > > > > > > > | > > > > | 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 | } /* ** Write content into the database. Return the record ID. If the ** content is already in the database, just return the record ID. ** ** If srcId is specified, then pBlob is delta content from ** the srcId record. srcId might be a phantom. ** ** pBlob is normally uncompressed text. But if nBlob>0 then the ** pBlob value has already been compressed and nBlob is its uncompressed ** size. If nBlob>0 then zUuid must be valid. ** ** zUuid is the UUID of the artifact, if it is specified. When srcId is ** specified then zUuid must always be specified. If srcId is zero, ** and zUuid is zero then the correct zUuid is computed from pBlob. ** ** If the record already exists but is a phantom, the pBlob content ** is inserted and the phatom becomes a real record. ** ** The original content of pBlob is not disturbed. The caller continues ** to be responsible for pBlob. This routine does *not* take over ** responsiblity for freeing pBlob. */ int content_put_ex( Blob *pBlob, /* Content to add to the repository */ const char *zUuid, /* SHA1 hash of reconstructed pBlob */ int srcId, /* pBlob is a delta from this entry */ int nBlob, /* pBlob is compressed. Original size is this */ int isPrivate /* The content should be marked private */ ){ int size; int rid; Stmt s1; Blob cmpr; Blob hash; int markAsUnclustered = 0; int isDephantomize = 0; assert( g.repositoryOpen ); assert( pBlob!=0 ); assert( srcId==0 || zUuid!=0 ); if( zUuid==0 ){ assert( pBlob!=0 ); assert( nBlob==0 ); sha1sum_blob(pBlob, &hash); }else{ blob_init(&hash, zUuid, -1); } if( nBlob ){ size = nBlob; }else{ size = blob_size(pBlob); if( srcId ){ size = delta_output_size(blob_buffer(pBlob), size); } } db_begin_transaction(); /* Check to see if the entry already exists and if it does whether ** or not the entry is a phantom */ db_prepare(&s1, "SELECT rid, size FROM blob WHERE uuid=%B", &hash); if( db_step(&s1)==SQLITE_ROW ){ |
︙ | ︙ |
479 480 481 482 483 484 485 | "INSERT INTO rcvfrom(uid, mtime, nonce, ipaddr)" "VALUES(%d, julianday('now'), %Q, %Q)", g.userUid, g.zNonce, g.zIpAddr ); g.rcvid = db_last_insert_rowid(); } | > > > | > | 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 | "INSERT INTO rcvfrom(uid, mtime, nonce, ipaddr)" "VALUES(%d, julianday('now'), %Q, %Q)", g.userUid, g.zNonce, g.zIpAddr ); g.rcvid = db_last_insert_rowid(); } if( nBlob ){ cmpr = pBlob[0]; }else{ blob_compress(pBlob, &cmpr); } if( rid>0 ){ /* We are just adding data to a phantom */ db_prepare(&s1, "UPDATE blob SET rcvid=%d, size=%d, content=:data WHERE rid=%d", g.rcvid, size, rid ); db_bind_blob(&s1, ":data", &cmpr); |
︙ | ︙ |
506 507 508 509 510 511 512 | ); db_bind_blob(&s1, ":data", &cmpr); db_exec(&s1); rid = db_last_insert_rowid(); if( !pBlob ){ db_multi_exec("INSERT OR IGNORE INTO phantom VALUES(%d)", rid); } | | | | 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | ); db_bind_blob(&s1, ":data", &cmpr); db_exec(&s1); rid = db_last_insert_rowid(); if( !pBlob ){ db_multi_exec("INSERT OR IGNORE INTO phantom VALUES(%d)", rid); } if( g.markPrivate || isPrivate ){ db_multi_exec("INSERT INTO private VALUES(%d)", rid); markAsUnclustered = 0; } } if( nBlob==0 ) blob_reset(&cmpr); /* If the srcId is specified, then the data we just added is ** really a delta. Record this fact in the delta table. */ if( srcId ){ db_multi_exec("REPLACE INTO delta(rid,srcid) VALUES(%d,%d)", rid, srcId); } |
︙ | ︙ |
544 545 546 547 548 549 550 551 552 553 554 | blob_reset(&hash); /* Make arrangements to verify that the data can be recovered ** before we commit */ verify_before_commit(rid); return rid; } /* ** Create a new phantom with the given UUID and return its artifact ID. */ | > > > > > > > > > > > > > > > > | | 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 | blob_reset(&hash); /* Make arrangements to verify that the data can be recovered ** before we commit */ verify_before_commit(rid); return rid; } /* ** This is the simple common case for inserting content into the ** repository. pBlob is the content to be inserted. ** ** pBlob is uncompressed and is not deltaed. It is exactly the content ** to be inserted. ** ** The original content of pBlob is not disturbed. The caller continues ** to be responsible for pBlob. This routine does *not* take over ** responsiblity for freeing pBlob. */ int content_put(Blob *pBlob){ return content_put_ex(pBlob, 0, 0, 0, 0); } /* ** Create a new phantom with the given UUID and return its artifact ID. */ int content_new(const char *zUuid, int isPrivate){ int rid; static Stmt s1, s2, s3; assert( g.repositoryOpen ); db_begin_transaction(); if( uuid_is_shunned(zUuid) ){ db_end_transaction(0); |
︙ | ︙ |
570 571 572 573 574 575 576 | db_exec(&s1); rid = db_last_insert_rowid(); db_static_prepare(&s2, "INSERT INTO phantom VALUES(:rid)" ); db_bind_int(&s2, ":rid", rid); db_exec(&s2); | | | | | 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 | db_exec(&s1); rid = db_last_insert_rowid(); db_static_prepare(&s2, "INSERT INTO phantom VALUES(:rid)" ); db_bind_int(&s2, ":rid", rid); db_exec(&s2); if( g.markPrivate || isPrivate ){ db_multi_exec("INSERT INTO private VALUES(%d)", rid); }else{ db_static_prepare(&s3, "INSERT INTO unclustered VALUES(:rid)" ); db_bind_int(&s3, ":rid", rid); db_exec(&s3); } bag_insert(&contentCache.missing, rid); db_end_transaction(0); return rid; } /* ** COMMAND: test-content-put ** ** Extract a blob from a file and write it into the database */ void test_content_put_cmd(void){ int rid; Blob content; if( g.argc!=3 ) usage("FILENAME"); db_must_be_within_tree(); user_select(); blob_read_from_file(&content, g.argv[2]); rid = content_put(&content); printf("inserted as record %d\n", rid); } /* ** Make sure the content at rid is the original content and is not a ** delta. */ |
︙ | ︙ |
678 679 680 681 682 683 684 | ** the source of the delta. It is OK to delta private->private and ** public->private and public->public. Just no private->public delta. ** ** If srcid is a delta that depends on rid, then srcid is ** converted to undeltaed text. ** ** If either rid or srcid contain less than 50 bytes, or if the | | | > > | > > | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 | ** the source of the delta. It is OK to delta private->private and ** public->private and public->public. Just no private->public delta. ** ** If srcid is a delta that depends on rid, then srcid is ** converted to undeltaed text. ** ** If either rid or srcid contain less than 50 bytes, or if the ** resulting delta does not achieve a compression of at least 25% ** the rid is left untouched. ** ** Return 1 if a delta is made and 0 if no delta occurs. */ int content_deltify(int rid, int srcid, int force){ int s; Blob data, src, delta; Stmt s1, s2; int rc = 0; if( srcid==rid ) return 0; if( !force && findSrcid(rid)>0 ) return 0; if( content_is_private(srcid) && !content_is_private(rid) ){ return 0; } s = srcid; while( (s = findSrcid(s))>0 ){ if( s==rid ){ content_undelta(srcid); break; } } content_get(srcid, &src); if( blob_size(&src)<50 ){ blob_reset(&src); return 0; } content_get(rid, &data); if( blob_size(&data)<50 ){ blob_reset(&src); blob_reset(&data); return 0; } blob_delta_create(&src, &data, &delta); if( blob_size(&delta) <= blob_size(&data)*0.75 ){ blob_compress(&delta, &delta); db_prepare(&s1, "UPDATE blob SET content=:data WHERE rid=%d", rid); db_prepare(&s2, "REPLACE INTO delta(rid,srcid)VALUES(%d,%d)", rid, srcid); db_bind_blob(&s1, ":data", &delta); db_begin_transaction(); db_exec(&s1); db_exec(&s2); db_end_transaction(0); db_finalize(&s1); db_finalize(&s2); verify_before_commit(rid); rc = 1; } blob_reset(&src); blob_reset(&data); blob_reset(&delta); return rc; } /* ** COMMAND: test-content-deltify ** ** Convert the content at RID into a delta from SRCID. */ void test_content_deltify_cmd(void){ if( g.argc!=5 ) usage("RID SRCID FORCE"); db_must_be_within_tree(); content_deltify(atoi(g.argv[2]), atoi(g.argv[3]), atoi(g.argv[4])); } /* ** COMMAND: test-integrity ** ** Verify that all content can be extracted from the BLOB table correctly. ** If the BLOB table is correct, then the repository can always be ** successfully reconstructed using "fossil rebuild". 
*/ void test_integrity(void){ Stmt q; Blob content; Blob cksum; int n1 = 0; int n2 = 0; int total; db_find_and_open_repository(OPEN_ANY_SCHEMA, 2); db_prepare(&q, "SELECT rid, uuid, size FROM blob ORDER BY rid"); total = db_int(0, "SELECT max(rid) FROM blob"); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); const char *zUuid = db_column_text(&q, 1); int size = db_column_int(&q, 2); n1++; printf(" %d/%d\r", n1, total); fflush(stdout); if( size<0 ){ printf("skip phantom %d %s\n", rid, zUuid); continue; /* Ignore phantoms */ } content_get(rid, &content); if( blob_size(&content)!=size ){ fossil_warning("size mismatch on blob rid=%d: %d vs %d", rid, blob_size(&content), size); } sha1sum_blob(&content, &cksum); if( strcmp(blob_str(&cksum), zUuid)!=0 ){ fossil_fatal("checksum mismatch on blob rid=%d: %s vs %s", rid, blob_str(&cksum), zUuid); } blob_reset(&cksum); blob_reset(&content); n2++; } db_finalize(&q); printf("%d non-phantom blobs (out of %d total) verified\n", n2, n1); } |
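A quick worked example of the content_deltify() thresholds above: both the artifact and its delta source must be at least 50 bytes, and the delta is adopted only when blob_size(&delta) <= blob_size(&data)*0.75, i.e. a saving of at least 25%. So a 1000-byte artifact is re-stored as a delta only if the computed delta is 750 bytes or smaller; otherwise the routine returns 0 and the existing full (compressed) content stays in the BLOB table.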
Changes to src/db.c.
︙ | ︙ |
32 33 34 35 36 37 38 39 40 41 42 43 44 45 | #if ! defined(_WIN32) # include <pwd.h> #endif #include <sqlite3.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include "db.h" #if INTERFACE /* ** An single SQL statement is represented as an instance of the following ** structure. */ | > | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | #if ! defined(_WIN32) # include <pwd.h> #endif #include <sqlite3.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include <time.h> #include "db.h" #if INTERFACE /* ** An single SQL statement is represented as an instance of the following ** structure. */ |
︙ | ︙ |
71 72 73 74 75 76 77 | } if( g.cgiOutput ){ g.cgiOutput = 0; cgi_printf("<h1>Database Error</h1>\n" "<pre>%h</pre><p>%s</p>", z, zRebuildMsg); cgi_reply(); }else{ | | < > > > > > > > > > > > > | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | } if( g.cgiOutput ){ g.cgiOutput = 0; cgi_printf("<h1>Database Error</h1>\n" "<pre>%h</pre><p>%s</p>", z, zRebuildMsg); cgi_reply(); }else{ fprintf(stderr, "%s: %s\n\n%s", fossil_nameofexe(), z, zRebuildMsg); } db_force_rollback(); fossil_exit(1); } static int nBegin = 0; /* Nesting depth of BEGIN */ static int doRollback = 0; /* True to force a rollback */ static int nCommitHook = 0; /* Number of commit hooks */ static struct sCommitHook { int (*xHook)(void); /* Functions to call at db_end_transaction() */ int sequence; /* Call functions in sequence order */ } aHook[5]; static Stmt *pAllStmt = 0; /* List of all unfinalized statements */ static int nPrepare = 0; /* Number of calls to sqlite3_prepare() */ static int nDeleteOnFail = 0; /* Number of entries in azDeleteOnFail[] */ static char *azDeleteOnFail[3]; /* Files to delete on a failure */ /* ** Arrange for the given file to be deleted on a failure. */ void db_delete_on_failure(const char *zFilename){ assert( nDeleteOnFail<count(azDeleteOnFail) ); azDeleteOnFail[nDeleteOnFail++] = fossil_strdup(zFilename); } /* ** This routine is called by the SQLite commit-hook mechanism ** just prior to each commit. All this routine does is verify ** that nBegin really is zero. That insures that transactions ** cannot commit by any means other than by calling db_end_transaction() ** below. |
︙ | ︙ |
121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | void db_end_transaction(int rollbackFlag){ if( g.db==0 ) return; if( nBegin<=0 ) return; if( rollbackFlag ) doRollback = 1; nBegin--; if( nBegin==0 ){ int i; for(i=0; doRollback==0 && i<nCommitHook; i++){ doRollback |= aHook[i].xHook(); } db_multi_exec(doRollback ? "ROLLBACK" : "COMMIT"); doRollback = 0; } } void db_force_rollback(void){ static int busy = 0; if( busy || g.db==0 ) return; busy = 1; undo_rollback(); if( nBegin ){ sqlite3_exec(g.db, "ROLLBACK", 0, 0, 0); nBegin = 0; | > > > > > > > > > > > > < > > | > | | < < | 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | void db_end_transaction(int rollbackFlag){ if( g.db==0 ) return; if( nBegin<=0 ) return; if( rollbackFlag ) doRollback = 1; nBegin--; if( nBegin==0 ){ int i; if( doRollback==0 ) leaf_do_pending_checks(); for(i=0; doRollback==0 && i<nCommitHook; i++){ doRollback |= aHook[i].xHook(); } while( pAllStmt ){ db_finalize(pAllStmt); } db_multi_exec(doRollback ? "ROLLBACK" : "COMMIT"); doRollback = 0; } } /* ** Force a rollback and shutdown the database */ void db_force_rollback(void){ int i; static int busy = 0; if( busy || g.db==0 ) return; busy = 1; undo_rollback(); while( pAllStmt ){ db_finalize(pAllStmt); } if( nBegin ){ sqlite3_exec(g.db, "ROLLBACK", 0, 0, 0); nBegin = 0; } busy = 0; db_close(0); for(i=0; i<nDeleteOnFail; i++){ unlink(azDeleteOnFail[i]); } } /* ** Install a commit hook. Hooks are installed in sequence order. ** It is an error to install the same commit hook more than once. ** ** Each commit hook is called (in order of accending sequence) at |
︙ | ︙ |
178 179 180 181 182 183 184 | } /* ** Prepare a Stmt. Assume that the Stmt is previously uninitialized. ** If the input string contains multiple SQL statements, only the first ** one is processed. All statements beyond the first are silently ignored. */ | | > > | > | | > > > > > > > > | | 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | } /* ** Prepare a Stmt. Assume that the Stmt is previously uninitialized. ** If the input string contains multiple SQL statements, only the first ** one is processed. All statements beyond the first are silently ignored. */ int db_vprepare(Stmt *pStmt, int errOk, const char *zFormat, va_list ap){ int rc; char *zSql; blob_zero(&pStmt->sql); blob_vappendf(&pStmt->sql, zFormat, ap); va_end(ap); zSql = blob_str(&pStmt->sql); nPrepare++; rc = sqlite3_prepare_v2(g.db, zSql, -1, &pStmt->pStmt, 0); if( rc!=0 && !errOk ){ db_err("%s\n%s", sqlite3_errmsg(g.db), zSql); } pStmt->pNext = pStmt->pPrev = 0; pStmt->nStep = 0; return rc; } int db_prepare(Stmt *pStmt, const char *zFormat, ...){ int rc; va_list ap; va_start(ap, zFormat); rc = db_vprepare(pStmt, 0, zFormat, ap); va_end(ap); return rc; } int db_prepare_ignore_error(Stmt *pStmt, const char *zFormat, ...){ int rc; va_list ap; va_start(ap, zFormat); rc = db_vprepare(pStmt, 1, zFormat, ap); va_end(ap); return rc; } int db_static_prepare(Stmt *pStmt, const char *zFormat, ...){ int rc = SQLITE_OK; if( blob_size(&pStmt->sql)==0 ){ va_list ap; va_start(ap, zFormat); rc = db_vprepare(pStmt, 0, zFormat, ap); pStmt->pNext = pAllStmt; pStmt->pPrev = 0; if( pAllStmt ) pAllStmt->pPrev = pStmt; pAllStmt = pStmt; va_end(ap); } return rc; |
︙ | ︙ |
355 356 357 358 359 360 361 362 363 364 365 366 367 368 | } double db_column_double(Stmt *pStmt, int N){ return sqlite3_column_double(pStmt->pStmt, N); } const char *db_column_text(Stmt *pStmt, int N){ return (char*)sqlite3_column_text(pStmt->pStmt, N); } const char *db_column_name(Stmt *pStmt, int N){ return (char*)sqlite3_column_name(pStmt->pStmt, N); } int db_column_count(Stmt *pStmt){ return sqlite3_column_count(pStmt->pStmt); } char *db_column_malloc(Stmt *pStmt, int N){ | > > > | 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 | } double db_column_double(Stmt *pStmt, int N){ return sqlite3_column_double(pStmt->pStmt, N); } const char *db_column_text(Stmt *pStmt, int N){ return (char*)sqlite3_column_text(pStmt->pStmt, N); } const char *db_column_raw(Stmt *pStmt, int N){ return (const char*)sqlite3_column_blob(pStmt->pStmt, N); } const char *db_column_name(Stmt *pStmt, int N){ return (char*)sqlite3_column_name(pStmt->pStmt, N); } int db_column_count(Stmt *pStmt){ return sqlite3_column_count(pStmt->pStmt); } char *db_column_malloc(Stmt *pStmt, int N){ |
︙ | ︙ |
428 429 430 431 432 433 434 | ** Execute a query and return a single integer value. */ i64 db_int64(i64 iDflt, const char *zSql, ...){ va_list ap; Stmt s; i64 rc; va_start(ap, zSql); | | | | | | | < < < < < < < | 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 | ** Execute a query and return a single integer value. */ i64 db_int64(i64 iDflt, const char *zSql, ...){ va_list ap; Stmt s; i64 rc; va_start(ap, zSql); db_vprepare(&s, 0, zSql, ap); va_end(ap); if( db_step(&s)!=SQLITE_ROW ){ rc = iDflt; }else{ rc = db_column_int64(&s, 0); } db_finalize(&s); return rc; } int db_int(int iDflt, const char *zSql, ...){ va_list ap; Stmt s; int rc; va_start(ap, zSql); db_vprepare(&s, 0, zSql, ap); va_end(ap); if( db_step(&s)!=SQLITE_ROW ){ rc = iDflt; }else{ rc = db_column_int(&s, 0); } db_finalize(&s); return rc; } /* ** Return TRUE if the query would return 1 or more rows. Return ** FALSE if the query result would be an empty set. */ int db_exists(const char *zSql, ...){ va_list ap; Stmt s; int rc; va_start(ap, zSql); db_vprepare(&s, 0, zSql, ap); va_end(ap); if( db_step(&s)!=SQLITE_ROW ){ rc = 0; }else{ rc = 1; } db_finalize(&s); return rc; } /* ** Execute a query and return a floating-point value. */ double db_double(double rDflt, const char *zSql, ...){ va_list ap; Stmt s; double r; va_start(ap, zSql); db_vprepare(&s, 0, zSql, ap); va_end(ap); if( db_step(&s)!=SQLITE_ROW ){ r = rDflt; }else{ r = db_column_double(&s, 0); } db_finalize(&s); return r; } /* ** Execute a query and append the first column of the first row ** of the result set to blob given in the first argument. */ void db_blob(Blob *pResult, const char *zSql, ...){ va_list ap; Stmt s; va_start(ap, zSql); db_vprepare(&s, 0, zSql, ap); va_end(ap); if( db_step(&s)==SQLITE_ROW ){ blob_append(pResult, sqlite3_column_blob(s.pStmt, 0), sqlite3_column_bytes(s.pStmt, 0)); } db_finalize(&s); } /* ** Execute a query. Return the first column of the first row ** of the result set as a string. Space to hold the string is ** obtained from malloc(). If the result set is empty, return ** zDefault instead. */ char *db_text(char *zDefault, const char *zSql, ...){ va_list ap; Stmt s; char *z; va_start(ap, zSql); db_vprepare(&s, 0, zSql, ap); va_end(ap); if( db_step(&s)==SQLITE_ROW ){ z = mprintf("%s", sqlite3_column_text(s.pStmt, 0)); }else if( zDefault ){ z = mprintf("%s", zDefault); }else{ z = 0; } db_finalize(&s); return z; } /* ** Initialize a new database file with the given schema. If anything ** goes wrong, call db_err() to exit. */ void db_init_database( const char *zFileName, /* Name of database file to create */ const char *zSchema, /* First part of schema */ ... /* Additional SQL to run. Terminate with NULL. */ ){ sqlite3 *db; int rc; const char *zSql; va_list ap; rc = sqlite3_open(zFileName, &db); if( rc!=SQLITE_OK ){ db_err(sqlite3_errmsg(db)); } sqlite3_busy_timeout(db, 5000); sqlite3_exec(db, "BEGIN EXCLUSIVE", 0, 0, 0); rc = sqlite3_exec(db, zSchema, 0, 0, 0); |
︙ | ︙ |
577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 | db_err(sqlite3_errmsg(db)); } } va_end(ap); sqlite3_exec(db, "COMMIT", 0, 0, 0); sqlite3_close(db); } /* ** Open a database file. Return a pointer to the new database ** connection. An error results in process abort. */ static sqlite3 *openDatabase(const char *zDbName){ int rc; const char *zVfs; sqlite3 *db; zVfs = getenv("FOSSIL_VFS"); | > > > > > > > > > > > > > < < < > | | < < < < | 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 | db_err(sqlite3_errmsg(db)); } } va_end(ap); sqlite3_exec(db, "COMMIT", 0, 0, 0); sqlite3_close(db); } /* ** Function to return the number of seconds since 1970. This is ** the same as strftime('%s','now') but is more compact. */ static void db_now_function( sqlite3_context *context, int argc, sqlite3_value **argv ){ sqlite3_result_int64(context, time(0)); } /* ** Open a database file. Return a pointer to the new database ** connection. An error results in process abort. */ static sqlite3 *openDatabase(const char *zDbName){ int rc; const char *zVfs; sqlite3 *db; zVfs = getenv("FOSSIL_VFS"); rc = sqlite3_open_v2( zDbName, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, zVfs ); if( rc!=SQLITE_OK ){ db_err(sqlite3_errmsg(db)); } sqlite3_busy_timeout(db, 5000); sqlite3_wal_autocheckpoint(db, 1); /* Set to checkpoint frequently */ sqlite3_create_function(db, "now", 0, SQLITE_ANY, 0, db_now_function, 0, 0); return db; } /* ** zDbName is the name of a database file. If no other database ** file is open, then open this one. If another database file is ** already open, then attach zDbName using the name zLabel. */ static void db_open_or_attach(const char *zDbName, const char *zLabel){ if( !g.db ){ g.db = openDatabase(zDbName); g.zMainDbType = zLabel; db_connection_init(); }else{ db_multi_exec("ATTACH DATABASE %Q AS %s", zDbName, zLabel); } } /* ** Open the user database in "~/.fossil". Create the database anew if ** it does not already exist. ** |
︙ | ︙ |
645 646 647 648 649 650 651 652 653 654 655 656 | const char *zHome; if( g.configOpen ) return; #if defined(_WIN32) zHome = getenv("LOCALAPPDATA"); if( zHome==0 ){ zHome = getenv("APPDATA"); if( zHome==0 ){ zHome = getenv("HOMEPATH"); } } if( zHome==0 ){ fossil_fatal("cannot locate home directory - " | > > > | | 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | const char *zHome; if( g.configOpen ) return; #if defined(_WIN32) zHome = getenv("LOCALAPPDATA"); if( zHome==0 ){ zHome = getenv("APPDATA"); if( zHome==0 ){ char *zDrive = getenv("HOMEDRIVE"); zHome = getenv("HOMEPATH"); if( zDrive && zHome ) zHome = mprintf("%s%s", zDrive, zHome); } } if( zHome==0 ){ fossil_fatal("cannot locate home directory - " "please set the LOCALAPPDATA or APPDATA or HOMEPATH " "environment variables"); } #else zHome = getenv("HOME"); if( zHome==0 ){ fossil_fatal("cannot locate home directory - " "please set the HOME environment variable"); } |
︙ | ︙ |
685 686 687 688 689 690 691 692 693 694 695 696 697 698 | if( useAttach ){ db_open_or_attach(zDbName, "configdb"); g.dbConfig = 0; }else{ g.dbConfig = openDatabase(zDbName); } g.configOpen = 1; } /* ** If zDbName is a valid local database file, open it and return ** true. If it is not a valid local database file, return 0. */ static int isValidLocalDb(const char *zDbName){ | > | 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 | if( useAttach ){ db_open_or_attach(zDbName, "configdb"); g.dbConfig = 0; }else{ g.dbConfig = openDatabase(zDbName); } g.configOpen = 1; free(zDbName); } /* ** If zDbName is a valid local database file, open it and return ** true. If it is not a valid local database file, return 0. */ static int isValidLocalDb(const char *zDbName){ |
︙ | ︙ |
709 710 711 712 713 714 715 716 717 718 719 720 721 722 | db_open_repository(0); /* If the "isexe" column is missing from the vfile table, then ** add it now. This code added on 2010-03-06. After all users have ** upgraded, this code can be safely deleted. */ rc = sqlite3_prepare(g.db, "SELECT isexe FROM vfile", -1, &pStmt, 0); sqlite3_finalize(pStmt); if( rc==SQLITE_ERROR ){ sqlite3_exec(g.db, "ALTER TABLE vfile ADD COLUMN isexe BOOLEAN", 0, 0, 0); } #if 0 /* If the "mtime" column is missing from the vfile table, then | > | 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 | db_open_repository(0); /* If the "isexe" column is missing from the vfile table, then ** add it now. This code added on 2010-03-06. After all users have ** upgraded, this code can be safely deleted. */ rc = sqlite3_prepare(g.db, "SELECT isexe FROM vfile", -1, &pStmt, 0); nPrepare++; sqlite3_finalize(pStmt); if( rc==SQLITE_ERROR ){ sqlite3_exec(g.db, "ALTER TABLE vfile ADD COLUMN isexe BOOLEAN", 0, 0, 0); } #if 0 /* If the "mtime" column is missing from the vfile table, then |
︙ | ︙ |
772 773 774 775 776 777 778 | n = strlen(zPwd); zPwdConv = mprintf("%/", zPwd); strncpy(zPwd, zPwdConv, 2000-20); free(zPwdConv); while( n>0 ){ if( access(zPwd, W_OK) ) break; for(i=0; i<sizeof(aDbName)/sizeof(aDbName[0]); i++){ | | | 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 | n = strlen(zPwd); zPwdConv = mprintf("%/", zPwd); strncpy(zPwd, zPwdConv, 2000-20); free(zPwdConv); while( n>0 ){ if( access(zPwd, W_OK) ) break; for(i=0; i<sizeof(aDbName)/sizeof(aDbName[0]); i++){ sqlite3_snprintf(sizeof(zPwd)-n, &zPwd[n], "%s", aDbName[i]); if( isValidLocalDb(zPwd) ){ /* Found a valid checkout database file */ zPwd[n] = 0; while( n>1 && zPwd[n-1]=='/' ){ n--; zPwd[n] = 0; } |
︙ | ︙ |
823 824 825 826 827 828 829 830 831 832 833 834 835 836 | } } db_open_or_attach(zDbName, "repository"); g.repositoryOpen = 1; g.zRepositoryName = mprintf("%s", zDbName); } /* ** Try to find the repository and open it. Use the -R or --repository ** option to locate the repository. If no such option is available, then ** use the repository of the open checkout if there is one. ** ** Error out if the repository cannot be opened. */ | > > > > > > > > | > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 | } } db_open_or_attach(zDbName, "repository"); g.repositoryOpen = 1; g.zRepositoryName = mprintf("%s", zDbName); } /* ** Flags for the db_find_and_open_repository() function. */ #if INTERFACE #define OPEN_OK_NOT_FOUND 0x001 /* Do not error out if not found */ #define OPEN_ANY_SCHEMA 0x002 /* Do not error if schema is wrong */ #endif /* ** Try to find the repository and open it. Use the -R or --repository ** option to locate the repository. If no such option is available, then ** use the repository of the open checkout if there is one. ** ** Error out if the repository cannot be opened. */ void db_find_and_open_repository(int bFlags, int nArgUsed){ const char *zRep = find_option("repository", "R", 1); if( zRep==0 && nArgUsed && g.argc==nArgUsed+1 ){ zRep = g.argv[nArgUsed]; } if( zRep==0 ){ if( db_open_local()==0 ){ goto rep_not_found; } zRep = db_lget("repository", 0); if( zRep==0 ){ goto rep_not_found; } } db_open_repository(zRep); if( g.repositoryOpen ){ if( (bFlags & OPEN_ANY_SCHEMA)==0 ) db_verify_schema(); return; } rep_not_found: if( (bFlags & OPEN_OK_NOT_FOUND)==0 ){ fossil_fatal("use --repository or -R to specify the repository database"); } } /* ** Return the name of the database "localdb", "configdb", or "repository". */ const char *db_name(const char *zDb){ assert( strcmp(zDb,"localdb")==0 || strcmp(zDb,"configdb")==0 || strcmp(zDb,"repository")==0 ); if( strcmp(zDb, g.zMainDbType)==0 ) zDb = "main"; return zDb; } /* ** Return TRUE if the schema is out-of-date */ int db_schema_is_outofdate(void){ return db_exists("SELECT 1 FROM config" " WHERE name='aux-schema'" " AND value<>'%s'", AUX_SCHEMA); } /* ** Verify that the repository schema is correct. If it is not correct, ** issue a fatal error and die. */ void db_verify_schema(void){ if( db_schema_is_outofdate() ){ fossil_warning("incorrect repository schema version"); fossil_warning("your repository has schema version \"%s\" " "but this binary expects version \"%s\"", db_get("aux-schema",0), AUX_SCHEMA); fossil_fatal("run \"fossil rebuild\" to fix this problem"); } } /* ** COMMAND: test-move-repository ** ** Usage: %fossil test-move-repository PATHNAME ** ** Change the location of the repository database on a local check-out. |
︙ | ︙ |
877 878 879 880 881 882 883 | file_canonical_name(g.argv[2], &repo); zRepo = blob_str(&repo); if( access(zRepo, 0) ){ fossil_fatal("no such file: %s", zRepo); } db_open_or_attach(zRepo, "test_repo"); db_lset("repository", blob_str(&repo)); | | > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > | | > | 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 | file_canonical_name(g.argv[2], &repo); zRepo = blob_str(&repo); if( access(zRepo, 0) ){ fossil_fatal("no such file: %s", zRepo); } db_open_or_attach(zRepo, "test_repo"); db_lset("repository", blob_str(&repo)); db_close(1); } /* ** Open the local database. If unable, exit with an error. */ void db_must_be_within_tree(void){ if( db_open_local()==0 ){ fossil_fatal("not within an open checkout"); } db_open_repository(0); db_verify_schema(); } /* ** Close the database connection. ** ** Check for unfinalized statements and report errors if the reportErrors ** argument is true. Ignore unfinalized statements when false. */ void db_close(int reportErrors){ sqlite3_stmt *pStmt; if( g.db==0 ) return; if( g.fSqlStats ){ int cur, hiwtr; sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_USED, &cur, &hiwtr, 0); fprintf(stderr, "-- LOOKASIDE_USED %10d %10d\n", cur, hiwtr); sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_HIT, &cur, &hiwtr, 0); fprintf(stderr, "-- LOOKASIDE_HIT %10d\n", hiwtr); sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE, &cur,&hiwtr,0); fprintf(stderr, "-- LOOKASIDE_MISS_SIZE %10d\n", hiwtr); sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL, &cur,&hiwtr,0); fprintf(stderr, "-- LOOKASIDE_MISS_FULL %10d\n", hiwtr); sqlite3_db_status(g.db, SQLITE_DBSTATUS_CACHE_USED, &cur, &hiwtr, 0); fprintf(stderr, "-- CACHE_USED %10d\n", cur); sqlite3_db_status(g.db, SQLITE_DBSTATUS_SCHEMA_USED, &cur, &hiwtr, 0); fprintf(stderr, "-- SCHEMA_USED %10d\n", cur); sqlite3_db_status(g.db, SQLITE_DBSTATUS_STMT_USED, &cur, &hiwtr, 0); fprintf(stderr, "-- STMT_USED %10d\n", cur); sqlite3_status(SQLITE_STATUS_MEMORY_USED, &cur, &hiwtr, 0); fprintf(stderr, "-- MEMORY_USED %10d %10d\n", cur, hiwtr); sqlite3_status(SQLITE_STATUS_MALLOC_SIZE, &cur, &hiwtr, 0); fprintf(stderr, "-- MALLOC_SIZE %10d\n", hiwtr); sqlite3_status(SQLITE_STATUS_MALLOC_COUNT, &cur, &hiwtr, 0); fprintf(stderr, "-- MALLOC_COUNT %10d %10d\n", cur, hiwtr); sqlite3_status(SQLITE_STATUS_PAGECACHE_OVERFLOW, &cur, &hiwtr, 0); fprintf(stderr, "-- PCACHE_OVFLOW %10d %10d\n", cur, hiwtr); fprintf(stderr, "-- prepared statements %10d\n", nPrepare); } while( pAllStmt ){ db_finalize(pAllStmt); } db_end_transaction(1); pStmt = 0; if( reportErrors ){ while( (pStmt = sqlite3_next_stmt(g.db, pStmt))!=0 ){ fossil_warning("unfinalized SQL statement: [%s]", sqlite3_sql(pStmt)); } } g.repositoryOpen = 0; g.localOpen = 0; g.configOpen = 0; sqlite3_wal_checkpoint(g.db, 0); sqlite3_close(g.db); g.db = 0; |
︙ | ︙ | |||
932 933 934 935 936 937 938 | void db_create_repository(const char *zFilename){ db_init_database( zFilename, zRepositorySchema1, zRepositorySchema2, (char*)0 ); | | | 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 | void db_create_repository(const char *zFilename){ db_init_database( zFilename, zRepositorySchema1, zRepositorySchema2, (char*)0 ); db_delete_on_failure(zFilename); } /* ** Create the default user accounts in the USER table. */ void db_create_default_users(int setupUserOnly, const char *zDefaultUser){ const char *zUser; |
︙ | ︙ | |||
961 962 963 964 965 966 967 | db_multi_exec( "INSERT INTO user(login, pw, cap, info)" "VALUES(%Q,lower(hex(randomblob(3))),'s','')", zUser ); if( !setupUserOnly ){ db_multi_exec( "INSERT INTO user(login,pw,cap,info)" | | | | 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 | db_multi_exec( "INSERT INTO user(login, pw, cap, info)" "VALUES(%Q,lower(hex(randomblob(3))),'s','')", zUser ); if( !setupUserOnly ){ db_multi_exec( "INSERT INTO user(login,pw,cap,info)" " VALUES('anonymous',hex(randomblob(8)),'hmncz','Anon');" "INSERT INTO user(login,pw,cap,info)" " VALUES('nobody','','gjor','Nobody');" "INSERT INTO user(login,pw,cap,info)" " VALUES('developer','','dei','Dev');" "INSERT INTO user(login,pw,cap,info)" " VALUES('reader','','kptw','Reader');" ); } } |
︙ | ︙ | |||
996 997 998 999 1000 1001 1002 | Blob hash; Blob manifest; db_set("content-schema", CONTENT_SCHEMA, 0); db_set("aux-schema", AUX_SCHEMA, 0); if( makeServerCodes ){ db_multi_exec( | | | | | | 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 | Blob hash; Blob manifest; db_set("content-schema", CONTENT_SCHEMA, 0); db_set("aux-schema", AUX_SCHEMA, 0); if( makeServerCodes ){ db_multi_exec( "INSERT INTO config(name,value,mtime)" " VALUES('server-code', lower(hex(randomblob(20))),now());" "INSERT INTO config(name,value,mtime)" " VALUES('project-code', lower(hex(randomblob(20))),now());" ); } if( !db_is_global("autosync") ) db_set_int("autosync", 1, 0); if( !db_is_global("localauth") ) db_set_int("localauth", 0, 0); db_create_default_users(0, zDefaultUser); user_select(); |
︙ | ︙ | |||
1022 1023 1024 1025 1026 1027 1028 | blob_appendf(&manifest, "R %s\n", md5sum_finish(0)); blob_appendf(&manifest, "T *branch * trunk\n"); blob_appendf(&manifest, "T *sym-trunk *\n"); blob_appendf(&manifest, "U %F\n", g.zLogin); md5sum_blob(&manifest, &hash); blob_appendf(&manifest, "Z %b\n", &hash); blob_reset(&hash); | | | 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 | blob_appendf(&manifest, "R %s\n", md5sum_finish(0)); blob_appendf(&manifest, "T *branch * trunk\n"); blob_appendf(&manifest, "T *sym-trunk *\n"); blob_appendf(&manifest, "U %F\n", g.zLogin); md5sum_blob(&manifest, &hash); blob_appendf(&manifest, "Z %b\n", &hash); blob_reset(&hash); rid = content_put(&manifest); manifest_crosslink(rid, &manifest); } } /* ** COMMAND: new ** COMMAND: init |
︙ | ︙ | |||
1065 1066 1067 1068 1069 1070 1071 | } db_create_repository(g.argv[2]); db_open_repository(g.argv[2]); db_open_config(0); db_begin_transaction(); db_initial_setup(zDate, zDefaultUser, 1); db_end_transaction(0); | | | | > | | 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 | } db_create_repository(g.argv[2]); db_open_repository(g.argv[2]); db_open_config(0); db_begin_transaction(); db_initial_setup(zDate, zDefaultUser, 1); db_end_transaction(0); fossil_print("project-id: %s\n", db_get("project-code", 0)); fossil_print("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); fossil_print("admin-user: %s (initial password is \"%s\")\n", g.zLogin, zPassword); } /* ** SQL functions for debugging. ** ** The print() function writes its arguments on stdout, but only ** if the -sqlprint command-line option is turned on. */ static void db_sql_print( sqlite3_context *context, int argc, sqlite3_value **argv ){ int i; if( g.fSqlPrint ){ for(i=0; i<argc; i++){ char c = i==argc-1 ? '\n' : ' '; fossil_print("%s%c", sqlite3_value_text(argv[i]), c); } } } static void db_sql_trace(void *notUsed, const char *zSql){ int n = strlen(zSql); fprintf(stderr, "%s%s\n", zSql, (n>0 && zSql[n-1]==';') ? "" : ";"); } |
︙ | ︙ | |||
1183 1184 1185 1186 1187 1188 1189 | Blob out; if( n==40 && validate16(zContent, n) ){ memcpy(zHash, zContent, n); zHash[n] = 0; }else{ sha1sum_step_text(zContent, n); sha1sum_finish(&out); | | | > | 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 | Blob out; if( n==40 && validate16(zContent, n) ){ memcpy(zHash, zContent, n); zHash[n] = 0; }else{ sha1sum_step_text(zContent, n); sha1sum_finish(&out); sqlite3_snprintf(sizeof(zHash), zHash, "%s", blob_str(&out)); blob_reset(&out); db_multi_exec( "INSERT OR IGNORE INTO concealed(hash,content,mtime)" " VALUES(%Q,%#Q,now())", zHash, n, zContent ); } return zHash; } /* |
︙ | ︙ | |||
1240 1241 1242 1243 1244 1245 1246 | /* ** Return true if the string zVal represents "true" (or "false"). */ int is_truth(const char *zVal){ static const char *azOn[] = { "on", "yes", "true", "1" }; int i; for(i=0; i<sizeof(azOn)/sizeof(azOn[0]); i++){ | | | | 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 | /* ** Return true if the string zVal represents "true" (or "false"). */ int is_truth(const char *zVal){ static const char *azOn[] = { "on", "yes", "true", "1" }; int i; for(i=0; i<sizeof(azOn)/sizeof(azOn[0]); i++){ if( fossil_strcmp(zVal,azOn[i])==0 ) return 1; } return 0; } int is_false(const char *zVal){ static const char *azOff[] = { "off", "no", "false", "0" }; int i; for(i=0; i<sizeof(azOff)/sizeof(azOff[0]); i++){ if( fossil_strcmp(zVal,azOff[i])==0 ) return 1; } return 0; } /* ** Swap the g.db and g.dbConfig connections so that the various db_* routines ** work on the ~/.fossil database instead of on the repository database. |
︙ | ︙ | |||
1297 1298 1299 1300 1301 1302 1303 | db_begin_transaction(); if( globalFlag ){ db_swap_connections(); db_multi_exec("REPLACE INTO global_config(name,value) VALUES(%Q,%Q)", zName, zValue); db_swap_connections(); }else{ | | | 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 | db_begin_transaction(); if( globalFlag ){ db_swap_connections(); db_multi_exec("REPLACE INTO global_config(name,value) VALUES(%Q,%Q)", zName, zValue); db_swap_connections(); }else{ db_multi_exec("REPLACE INTO config(name,value,mtime) VALUES(%Q,%Q,now())", zName, zValue); } if( globalFlag && g.repositoryOpen ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); } db_end_transaction(0); } |
︙ | ︙ | |||
1356 1357 1358 1359 1360 1361 1362 | void db_set_int(const char *zName, int value, int globalFlag){ if( globalFlag ){ db_swap_connections(); db_multi_exec("REPLACE INTO global_config(name,value) VALUES(%Q,%d)", zName, value); db_swap_connections(); }else{ | | | 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 | void db_set_int(const char *zName, int value, int globalFlag){ if( globalFlag ){ db_swap_connections(); db_multi_exec("REPLACE INTO global_config(name,value) VALUES(%Q,%d)", zName, value); db_swap_connections(); }else{ db_multi_exec("REPLACE INTO config(name,value,mtime) VALUES(%Q,%d,now())", zName, value); } if( globalFlag && g.repositoryOpen ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); } } int db_get_boolean(const char *zName, int dflt){ |
︙ | ︙ | |||
1412 1413 1414 1415 1416 1417 1418 | db_swap_connections(); blob_reset(&full); } /* ** COMMAND: open ** | | > | > > | > | > > | 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 | db_swap_connections(); blob_reset(&full); } /* ** COMMAND: open ** ** Usage: %fossil open FILENAME ?VERSION? ?--keep? ?--nested? ** ** Open a connection to the local repository in FILENAME. A checkout ** for the repository is created with its root at the working directory. ** If VERSION is specified then that version is checked out. Otherwise ** the latest version is checked out. No files other than "manifest" ** and "manifest.uuid" are modified if the --keep option is present. ** ** See also the "close" command. */ void cmd_open(void){ Blob path; int vid; int keepFlag; int allowNested; static char *azNewArgv[] = { 0, "checkout", "--prompt", 0, 0, 0 }; url_proxy_options(); keepFlag = find_option("keep",0,0)!=0; allowNested = find_option("nested",0,0)!=0; if( g.argc!=3 && g.argc!=4 ){ usage("REPOSITORY-FILENAME ?VERSION?"); } if( !allowNested && db_open_local() ){ fossil_panic("already within an open tree rooted at %s", g.zLocalRoot); } file_canonical_name(g.argv[2], &path); db_open_repository(blob_str(&path)); db_init_database("./_FOSSIL_", zLocalSchema, (char*)0); db_delete_on_failure("./_FOSSIL_"); db_open_local(); db_lset("repository", blob_str(&path)); db_record_repository_filename(blob_str(&path)); vid = db_int(0, "SELECT pid FROM plink y" " WHERE NOT EXISTS(SELECT 1 FROM plink x WHERE x.cid=y.pid)"); if( vid==0 ){ db_lset_int("checkout", 1); }else{ char **oldArgv = g.argv; int oldArgc = g.argc; db_lset_int("checkout", vid); azNewArgv[0] = g.argv[0]; g.argv = azNewArgv; g.argc = 3; if( oldArgc==4 ){ azNewArgv[g.argc-1] = oldArgv[3]; }else{ azNewArgv[g.argc-1] = db_get("main-branch", "trunk"); } if( keepFlag ){ azNewArgv[g.argc++] = "--keep"; } checkout_cmd(); g.argc = 2; info_cmd(); |
︙ | ︙ | |||
1483 1484 1485 1486 1487 1488 1489 | }else{ db_prepare(&q, "SELECT '(global)', value FROM global_config WHERE name=%Q", zName ); } if( db_step(&q)==SQLITE_ROW ){ | | | | 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 | }else{ db_prepare(&q, "SELECT '(global)', value FROM global_config WHERE name=%Q", zName ); } if( db_step(&q)==SQLITE_ROW ){ fossil_print("%-20s %-8s %s\n", zName, db_column_text(&q, 0), db_column_text(&q, 1)); }else{ fossil_print("%-20s\n", zName); } db_finalize(&q); } /* ** define all settings, which can be controlled via the set/unset |
︙ | ︙ | |||
1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 | char const *name; /* Name of the setting */ char const *var; /* Internal variable name used by db_set() */ int width; /* Width of display. 0 for boolean values */ char const *def; /* Default value */ }; #endif /* INTERFACE */ struct stControlSettings const ctrlSettings[] = { { "auto-captcha", "autocaptcha", 0, "on" }, { "auto-shun", 0, 0, "on" }, { "autosync", 0, 0, "on" }, { "binary-glob", 0, 32, "" }, { "clearsign", 0, 0, "off" }, { "diff-command", 0, 16, "" }, { "dont-push", 0, 0, "off" }, { "editor", 0, 16, "" }, { "gdiff-command", 0, 16, "gdiff" }, { "ignore-glob", 0, 40, "" }, { "http-port", 0, 16, "8080" }, { "localauth", 0, 0, "off" }, { "manifest", 0, 0, "off" }, { "mtime-changes", 0, 0, "on" }, { "pgp-command", 0, 32, "gpg --clearsign -o " }, { "proxy", 0, 32, "off" }, { "repo-cksum", 0, 0, "on" }, { "ssh-command", 0, 32, "" }, { "web-browser", 0, 32, "" }, { 0,0,0,0 } }; /* ** COMMAND: settings ** COMMAND: unset ** %fossil settings ?PROPERTY? ?VALUE? ?-global? ** %fossil unset PROPERTY ?-global? ** ** The "settings" command with no arguments lists all properties and their ** values. With just a property name it shows the value of that property. ** With a value argument it changes the property for the current repository. ** | > > > > > > > > > | 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 | char const *name; /* Name of the setting */ char const *var; /* Internal variable name used by db_set() */ int width; /* Width of display. 0 for boolean values */ char const *def; /* Default value */ }; #endif /* INTERFACE */ struct stControlSettings const ctrlSettings[] = { { "access-log", 0, 0, "off" }, { "auto-captcha", "autocaptcha", 0, "on" }, { "auto-shun", 0, 0, "on" }, { "autosync", 0, 0, "on" }, { "binary-glob", 0, 32, "" }, { "clearsign", 0, 0, "off" }, { "crnl-glob", 0, 16, "" }, { "default-perms", 0, 16, "u" }, { "diff-command", 0, 16, "" }, { "dont-push", 0, 0, "off" }, { "editor", 0, 16, "" }, { "gdiff-command", 0, 16, "gdiff" }, { "gmerge-command",0, 40, "" }, { "https-login", 0, 0, "off" }, { "ignore-glob", 0, 40, "" }, { "http-port", 0, 16, "8080" }, { "localauth", 0, 0, "off" }, { "main-branch", 0, 40, "trunk" }, { "manifest", 0, 0, "off" }, { "max-upload", 0, 25, "250000" }, { "mtime-changes", 0, 0, "on" }, { "pgp-command", 0, 32, "gpg --clearsign -o " }, { "proxy", 0, 32, "off" }, { "repo-cksum", 0, 0, "on" }, { "self-register", 0, 0, "off" }, { "ssh-command", 0, 32, "" }, { "web-browser", 0, 32, "" }, { 0,0,0,0 } }; /* ** COMMAND: settings ** COMMAND: unset ** ** %fossil settings ?PROPERTY? ?VALUE? ?-global? ** %fossil unset PROPERTY ?-global? ** ** The "settings" command with no arguments lists all properties and their ** values. With just a property name it shows the value of that property. ** With a value argument it changes the property for the current repository. ** |
︙ | ︙ | |||
1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 | ** binary-glob The VALUE is a comma-separated list of GLOB patterns ** that should be treated as binary files for merging ** purposes. Example: *.xml ** ** clearsign When enabled, fossil will attempt to sign all commits ** with gpg. When disabled (the default), commits will ** be unsigned. Default: off ** ** diff-command External command to run when performing a diff. ** If undefined, the internal text diff will be used. ** ** dont-push Prevent this repository from pushing from client to ** server. Useful when setting up a private branch. ** ** editor Text editor command used for check-in comments. ** ** gdiff-command External command to run when performing a graphical ** diff. If undefined, text diff will be used. ** ** http-port The TCP/IP port number to use by the "server" ** and "ui" commands. Default: 8080 ** ** ignore-glob The VALUE is a comma-separated list of GLOB patterns ** specifying files that the "extra" command will ignore. ** Example: *.o,*.obj,*.exe ** ** localauth If enabled, require that HTTP connections from ** 127.0.0.1 be authenticated by password. If ** false, all HTTP requests from localhost have ** unrestricted access to the repository. ** ** manifest If enabled, automatically create files "manifest" and ** "manifest.uuid" in every checkout. The SQLite and ** Fossil repositories both require this. Default: off. ** ** mtime-changes Use file modification times (mtimes) to detect when ** files have been modified. (Default "on".) ** ** pgp-command Command used to clear-sign manifests at check-in. ** The default is "gpg --clearsign -o ". ** ** proxy URL of the HTTP proxy. If undefined or "off" then ** the "http_proxy" environment variable is consulted. ** If the http_proxy environment variable is undefined ** then a direct HTTP connection is used. ** ** repo-cksum Compute checksums over all files in each checkout ** as a double-check of correctness. Defaults to "on". ** Disable on large repositories for a performance ** improvement. ** ** ssh-command Command used to talk to a remote machine with ** the "ssh://" protocol. ** ** web-browser A shell command used to launch your preferred ** web browser when given a URL as an argument. ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. */ void setting_cmd(void){ int i; int globalFlag = find_option("global","g",0)!=0; int unsetFlag = g.argv[1][0]=='u'; db_open_config(1); | > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | | 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 | ** binary-glob The VALUE is a comma-separated list of GLOB patterns ** that should be treated as binary files for merging ** purposes. 
Example: *.xml ** ** clearsign When enabled, fossil will attempt to sign all commits ** with gpg. When disabled (the default), commits will ** be unsigned. Default: off ** ** crnl-glob A comma-separated list of GLOB patterns for text files ** in which it is ok to have CR+NL line endings. ** Set to "*" to disable CR+NL checking. ** ** default-perms Permissions given automatically to new users. For more ** information on permissions see Users page in Server ** Administration of the HTTP UI. Default: u. ** ** diff-command External command to run when performing a diff. ** If undefined, the internal text diff will be used. ** ** dont-push Prevent this repository from pushing from client to ** server. Useful when setting up a private branch. ** ** editor Text editor command used for check-in comments. ** ** gdiff-command External command to run when performing a graphical ** diff. If undefined, text diff will be used. ** ** gmerge-command A graphical merge conflict resolver command operating ** on four files. ** Ex: kdiff3 "%baseline" "%original" "%merge" -o "%output" ** Ex: xxdiff "%original" "%baseline" "%merge" -M "%output" ** Ex: meld "%baseline" "%original" "%merge" "%output" ** ** http-port The TCP/IP port number to use by the "server" ** and "ui" commands. Default: 8080 ** ** https-login Send login credentials using HTTPS instead of HTTP ** even if the login page request came via HTTP. ** ** ignore-glob The VALUE is a comma-separated list of GLOB patterns ** specifying files that the "extra" command will ignore. ** Example: *.o,*.obj,*.exe ** ** localauth If enabled, require that HTTP connections from ** 127.0.0.1 be authenticated by password. If ** false, all HTTP requests from localhost have ** unrestricted access to the repository. ** ** main-branch The primary branch for the project. Default: trunk ** ** manifest If enabled, automatically create files "manifest" and ** "manifest.uuid" in every checkout. The SQLite and ** Fossil repositories both require this. Default: off. ** ** max-upload A limit on the size of uplink HTTP requests. The ** default is 250000 bytes. ** ** mtime-changes Use file modification times (mtimes) to detect when ** files have been modified. (Default "on".) ** ** pgp-command Command used to clear-sign manifests at check-in. ** The default is "gpg --clearsign -o ". ** ** proxy URL of the HTTP proxy. If undefined or "off" then ** the "http_proxy" environment variable is consulted. ** If the http_proxy environment variable is undefined ** then a direct HTTP connection is used. ** ** repo-cksum Compute checksums over all files in each checkout ** as a double-check of correctness. Defaults to "on". ** Disable on large repositories for a performance ** improvement. ** ** self-register Allow users to register themselves through the HTTP UI. ** This is useful if you want to see names other than ** "Anonymous" in e.g. the ticketing system. On the other hand, ** users cannot be deleted. Default: off. ** ** ssh-command Command used to talk to a remote machine with ** the "ssh://" protocol. ** ** web-browser A shell command used to launch your preferred ** web browser when given a URL as an argument. ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix.
*/ void setting_cmd(void){ int i; int globalFlag = find_option("global","g",0)!=0; int unsetFlag = g.argv[1][0]=='u'; db_open_config(1); if( !globalFlag ){ db_find_and_open_repository(OPEN_ANY_SCHEMA | OPEN_OK_NOT_FOUND, 0); } if( !g.repositoryOpen ){ globalFlag = 1; } if( unsetFlag && g.argc!=3 ){ usage("PROPERTY ?-global?"); } if( g.argc==2 ){ for(i=0; ctrlSettings[i].name; i++){ print_setting(ctrlSettings[i].name); } }else if( g.argc==3 || g.argc==4 ){ const char *zName = g.argv[2]; int isManifest; int n = strlen(zName); for(i=0; ctrlSettings[i].name; i++){ if( strncmp(ctrlSettings[i].name, zName, n)==0 ) break; } if( !ctrlSettings[i].name ){ fossil_fatal("no such setting: %s", zName); } isManifest = fossil_strcmp(ctrlSettings[i].name, "manifest")==0; if( isManifest && globalFlag ){ fossil_fatal("cannot set 'manifest' globally"); } if( unsetFlag ){ db_unset(ctrlSettings[i].name, globalFlag); }else if( g.argc==4 ){ db_set(ctrlSettings[i].name, g.argv[3], globalFlag); |
︙ | ︙ | |||
1701 1702 1703 1704 1705 1706 1707 | ** Print the approximate span of time from now to TIMESTAMP. */ void test_timespan_cmd(void){ double rDiff; if( g.argc!=3 ) usage("TIMESTAMP"); sqlite3_open(":memory:", &g.db); rDiff = db_double(0.0, "SELECT julianday('now') - julianday(%Q)", g.argv[2]); | | | 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 | ** Print the approximate span of time from now to TIMESTAMP. */ void test_timespan_cmd(void){ double rDiff; if( g.argc!=3 ) usage("TIMESTAMP"); sqlite3_open(":memory:", &g.db); rDiff = db_double(0.0, "SELECT julianday('now') - julianday(%Q)", g.argv[2]); fossil_print("Time differences: %s\n", db_timespan_name(rDiff)); sqlite3_close(g.db); g.db = 0; } |
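A note on the two OPEN_* flags introduced earlier in this file: OPEN_OK_NOT_FOUND lets a command keep running when no repository can be located, and OPEN_ANY_SCHEMA skips the aux-schema check so that, for example, the settings command can still be used on a repository whose schema is out of date. A minimal sketch of a command built on the new entry point follows; the command name and body are hypothetical and are not part of this check-in:

   /*
   ** COMMAND: test-open-demo     (hypothetical illustration only)
   */
   void open_demo_cmd(void){
     /* Honor -R/--repository, else fall back to the open checkout.
     ** Do not error out if nothing is found, and tolerate an old schema. */
     db_find_and_open_repository(OPEN_ANY_SCHEMA | OPEN_OK_NOT_FOUND, 0);
     if( !g.repositoryOpen ){
       fossil_print("no repository found\n");
       return;
     }
     fossil_print("project-id: %s\n", db_get("project-code", 0));
   }

This is the same pattern setting_cmd() uses above: open whatever can be opened, then test g.repositoryOpen and fall back to the global configuration when it is zero.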
Changes to src/delta.c.
︙ | ︙ | |||
436 437 438 439 440 441 442 | DEBUG2( printf("lastRead becomes %d\n", lastRead); ) } bestCnt = 0; break; } /* If we reach this point, it means no match is found so far */ | | | 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 | DEBUG2( printf("lastRead becomes %d\n", lastRead); ) } bestCnt = 0; break; } /* If we reach this point, it means no match is found so far */ if( base+i+NHASH>=lenOut ){ /* We have reached the end of the file and have not found any ** matches. Do an "insert" for everything that does not match */ putInt(lenOut-base, &zDelta); *(zDelta++) = ':'; memcpy(zDelta, &zOut[base], lenOut-base); zDelta += lenOut-base; base = lenOut; |
︙ | ︙ | |||
534 535 536 537 538 539 540 | while( *zDelta && lenDelta>0 ){ unsigned int cnt, ofst; cnt = getInt(&zDelta, &lenDelta); switch( zDelta[0] ){ case '@': { zDelta++; lenDelta--; ofst = getInt(&zDelta, &lenDelta); | | | 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 | while( *zDelta && lenDelta>0 ){ unsigned int cnt, ofst; cnt = getInt(&zDelta, &lenDelta); switch( zDelta[0] ){ case '@': { zDelta++; lenDelta--; ofst = getInt(&zDelta, &lenDelta); if( lenDelta>0 && zDelta[0]!=',' ){ /* ERROR: copy command not terminated by ',' */ return -1; } zDelta++; lenDelta--; DEBUG1( printf("COPY %d from %d\n", cnt, ofst); ) total += cnt; if( total>limit ){ |
︙ | ︙ |
Changes to src/deltacmd.c.
︙ | ︙ | |||
47 48 49 50 51 52 53 | ** ** Given two input files, create and output a delta that carries ** the first file into the second. */ void delta_create_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ | | < | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | ** ** Given two input files, create and output a delta that carries ** the first file into the second. */ void delta_create_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN TARGET DELTA"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fprintf(stderr,"cannot read %s\n", g.argv[2]); fossil_exit(1); } if( blob_read_from_file(&target, g.argv[3])<0 ){ fprintf(stderr,"cannot read %s\n", g.argv[3]); |
︙ | ︙ | |||
110 111 112 113 114 115 116 | ** ** Given an input file and a delta, apply the delta to the input file ** and write the result. */ void delta_apply_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ | | < | 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 | ** ** Given an input file and a delta, apply the delta to the input file ** and write the result. */ void delta_apply_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN DELTA TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fprintf(stderr,"cannot read %s\n", g.argv[2]); fossil_exit(1); } if( blob_read_from_file(&delta, g.argv[3])<0 ){ fprintf(stderr,"cannot read %s\n", g.argv[3]);
︙ | ︙ |
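The corrected usage strings above make the argument order of the two delta test commands explicit. A round trip looks roughly like the following; the test-delta-create and test-delta-apply command names are assumed here (the COMMAND: lines fall outside the hunks shown) and the file names are arbitrary:

   fossil test-delta-create old.txt new.txt patch.delta
   fossil test-delta-apply  old.txt patch.delta rebuilt.txt
   cmp new.txt rebuilt.txt

If the delta machinery is working correctly, rebuilt.txt comes out byte-for-byte identical to new.txt.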
Changes to src/descendants.c.
︙ | ︙ | |||
23 24 25 26 27 28 29 | #include <assert.h> /* ** Create a temporary table named "leaves" if it does not ** already exist. Load this table with the RID of all ** check-ins that are leaves which are descended from | < > > > | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | #include <assert.h> /* ** Create a temporary table named "leaves" if it does not ** already exist. Load this table with the RID of all ** check-ins that are leaves which are descended from ** check-in iBase. ** ** A "leaf" is a check-in that has no children in the same branch. ** There is a separate permanent table LEAF that contains all leaves ** in the tree. This routine is used to compute a subset of that ** table consisting of leaves that are descended from a single checkin. ** ** The closeMode flag determines behavior associated with the "closed" ** tag: ** ** closeMode==0 Show all leaves regardless of the "closed" tag. ** ** closeMode==1 Show only leaves without the "closed" tag
︙ | ︙ | |||
53 54 55 56 57 58 59 | db_multi_exec( "CREATE TEMP TABLE IF NOT EXISTS leaves(" " rid INTEGER PRIMARY KEY" ");" "DELETE FROM leaves;" ); | < < < | < < < < < < < < < < < < | 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 | db_multi_exec( "CREATE TEMP TABLE IF NOT EXISTS leaves(" " rid INTEGER PRIMARY KEY" ");" "DELETE FROM leaves;" ); if( iBase>0 ){ Bag seen; /* Descendants seen */ Bag pending; /* Unpropagated descendants */ Stmt q1; /* Query to find children of a check-in */ Stmt isBr; /* Query to check to see if a check-in starts a new branch */ Stmt ins; /* INSERT statement for a new record */ /* Initialize the bags. */ |
︙ | ︙ | |||
170 171 172 173 174 175 176 177 178 179 | /* ** Load the record ID rid and up to N-1 closest ancestors into ** the "ok" table. */ void compute_ancestors(int rid, int N){ Bag seen; PQueue queue; bag_init(&seen); pqueue_init(&queue); bag_insert(&seen, rid); | > > | < < | | | | | > > > > > | | > > > > > | < < | | > > > > > | | > > | 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 | /* ** Load the record ID rid and up to N-1 closest ancestors into ** the "ok" table. */ void compute_ancestors(int rid, int N){ Bag seen; PQueue queue; Stmt ins; Stmt q; bag_init(&seen); pqueue_init(&queue); bag_insert(&seen, rid); pqueue_insert(&queue, rid, 0.0, 0); db_prepare(&ins, "INSERT OR IGNORE INTO ok VALUES(:rid)"); db_prepare(&q, "SELECT a.pid, b.mtime FROM plink a LEFT JOIN plink b ON b.cid=a.pid" " WHERE a.cid=:rid" ); while( (N--)>0 && (rid = pqueue_extract(&queue, 0))!=0 ){ db_bind_int(&ins, ":rid", rid); db_step(&ins); db_reset(&ins); db_bind_int(&q, ":rid", rid); while( db_step(&q)==SQLITE_ROW ){ int pid = db_column_int(&q, 0); double mtime = db_column_double(&q, 1); if( bag_insert(&seen, pid) ){ pqueue_insert(&queue, pid, -mtime, 0); } } db_reset(&q); } bag_clear(&seen); pqueue_clear(&queue); db_finalize(&ins); db_finalize(&q); } /* ** Load the record ID rid and up to N-1 closest descendants into ** the "ok" table. */ void compute_descendants(int rid, int N){ Bag seen; PQueue queue; Stmt ins; Stmt q; bag_init(&seen); pqueue_init(&queue); bag_insert(&seen, rid); pqueue_insert(&queue, rid, 0.0, 0); db_prepare(&ins, "INSERT OR IGNORE INTO ok VALUES(:rid)"); db_prepare(&q, "SELECT cid, mtime FROM plink WHERE pid=:rid"); while( (N--)>0 && (rid = pqueue_extract(&queue, 0))!=0 ){ db_bind_int(&ins, ":rid", rid); db_step(&ins); db_reset(&ins); db_bind_int(&q, ":rid", rid); while( db_step(&q)==SQLITE_ROW ){ int pid = db_column_int(&q, 0); double mtime = db_column_double(&q, 1); if( bag_insert(&seen, pid) ){ pqueue_insert(&queue, pid, mtime, 0); } } db_reset(&q); } bag_clear(&seen); pqueue_clear(&queue); db_finalize(&ins); db_finalize(&q); } /* ** COMMAND: descendants ** ** Usage: %fossil descendants ?BASELINE-ID? ** |
︙ | ︙ | |||
260 261 262 263 264 265 266 267 268 269 270 271 272 273 | ** COMMAND: leaves ** ** Usage: %fossil leaves ?--all? ?--closed? ** ** Find leaves of all branches. By default show only open leaves. ** The --all flag causes all leaves (closed and open) to be shown. ** The --closed flag shows only closed leaves. */ void leaves_cmd(void){ Stmt q; int showAll = find_option("all", 0, 0)!=0; int showClosed = find_option("closed", 0, 0)!=0; db_must_be_within_tree(); | > > > > > | | < > | > > > > > | < | < < < < < < < < < < < > < | | | | > > > > > | < | | | 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 | ** COMMAND: leaves ** ** Usage: %fossil leaves ?--all? ?--closed? ** ** Find leaves of all branches. By default show only open leaves. ** The --all flag causes all leaves (closed and open) to be shown. ** The --closed flag shows only closed leaves. ** ** The --recompute flag causes the content of the "leaf" table in the ** repository database to be recomputed. */ void leaves_cmd(void){ Stmt q; Blob sql; int showAll = find_option("all", 0, 0)!=0; int showClosed = find_option("closed", 0, 0)!=0; int recomputeFlag = find_option("recompute",0,0)!=0; db_must_be_within_tree(); if( recomputeFlag ) leaf_rebuild(); blob_zero(&sql); blob_append(&sql, timeline_query_for_tty(), -1); blob_appendf(&sql, " AND blob.rid IN leaf"); if( showClosed ){ blob_appendf(&sql," AND %z", leaf_is_closed_sql("blob.rid")); }else if( !showAll ){ blob_appendf(&sql," AND NOT %z", leaf_is_closed_sql("blob.rid")); } db_prepare(&q, "%s ORDER BY event.mtime DESC", blob_str(&sql)); blob_reset(&sql); print_timeline(&q, 2000); db_finalize(&q); } /* ** WEBPAGE: leaves ** ** Find leaves of all branches. 
*/ void leaves_page(void){ Blob sql; Stmt q; int showAll = P("all")!=0; int showClosed = P("closed")!=0; login_check_credentials(); if( !g.okRead ){ login_needed(); return; } if( !showAll ){ style_submenu_element("All", "All", "leaves?all"); } if( !showClosed ){ style_submenu_element("Closed", "Closed", "leaves?closed"); } if( showClosed || showAll ){ style_submenu_element("Open", "Open", "leaves"); } style_header("Leaves"); login_anonymous_available(); style_sidebox_begin("Nomenclature:", "33%"); @ <ol> @ <li> A <div class="sideboxDescribed">leaf</div> @ is a check-in with no descendants in the same branch.</li> @ <li> An <div class="sideboxDescribed">open leaf</div> @ is a leaf that does not have a "closed" tag @ and is thus assumed to still be in use.</li> @ <li> A <div class="sideboxDescribed">closed leaf</div> @ has a "closed" tag and is thus assumed to @ be historical and no longer in active use.</li> @ </ol> style_sidebox_end(); if( showAll ){ @ <h1>All leaves, both open and closed:</h1> }else if( showClosed ){ @ <h1>Closed leaves:</h1> }else{ @ <h1>Open leaves:</h1> } blob_zero(&sql); blob_append(&sql, timeline_query_for_www(), -1); blob_appendf(&sql, " AND blob.rid IN leaf"); if( showClosed ){ blob_appendf(&sql," AND %z", leaf_is_closed_sql("blob.rid")); }else if( !showAll ){ blob_appendf(&sql," AND NOT %z", leaf_is_closed_sql("blob.rid")); } db_prepare(&q, "%s ORDER BY event.mtime DESC", blob_str(&sql)); blob_reset(&sql); www_print_timeline(&q, TIMELINE_LEAFONLY, 0, 0, 0); db_finalize(&q); @ <br /> @ <script type="text/JavaScript"> @ function xin(id){ @ } @ function xout(id){ @ } @ </script> style_footer(); } |
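Taken together, the leaves options shown above combine as follows (illustrative invocations only):

   fossil leaves                # open leaves only, the default
   fossil leaves --closed       # only leaves carrying the "closed" tag
   fossil leaves --all          # open and closed leaves together
   fossil leaves --recompute    # rebuild the permanent "leaf" table before listing

The --recompute flag simply calls leaf_rebuild() before running the query, which is useful when the leaf table is suspected of being stale.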
Changes to src/diff.c.
︙ | ︙ | |||
233 234 235 236 237 238 239 | na += R[r+nr*3]; nb += R[r+nr*3]; } for(i=1; i<nr; i++){ na += R[r+i*3]; nb += R[r+i*3]; } | > > > > > | > > | 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 | na += R[r+nr*3]; nb += R[r+nr*3]; } for(i=1; i<nr; i++){ na += R[r+i*3]; nb += R[r+i*3]; } /* * If the patch changes an empty file or results in an empty file, * the block header must use 0,0 as position indicator and not 1,0. * Otherwise, patch would be confused and may reject the diff. */ blob_appendf(pOut,"@@ -%d,%d +%d,%d @@\n", na ? a+skip+1 : 0, na, nb ? b+skip+1 : 0, nb); /* Show the initial common area */ a += skip; b += skip; m = R[r] - skip; for(j=0; j<m; j++){ appendDiffLine(pOut, " ", &A[a+j]); |
︙ | ︙ | |||
617 618 619 620 621 622 623 | /* ** The status of an annotation operation is recorded by an instance ** of the following structure. */ typedef struct Annotator Annotator; struct Annotator { DContext c; /* The diff-engine context */ | | | > > > > | 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 | /* ** The status of an annotation operation is recorded by an instance ** of the following structure. */ typedef struct Annotator Annotator; struct Annotator { DContext c; /* The diff-engine context */ struct AnnLine { /* Lines of the original files... */ const char *z; /* The text of the line */ short int n; /* Number of bytes (omitting trailing space and \n) */ short int iLevel; /* Level at which tag was set */ const char *zSrc; /* Tag showing origin of this line */ } *aOrig; int nOrig; /* Number of elements in aOrig[] */ int nNoSrc; /* Number of entries where aOrig[].zSrc==NULL */ int iLevel; /* Current level */ int nVers; /* Number of versions analyzed */ char **azVers; /* Names of versions analyzed */ }; /* ** Initialize the annotation process by specifying the file that is ** to be annotated. The annotator takes control of the input Blob and ** will release it when it is finished with it. */ |
︙ | ︙ | |||
659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 | ** if additional annotation is required. zPName is the tag to insert ** on each line of the file being annotated that was contributed by ** pParent. Memory to hold zPName is leaked. */ static int annotation_step(Annotator *p, Blob *pParent, char *zPName){ int i, j; int lnTo; /* Prepare the parent file to be diffed */ p->c.aFrom = break_into_lines(blob_str(pParent), blob_size(pParent), &p->c.nFrom, 1); if( p->c.aFrom==0 ){ return 1; } /* Compute the differences going from pParent to the file being ** annotated. */ diff_all(&p->c); /* Where new lines are inserted on this difference, record the ** zPName as the source of the new line. */ for(i=lnTo=0; i<p->c.nEdit; i+=3){ | > > > > > > | > | > > | 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 | ** if additional annotation is required. zPName is the tag to insert ** on each line of the file being annotated that was contributed by ** pParent. Memory to hold zPName is leaked. */ static int annotation_step(Annotator *p, Blob *pParent, char *zPName){ int i, j; int lnTo; int iPrevLevel; int iThisLevel; /* Prepare the parent file to be diffed */ p->c.aFrom = break_into_lines(blob_str(pParent), blob_size(pParent), &p->c.nFrom, 1); if( p->c.aFrom==0 ){ return 1; } /* Compute the differences going from pParent to the file being ** annotated. */ diff_all(&p->c); /* Where new lines are inserted on this difference, record the ** zPName as the source of the new line. */ iPrevLevel = p->iLevel; p->iLevel++; iThisLevel = p->iLevel; for(i=lnTo=0; i<p->c.nEdit; i+=3){ struct AnnLine *x = &p->aOrig[lnTo]; for(j=0; j<p->c.aEdit[i]; j++, lnTo++, x++){ if( x->zSrc==0 || x->iLevel==iPrevLevel ){ x->zSrc = zPName; x->iLevel = iThisLevel; } } lnTo += p->c.aEdit[i+2]; } /* Clear out the diff results */ free(p->c.aEdit); p->c.aEdit = 0; |
︙ | ︙ | |||
725 726 727 728 729 730 731 732 733 734 735 736 | for(i=0; i<x.nOrig; i++){ const char *zSrc = x.aOrig[i].zSrc; if( zSrc==0 ) zSrc = g.argv[g.argc-1]; printf("%10s: %.*s\n", zSrc, x.aOrig[i].n, x.aOrig[i].z); } } /* ** Compute a complete annotation on a file. The file is identified ** by its filename number (filename.fnid) and the baseline in which ** it was checked in (mlink.mid). */ | > > > | > > > > > > > | > > | < | > > | > | > | > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > | 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 | for(i=0; i<x.nOrig; i++){ const char *zSrc = x.aOrig[i].zSrc; if( zSrc==0 ) zSrc = g.argv[g.argc-1]; printf("%10s: %.*s\n", zSrc, x.aOrig[i].n, x.aOrig[i].z); } } /* Annotation flags */ #define ANN_FILE_VERS 0x001 /* Show file version rather than commit version */ /* ** Compute a complete annotation on a file. The file is identified ** by its filename number (filename.fnid) and the baseline in which ** it was checked in (mlink.mid). */ static void annotate_file( Annotator *p, /* The annotator */ int fnid, /* The name of the file to be annotated */ int mid, /* The specific version of the file for this step */ int webLabel, /* Use web-style annotations if true */ int iLimit, /* Limit the number of levels if greater than zero */ int annFlags /* Flags to alter the annotation */ ){ Blob toAnnotate; /* Text of the final version of the file */ Blob step; /* Text of previous revision */ int rid; /* Artifact ID of the file being annotated */ char *zLabel; /* Label to apply to a line */ Stmt q; /* Query returning all ancestor versions */ /* Initialize the annotation */ rid = db_int(0, "SELECT fid FROM mlink WHERE mid=%d AND fnid=%d",mid,fnid); if( rid==0 ){ fossil_panic("file #%d is unchanged in manifest #%d", fnid, mid); } if( !content_get(rid, &toAnnotate) ){ fossil_panic("unable to retrieve content of artifact #%d", rid); } db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); compute_ancestors(mid, 1000000000); annotation_start(p, &toAnnotate); db_prepare(&q, "SELECT mlink.fid," " (SELECT uuid FROM blob WHERE rid=mlink.%s)," " date(event.mtime), " " coalesce(event.euser,event.user) " " FROM mlink, event" " WHERE mlink.fnid=%d" " AND mlink.mid IN ok" " AND event.objid=mlink.mid" " ORDER BY event.mtime DESC" " LIMIT %d", (annFlags & ANN_FILE_VERS)!=0 ? "fid" : "mid", fnid, iLimit>0 ? 
iLimit : 10000000 ); while( db_step(&q)==SQLITE_ROW ){ int pid = db_column_int(&q, 0); const char *zUuid = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); const char *zUser = db_column_text(&q, 3); if( webLabel ){ zLabel = mprintf( "<a href='%s/info/%s' target='infowindow'>%.10s</a> %s %9.9s", g.zTop, zUuid, zUuid, zDate, zUser ); }else{ zLabel = mprintf("%.10s %s %9.9s", zUuid, zDate, zUser); } p->nVers++; p->azVers = fossil_realloc(p->azVers, p->nVers*sizeof(p->azVers[0]) ); p->azVers[p->nVers-1] = zLabel; content_get(pid, &step); annotation_step(p, &step, zLabel); blob_reset(&step); } db_finalize(&q); } /* ** WEBPAGE: annotate ** ** Query parameters: ** ** checkin=ID The manifest ID at which to start the annotation ** filename=FILENAME The filename. */ void annotation_page(void){ int mid; int fnid; int i; int iLimit; int annFlags = 0; Annotator ann; login_check_credentials(); if( !g.okRead ){ login_needed(); return; } mid = name_to_rid(PD("checkin","0")); fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", P("filename")); if( mid==0 || fnid==0 ){ fossil_redirect_home(); } iLimit = atoi(PD("limit","-1")); if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d AND fnid=%d",mid,fnid) ){ fossil_redirect_home(); } style_header("File Annotation"); if( P("filevers") ) annFlags |= ANN_FILE_VERS; annotate_file(&ann, fnid, mid, g.okHistory, iLimit, annFlags); if( P("log") ){ int i; @ <h2>Versions analyzed:</h2> @ <ol> for(i=0; i<ann.nVers; i++){ @ <li><tt>%s(ann.azVers[i])</tt></li> } @ </ol> @ <hr> @ <h2>Annotation:</h2> } @ <pre> for(i=0; i<ann.nOrig; i++){ ((char*)ann.aOrig[i].z)[ann.aOrig[i].n] = 0; @ %s(ann.aOrig[i].zSrc): %h(ann.aOrig[i].z) } @ </pre> style_footer(); } /* ** COMMAND: annotate ** ** %fossil annotate FILENAME ** ** Output the text of a file with markings to show when each line of ** the file was last modified. 
** ** Options: ** --limit N Only look backwards in time by N versions ** --log List all versions analyzed ** --filevers Show file version numbers rather than check-in versions */ void annotate_cmd(void){ int fnid; /* Filename ID */ int fid; /* File instance ID */ int mid; /* Manifest where file was checked in */ Blob treename; /* FILENAME translated to canonical form */ char *zFilename; /* Cannonical filename */ Annotator ann; /* The annotation of the file */ int i; /* Loop counter */ const char *zLimit; /* The value to the --limit option */ int iLimit; /* How far back in time to look */ int showLog; /* True to show the log */ int fileVers; /* Show file version instead of check-in versions */ int annFlags = 0; /* Flags to control annotation properties */ zLimit = find_option("limit",0,1); if( zLimit==0 || zLimit[0]==0 ) zLimit = "-1"; iLimit = atoi(zLimit); showLog = find_option("log",0,0)!=0; fileVers = find_option("filevers",0,0)!=0; db_must_be_within_tree(); if (g.argc<3) { usage("FILENAME"); } file_tree_name(g.argv[2], &treename, 1); zFilename = blob_str(&treename); fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( fnid==0 ){ fossil_fatal("no such file: %s", zFilename); } fid = db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q", zFilename); if( fid==0 ){ fossil_fatal("not part of current checkout: %s", zFilename); } mid = db_int(0, "SELECT mid FROM mlink WHERE fid=%d AND fnid=%d", fid, fnid); if( mid==0 ){ fossil_panic("unable to find manifest"); } if( fileVers ) annFlags |= ANN_FILE_VERS; annotate_file(&ann, fnid, mid, 0, iLimit, annFlags); if( showLog ){ for(i=0; i<ann.nVers; i++){ printf("version %3d: %s\n", i+1, ann.azVers[i]); } printf("---------------------------------------------------\n"); } for(i=0; i<ann.nOrig; i++){ printf("%s: %.*s\n", ann.aOrig[i].zSrc, ann.aOrig[i].n, ann.aOrig[i].z); } } |
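The new annotate options are easiest to see from the command line; the file name below is arbitrary:

   fossil annotate src/db.c                    # label each line with a check-in version
   fossil annotate --filevers src/db.c         # label with file artifact versions instead
   fossil annotate --limit 25 --log src/db.c   # analyze at most 25 versions and list them

The /annotate web page accepts the same controls through its limit, log, and filevers query parameters, alongside the checkin and filename parameters.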
Changes to src/diffcmd.c.
︙ | ︙ | |||
22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | #include <assert.h> /* ** Diff option flags */ #define DIFF_NEWFILE 0x01 /* Treat non-existing fails as empty files */ #define DIFF_NOEOLWS 0x02 /* Ignore whitespace at the end of lines */ /* ** Show the difference between two files, one in memory and one on disk. ** ** The difference is the set of edits needed to transform pFile1 into ** zFile2. The content of pFile1 is in memory. zFile2 exists on disk. ** ** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the ** command zDiffCmd to do the diffing. */ | > > > > > > > > > > > > > > > > > > > > > > > | | 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 | #include <assert.h> /* ** Diff option flags */ #define DIFF_NEWFILE 0x01 /* Treat non-existing fails as empty files */ #define DIFF_NOEOLWS 0x02 /* Ignore whitespace at the end of lines */ /* ** Output the results of a diff. Output goes to stdout for command-line ** or to the CGI/HTTP result buffer for web pages. */ static void diff_printf(const char *zFormat, ...){ va_list ap; va_start(ap, zFormat); if( g.cgiOutput ){ cgi_vprintf(zFormat, ap); }else{ vprintf(zFormat, ap); } va_end(ap); } /* ** Print the "Index:" message that patch wants to see at the top of a diff. */ void diff_print_index(const char *zFile){ diff_printf("Index: %s\n=======================================" "============================\n", zFile); } /* ** Show the difference between two files, one in memory and one on disk. ** ** The difference is the set of edits needed to transform pFile1 into ** zFile2. The content of pFile1 is in memory. zFile2 exists on disk. ** ** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the ** command zDiffCmd to do the diffing. */ void diff_file( Blob *pFile1, /* In memory content to compare from */ const char *zFile2, /* On disk content to compare to */ const char *zName, /* Display name of the file */ const char *zDiffCmd, /* Command for comparison */ int ignoreEolWs /* Ignore whitespace at end of line */ ){ if( zDiffCmd==0 ){ |
︙ | ︙ | |||
56 57 58 59 60 61 62 | blob_read_from_file(&file2, zFile2); zName2 = zName; } /* Compute and output the differences */ blob_zero(&out); text_diff(pFile1, &file2, &out, 5, ignoreEolWs); | > | | > | 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 | blob_read_from_file(&file2, zFile2); zName2 = zName; } /* Compute and output the differences */ blob_zero(&out); text_diff(pFile1, &file2, &out, 5, ignoreEolWs); if( blob_size(&out) ){ diff_printf("--- %s\n+++ %s\n", zName, zName2); diff_printf("%s\n", blob_str(&out)); } /* Release memory resources */ blob_reset(&file2); blob_reset(&out); }else{ int cnt = 0; Blob nameFile1; /* Name of temporary file to old pFile1 content */ |
︙ | ︙ | |||
102 103 104 105 106 107 108 | ** ** The difference is the set of edits needed to transform pFile1 into ** pFile2. ** ** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the ** command zDiffCmd to do the diffing. */ | | | | | 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 | ** ** The difference is the set of edits needed to transform pFile1 into ** pFile2. ** ** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the ** command zDiffCmd to do the diffing. */ void diff_file_mem( Blob *pFile1, /* In memory content to compare from */ Blob *pFile2, /* In memory content to compare to */ const char *zName, /* Display name of the file */ const char *zDiffCmd, /* Command for comparison */ int ignoreEolWs /* Ignore whitespace at end of lines */ ){ if( zDiffCmd==0 ){ Blob out; /* Diff output text */ blob_zero(&out); text_diff(pFile1, pFile2, &out, 5, ignoreEolWs); diff_printf("--- %s\n+++ %s\n", zName, zName); diff_printf("%s\n", blob_str(&out)); /* Release memory resources */ blob_reset(&out); }else{ Blob cmd; char zTemp1[300]; char zTemp2[300]; |
︙ | ︙ | |||
159 160 161 162 163 164 165 | const char *zFrom, /* Name of file */ const char *zDiffCmd, /* Use this "diff" command */ int ignoreEolWs /* Ignore whitespace changes at end of lines */ ){ Blob fname; Blob content; file_tree_name(g.argv[2], &fname, 1); | | | 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 | const char *zFrom, /* Name of file */ const char *zDiffCmd, /* Use this "diff" command */ int ignoreEolWs /* Ignore whitespace changes at end of lines */ ){ Blob fname; Blob content; file_tree_name(g.argv[2], &fname, 1); historical_version_of_file(zFrom, blob_str(&fname), &content, 0, 0); diff_file(&content, g.argv[2], g.argv[2], zDiffCmd, ignoreEolWs); blob_reset(&content); blob_reset(&fname); } /* ** Run a diff between the version zFrom and files on disk. zFrom might |
︙ | ︙ | |||
184 185 186 187 188 189 190 | Stmt q; int ignoreEolWs; /* Ignore end-of-line whitespace */ int asNewFile; /* Treat non-existant files as empty files */ ignoreEolWs = (diffFlags & DIFF_NOEOLWS)!=0; asNewFile = (diffFlags & DIFF_NEWFILE)!=0; vid = db_lget_int("checkout", 0); | | | | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 | Stmt q; int ignoreEolWs; /* Ignore end-of-line whitespace */ int asNewFile; /* Treat non-existant files as empty files */ ignoreEolWs = (diffFlags & DIFF_NOEOLWS)!=0; asNewFile = (diffFlags & DIFF_NEWFILE)!=0; vid = db_lget_int("checkout", 0); vfile_check_signature(vid, 1, 0); blob_zero(&sql); db_begin_transaction(); if( zFrom ){ int rid = name_to_rid(zFrom); if( !is_a_version(rid) ){ fossil_fatal("no such check-in: %s", zFrom); } load_vfile_from_rid(rid); blob_appendf(&sql, "SELECT v2.pathname, v2.deleted, v2.chnged, v2.rid==0, v1.rid" " FROM vfile v1, vfile v2 " " WHERE v1.pathname=v2.pathname AND v1.vid=%d AND v2.vid=%d" " AND (v2.deleted OR v2.chnged OR v1.mrid!=v2.rid)" "UNION " "SELECT pathname, 1, 0, 0, 0" " FROM vfile v1" " WHERE v1.vid=%d" " AND NOT EXISTS(SELECT 1 FROM vfile v2" " WHERE v2.vid=%d AND v2.pathname=v1.pathname)" "UNION " |
︙ | ︙ | |||
234 235 236 237 238 239 240 | int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); int srcid = db_column_int(&q, 4); char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); char *zToFree = zFullName; int showDiff = 1; if( isDeleted ){ | | | | | < < | < | 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); int srcid = db_column_int(&q, 4); char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); char *zToFree = zFullName; int showDiff = 1; if( isDeleted ){ diff_printf("DELETED %s\n", zPathname); if( !asNewFile ){ showDiff = 0; zFullName = "/dev/null"; } }else if( access(zFullName, 0) ){ diff_printf("MISSING %s\n", zPathname); if( !asNewFile ){ showDiff = 0; } }else if( isNew ){ diff_printf("ADDED %s\n", zPathname); srcid = 0; if( !asNewFile ){ showDiff = 0; } }else if( isChnged==3 ){ diff_printf("ADDED_BY_MERGE %s\n", zPathname); srcid = 0; if( !asNewFile ){ showDiff = 0; } } if( showDiff ){ Blob content; if( srcid>0 ){ content_get(srcid, &content); }else{ blob_zero(&content); } diff_print_index(zPathname); diff_file(&content, zFullName, zPathname, zDiffCmd, ignoreEolWs); blob_reset(&content); } free(zToFree); } db_finalize(&q); db_end_transaction(1); /* ROLLBACK */ |
︙ | ︙ | |||
284 285 286 287 288 289 290 | int ignoreEolWs ){ char *zName; Blob fname; Blob v1, v2; file_tree_name(g.argv[2], &fname, 1); zName = blob_str(&fname); | | | < | | 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 | int ignoreEolWs ){ char *zName; Blob fname; Blob v1, v2; file_tree_name(g.argv[2], &fname, 1); zName = blob_str(&fname); historical_version_of_file(zFrom, zName, &v1, 0, 0); historical_version_of_file(zTo, zName, &v2, 0, 0); diff_file_mem(&v1, &v2, zName, zDiffCmd, ignoreEolWs); blob_reset(&v1); blob_reset(&v2); blob_reset(&fname); } /* ** Show the difference between two files identified by ManifestFile ** entries. */ static void diff_manifest_entry( struct ManifestFile *pFrom, struct ManifestFile *pTo, const char *zDiffCmd, int ignoreEolWs ){ Blob f1, f2; int rid; const char *zName = pFrom ? pFrom->zName : pTo->zName; diff_print_index(zName); if( pFrom ){ rid = uuid_to_rid(pFrom->zUuid, 0); content_get(rid, &f1); }else{ blob_zero(&f1); } if( pTo ){ |
︙ | ︙ | |||
352 353 354 355 356 357 358 | while( pFromFile || pToFile ){ int cmp; if( pFromFile==0 ){ cmp = +1; }else if( pToFile==0 ){ cmp = -1; }else{ | | | | | | | 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 | while( pFromFile || pToFile ){ int cmp; if( pFromFile==0 ){ cmp = +1; }else if( pToFile==0 ){ cmp = -1; }else{ cmp = fossil_strcmp(pFromFile->zName, pToFile->zName); } if( cmp<0 ){ diff_printf("DELETED %s\n", pFromFile->zName); if( asNewFlag ){ diff_manifest_entry(pFromFile, 0, zDiffCmd, ignoreEolWs); } pFromFile = manifest_file_next(pFrom,0); }else if( cmp>0 ){ diff_printf("ADDED %s\n", pToFile->zName); if( asNewFlag ){ diff_manifest_entry(0, pToFile, zDiffCmd, ignoreEolWs); } pToFile = manifest_file_next(pTo,0); }else if( fossil_strcmp(pFromFile->zUuid, pToFile->zUuid)==0 ){ /* No changes */ pFromFile = manifest_file_next(pFrom,0); pToFile = manifest_file_next(pTo,0); }else{ /* diff_printf("CHANGED %s\n", pFromFile->zName); */ diff_manifest_entry(pFromFile, pToFile, zDiffCmd, ignoreEolWs); pFromFile = manifest_file_next(pFrom,0); pToFile = manifest_file_next(pTo,0); } } manifest_destroy(pFrom); manifest_destroy(pTo); |
︙ | ︙ | |||
440 441 442 443 444 445 446 | diff_one_against_disk(zFrom, zDiffCmd, 0); }else{ diff_all_against_disk(zFrom, zDiffCmd, diffFlags); } }else if( zFrom==0 ){ fossil_fatal("must use --from if --to is present"); }else{ | | > > > > > > > > > > > > > > > | 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 | diff_one_against_disk(zFrom, zDiffCmd, 0); }else{ diff_all_against_disk(zFrom, zDiffCmd, diffFlags); } }else if( zFrom==0 ){ fossil_fatal("must use --from if --to is present"); }else{ db_find_and_open_repository(0, 0); verify_all_options(); if( !isInternDiff ){ zDiffCmd = db_get(isGDiff ? "gdiff-command" : "diff-command", 0); } if( g.argc==3 ){ diff_one_two_versions(zFrom, zTo, zDiffCmd, 0); }else{ diff_all_two_versions(zFrom, zTo, zDiffCmd, diffFlags); } } } /* ** WEBPAGE: vpatch ** URL vpatch?from=UUID&to=UUID */ void vpatch_page(void){ const char *zFrom = P("from"); const char *zTo = P("to"); login_check_credentials(); if( !g.okRead ){ login_needed(); return; } if( zFrom==0 || zTo==0 ) fossil_redirect_home(); cgi_set_content_type("text/plain"); diff_all_two_versions(zFrom, zTo, 0, DIFF_NEWFILE); } |
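With output now routed through diff_printf(), the same diff code can serve both the command line and the web interface. Illustrative command-line forms (the branch and file names are placeholders):

   fossil diff --from trunk src/main.c          # one file on disk against its trunk version
   fossil diff --from trunk --to venks-emacs    # whole-tree diff between two check-ins
   fossil gdiff --from trunk                    # every changed file, via the configured gdiff-command

The new /vpatch page exposes the two-version form over HTTP, e.g. /vpatch?from=UUID&to=UUID, and returns the patch as text/plain so it can be fed directly to other tools.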
Changes to src/doc.c.
︙ | ︙ | |||
47 48 49 50 51 52 53 | static const struct { const char *zPrefix; /* The file prefix */ int size; /* Length of the prefix */ const char *zMimetype; /* The corresponding mimetype */ } aMime[] = { { "GIF87a", 6, "image/gif" }, { "GIF89a", 6, "image/gif" }, | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | static const struct { const char *zPrefix; /* The file prefix */ int size; /* Length of the prefix */ const char *zMimetype; /* The corresponding mimetype */ } aMime[] = { { "GIF87a", 6, "image/gif" }, { "GIF89a", 6, "image/gif" }, { "\211PNG\r\n\032\n", 8, "image/png" }, { "\377\332\377", 3, "image/jpeg" }, { "\377\330\377", 3, "image/jpeg" }, }; x = (const unsigned char*)blob_buffer(pBlob); n = blob_size(pBlob); for(i=0; i<n; i++){ |
︙ | ︙ | |||
288 289 290 291 292 293 294 | z = zName; for(i=0; zName[i]; i++){ if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len<sizeof(zSuffix)-1 ){ | | | | 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 | z = zName; for(i=0; zName[i]; i++){ if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len<sizeof(zSuffix)-1 ){ sqlite3_snprintf(sizeof(zSuffix), zSuffix, "%s", z); for(i=0; zSuffix[i]; i++) zSuffix[i] = fossil_tolower(zSuffix[i]); first = 0; last = sizeof(aMime)/sizeof(aMime[0]); while( first<=last ){ int c; i = (first+last)/2; c = fossil_strcmp(zSuffix, aMime[i].zSuffix); if( c==0 ) return aMime[i].zMimetype; if( c<0 ){ last = i-1; }else{ first = i+1; } } |
︙ | ︙ | |||
344 345 346 347 348 349 350 | memcpy(zBaseline, zName, i); zBaseline[i] = 0; zName += i; while( zName[0]=='/' ){ zName++; } if( !file_is_simple_pathname(zName) ){ goto doc_not_found; } | | | | | | 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 | memcpy(zBaseline, zName, i); zBaseline[i] = 0; zName += i; while( zName[0]=='/' ){ zName++; } if( !file_is_simple_pathname(zName) ){ goto doc_not_found; } if( fossil_strcmp(zBaseline,"ckout")==0 && db_open_local()==0 ){ sqlite3_snprintf(sizeof(zBaseline), zBaseline, "tip"); } if( fossil_strcmp(zBaseline,"ckout")==0 ){ /* Read from the local checkout */ char *zFullpath; db_must_be_within_tree(); zFullpath = mprintf("%s/%s", g.zLocalRoot, zName); if( !file_isfile(zFullpath) ){ goto doc_not_found; } if( blob_read_from_file(&filebody, zFullpath)<0 ){ goto doc_not_found; } }else{ db_begin_transaction(); if( fossil_strcmp(zBaseline,"tip")==0 ){ vid = db_int(0, "SELECT objid FROM event WHERE type='ci'" " ORDER BY mtime DESC LIMIT 1"); }else{ vid = name_to_rid(zBaseline); } /* Create the baseline cache if it does not already exist */ |
︙	 |	 ︙
436 437 438 439 440 441 442 | /* The file is now contained in the filebody blob. Deliver the ** file to the user */ zMime = P("mimetype"); if( zMime==0 ){ zMime = mimetype_from_name(zName); } | > > > > > | | | 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | /* The file is now contained in the filebody blob. Deliver the ** file to the user */ zMime = P("mimetype"); if( zMime==0 ){ zMime = mimetype_from_name(zName); } Th_Store("doc_name", zName); Th_Store("doc_version", db_text(0, "SELECT '[' || substr(uuid,1,10) || ']'" " FROM blob WHERE rid=%d", vid)); Th_Store("doc_date", db_text(0, "SELECT datetime(mtime) FROM event" " WHERE objid=%d AND type='ci'", vid)); if( fossil_strcmp(zMime, "application/x-fossil-wiki")==0 ){ Blob title, tail; if( wiki_find_title(&filebody, &title, &tail) ){ style_header(blob_str(&title)); wiki_convert(&tail, 0, 0); }else{ style_header("Documentation"); wiki_convert(&filebody, 0, 0); } style_footer(); }else if( fossil_strcmp(zMime, "text/plain")==0 ){ style_header("Documentation"); @ <blockquote><pre> @ %h(blob_str(&filebody)) @ </pre></blockquote> style_footer(); }else{ cgi_set_content_type(zMime); |
︙ | ︙ |
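The doc.c changes above do two things: a short table of magic-number prefixes now identifies GIF, PNG and JPEG content, and mimetype_from_name() lower-cases the file suffix and binary-searches a sorted suffix table (compared with fossil_strcmp). The standalone sketch below mirrors that suffix lookup so the pattern is easy to follow; the table here is an invented subset of the much longer one in src/doc.c, not the real thing.

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    /* Invented subset of a suffix-to-mimetype table.  Like the real table in
    ** src/doc.c, it must stay sorted by suffix or the binary search below
    ** will silently miss entries. */
    static const struct { const char *zSuffix; const char *zMimetype; } aMime[] = {
      { "css",  "text/css"   },
      { "gif",  "image/gif"  },
      { "html", "text/html"  },
      { "jpg",  "image/jpeg" },
      { "png",  "image/png"  },
      { "txt",  "text/plain" },
    };

    static const char *mimetype_from_suffix(const char *zName){
      const char *z = zName;
      char zSuffix[30];
      int i, first, last;
      for(i=0; zName[i]; i++){ if( zName[i]=='.' ) z = &zName[i+1]; }
      if( strlen(z)>=sizeof(zSuffix) ) return "application/octet-stream";
      for(i=0; z[i]; i++) zSuffix[i] = (char)tolower((unsigned char)z[i]);
      zSuffix[i] = 0;
      first = 0;
      last = (int)(sizeof(aMime)/sizeof(aMime[0])) - 1;
      while( first<=last ){
        int mid = (first+last)/2;
        int c = strcmp(zSuffix, aMime[mid].zSuffix);
        if( c==0 ) return aMime[mid].zMimetype;
        if( c<0 ) last = mid-1; else first = mid+1;
      }
      return "application/octet-stream";
    }

    int main(void){
      printf("%s\n", mimetype_from_suffix("README.TXT"));  /* text/plain */
      printf("%s\n", mimetype_from_suffix("logo.png"));    /* image/png  */
      return 0;
    }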
Changes to src/encode.c.
︙	 |	 ︙
444 445 446 447 448 449 450 | ** case is the same. */ static const char zDecode[] = { 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 64, 64, 64, 64, 64, 64, | | | | 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 | ** case is the same. */ static const char zDecode[] = { 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 64, 64, 64, 64, 64, 64, 64, 10, 11, 12, 13, 14, 15, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 10, 11, 12, 13, 14, 15, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, |
︙ | ︙ |
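The encode.c change fills in the rows of the zDecode[] table that cover 'A'..'F' with the same values as 'a'..'f', so hexadecimal input now decodes case-insensitively; 64 remains the sentinel meaning "not a hex digit". A minimal standalone sketch of the same idea, with the lookup table replaced by an equivalent function for brevity:

    #include <stdio.h>

    /* 64 plays the same role as in zDecode[]: "this byte is not a hex digit". */
    static int hex_digit_value(unsigned char c){
      if( c>='0' && c<='9' ) return c - '0';
      if( c>='a' && c<='f' ) return c - 'a' + 10;
      if( c>='A' && c<='F' ) return c - 'A' + 10;   /* the newly accepted range */
      return 64;
    }

    static int decode_hex_byte(const char *z){
      int hi = hex_digit_value((unsigned char)z[0]);
      int lo = hex_digit_value((unsigned char)z[1]);
      if( hi==64 || lo==64 ) return -1;
      return (hi<<4) | lo;
    }

    int main(void){
      printf("%d\n", decode_hex_byte("fF"));   /* 255: case no longer matters */
      printf("%d\n", decode_hex_byte("0z"));   /* -1: not a valid hex pair    */
      return 0;
    }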
Changes to src/event.c.
︙	 |	 ︙
167 168 169 170 171 172 173 | zATime = db_text(0, "SELECT datetime(%.17g)", pEvent->rDate); @ <p>Event [<a href="%s(g.zTop)/artifact/%s(zUuid)">%S(zUuid)</a>] at @ [<a href="%s(g.zTop)/timeline?c=%T(zETime)">%s(zETime)</a>] @ entered by user <b>%h(pEvent->zUser)</b> on @ [<a href="%s(g.zTop)/timeline?c=%T(zATime)">%s(zATime)</a>]:</p> @ <blockquote> for(i=0; i<pEvent->nTag; i++){ | | | 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 | zATime = db_text(0, "SELECT datetime(%.17g)", pEvent->rDate); @ <p>Event [<a href="%s(g.zTop)/artifact/%s(zUuid)">%S(zUuid)</a>] at @ [<a href="%s(g.zTop)/timeline?c=%T(zETime)">%s(zETime)</a>] @ entered by user <b>%h(pEvent->zUser)</b> on @ [<a href="%s(g.zTop)/timeline?c=%T(zATime)">%s(zATime)</a>]:</p> @ <blockquote> for(i=0; i<pEvent->nTag; i++){ if( fossil_strcmp(pEvent->aTag[i].zName,"+bgcolor")==0 ){ zClr = pEvent->aTag[i].zValue; } } if( zClr && zClr[0]==0 ) zClr = 0; if( zClr ){ @ <div style="background-color: %h(zClr);"> }else{ |
︙	 |	 ︙
246 247 248 249 250 251 252 | /* Figure out the color */ if( rid ){ zClr = db_text("", "SELECT bgcolor FROM event WHERE objid=%d", rid); }else{ zClr = ""; } zClr = PD("clr",zClr); | | | 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | /* Figure out the color */ if( rid ){ zClr = db_text("", "SELECT bgcolor FROM event WHERE objid=%d", rid); }else{ zClr = ""; } zClr = PD("clr",zClr); if( fossil_strcmp(zClr,"##")==0 ) zClr = PD("cclr",""); /* If editing an existing event, extract the key fields to use as ** a starting point for the edit. */ if( rid && (zBody==0 || zETime==0 || zComment==0 || zTags==0) ){ Manifest *pEvent; |
︙	 |	 ︙
282 283 284 285 286 287 288 | char *zDate; Blob cksum; int nrid; blob_zero(&event); db_begin_transaction(); login_verify_csrf_secret(); blob_appendf(&event, "C %F\n", zComment); | | < | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 | char *zDate; Blob cksum; int nrid; blob_zero(&event); db_begin_transaction(); login_verify_csrf_secret(); blob_appendf(&event, "C %F\n", zComment); zDate = date_in_standard_format("now"); blob_appendf(&event, "D %s\n", zDate); free(zDate); zETime[10] = 'T'; blob_appendf(&event, "E %s %s\n", zETime, zEventId); zETime[10] = ' '; if( rid ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); |
︙	 |	 ︙
342 343 344 345 346 347 348 | if( g.zLogin ){ blob_appendf(&event, "U %F\n", g.zLogin); } blob_appendf(&event, "W %d\n%s\n", strlen(zBody), zBody); md5sum_blob(&event, &cksum); blob_appendf(&event, "Z %b\n", &cksum); blob_reset(&cksum); | | | | 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 | if( g.zLogin ){ blob_appendf(&event, "U %F\n", g.zLogin); } blob_appendf(&event, "W %d\n%s\n", strlen(zBody), zBody); md5sum_blob(&event, &cksum); blob_appendf(&event, "Z %b\n", &cksum); blob_reset(&cksum); nrid = content_put(&event); db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); manifest_crosslink(nrid, &event); assert( blob_is_reset(&event) ); content_deltify(rid, nrid, 0); db_end_transaction(0); cgi_redirectf("event?name=%T", zEventId); } if( P("cancel")!=0 ){ cgi_redirectf("event?name=%T", zEventId); return; |
︙	 |	 ︙
392 393 394 395 396 397 398 | blob_reset(&event); } for(n=2, z=zBody; z[0]; z++){ if( z[0]=='\n' ) n++; } if( n<20 ) n = 20; if( n>40 ) n = 40; | | | 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 | blob_reset(&event); } for(n=2, z=zBody; z[0]; z++){ if( z[0]=='\n' ) n++; } if( n<20 ) n = 20; if( n>40 ) n = 40; @ <form method="post" action="%s(g.zTop)/eventedit"><div> login_insert_csrf_secret(); @ <input type="hidden" name="name" value="%h(zEventId)" /> @ <table border="0" cellspacing="10"> @ <tr><td align="right" valign="top"><b>Event Time:</b></td> @ <td valign="top"> @ <input type="text" name="t" size="25" value="%h(zETime)" /> |
︙ | ︙ |
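For reference, the storage sequence in the event editor above is Fossil's usual pattern for replacing an artifact: the new text is stored in full, queued for sync, crosslinked so the timeline picks it up, and only then is the superseded version re-encoded as a delta of the new one, which keeps the most recent copy directly readable. A condensed restatement of those calls, assuming the Fossil source tree (rid is the artifact being replaced, or 0 for a brand-new event, and the event blob holds the cards assembled above):

    nrid = content_put(&event);           /* store the new version in full       */
    db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid);
    manifest_crosslink(nrid, &event);     /* parse the artifact, update timeline */
    content_deltify(rid, nrid, 0);        /* old version becomes a delta of new  */
    db_end_transaction(0);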
Changes to src/export.c.
︙	 |	 ︙
80 81 82 83 84 85 86 | db_reset(&q); } /* ** COMMAND: export ** | | | > > > | | > > | > > > > | | | | | | > | | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 | db_reset(&q); } /* ** COMMAND: export ** ** Usage: %fossil export --git ?REPOSITORY? ** ** Write an export of all check-ins to standard output. The export is ** written in the git-fast-export file format assuming the --git option is ** provided. The git-fast-export format is currently the only VCS ** interchange format supported, though other formats may be added in ** the future. ** ** Run this command within a checkout. Or use the -R or --repository ** option to specify a Fossil repository to be exported. ** ** Only check-ins are exported using --git. Git does not support tickets ** or wiki or events or attachments, so none of those are exported. */ void export_cmd(void){ Stmt q; int i; int firstCkin; /* Integer offset to check-in marks */ Bag blobs, vers; bag_init(&blobs); bag_init(&vers); find_option("git", 0, 0); /* Ignore the --git option for now */ db_find_and_open_repository(0, 2); verify_all_options(); if( g.argc!=2 && g.argc!=3 ){ usage("--git ?REPOSITORY?"); } /* Step 1: Generate "blob" records for every artifact that is part ** of a check-in */ fossil_binary_mode(stdout); db_prepare(&q, "SELECT DISTINCT fid FROM mlink WHERE fid>0"); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); Blob content; content_get(rid, &content); printf("blob\nmark :%d\ndata %d\n", rid, blob_size(&content)); bag_insert(&blobs, rid); fwrite(blob_buffer(&content), 1, blob_size(&content), stdout); printf("\n"); blob_reset(&content); } db_finalize(&q); /* Output the commit records. 
*/ firstCkin = db_int(0, "SELECT max(rid) FROM blob")+1; db_prepare(&q, "SELECT strftime('%%s',mtime), objid, coalesce(comment,ecomment)," " coalesce(user,euser)," " (SELECT value FROM tagxref WHERE rid=objid AND tagid=%d)" " FROM event" " WHERE type='ci'" " ORDER BY mtime ASC", TAG_BRANCH ); while( db_step(&q)==SQLITE_ROW ){ const char *zSecondsSince1970 = db_column_text(&q, 0); int ckinId = db_column_int(&q, 1); const char *zComment = db_column_text(&q, 2); const char *zUser = db_column_text(&q, 3); const char *zBranch = db_column_text(&q, 4); char *zBr; Manifest *p; ManifestFile *pFile; const char *zFromType; bag_insert(&vers, ckinId); if( zBranch==0 ) zBranch = "trunk"; zBr = mprintf("%s", zBranch); for(i=0; zBr[i]; i++){ if( !fossil_isalnum(zBr[i]) ) zBr[i] = '_'; } printf("commit refs/heads/%s\nmark :%d\n", zBr, ckinId+firstCkin); free(zBr); printf("committer"); print_person(zUser); printf(" %s +0000\n", zSecondsSince1970); if( zComment==0 ) zComment = "null comment"; printf("data %d\n%s\n", (int)strlen(zComment), zComment); p = manifest_get(ckinId, CFTYPE_ANY); zFromType = "from"; for(i=0; i<p->nParent; i++){ int pid = fast_uuid_to_rid(p->azParent[i]); if( pid==0 || !bag_find(&vers, pid) ) continue; printf("%s :%d\n", zFromType, fast_uuid_to_rid(p->azParent[i])+firstCkin); zFromType = "merge"; } printf("deleteall\n"); manifest_file_rewind(p); while( (pFile=manifest_file_next(p, 0))!=0 ){ int fid = fast_uuid_to_rid(pFile->zUuid); const char *zPerm = "100644"; if( fid==0 ) continue; if( pFile->zPerm && strstr(pFile->zPerm,"x") ) zPerm = "100755"; if( !bag_find(&blobs, fid) ) continue; printf("M %s :%d %s\n", zPerm, fid, pFile->zName); } manifest_cache_insert(p); printf("\n"); } db_finalize(&q); bag_clear(&blobs); manifest_cache_clear(); /* Output tags */ db_prepare(&q, "SELECT tagname, rid, strftime('%%s',mtime)" " FROM tagxref JOIN tag USING(tagid)" " WHERE tagtype=1 AND tagname GLOB 'sym-*'" ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagname = db_column_text(&q, 0); int rid = db_column_int(&q, 1); const char *zSecSince1970 = db_column_text(&q, 2); if( rid==0 || !bag_find(&vers, rid) ) continue; zTagname += 4; printf("tag %s\n", zTagname); printf("from :%d\n", rid+firstCkin); printf("tagger <tagger> %s +0000\n", zSecSince1970); printf("data 0\n"); } db_finalize(&q); bag_clear(&vers); } |
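A detail of the export code above worth calling out: file blobs are written with mark :rid (their Fossil record id), while each check-in is written with mark :(rid+firstCkin), where firstCkin is one past the largest rid in the blob table, so blob marks and commit marks can never collide. A small standalone illustration with invented rid values:

    #include <stdio.h>

    /* Mark numbering used by "fossil export --git" (rid values invented). */
    int main(void){
      int maxRid = 100;                  /* SELECT max(rid) FROM blob        */
      int firstCkin = maxRid + 1;        /* offset applied to check-in marks */
      int fileRid = 42, ckinRid = 57;    /* a file artifact and a check-in   */
      printf("blob   mark :%d\n", fileRid);               /* mark :42  */
      printf("commit mark :%d\n", ckinRid + firstCkin);   /* mark :158 */
      return 0;
    }

The resulting stream is typically piped straight into git fast-import inside an empty Git repository.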
Changes to src/file.c.
︙	 |	 ︙
118 119 120 121 122 123 124 125 126 127 128 129 130 131 | rc = getStat(zFN); free(zFN); }else{ rc = getStat(0); } return rc ? 0 : (S_ISDIR(fileStat.st_mode) ? 1 : 2); } /* ** Return the tail of a file pathname. The tail is the last component ** of the path. For example, the tail of "/a/b/c.d" is "c.d". */ const char *file_tail(const char *z){ const char *zTail = z; | > > > > > > > > > > > > > > > > > > > > > > > > > | 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | rc = getStat(zFN); free(zFN); }else{ rc = getStat(0); } return rc ? 0 : (S_ISDIR(fileStat.st_mode) ? 1 : 2); } /* ** Find an unused filename similar to zBase with zSuffix appended. ** ** Make the name relative to the working directory if relFlag is true. ** ** Space to hold the new filename is obtained form mprintf() and should ** be freed by the caller. */ char *file_newname(const char *zBase, const char *zSuffix, int relFlag){ char *z = 0; int cnt = 0; z = mprintf("%s-%s", zBase, zSuffix); while( file_size(z)>=0 ){ fossil_free(z); z = mprintf("%s-%s-%d", zBase, zSuffix, cnt++); } if( relFlag ){ Blob x; file_relative_name(z, &x); fossil_free(z); z = blob_str(&x); } return z; } /* ** Return the tail of a file pathname. The tail is the last component ** of the path. For example, the tail of "/a/b/c.d" is "c.d". */ const char *file_tail(const char *z){ const char *zTail = z; |
︙	 |	 ︙
151 152 153 154 155 156 157 | fwrite(zBuf, 1, got, out); } fclose(in); fclose(out); } /* | | > | > | > > > | 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 | fwrite(zBuf, 1, got, out); } fclose(in); fclose(out); } /* ** Set or clear the execute bit on a file. Return true if a change ** occurred and false if this routine is a no-op. */ int file_setexe(const char *zFilename, int onoff){ int rc = 0; #if !defined(_WIN32) struct stat buf; if( stat(zFilename, &buf)!=0 ) return 0; if( onoff ){ if( (buf.st_mode & 0111)!=0111 ){ chmod(zFilename, buf.st_mode | 0111); rc = 1; } }else{ if( (buf.st_mode & 0111)!=0 ){ chmod(zFilename, buf.st_mode & ~0111); rc = 1; } } #endif /* _WIN32 */ return rc; } /* ** Create the directory named in the argument, if it does not already ** exist. If forceFlag is 1, delete any prior non-directory object ** with the same name. ** |
︙	 |	 ︙
227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 | if( z[i+2]=='.' && (z[i+3]=='/' || z[i+3]==0) ) return 0; } } } if( z[i-1]=='/' ) return 0; return 1; } /* ** Simplify a filename by ** ** * removing any trailing and duplicate / ** * removing /./ ** * removing /A/../ ** ** Changes are made in-place. Return the new name length. */ int file_simplify_name(char *z, int n){ int i, j; if( n<0 ) n = strlen(z); #if defined(_WIN32) for(i=0; i<n; i++){ if( z[i]=='\\' ) z[i] = '/'; } #endif while( n>1 && z[n-1]=='/' ){ n--; } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | < > > > > > > | | | | > > > > > > > > > > > > > > > > > > > > > | 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 | if( z[i+2]=='.' && (z[i+3]=='/' || z[i+3]==0) ) return 0; } } } if( z[i-1]=='/' ) return 0; return 1; } /* ** If the last component of the pathname in z[0]..z[j-1] is something ** other than ".." then back it out and return true. If the last ** component is empty or if it is ".." then return false. */ static int backup_dir(const char *z, int *pJ){ int j = *pJ; int i; if( j<=0 ) return 0; for(i=j-1; i>0 && z[i-1]!='/'; i--){} if( z[i]=='.' && i==j-2 && z[i+1]=='.' ) return 0; *pJ = i-1; return 1; } /* ** Simplify a filename by ** ** * Convert all \ into / on windows ** * removing any trailing and duplicate / ** * removing /./ ** * removing /A/../ ** ** Changes are made in-place. Return the new name length. */ int file_simplify_name(char *z, int n){ int i, j; if( n<0 ) n = strlen(z); /* On windows convert all \ characters to / */ #if defined(_WIN32) for(i=0; i<n; i++){ if( z[i]=='\\' ) z[i] = '/'; } #endif /* Removing trailing "/" characters */ while( n>1 && z[n-1]=='/' ){ n--; } /* Remove duplicate '/' characters. Except, two // at the beginning ** of a pathname is allowed since this is important on windows. */ for(i=j=1; i<n; i++){ z[j++] = z[i]; while( z[i]=='/' && i<n-1 && z[i+1]=='/' ) i++; } n = j; /* Skip over zero or more initial "./" sequences */ for(i=0; i<n-1 && z[i]=='.' && z[i+1]=='/'; i+=2){} /* Begin copying from z[i] back to z[j]... */ for(j=0; i<n; i++){ if( z[i]=='/' ){ /* Skip over internal "/." directory components */ if( z[i+1]=='.' && (i+2==n || z[i+2]=='/') ){ i += 1; continue; } /* If this is a "/.." directory component then back out the ** previous term of the directory if it is something other than ".." ** or "." */ if( z[i+1]=='.' && i+2<n && z[i+2]=='.' && (i+3==n || z[i+3]=='/') && backup_dir(z, &j) ){ i += 2; continue; } } if( j>=0 ) z[j] = z[i]; j++; } if( j==0 ) z[j++] = '.'; z[j] = 0; return j; } /* ** COMMAND: test-simplify-name ** ** %fossil test-simplify-name FILENAME... ** ** Print the simplified versions of each FILENAME. */ void cmd_test_simplify_name(void){ int i; char *z; for(i=2; i<g.argc; i++){ z = mprintf("%s", g.argv[i]); printf("[%s] -> ", z); file_simplify_name(z, -1); printf("[%s]\n", z); fossil_free(z); } } /* ** Compute a canonical pathname for a file or directory. ** Make the name absolute if it is relative. ** Remove redundant / characters ** Remove all /./ path elements. ** Convert /A/../ to just / |
︙	 |	 ︙
310 311 312 313 314 315 316 | int i; Blob x; blob_zero(&x); for(i=2; i<g.argc; i++){ char zBuf[100]; const char *zName = g.argv[i]; file_canonical_name(zName, &x); | | | 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 | int i; Blob x; blob_zero(&x); for(i=2; i<g.argc; i++){ char zBuf[100]; const char *zName = g.argv[i]; file_canonical_name(zName, &x); printf("[%s] -> [%s]\n", zName, blob_buffer(&x)); blob_reset(&x); sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", file_size(zName)); printf(" file_size = %s\n", zBuf); sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", file_mtime(zName)); printf(" file_mtime = %s\n", zBuf); printf(" file_isfile = %d\n", file_isfile(zName)); printf(" file_isexe = %d\n", file_isexe(zName)); |
︙	 |	 ︙
557 558 559 560 561 562 563 | sqlite3_randomness(15, &zBuf[j]); for(i=0; i<15; i++, j++){ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; } zBuf[j] = 0; }while( access(zBuf,0)==0 ); } | > > > > > > > > > > > > > > > > > > > > | 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 | sqlite3_randomness(15, &zBuf[j]); for(i=0; i<15; i++, j++){ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; } zBuf[j] = 0; }while( access(zBuf,0)==0 ); } /* ** Return true if a file named zName exists and has identical content ** to the blob pContent. If zName does not exist or if the content is ** different in any way, then return false. */ int file_is_the_same(Blob *pContent, const char *zName){ i64 iSize; int rc; Blob onDisk; iSize = file_size(zName); if( iSize<0 ) return 0; if( iSize!=blob_size(pContent) ) return 0; blob_read_from_file(&onDisk, zName); rc = blob_compare(&onDisk, pContent); blob_reset(&onDisk); return rc==0; } |
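Among the file.c additions, file_simplify_name() now canonicalizes a path in place: backslashes become slashes on Windows, trailing and duplicate slashes are dropped, "/./" components are removed, and "/A/../" sequences are resolved (backing up only over components that are not themselves ".."). A short sketch of the effect, assuming the Fossil source tree; the sample paths are invented:

    char zPath1[] = "a/./b//c/../d";
    char zPath2[] = "./docs/../src/main.c";
    file_simplify_name(zPath1, -1);    /* zPath1 becomes "a/b/d"      */
    file_simplify_name(zPath2, -1);    /* zPath2 becomes "src/main.c" */

The same before/after pairs can be printed from a build with the new command, e.g. fossil test-simplify-name a/./b//c/../d.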
Changes to src/finfo.c.
︙	 |	 ︙
37 38 39 40 41 42 43 | ** a quick status and does not check for up-to-date-ness of the file. ** ** The -p form, there's an optional flag "-r|--revision REVISION". The ** specified version (or the latest checked out version) is printed to ** stdout. */ void finfo_cmd(void){ | < < < < < < < > > > > > > | 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 | ** a quick status and does not check for up-to-date-ness of the file. ** ** The -p form, there's an optional flag "-r|--revision REVISION". The ** specified version (or the latest checked out version) is printed to ** stdout. */ void finfo_cmd(void){ db_must_be_within_tree(); if (find_option("status","s",0)) { Stmt q; Blob line; Blob fname; int vid; if( g.argc!=3 ) usage("-s|--status FILENAME"); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_panic("no checkout to finfo files in"); } vfile_check_signature(vid, 1, 0); file_tree_name(g.argv[2], &fname, 1); db_prepare(&q, "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0)" " FROM vfile WHERE vfile.pathname=%B", &fname); blob_zero(&line); if ( db_step(&q)==SQLITE_ROW ) { Blob uuid; |
︙	 |	 ︙
98 99 100 101 102 103 104 | }else if( find_option("print","p",0) ){ Blob record; Blob fname; const char *zRevision = find_option("revision", "r", 1); file_tree_name(g.argv[2], &fname, 1); if( zRevision ){ | | | 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | }else if( find_option("print","p",0) ){ Blob record; Blob fname; const char *zRevision = find_option("revision", "r", 1); file_tree_name(g.argv[2], &fname, 1); if( zRevision ){ historical_version_of_file(zRevision, blob_str(&fname), &record, 0, 0); }else{ int rid = db_int(0, "SELECT rid FROM vfile WHERE pathname=%B", &fname); if( rid==0 ){ fossil_fatal("no history for file: %b", &fname); } content_get(rid, &record); } |
︙	 |	 ︙
140 141 142 143 144 145 146 | fossil_fatal("no history for file: %b", &fname); } zFilename = blob_str(&fname); db_prepare(&q, "SELECT b.uuid, ci.uuid, date(event.mtime,'localtime')," " coalesce(event.ecomment, event.comment)," " coalesce(event.euser, event.user)" | | > | | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | fossil_fatal("no history for file: %b", &fname); } zFilename = blob_str(&fname); db_prepare(&q, "SELECT b.uuid, ci.uuid, date(event.mtime,'localtime')," " coalesce(event.ecomment, event.comment)," " coalesce(event.euser, event.user)" " FROM mlink, blob b, event, blob ci, filename" " WHERE filename.name=%Q" " AND mlink.fnid=filename.fnid" " AND b.rid=mlink.fid" " AND event.objid=mlink.mid" " AND event.objid=ci.rid" " ORDER BY event.mtime DESC LIMIT %d OFFSET %d", zFilename, iLimit, iOffset ); blob_zero(&line); |
︙	 |	 ︙
184 185 186 187 188 189 190 | } /* ** WEBPAGE: finfo ** URL: /finfo?name=FILENAME ** | | > > > > > > > > > > | > | | | < > > > > > > > > > > > > | | | | 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 | } /* ** WEBPAGE: finfo ** URL: /finfo?name=FILENAME ** ** Show the change history for a single file. ** ** Additional query parameters: ** ** a=DATE Only show changes after DATE ** b=DATE Only show changes before DATE ** n=NUM Show the first NUM changes only */ void finfo_page(void){ Stmt q; const char *zFilename; char zPrevDate[20]; const char *zA; const char *zB; int n; Blob title; Blob sql; GraphContext *pGraph; login_check_credentials(); if( !g.okRead ){ login_needed(); return; } style_header("File History"); login_anonymous_available(); zPrevDate[0] = 0; zFilename = PD("name",""); blob_zero(&sql); blob_appendf(&sql, "SELECT" " datetime(event.mtime,'localtime')," /* Date of change */ " coalesce(event.ecomment, event.comment)," /* Check-in comment */ " coalesce(event.euser, event.user)," /* User who made chng */ " mlink.pid," /* Parent rid */ " mlink.fid," /* File rid */ " (SELECT uuid FROM blob WHERE rid=mlink.pid)," /* Parent file uuid */ " (SELECT uuid FROM blob WHERE rid=mlink.fid)," /* Current file uuid */ " (SELECT uuid FROM blob WHERE rid=mlink.mid)," /* Check-in uuid */ " event.bgcolor," /* Background color */ " (SELECT value FROM tagxref WHERE tagid=%d AND tagtype>0" " AND tagxref.rid=mlink.mid)" /* Tags */ " FROM mlink, event" " WHERE mlink.fnid=(SELECT fnid FROM filename WHERE name=%Q)" " AND event.objid=mlink.mid", TAG_BRANCH, zFilename ); if( (zA = P("a"))!=0 ){ blob_appendf(&sql, " AND event.mtime>=julianday('%q')", zA); } if( (zB = P("b"))!=0 ){ blob_appendf(&sql, " AND event.mtime<=julianday('%q')", zB); } blob_appendf(&sql," ORDER BY event.mtime DESC /*sort*/"); if( (n = atoi(PD("n","0")))>0 ){ blob_appendf(&sql, " LIMIT %d", n); } db_prepare(&q, blob_str(&sql)); blob_reset(&sql); blob_zero(&title); blob_appendf(&title, "History of "); hyperlinked_path(zFilename, &title, 0); @ <h2>%b(&title)</h2> blob_reset(&title); pGraph = graph_init(); @ <div id="canvas" style="position:relative;width:1px;height:1px;"></div> @ <table id="timelineTable" class="timelineTable"> while( db_step(&q)==SQLITE_ROW ){ const char *zDate = db_column_text(&q, 0); const char *zCom = db_column_text(&q, 1); const char *zUser = db_column_text(&q, 2); int fpid = db_column_int(&q, 3); int frid = db_column_int(&q, 4); const char *zPUuid = db_column_text(&q, 5); const char *zUuid = db_column_text(&q, 6); const char *zCkin = db_column_text(&q,7); const char *zBgClr = db_column_text(&q, 8); const char *zBr = db_column_text(&q, 9); int gidx; char zTime[10]; char zShort[20]; char zShortCkin[20]; if( zBr==0 ) zBr = "trunk"; gidx = graph_add_row(pGraph, frid, fpid>0 ? 1 : 0, &fpid, zBr, zBgClr, 0); if( memcmp(zDate, zPrevDate, 10) ){ sqlite3_snprintf(sizeof(zPrevDate), zPrevDate, "%.10s", zDate); @ <tr><td> @ <div class="divider">%s(zPrevDate)</div> @ </td></tr> } memcpy(zTime, &zDate[11], 5); zTime[5] = 0; @ <tr><td class="timelineTime"> |
︙	 |	 ︙
289 290 291 292 293 294 295 | @ <a href="%s(g.zTop)/annotate?checkin=%S(zCkin)&filename=%h(z)"> @ [annotate]</a> } @ </td></tr> } db_finalize(&q); if( pGraph ){ | | > | | | 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | @ <a href="%s(g.zTop)/annotate?checkin=%S(zCkin)&filename=%h(z)"> @ [annotate]</a> } @ </td></tr> } db_finalize(&q); if( pGraph ){ graph_finish(pGraph, 0); if( pGraph->nErr ){ graph_free(pGraph); pGraph = 0; }else{ @ <tr><td></td><td> @ <div id="grbtm" style="width:%d(pGraph->mxRail*20+30)px;"></div> @ </td></tr> } } @ </table> timeline_output_graph_javascript(pGraph, 0); style_footer(); } |
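The rewritten /finfo page builds its history query incrementally, which is what makes the new URL parameters cheap to support: a=DATE and b=DATE bound event.mtime via julianday(), and n=NUM appends a LIMIT. For example, /finfo?name=src/file.c&b=2011-05-19&n=20 shows at most the twenty most recent changes made on or before that date. A condensed sketch of the pattern, assuming Fossil's Blob and CGI-parameter helpers (the base query is elided and the parameter values are illustrative):

    Blob sql;
    const char *zA = P("a");                 /* only changes after this date  */
    const char *zB = P("b");                 /* only changes before this date */
    int n = atoi(PD("n","0"));               /* cap on the number of rows     */
    blob_zero(&sql);
    blob_appendf(&sql, "SELECT ... FROM mlink, event WHERE ...");  /* base query elided */
    if( zA ) blob_appendf(&sql, " AND event.mtime>=julianday('%q')", zA);
    if( zB ) blob_appendf(&sql, " AND event.mtime<=julianday('%q')", zB);
    blob_appendf(&sql, " ORDER BY event.mtime DESC");
    if( n>0 ) blob_appendf(&sql, " LIMIT %d", n);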
Added src/glob.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 | /* ** Copyright (c) 2011 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to pattern matching using "glob" syntax. */ #include "config.h" #include "glob.h" #include <assert.h> /* ** Construct and return a string which is an SQL expression that will ** be TRUE if value zVal matches any of the GLOB expressions in the list ** zGlobList. For example: ** ** zVal: "x" ** zGlobList: "*.o,*.obj" ** ** Result: "(x GLOB '*.o' OR x GLOB '*.obj')" ** ** Each element of the GLOB list may optionally be enclosed in either '...' ** or "...". This allows commas in the expression. Whitespace at the ** beginning and end of each GLOB pattern is ignored, except when enclosed ** within '...' or "...". ** ** This routine makes no effort to free the memory space it uses. 
*/ char *glob_expr(const char *zVal, const char *zGlobList){ Blob expr; char *zSep = "("; int nTerm = 0; int i; int cTerm; if( zGlobList==0 || zGlobList[0]==0 ) return "0"; blob_zero(&expr); while( zGlobList[0] ){ while( fossil_isspace(zGlobList[0]) || zGlobList[0]==',' ) zGlobList++; if( zGlobList[0]==0 ) break; if( zGlobList[0]=='\'' || zGlobList[0]=='"' ){ cTerm = zGlobList[0]; zGlobList++; }else{ cTerm = ','; } for(i=0; zGlobList[i] && zGlobList[i]!=cTerm; i++){} if( cTerm==',' ){ while( i>0 && fossil_isspace(zGlobList[i-1]) ){ i--; } } blob_appendf(&expr, "%s%s GLOB '%.*q'", zSep, zVal, i, zGlobList); zSep = " OR "; if( cTerm!=',' && zGlobList[i] ) i++; zGlobList += i; if( zGlobList[0] ) zGlobList++; nTerm++; } if( nTerm ){ blob_appendf(&expr, ")"); return blob_str(&expr); }else{ return "0"; } } #if INTERFACE /* ** A Glob object holds a set of patterns read to be matched against ** a string. */ struct Glob { int nPattern; /* Number of patterns */ char **azPattern; /* Array of pointers to patterns */ }; #endif /* INTERFACE */ /* ** zPatternList is a comma-separate list of glob patterns. Parse up ** that list and use it to create a new Glob object. ** ** Elements of the glob list may be optionally enclosed in single our ** double-quotes. This allows a comma to be part of a glob. ** ** Leading and trailing spaces on unquoted glob patterns are ignored. ** ** An empty or null pattern list results in a null glob, which will ** match nothing. */ Glob *glob_create(const char *zPatternList){ int nList; /* Size of zPatternList in bytes */ int i, j; /* Loop counters */ Glob *p; /* The glob being created */ char *z; /* Copy of the pattern list */ char delimiter; /* '\'' or '\"' or 0 */ if( zPatternList==0 || zPatternList[0]==0 ) return 0; nList = strlen(zPatternList); p = fossil_malloc( sizeof(*p) + nList+1 ); memset(p, 0, sizeof(*p)); z = (char*)&p[1]; memcpy(z, zPatternList, nList+1); while( z[0] ){ while( z[0]==',' || z[0]==' ' ) z++; /* Skip leading spaces */ if( z[0]=='\'' || z[0]=='"' ){ delimiter = z[0]; z++; }else{ delimiter = ','; } if( z[0]==0 ) break; p->azPattern = fossil_realloc(p->azPattern, (p->nPattern+1)*sizeof(char*) ); p->azPattern[p->nPattern++] = z; for(i=0; z[i] && z[i]!=delimiter; i++){} if( delimiter==',' ){ /* Remove trailing spaces on a comma-delimited pattern */ for(j=i; j>1 && z[j-1]==' '; j--){} if( j<i ) z[j] = 0; } if( z[i]==0 ) break; z[i] = 0; z += i+1; } return p; } /* ** Return non-zero if string z matches glob pattern zGlob and zero if the ** pattern does not match. ** ** Globbing rules: ** ** '*' Matches any sequence of zero or more characters. ** ** '?' Matches exactly one character. ** ** [...] Matches one character from the enclosed list of ** characters. ** ** [^...] Matches one character not in the enclosed list. */ int strglob(const char *zGlob, const char *z){ int c, c2; int invert; int seen; while( (c = (*(zGlob++)))!=0 ){ if( c=='*' ){ while( (c=(*(zGlob++))) == '*' || c=='?' ){ if( c=='?' && (*(z++))==0 ) return 0; } if( c==0 ){ return 1; }else if( c=='[' ){ while( *z && strglob(zGlob-1,z)==0 ){ z++; } return (*z)!=0; } while( (c2 = (*(z++)))!=0 ){ while( c2!=c ){ c2 = *(z++); if( c2==0 ) return 0; } if( strglob(zGlob,z) ) return 1; } return 0; }else if( c=='?' 
){ if( (*(z++))==0 ) return 0; }else if( c=='[' ){ int prior_c = 0; seen = 0; invert = 0; c = *(z++); if( c==0 ) return 0; c2 = *(zGlob++); if( c2=='^' ){ invert = 1; c2 = *(zGlob++); } if( c2==']' ){ if( c==']' ) seen = 1; c2 = *(zGlob++); } while( c2 && c2!=']' ){ if( c2=='-' && zGlob[0]!=']' && zGlob[0]!=0 && prior_c>0 ){ c2 = *(zGlob++); if( c>=prior_c && c<=c2 ) seen = 1; prior_c = 0; }else{ if( c==c2 ){ seen = 1; } prior_c = c2; } c2 = *(zGlob++); } if( c2==0 || (seen ^ invert)==0 ) return 0; }else{ if( c!=(*(z++)) ) return 0; } } return *z==0; } /* ** Return true (non-zero) if zString matches any of the patterns in ** the Glob. The value returned is actually a 1-basd index of the pattern ** that matched. Return 0 if none of the patterns match zString. ** ** A NULL glob matches nothing. */ int glob_match(Glob *pGlob, const char *zString){ int i; if( pGlob==0 ) return 0; for(i=0; i<pGlob->nPattern; i++){ if( strglob(pGlob->azPattern[i], zString) ) return i+1; } return 0; } /* ** Free all memory associated with the given Glob object */ void glob_free(Glob *pGlob){ if( pGlob ){ fossil_free(pGlob->azPattern); fossil_free(pGlob); } } /* ** COMMAND: test-glob ** ** Usage: %fossil test-glob PATTERN STRING... ** ** PATTERN is a comma-separated list of glob patterns. Show which of ** the STRINGs that follow match the PATTERN. */ void glob_test_cmd(void){ Glob *pGlob; int i; if( g.argc<4 ) usage("PATTERN STRING ..."); printf("SQL expression: %s\n", glob_expr("x", g.argv[2])); pGlob = glob_create(g.argv[2]); for(i=0; i<pGlob->nPattern; i++){ printf("pattern[%d] = [%s]\n", i, pGlob->azPattern[i]); } for(i=3; i<g.argc; i++){ printf("%d %s\n", glob_match(pGlob, g.argv[i]), g.argv[i]); } glob_free(pGlob); } |
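The new src/glob.c gives Fossil a small reusable pattern-list facility: glob_create() parses a comma-separated list of glob patterns (single or double quotes allow commas and significant spaces inside a pattern), glob_match() returns the 1-based index of the first pattern that matches a string or 0 for no match, glob_free() releases the set, and glob_expr() produces the equivalent SQL GLOB expression for use inside a query. A usage sketch assuming the Fossil source tree; the pattern list and file names are invented:

    Glob *pIgnore = glob_create("*.o, *.obj, 'build cache/*'");
    if( glob_match(pIgnore, "src/main.o") ){
      /* matched: the return value is the 1-based index of the pattern hit */
    }
    glob_free(pIgnore);

    /* The same test expressed as SQL, for use in a WHERE clause: */
    char *zCond = glob_expr("pathname", "*.o,*.obj");
    /* zCond is now:  (pathname GLOB '*.o' OR pathname GLOB '*.obj')  */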
Changes to src/graph.c.
︙	 |	 ︙
19 20 21 22 23 24 25 | */ #include "config.h" #include "graph.h" #include <assert.h> #if INTERFACE | < | > > | | | | | | | > > | > | | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 | */ #include "config.h" #include "graph.h" #include <assert.h> #if INTERFACE #define GR_MAX_RAIL 32 /* Max number of "rails" to display */ /* The graph appears vertically beside a timeline. Each row in the ** timeline corresponds to a row in the graph. GraphRow.idx is 0 for ** the top-most row and increases moving down. Hence (in the absence of ** time skew) parents have a larger index than their children. */ struct GraphRow { int rid; /* The rid for the check-in */ i8 nParent; /* Number of parents */ int *aParent; /* Array of parents. 0 element is primary .*/ char *zBranch; /* Branch name */ char *zBgClr; /* Background Color */ GraphRow *pNext; /* Next row down in the list of all rows */ GraphRow *pPrev; /* Previous row */ int idx; /* Row index. First is 1. 0 used for "none" */ int idxTop; /* Direct descendent highest up on the graph */ GraphRow *pChild; /* Child immediately above this node */ u8 isDup; /* True if this is duplicate of a prior entry */ u8 isLeaf; /* True if this is a leaf node */ u8 timeWarp; /* Child is earlier in time */ u8 bDescender; /* True if riser from bottom of graph to here. */ i8 iRail; /* Which rail this check-in appears on. 0-based.*/ i8 mergeOut; /* Merge out to this rail. -1 if no merge-out */ u8 mergeIn[GR_MAX_RAIL]; /* Merge in from non-zero rails */ int aiRiser[GR_MAX_RAIL]; /* Risers from this node to a higher row. */ int mergeUpto; /* Draw the mergeOut rail up to this level */ u32 mergeDown; /* Draw merge lines up from bottom of graph */ u32 railInUse; /* Mask of occupied rails at this row */ }; /* Context while building a graph */ struct GraphContext { int nErr; /* Number of errors encountered */ int mxRail; /* Number of rails required to render the graph */ |
︙	 |	 ︙
83 84 85 86 87 88 89 | ** Create and initialize a GraphContext */ GraphContext *graph_init(void){ return (GraphContext*)safeMalloc( sizeof(GraphContext) ); } /* | | | > > > > > > > > > | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | ** Create and initialize a GraphContext */ GraphContext *graph_init(void){ return (GraphContext*)safeMalloc( sizeof(GraphContext) ); } /* ** Clear all content from a graph */ static void graph_clear(GraphContext *p){ int i; GraphRow *pRow; while( p->pFirst ){ pRow = p->pFirst; p->pFirst = pRow->pNext; free(pRow); } for(i=0; i<p->nBranch; i++) free(p->azBranch[i]); free(p->azBranch); free(p->apHash); memset(p, 0, sizeof(*p)); p->nErr = 1; } /* ** Destroy a GraphContext; */ void graph_free(GraphContext *p){ graph_clear(p); free(p); } /* ** Insert a row into the hash table. pRow->rid is the key. Keys must ** be unique. If there is already another row with the same rid, ** overwrite the prior entry if and only if the overwrite flag is set. |
︙	 |	 ︙
142 143 144 145 146 147 148 | ** equality by comparing pointers. ** ** Note: also used for background color names. */ static char *persistBranchName(GraphContext *p, const char *zBranch){ int i; for(i=0; i<p->nBranch; i++){ | | | > > > | | > > > | 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 | ** equality by comparing pointers. ** ** Note: also used for background color names. */ static char *persistBranchName(GraphContext *p, const char *zBranch){ int i; for(i=0; i<p->nBranch; i++){ if( fossil_strcmp(zBranch, p->azBranch[i])==0 ) return p->azBranch[i]; } p->nBranch++; p->azBranch = fossil_realloc(p->azBranch, sizeof(char*)*p->nBranch); p->azBranch[p->nBranch-1] = mprintf("%s", zBranch); return p->azBranch[p->nBranch-1]; } /* ** Add a new row to the graph context. Rows are added from top to bottom. */ int graph_add_row( GraphContext *p, /* The context to which the row is added */ int rid, /* RID for the check-in */ int nParent, /* Number of parents */ int *aParent, /* Array of parents */ const char *zBranch, /* Branch for this check-in */ const char *zBgClr, /* Background color. NULL or "" for white. */ int isLeaf /* True if this row is a leaf */ ){ GraphRow *pRow; int nByte; if( p->nErr ) return 0; nByte = sizeof(GraphRow); nByte += sizeof(pRow->aParent[0])*nParent; pRow = (GraphRow*)safeMalloc( nByte ); pRow->aParent = (int*)&pRow[1]; pRow->rid = rid; pRow->nParent = nParent; pRow->zBranch = persistBranchName(p, zBranch); pRow->isLeaf = isLeaf; memset(pRow->aiRiser, -1, sizeof(pRow->aiRiser)); if( zBgClr==0 || zBgClr[0]==0 ) zBgClr = "white"; pRow->zBgClr = persistBranchName(p, zBgClr); memcpy(pRow->aParent, aParent, sizeof(aParent[0])*nParent); if( p->pFirst==0 ){ p->pFirst = pRow; }else{ p->pLast->pNext = pRow; |
︙	 |	 ︙
217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 | if( dist<iBestDist ){ iBestDist = dist; iBest = i; } } } if( iBestDist>1000 ) p->nErr++; return iBest; } /* ** Assign all children of node pBottom to the same rail as pBottom. */ static void assignChildrenToRail(GraphRow *pBottom){ int iRail = pBottom->iRail; GraphRow *pCurrent; GraphRow *pPrior; u32 mask = 1<<iRail; pBottom->iRail = iRail; pBottom->railInUse |= mask; pPrior = pBottom; for(pCurrent=pBottom->pChild; pCurrent; pCurrent=pCurrent->pChild){ assert( pPrior->idx > pCurrent->idx ); assert( pCurrent->iRail<0 ); pCurrent->iRail = iRail; pCurrent->railInUse |= mask; | > | > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > | > | > > | | | | | > > | | | | | > > > > > | < < > > | > > > > > > > > < < | < > > > > | > | > > > > > > > > > > > > > | > > | < | | | < | < > | > | | < > > > > < > > > > > > > > | < < < < < < < | > > > > < > > | < < < | < | 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 | if( dist<iBestDist ){ iBestDist = dist; iBest = i; } } } if( iBestDist>1000 ) p->nErr++; if( iBest>p->mxRail ) p->mxRail = iBest; return iBest; } /* ** Assign all children of node pBottom to the same rail as pBottom. */ static void assignChildrenToRail(GraphRow *pBottom){ int iRail = pBottom->iRail; GraphRow *pCurrent; GraphRow *pPrior; u32 mask = 1<<iRail; pBottom->iRail = iRail; pBottom->railInUse |= mask; pPrior = pBottom; for(pCurrent=pBottom->pChild; pCurrent; pCurrent=pCurrent->pChild){ assert( pPrior->idx > pCurrent->idx ); assert( pCurrent->iRail<0 ); pCurrent->iRail = iRail; pCurrent->railInUse |= mask; pPrior->aiRiser[iRail] = pCurrent->idx; while( pPrior->idx > pCurrent->idx ){ pPrior->railInUse |= mask; pPrior = pPrior->pPrev; assert( pPrior!=0 ); } } } /* ** Create a merge-arrow riser going from pParent up to pChild. */ static void createMergeRiser( GraphContext *p, GraphRow *pParent, GraphRow *pChild ){ int u; u32 mask; GraphRow *pLoop; if( pParent->mergeOut<0 ){ u = pParent->aiRiser[pParent->iRail]; if( u>=0 && u<pChild->idx ){ /* The thick arrow up to the next primary child of pDesc goes ** further up than the thin merge arrow riser, so draw them both ** on the same rail. 
*/ pParent->mergeOut = pParent->iRail*4; if( pParent->iRail<pChild->iRail ) pParent->mergeOut += 2; pParent->mergeUpto = pChild->idx; }else{ /* The thin merge arrow riser is taller than the thick primary ** child riser, so use separate rails. */ int iTarget = pParent->iRail; pParent->mergeOut = findFreeRail(p, pChild->idx, pParent->idx-1, 0, iTarget)*4 + 1; pParent->mergeUpto = pChild->idx; mask = 1<<(pParent->mergeOut/4); for(pLoop=pChild->pNext; pLoop && pLoop->rid!=pParent->rid; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } } pChild->mergeIn[pParent->mergeOut/4] = (pParent->mergeOut&3)+1; } /* ** Compute the maximum rail number. */ static void find_max_rail(GraphContext *p){ GraphRow *pRow; p->mxRail = 0; for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( pRow->iRail>p->mxRail ) p->mxRail = pRow->iRail; if( pRow->mergeOut/4>p->mxRail ) p->mxRail = pRow->mergeOut/4; while( p->mxRail<GR_MAX_RAIL && pRow->mergeDown>((1<<(p->mxRail+1))-1) ){ p->mxRail++; } } } /* ** Compute the complete graph */ void graph_finish(GraphContext *p, int omitDescenders){ GraphRow *pRow, *pDesc, *pDup, *pLoop, *pParent; int i; u32 mask; u32 inUse; int hasDup = 0; /* True if one or more isDup entries */ const char *zTrunk; if( p==0 || p->pFirst==0 || p->nErr ) return; p->nErr = 1; /* Assume an error until proven otherwise */ /* Initialize all rows */ p->nHash = p->nRow*2 + 1; p->apHash = safeMalloc( sizeof(p->apHash[0])*p->nHash ); for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( pRow->pNext ) pRow->pNext->pPrev = pRow; pRow->iRail = -1; pRow->mergeOut = -1; if( (pDup = hashFind(p, pRow->rid))!=0 ){ hasDup = 1; pDup->isDup = 1; } hashInsert(p, pRow, 1); } p->mxRail = -1; /* Purge merge-parents that are out-of-graph if descenders are not ** drawn. ** ** Each node has one primary parent and zero or more "merge" parents. ** A merge parent is a prior checkin from which changes were merged into ** the current check-in. If a merge parent is not in the visible section ** of this graph, then no arrows will be drawn for it, so remove it from ** the aParent[] array. */ if( omitDescenders ){ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ if( hashFind(p, pRow->aParent[i])==0 ){ pRow->aParent[i] = pRow->aParent[--pRow->nParent]; i--; } } } } /* Find the pChild pointer for each node. ** ** The pChild points to the node directly above on the same rail. ** The pChild must be in the same branch. Leaf nodes have a NULL ** pChild. ** ** In the case of a fork, choose the pChild that results in the ** longest rail. */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( pRow->isDup ) continue; if( pRow->nParent==0 ) continue; /* Root node */ pParent = hashFind(p, pRow->aParent[0]); if( pParent==0 ) continue; /* Parent off-screen */ if( pParent->zBranch!=pRow->zBranch ) continue; /* Different branch */ if( pParent->idx <= pRow->idx ){ pParent->timeWarp = 1; continue; /* Time-warp */ } if( pRow->idxTop < pParent->idxTop ){ pParent->pChild = pRow; pParent->idxTop = pRow->idxTop; } } /* Identify rows where the primary parent is off screen. Assign ** each to a rail and draw descenders to the bottom of the screen. ** ** Strive to put the "trunk" branch on far left. 
*/ zTrunk = persistBranchName(p, "trunk"); for(i=0; i<2; i++){ for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ if( pRow->isDup ) continue; if( i==0 ){ if( pRow->zBranch!=zTrunk ) continue; }else { if( pRow->iRail>=0 ) continue; } if( pRow->nParent==0 || hashFind(p,pRow->aParent[0])==0 ){ if( omitDescenders ){ pRow->iRail = findFreeRail(p, pRow->idxTop, pRow->idx, 0, 0); }else{ pRow->iRail = ++p->mxRail; } if( p->mxRail>=GR_MAX_RAIL ) return; mask = 1<<(pRow->iRail); if( !omitDescenders ){ pRow->bDescender = pRow->nParent>0; for(pLoop=pRow; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } assignChildrenToRail(pRow); } } } /* Assign rails to all rows that are still unassigned. */ inUse = (1<<(p->mxRail+1))-1; for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ int parentRid; if( pRow->iRail>=0 ){ if( pRow->pChild==0 && !pRow->timeWarp ){ if( omitDescenders || count_nonbranch_children(pRow->rid)==0 ){ inUse &= ~(1<<pRow->iRail); }else{ pRow->aiRiser[pRow->iRail] = 0; mask = 1<<pRow->iRail; for(pLoop=pRow; pLoop; pLoop=pLoop->pPrev){ pLoop->railInUse |= mask; } } } continue; } if( pRow->isDup ){ continue; }else{ assert( pRow->nParent>0 ); parentRid = pRow->aParent[0]; pParent = hashFind(p, parentRid); if( pParent==0 ){ pRow->iRail = ++p->mxRail; if( p->mxRail>=GR_MAX_RAIL ) return; pRow->railInUse = 1<<pRow->iRail; continue; } if( pParent->idx>pRow->idx ){ /* Common case: Child occurs after parent and is above the ** parent in the timeline */ pRow->iRail = findFreeRail(p, 0, pParent->idx, inUse, pParent->iRail); if( p->mxRail>=GR_MAX_RAIL ) return; pParent->aiRiser[pRow->iRail] = pRow->idx; }else{ /* Timewarp case: Child occurs earlier in time than parent and ** appears below the parent in the timeline. */ int iDownRail = ++p->mxRail; if( iDownRail<1 ) iDownRail = ++p->mxRail; pRow->iRail = ++p->mxRail; if( p->mxRail>=GR_MAX_RAIL ) return; pRow->railInUse = 1<<pRow->iRail; pParent->aiRiser[iDownRail] = pRow->idx; mask = 1<<iDownRail; inUse |= mask; for(pLoop=p->pFirst; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } } mask = 1<<pRow->iRail; pRow->railInUse |= mask; if( pRow->pChild==0 ){ inUse &= ~mask; }else{ inUse |= mask; assignChildrenToRail(pRow); } if( pParent ){ for(pLoop=pParent->pPrev; pLoop && pLoop!=pRow; pLoop=pLoop->pPrev){ pLoop->railInUse |= mask; } } } /* ** Insert merge rails and merge arrows */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ int parentRid = pRow->aParent[i]; pDesc = hashFind(p, parentRid); if( pDesc==0 ){ /* Merge from a node that is off-screen */ int iMrail = findFreeRail(p, pRow->idx, p->nRow, 0, 0); if( p->mxRail>=GR_MAX_RAIL ) return; mask = 1<<iMrail; pRow->mergeIn[iMrail] = 2; pRow->mergeDown |= mask; for(pLoop=pRow->pNext; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } }else{ /* Merge from an on-screen node */ createMergeRiser(p, pDesc, pRow); if( p->mxRail>=GR_MAX_RAIL ) return; } } } /* ** Insert merge rails from primaries to duplicates. 
*/ if( hasDup ){ int dupRail; int mxRail; find_max_rail(p); mxRail = p->mxRail; dupRail = mxRail+1; if( p->mxRail>=GR_MAX_RAIL ) return; for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( !pRow->isDup ) continue; pRow->iRail = dupRail; pDesc = hashFind(p, pRow->rid); assert( pDesc!=0 && pDesc!=pRow ); createMergeRiser(p, pDesc, pRow); if( pDesc->mergeOut/4>mxRail ) mxRail = pDesc->mergeOut/4; } if( dupRail<=mxRail ){ dupRail = mxRail+1; for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( pRow->isDup ) pRow->iRail = dupRail; } } if( mxRail>=GR_MAX_RAIL ) return; } /* ** Find the maximum rail number. */ find_max_rail(p); p->nErr = 0; } |
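The reworked graph layout is driven exactly the way the new /finfo page uses it: create a context, add one row per timeline entry from top (newest) to bottom, then let graph_finish() assign rails, risers and merge arrows. A minimal sketch assuming the Fossil source tree and an open repository database (rids, parents and branch names are invented):

    GraphContext *pGraph = graph_init();
    int aParent[1];

    aParent[0] = 101;                                        /* primary parent rid */
    graph_add_row(pGraph, 102, 1, aParent, "trunk", 0, 1);   /* leaf row           */
    graph_add_row(pGraph, 101, 0, aParent, "trunk", 0, 0);   /* its parent, below  */

    graph_finish(pGraph, 0);             /* 0 = also draw descender risers */
    if( pGraph->nErr ){
      graph_free(pGraph);                /* layout failed; render no graph */
      pGraph = 0;
    }else{
      /* pGraph->mxRail now gives the width needed for the rail area */
    }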
Added src/gzip.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 | /* ** Copyright (c) 2011 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to incrementally generate a GZIP compressed ** file. The GZIP format is described in RFC-1952. ** ** State information is stored in static variables, so this implementation ** can only be building up a single GZIP file at a time. */ #include <assert.h> #include <zlib.h> #include "config.h" #include "gzip.h" /* ** State information for the GZIP file under construction. */ struct gzip_state { int eState; /* 0: idle 1: header 2: compressing */ int iCRC; /* The checksum */ z_stream stream; /* The working compressor */ Blob out; /* Results stored here */ } gzip; /* ** Write a 32-bit integer as little-endian into the given buffer. */ static void put32(char *z, int v){ z[0] = v & 0xff; z[1] = (v>>8) & 0xff; z[2] = (v>>16) & 0xff; z[3] = (v>>24) & 0xff; } /* ** Begin constructing a gzip file. */ void gzip_begin(void){ char aHdr[10]; sqlite3_int64 now; assert( gzip.eState==0 ); blob_zero(&gzip.out); aHdr[0] = 0x1f; aHdr[1] = 0x8b; aHdr[2] = 8; aHdr[3] = 0; now = db_int64(0, "SELECT (julianday('now') - 2440587.5)*86400.0"); put32(&aHdr[4], now&0xffffffff); aHdr[8] = 2; aHdr[9] = 255; blob_append(&gzip.out, aHdr, 10); gzip.iCRC = 0; gzip.eState = 1; } /* ** Add nIn bytes of content from pIn to the gzip file. */ #define GZIP_BUFSZ 100000 void gzip_step(const char *pIn, int nIn){ char *zOutBuf; int nOut; nOut = nIn + nIn/10 + 100; if( nOut<100000 ) nOut = 100000; zOutBuf = fossil_malloc(nOut); gzip.stream.avail_in = nIn; gzip.stream.next_in = (unsigned char*)pIn; gzip.stream.avail_out = nOut; gzip.stream.next_out = (unsigned char*)zOutBuf; if( gzip.eState==1 ){ gzip.stream.zalloc = (alloc_func)0; gzip.stream.zfree = (free_func)0; gzip.stream.opaque = 0; deflateInit2(&gzip.stream, 9, Z_DEFLATED, -MAX_WBITS,8,Z_DEFAULT_STRATEGY); gzip.eState = 2; } gzip.iCRC = crc32(gzip.iCRC, gzip.stream.next_in, gzip.stream.avail_in); do{ deflate(&gzip.stream, nIn==0 ? 
Z_FINISH : 0); blob_append(&gzip.out, zOutBuf, nOut - gzip.stream.avail_out); gzip.stream.avail_out = nOut; gzip.stream.next_out = (unsigned char*)zOutBuf; }while( gzip.stream.avail_in>0 ); fossil_free(zOutBuf); } /* ** Finish the gzip file and put the content in *pOut */ void gzip_finish(Blob *pOut){ char aTrailer[8]; assert( gzip.eState>0 ); gzip_step("", 0); deflateEnd(&gzip.stream); put32(aTrailer, gzip.iCRC); put32(&aTrailer[4], gzip.stream.total_in); blob_append(&gzip.out, aTrailer, 8); *pOut = gzip.out; blob_zero(&gzip.out); gzip.eState = 0; } /* ** COMMAND: test-gzip ** ** Usage: %fossil test-gzip FILENAME ** ** Compress a file using gzip. */ void test_gzip_cmd(void){ Blob b; char *zOut; if( g.argc!=3 ) usage("FILENAME"); sqlite3_open(":memory:", &g.db); gzip_begin(); blob_read_from_file(&b, g.argv[2]); zOut = mprintf("%s.gz", g.argv[2]); gzip_step(blob_buffer(&b), blob_size(&b)); blob_reset(&b); gzip_finish(&b); blob_write_to_file(&b, zOut); blob_reset(&b); fossil_free(zOut); } |
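The new src/gzip.c is a small incremental gzip writer: gzip_begin() emits the 10-byte header, gzip_step() may be called repeatedly to compress more data, and gzip_finish() flushes the deflate stream and appends the CRC32/length trailer, leaving the whole archive in a Blob. Because its state lives in static variables only one archive can be built at a time, and gzip_begin() needs an open database to timestamp the header, which is why the test-gzip command opens an in-memory one. A usage sketch assuming the Fossil source tree; the file names are invented:

    Blob in, out;
    blob_read_from_file(&in, "notes.txt");
    gzip_begin();                                    /* writes the gzip header      */
    gzip_step(blob_buffer(&in), blob_size(&in));     /* may be called repeatedly    */
    gzip_finish(&out);                               /* trailer; archive now in out */
    blob_write_to_file(&out, "notes.txt.gz");
    blob_reset(&in);
    blob_reset(&out);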
Changes to src/http.c.
︙	 |	 ︙
60 61 62 63 64 65 66 67 68 69 | zPw = 0; }else{ /* Password failure while doing a sync from the command-line interface */ url_prompt_for_password(); zPw = g.urlPasswd; if( !g.dontKeepUrl ) db_set("last-sync-pw", obscure(zPw), 0); } /* The login card wants the SHA1 hash of the password, so convert the ** password to its SHA1 hash it it isn't already a SHA1 hash. | > > > > > > < < < < < | < < < < < < | 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 | zPw = 0; }else{ /* Password failure while doing a sync from the command-line interface */ url_prompt_for_password(); zPw = g.urlPasswd; if( !g.dontKeepUrl ) db_set("last-sync-pw", obscure(zPw), 0); } /* If the first character of the password is "#", then that character is ** not really part of the password - it is an indicator that we should ** use Basic Authentication. So skip that character. */ if( zPw && zPw[0]=='#' ) zPw++; /* The login card wants the SHA1 hash of the password, so convert the ** password to its SHA1 hash it it isn't already a SHA1 hash. */ if( zPw && zPw[0] ) zPw = sha1_shared_secret(zPw, zLogin, 0); blob_append(&pw, zPw, -1); sha1sum_blob(&pw, &sig); blob_appendf(pLogin, "login %F %b %b\n", zLogin, &nonce, &sig); blob_reset(&pw); blob_reset(&sig); blob_reset(&nonce); |
︙	 |	 ︙
103 104 105 106 107 108 109 | if( i>0 && g.urlPath[i-1]=='/' ){ zSep = ""; }else{ zSep = "/"; } blob_appendf(pHdr, "POST %s%sxfer/xfer HTTP/1.0\r\n", g.urlPath, zSep); if( g.urlProxyAuth ){ | | > > > > > > > | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | if( i>0 && g.urlPath[i-1]=='/' ){ zSep = ""; }else{ zSep = "/"; } blob_appendf(pHdr, "POST %s%sxfer/xfer HTTP/1.0\r\n", g.urlPath, zSep); if( g.urlProxyAuth ){ blob_appendf(pHdr, "Proxy-Authorization: %s\r\n", g.urlProxyAuth); } if( g.urlPasswd && g.urlUser && g.urlPasswd[0]=='#' ){ char *zCredentials = mprintf("%s:%s", g.urlUser, &g.urlPasswd[1]); char *zEncoded = encode64(zCredentials, -1); blob_appendf(pHdr, "Authorization: Basic %s\r\n", zEncoded); fossil_free(zEncoded); fossil_free(zCredentials); } blob_appendf(pHdr, "Host: %s\r\n", g.urlHostname); blob_appendf(pHdr, "User-Agent: Fossil/" MANIFEST_VERSION "\r\n"); if( g.fHttpTrace ){ blob_appendf(pHdr, "Content-Type: application/x-fossil-debug\r\n"); }else{ blob_appendf(pHdr, "Content-Type: application/x-fossil\r\n"); |
︙	 |	 ︙
125 126 127 128 129 130 131 | ** in pRecv. pRecv is assumed to be uninitialized when ** this routine is called - this routine will initialize it. ** ** The server address is contain in the "g" global structure. The ** url_parse() routine should have been called prior to this routine ** in order to fill this structure appropriately. */ | | > | > | | < < > > > > > | | | | | | < | > > > > > > | > | | 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 | ** in pRecv. pRecv is assumed to be uninitialized when ** this routine is called - this routine will initialize it. ** ** The server address is contain in the "g" global structure. The ** url_parse() routine should have been called prior to this routine ** in order to fill this structure appropriately. */ int http_exchange(Blob *pSend, Blob *pReply, int useLogin){ Blob login; /* The login card */ Blob payload; /* The complete payload including login card */ Blob hdr; /* The HTTP request header */ int closeConnection; /* True to close the connection when done */ int iLength; /* Length of the reply payload */ int rc; /* Result code */ int iHttpVersion; /* Which version of HTTP protocol server uses */ char *zLine; /* A single line of the reply header */ int i; /* Loop counter */ int isError = 0; /* True if the reply is an error message */ int isCompressed = 1; /* True if the reply is compressed */ if( transport_open() ){ fossil_warning(transport_errmsg()); return 1; } /* Construct the login card and prepare the complete payload */ blob_zero(&login); if( useLogin ) http_build_login_card(pSend, &login); if( g.fHttpTrace ){ payload = login; blob_append(&payload, blob_buffer(pSend), blob_size(pSend)); }else{ blob_compress2(&login, pSend, &payload); blob_reset(&login); } /* Construct the HTTP request header */ http_build_header(&payload, &hdr); /* When tracing, write the transmitted HTTP message both to standard ** output and into a file. The file can then be used to drive the ** server-side like this: ** ** ./fossil test-http <http-request-1.txt */ if( g.fHttpTrace ){ static int traceCnt = 0; char *zOutFile; FILE *out; traceCnt++; zOutFile = mprintf("http-request-%d.txt", traceCnt); out = fopen(zOutFile, "w"); if( out ){ fwrite(blob_buffer(&hdr), 1, blob_size(&hdr), out); fwrite(blob_buffer(&payload), 1, blob_size(&payload), out); fclose(out); } free(zOutFile); zOutFile = mprintf("http-reply-%d.txt", traceCnt); out = fopen(zOutFile, "w"); transport_log(out); free(zOutFile); } /* ** Send the request to the server. 
*/ transport_send(&hdr); transport_send(&payload); blob_reset(&hdr); blob_reset(&payload); transport_flip(); /* ** Read and interpret the server reply */ closeConnection = 1; iLength = -1; while( (zLine = transport_receive_line())!=0 && zLine[0]!=0 ){ /* printf("[%s]\n", zLine); fflush(stdout); */ if( fossil_strnicmp(zLine, "http/1.", 7)==0 ){ if( sscanf(zLine, "HTTP/1.%d %d", &iHttpVersion, &rc)!=2 ) goto write_err; if( rc!=200 && rc!=302 ){ int ii; for(ii=7; zLine[ii] && zLine[ii]!=' '; ii++){} while( zLine[ii]==' ' ) ii++; fossil_warning("server says: %s", &zLine[ii]); goto write_err; } if( iHttpVersion==0 ){ closeConnection = 1; }else{ closeConnection = 0; } }else if( fossil_strnicmp(zLine, "content-length:", 15)==0 ){ for(i=15; fossil_isspace(zLine[i]); i++){} iLength = atoi(&zLine[i]); }else if( fossil_strnicmp(zLine, "connection:", 11)==0 ){ char c; for(i=11; fossil_isspace(zLine[i]); i++){} c = zLine[i]; if( c=='c' || c=='C' ){ closeConnection = 1; }else if( c=='k' || c=='K' ){ closeConnection = 0; } }else if( rc==302 && fossil_strnicmp(zLine, "location:", 9)==0 ){ int i, j; for(i=9; zLine[i] && zLine[i]==' '; i++){} if( zLine[i]==0 ) fossil_fatal("malformed redirect: %s", zLine); j = strlen(zLine) - 1; while( j>4 && strcmp(&zLine[j-4],"/xfer")==0 ){ j -= 4; zLine[j] = 0; } fossil_print("redirect to %s\n", &zLine[i]); url_parse(&zLine[i]); transport_close(); return http_exchange(pSend, pReply, useLogin); }else if( fossil_strnicmp(zLine, "content-type: ", 14)==0 ){ if( fossil_strnicmp(&zLine[14], "application/x-fossil-debug", -1)==0 ){ isCompressed = 0; }else if( fossil_strnicmp(&zLine[14], "application/x-fossil-uncompressed", -1)==0 ){ isCompressed = 0; }else if( fossil_strnicmp(&zLine[14], "application/x-fossil", -1)!=0 ){ isError = 1; } } } if( rc!=200 ){ fossil_warning("\"location:\" missing from 302 redirect reply"); goto write_err; } /* ** Extract the reply payload that follows the header */ if( iLength<0 ){ |
︙ | ︙ | |||
267 268 269 270 271 272 273 | if( z[i]==0 ) break; } z[j] = z[i]; } z[j] = 0; fossil_fatal("server sends error: %s", z); } | < < < | < | | | 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 | if( z[i]==0 ) break; } z[j] = z[i]; } z[j] = 0; fossil_fatal("server sends error: %s", z); } if( isCompressed ) blob_uncompress(pReply, pReply); /* ** Close the connection to the server if appropriate. ** ** FIXME: There is some bug in the lower layers that prevents the ** connection from remaining open. The easiest fix for now is to ** simply close and restart the connection for each round-trip. */ closeConnection = 1; /* FIX ME */ if( closeConnection ){ transport_close(); }else{ transport_rewind(); } return 0; /* ** Jump to here if an error is seen. */ write_err: transport_close(); return 1; } |
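The new request-header logic above adds an "Authorization: Basic" line built from a base64 encoding of the "user:password" pair whenever the stored password begins with '#'. As a rough, standalone sketch of what that encoding step has to produce (this is not Fossil's encode64() routine, and the credential strings are invented placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Encode n bytes of p as base64.  Caller frees the result. */
static char *base64_encode(const unsigned char *p, int n){
  static const char tbl[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  char *z = malloc( ((n+2)/3)*4 + 1 );
  int i, j = 0;
  for(i=0; i+2<n; i+=3){
    z[j++] = tbl[p[i]>>2];
    z[j++] = tbl[((p[i]&0x03)<<4) | (p[i+1]>>4)];
    z[j++] = tbl[((p[i+1]&0x0f)<<2) | (p[i+2]>>6)];
    z[j++] = tbl[p[i+2]&0x3f];
  }
  if( i<n ){
    z[j++] = tbl[p[i]>>2];
    if( i+1<n ){
      z[j++] = tbl[((p[i]&0x03)<<4) | (p[i+1]>>4)];
      z[j++] = tbl[(p[i+1]&0x0f)<<2];
    }else{
      z[j++] = tbl[(p[i]&0x03)<<4];
      z[j++] = '=';
    }
    z[j++] = '=';
  }
  z[j] = 0;
  return z;
}

int main(void){
  const char *zUser = "alice";      /* invented credentials for illustration */
  const char *zPasswd = "secret";
  char zCred[256];
  char *zEncoded;
  snprintf(zCred, sizeof(zCred), "%s:%s", zUser, zPasswd);
  zEncoded = base64_encode((const unsigned char*)zCred, (int)strlen(zCred));
  printf("Authorization: Basic %s\r\n", zEncoded);   /* same shape as the hunk */
  free(zEncoded);
  return 0;
}

Any spec-conforming base64 encoder yields the same header value, which is why the hunk can simply hand the joined credentials to encode64() and append the result to pHdr.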
Changes to src/http_socket.c.
︙ | ︙ | |||
150 151 152 153 154 155 156 | struct hostent *pHost; pHost = gethostbyname(g.urlName); if( pHost!=0 ){ memcpy(&addr.sin_addr,pHost->h_addr_list[0],pHost->h_length); }else #endif { | | | 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | struct hostent *pHost; pHost = gethostbyname(g.urlName); if( pHost!=0 ){ memcpy(&addr.sin_addr,pHost->h_addr_list[0],pHost->h_length); }else #endif { socket_set_errmsg("can't resolve host name: %s", g.urlName); return 1; } } addrIsInit = 1; /* Set the Global.zIpAddr variable to the server we are talking to. ** This is used to populate the ipaddr column of the rcvfrom table, |
︙ | ︙ |
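The hunk above only rewords the resolution-failure message, but the surrounding gethostbyname() fallback can be exercised on its own with a small POSIX-only sketch like the one below (getaddrinfo() would be the modern replacement; the default host name is an arbitrary example):

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv){
  const char *zName = argc>1 ? argv[1] : "example.com";  /* arbitrary default */
  struct hostent *pHost = gethostbyname(zName);
  struct in_addr addr;
  if( pHost==0 ){
    fprintf(stderr, "can't resolve host name: %s\n", zName);
    return 1;
  }
  memcpy(&addr, pHost->h_addr_list[0], sizeof(addr));
  printf("%s -> %s\n", zName, inet_ntoa(addr));
  return 0;
}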
Changes to src/http_transport.c.
︙ | ︙ | |||
28 29 30 31 32 33 34 | */ static struct { int isOpen; /* True when the transport layer is open */ char *pBuf; /* Buffer used to hold the reply */ int nAlloc; /* Space allocated for transportBuf[] */ int nUsed ; /* Space of transportBuf[] used */ int iCursor; /* Next unread by in transportBuf[] */ | | | > | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | */ static struct { int isOpen; /* True when the transport layer is open */ char *pBuf; /* Buffer used to hold the reply */ int nAlloc; /* Space allocated for transportBuf[] */ int nUsed ; /* Space of transportBuf[] used */ int iCursor; /* Next unread by in transportBuf[] */ i64 nSent; /* Number of bytes sent */ i64 nRcvd; /* Number of bytes received */ FILE *pFile; /* File I/O for FILE: */ char *zOutFile; /* Name of outbound file for FILE: */ char *zInFile; /* Name of inbound file for FILE: */ FILE *pLog; /* Log output here */ } transport = { 0, 0, 0, 0, 0, 0, 0 }; /* ** Information about the connection to the SSH subprocess when ** using the ssh:// sync method. |
︙ | ︙ | |||
62 63 64 65 66 67 68 | return socket_errmsg(); } /* ** Retrieve send/receive counts from the transport layer. If "resetFlag" ** is true, then reset the counts. */ | | | 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | return socket_errmsg(); } /* ** Retrieve send/receive counts from the transport layer. If "resetFlag" ** is true, then reset the counts. */ void transport_stats(i64 *pnSent, i64 *pnRcvd, int resetFlag){ if( pnSent ) *pnSent = transport.nSent; if( pnRcvd ) *pnRcvd = transport.nRcvd; if( resetFlag ){ transport.nSent = 0; transport.nRcvd = 0; } } |
︙ | ︙ | |||
106 107 108 109 110 111 112 | if( g.urlIsSsh ){ /* Only SSH requires a global initialization. For SSH we need to create ** and run an SSH command to talk to the remote machine. */ const char *zSsh; /* The base SSH command */ Blob zCmd; /* The SSH command */ char *zHost; /* The host name to contact */ | | > | > > < > | > | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 | if( g.urlIsSsh ){ /* Only SSH requires a global initialization. For SSH we need to create ** and run an SSH command to talk to the remote machine. */ const char *zSsh; /* The base SSH command */ Blob zCmd; /* The SSH command */ char *zHost; /* The host name to contact */ char *zIn; /* An input line received back from remote */ zSsh = db_get("ssh-command", zDefaultSshCmd); blob_init(&zCmd, zSsh, -1); if( g.urlPort!=g.urlDfltPort ){ #ifdef __MINGW32__ blob_appendf(&zCmd, " -P %d", g.urlPort); #else blob_appendf(&zCmd, " -p %d", g.urlPort); #endif } printf("%s", blob_str(&zCmd)); /* Show the base of the SSH command */ if( g.urlUser && g.urlUser[0] ){ zHost = mprintf("%s@%s", g.urlUser, g.urlName); #ifdef __MINGW32__ /* Only win32 (and specifically PLINK.EXE) support the -pw option */ if( g.urlPasswd && g.urlPasswd[0] ){ Blob pw; blob_zero(&pw); if( g.urlPasswd[0]=='*' ){ char *zPrompt; zPrompt = mprintf("Password for [%s]: ", zHost); prompt_for_password(zPrompt, &pw, 0); free(zPrompt); }else{ blob_init(&pw, g.urlPasswd, -1); } blob_append(&zCmd, " -pw ", -1); shell_escape(&zCmd, blob_str(&pw)); blob_reset(&pw); printf(" -pw ********"); /* Do not show the password text */ } #endif }else{ zHost = mprintf("%s", g.urlName); } blob_append(&zCmd, " ", 1); shell_escape(&zCmd, zHost); printf(" %s\n", zHost); /* Show the conclusion of the SSH command */ free(zHost); popen2(blob_str(&zCmd), &sshIn, &sshOut, &sshPid); if( sshPid==0 ){ fossil_fatal("cannot start ssh tunnel using [%b]", &zCmd); } blob_reset(&zCmd); /* Send an "echo" command to the other side to make sure that the ** connection is up and working. */ fprintf(sshOut, "echo test\n"); fflush(sshOut); zIn = fossil_malloc(16000); sshin_read(zIn, 16000); if( memcmp(zIn, "test", 4)!=0 ){ pclose2(sshIn, sshOut, sshPid); fossil_fatal("ssh connection failed: [%s]", zIn); } fossil_free(zIn); } } /* ** Open a connection to the server. The server is defined by the following ** global variables: ** |
︙ | ︙ | |||
182 183 184 185 186 187 188 | if( transport.isOpen==0 ){ if( g.urlIsSsh ){ Blob cmd; blob_zero(&cmd); shell_escape(&cmd, g.urlFossil); blob_append(&cmd, " test-http ", -1); shell_escape(&cmd, g.urlPath); | | | 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 | if( transport.isOpen==0 ){ if( g.urlIsSsh ){ Blob cmd; blob_zero(&cmd); shell_escape(&cmd, g.urlFossil); blob_append(&cmd, " test-http ", -1); shell_escape(&cmd, g.urlPath); /* printf("%s\n", blob_str(&cmd)); fflush(stdout); */ fprintf(sshOut, "%s\n", blob_str(&cmd)); fflush(sshOut); blob_reset(&cmd); }else if( g.urlIsHttps ){ #ifdef FOSSIL_ENABLE_SSL rc = ssl_open(); if( rc==0 ) transport.isOpen = 1; |
︙ | ︙ | |||
224 225 226 227 228 229 230 231 232 233 234 235 236 237 | void transport_close(void){ if( transport.isOpen ){ free(transport.pBuf); transport.pBuf = 0; transport.nAlloc = 0; transport.nUsed = 0; transport.iCursor = 0; if( g.urlIsSsh ){ /* No-op */ }else if( g.urlIsHttps ){ #ifdef FOSSIL_ENABLE_SSL ssl_close(); #endif }else if( g.urlIsFile ){ | > > > > | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 | void transport_close(void){ if( transport.isOpen ){ free(transport.pBuf); transport.pBuf = 0; transport.nAlloc = 0; transport.nUsed = 0; transport.iCursor = 0; if( transport.pLog ){ fclose(transport.pLog); transport.pLog = 0; } if( g.urlIsSsh ){ /* No-op */ }else if( g.urlIsHttps ){ #ifdef FOSSIL_ENABLE_SSL ssl_close(); #endif }else if( g.urlIsFile ){ |
︙ | ︙ | |||
291 292 293 294 295 296 297 | */ void transport_flip(void){ if( g.urlIsSsh ){ fprintf(sshOut, "\n\n"); }else if( g.urlIsFile ){ char *zCmd; fclose(transport.pFile); | | | > > > > > > > > > > > > | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 | */ void transport_flip(void){ if( g.urlIsSsh ){ fprintf(sshOut, "\n\n"); }else if( g.urlIsFile ){ char *zCmd; fclose(transport.pFile); zCmd = mprintf("\"%s\" http \"%s\" \"%s\" \"%s\" 127.0.0.1 --localauth", fossil_nameofexe(), g.urlName, transport.zOutFile, transport.zInFile ); fossil_system(zCmd); free(zCmd); transport.pFile = fopen(transport.zInFile, "rb"); } } /* ** Log all input to a file. The transport layer will take responsibility ** for closing the log file when it is done. */ void transport_log(FILE *pLog){ if( transport.pLog ){ fclose(transport.pLog); transport.pLog = 0; } transport.pLog = pLog; } /* ** This routine is called when the inbound message has been received ** and it is time to start sending again. */ void transport_rewind(void){ if( g.urlIsFile ){ |
︙ | ︙ | |||
320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | */ static int transport_fetch(char *zBuf, int N){ int got; if( sshIn ){ int x; int wanted = N; got = 0; while( wanted>0 ){ x = read(sshIn, &zBuf[got], wanted); if( x<=0 ) break; got += x; wanted -= x; } }else if( g.urlIsHttps ){ #ifdef FOSSIL_ENABLE_SSL got = ssl_receive(0, zBuf, N); #else got = 0; #endif }else if( g.urlIsFile ){ got = fread(zBuf, 1, N, transport.pFile); }else{ got = socket_receive(0, zBuf, N); } | > | > > > > | | 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 | */ static int transport_fetch(char *zBuf, int N){ int got; if( sshIn ){ int x; int wanted = N; got = 0; /* printf("want %d bytes...\n", wanted); fflush(stdout); */ while( wanted>0 ){ x = read(sshIn, &zBuf[got], wanted); if( x<=0 ) break; got += x; wanted -= x; } }else if( g.urlIsHttps ){ #ifdef FOSSIL_ENABLE_SSL got = ssl_receive(0, zBuf, N); #else got = 0; #endif }else if( g.urlIsFile ){ got = fread(zBuf, 1, N, transport.pFile); }else{ got = socket_receive(0, zBuf, N); } /* printf("received %d of %d bytes\n", got, N); fflush(stdout); */ if( transport.pLog ){ fwrite(zBuf, 1, got, transport.pLog); fflush(transport.pLog); } return got; } /* ** Read N bytes of content from the wire and store in the supplied buffer. ** Return the number of bytes actually received. */ int transport_receive(char *zBuf, int N){ int onHand; /* Bytes current held in the transport buffer */ int nByte = 0; /* Bytes of content received */ onHand = transport.nUsed - transport.iCursor; /* printf("request %d with %d on hand\n", N, onHand); fflush(stdout); */ if( onHand>0 ){ int toMove = onHand; if( toMove>N ) toMove = N; /* printf("bytes on hand: %d of %d\n", toMove, N); fflush(stdout); */ memcpy(zBuf, &transport.pBuf[transport.iCursor], toMove); transport.iCursor += toMove; if( transport.iCursor>=transport.nUsed ){ |
︙ | ︙ | |||
404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 | transport.nAlloc = transport.nUsed + N; pNew = fossil_realloc(transport.pBuf, transport.nAlloc); transport.pBuf = pNew; } if( N>0 ){ i = transport_fetch(&transport.pBuf[transport.nUsed], N); if( i>0 ){ transport.nUsed += i; } } } /* ** Fetch a single line of input where a line is all text up to the next ** \n character or until the end of input. Remove all trailing whitespace ** from the received line and zero-terminate the result. Return a pointer ** to the line. ** ** Each call to this routine potentially overwrites the returned buffer. */ char *transport_receive_line(void){ int i; int iStart; i = iStart = transport.iCursor; while(1){ if( i >= transport.nUsed ){ | > | | | 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 | transport.nAlloc = transport.nUsed + N; pNew = fossil_realloc(transport.pBuf, transport.nAlloc); transport.pBuf = pNew; } if( N>0 ){ i = transport_fetch(&transport.pBuf[transport.nUsed], N); if( i>0 ){ transport.nRcvd += i; transport.nUsed += i; } } } /* ** Fetch a single line of input where a line is all text up to the next ** \n character or until the end of input. Remove all trailing whitespace ** from the received line and zero-terminate the result. Return a pointer ** to the line. ** ** Each call to this routine potentially overwrites the returned buffer. */ char *transport_receive_line(void){ int i; int iStart; i = iStart = transport.iCursor; while(1){ if( i >= transport.nUsed ){ transport_load_buffer(g.urlIsSsh ? 2 : 1000); i -= iStart; iStart = 0; if( i >= transport.nUsed ){ transport.pBuf[i] = 0; transport.iCursor = i; break; } } if( transport.pBuf[i]=='\n' ){ transport.iCursor = i+1; while( i>=iStart && fossil_isspace(transport.pBuf[i]) ){ transport.pBuf[i] = 0; i--; } break; } i++; } /* printf("Got line: [%s]\n", &transport.pBuf[iStart]); */ return &transport.pBuf[iStart]; } void transport_global_shutdown(void){ if( g.urlIsSsh && sshPid ){ printf("Closing SSH tunnel: "); fflush(stdout); |
︙ | ︙ |
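Several of the transport changes above share one pattern: every successful read bumps a 64-bit receive counter, and when transport_log() has attached a FILE*, the same bytes are teed into that log before the caller sees them. A reduced sketch of that count-and-tee idea, with invented names and stdin standing in for the network connection:

#include <stdio.h>

typedef long long i64;

static struct {
  i64 nRcvd;     /* Total bytes received so far */
  FILE *pLog;    /* Optional log of all inbound traffic */
} xport = { 0, 0 };

/* Read up to N bytes from pIn into zBuf, updating the counter and log. */
static int tee_read(FILE *pIn, char *zBuf, int N){
  int got = (int)fread(zBuf, 1, N, pIn);
  if( got>0 ){
    xport.nRcvd += got;
    if( xport.pLog ){
      fwrite(zBuf, 1, got, xport.pLog);
      fflush(xport.pLog);
    }
  }
  return got;
}

int main(void){
  char zBuf[256];
  xport.pLog = fopen("inbound.log", "wb");   /* hypothetical log file */
  while( tee_read(stdin, zBuf, sizeof(zBuf))>0 ){}
  if( xport.pLog ) fclose(xport.pLog);
  fprintf(stderr, "received %lld bytes\n", xport.nRcvd);
  return 0;
}

Keeping the counter and the log inside one read wrapper means every code path (socket, SSL, SSH pipe, or file) gets accounting and tracing for free, which is what the changes to transport_fetch() and transport_load_buffer() accomplish.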
Changes to src/import.c.
︙ | ︙ | |||
18 19 20 21 22 23 24 25 26 27 28 | ** repository in the git-fast-import format as a new Fossil ** repository. */ #include "config.h" #include "import.h" #include <assert.h> /* ** COMMAND: import ** | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 
513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 | ** repository in the git-fast-import format as a new Fossil ** repository. */ #include "config.h" #include "import.h" #include <assert.h> #if INTERFACE /* ** A single file change record. */ struct ImportFile { char *zName; /* Name of a file */ char *zUuid; /* UUID of the file */ char *zPrior; /* Prior name if the name was changed */ char isFrom; /* True if obtained from the parent */ char isExe; /* True if executable */ }; #endif /* ** State information about an on-going fast-import parse. */ static struct { void (*xFinish)(void); /* Function to finish a prior record */ int nData; /* Bytes of data */ char *zTag; /* Name of a tag */ char *zBranch; /* Name of a branch for a commit */ char *zPrevBranch; /* The branch of the previous check-in */ char *aData; /* Data content */ char *zMark; /* The current mark */ char *zDate; /* Date/time stamp */ char *zUser; /* User name */ char *zComment; /* Comment of a commit */ char *zFrom; /* from value as a UUID */ char *zPrevCheckin; /* Name of the previous check-in */ char *zFromMark; /* The mark of the "from" field */ int nMerge; /* Number of merge values */ int nMergeAlloc; /* Number of slots in azMerge[] */ char **azMerge; /* Merge values */ int nFile; /* Number of aFile values */ int nFileAlloc; /* Number of slots in aFile[] */ ImportFile *aFile; /* Information about files in a commit */ int fromLoaded; /* True zFrom content loaded into aFile[] */ int tagCommit; /* True if the commit adds a tag */ } gg; /* ** Duplicate a string. */ char *fossil_strdup(const char *zOrig){ char *z = 0; if( zOrig ){ int n = strlen(zOrig); z = fossil_malloc( n+1 ); memcpy(z, zOrig, n+1); } return z; } /* ** A no-op "xFinish" method */ static void finish_noop(void){} /* ** Deallocate the state information. ** ** The azMerge[] and aFile[] arrays are zeroed by allocated space is ** retained unless the freeAll flag is set. 
*/ static void import_reset(int freeAll){ int i; gg.xFinish = 0; fossil_free(gg.zTag); gg.zTag = 0; fossil_free(gg.zBranch); gg.zBranch = 0; fossil_free(gg.aData); gg.aData = 0; fossil_free(gg.zMark); gg.zMark = 0; fossil_free(gg.zDate); gg.zDate = 0; fossil_free(gg.zUser); gg.zUser = 0; fossil_free(gg.zComment); gg.zComment = 0; fossil_free(gg.zFrom); gg.zFrom = 0; fossil_free(gg.zFromMark); gg.zFromMark = 0; for(i=0; i<gg.nMerge; i++){ fossil_free(gg.azMerge[i]); gg.azMerge[i] = 0; } gg.nMerge = 0; for(i=0; i<gg.nFile; i++){ fossil_free(gg.aFile[i].zName); fossil_free(gg.aFile[i].zUuid); fossil_free(gg.aFile[i].zPrior); } memset(gg.aFile, 0, gg.nFile*sizeof(gg.aFile[0])); gg.nFile = 0; if( freeAll ){ fossil_free(gg.zPrevBranch); fossil_free(gg.zPrevCheckin); fossil_free(gg.azMerge); fossil_free(gg.aFile); memset(&gg, 0, sizeof(gg)); } gg.xFinish = finish_noop; } /* ** Insert an artifact into the BLOB table if it isn't there already. ** If zMark is not zero, create a cross-reference from that mark back ** to the newly inserted artifact. ** ** If saveUuid is true, then pContent is a commit record. Record its ** UUID in gg.zPrevCheckin. */ static int fast_insert_content(Blob *pContent, const char *zMark, int saveUuid){ Blob hash; Blob cmpr; int rid; sha1sum_blob(pContent, &hash); rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%B", &hash); if( rid==0 ){ static Stmt ins; db_static_prepare(&ins, "INSERT INTO blob(uuid, size, content) VALUES(:uuid, :size, :content)" ); db_bind_text(&ins, ":uuid", blob_str(&hash)); db_bind_int(&ins, ":size", gg.nData); blob_compress(pContent, &cmpr); db_bind_blob(&ins, ":content", &cmpr); db_step(&ins); db_reset(&ins); blob_reset(&cmpr); rid = db_last_insert_rowid(); } if( zMark ){ db_multi_exec( "INSERT OR IGNORE INTO xmark(tname, trid, tuuid)" "VALUES(%Q,%d,%B)", zMark, rid, &hash ); db_multi_exec( "INSERT OR IGNORE INTO xmark(tname, trid, tuuid)" "VALUES(%B,%d,%B)", &hash, rid, &hash ); } if( saveUuid ){ fossil_free(gg.zPrevCheckin); gg.zPrevCheckin = fossil_strdup(blob_str(&hash)); } blob_reset(&hash); return rid; } /* ** Use data accumulated in gg from a "blob" record to add a new file ** to the BLOB table. */ static void finish_blob(void){ Blob content; blob_init(&content, gg.aData, gg.nData); fast_insert_content(&content, gg.zMark, 0); blob_reset(&content); import_reset(0); } /* ** Use data accumulated in gg from a "tag" record to add a new ** control artifact to the BLOB table. */ static void finish_tag(void){ Blob record, cksum; if( gg.zDate && gg.zTag && gg.zFrom && gg.zUser ){ blob_zero(&record); blob_appendf(&record, "D %s\n", gg.zDate); blob_appendf(&record, "T +%F %s\n", gg.zTag, gg.zFrom); blob_appendf(&record, "U %F\n", gg.zUser); md5sum_blob(&record, &cksum); blob_appendf(&record, "Z %b\n", &cksum); fast_insert_content(&record, 0, 0); blob_reset(&record); blob_reset(&cksum); } import_reset(0); } /* ** Compare two ImportFile objects for sorting */ static int mfile_cmp(const void *pLeft, const void *pRight){ const ImportFile *pA = (const ImportFile*)pLeft; const ImportFile *pB = (const ImportFile*)pRight; return strcmp(pA->zName, pB->zName); } /* Forward reference */ static void import_prior_files(void); /* ** Use data accumulated in gg from a "commit" record to add a new ** manifest artifact to the BLOB table. 
*/ static void finish_commit(void){ int i; char *zFromBranch; Blob record, cksum; import_prior_files(); qsort(gg.aFile, gg.nFile, sizeof(gg.aFile[0]), mfile_cmp); blob_zero(&record); blob_appendf(&record, "C %F\n", gg.zComment); blob_appendf(&record, "D %s\n", gg.zDate); for(i=0; i<gg.nFile; i++){ const char *zUuid = gg.aFile[i].zUuid; if( zUuid==0 ) continue; blob_appendf(&record, "F %F %s", gg.aFile[i].zName, zUuid); if( gg.aFile[i].isExe ){ blob_append(&record, " x\n", 3); }else{ blob_append(&record, "\n", 1); } } if( gg.zFrom ){ blob_appendf(&record, "P %s", gg.zFrom); for(i=0; i<gg.nMerge; i++){ blob_appendf(&record, " %s", gg.azMerge[i]); } blob_append(&record, "\n", 1); zFromBranch = db_text(0, "SELECT brnm FROM xbranch WHERE tname=%Q", gg.zFromMark); }else{ zFromBranch = 0; } if( !gg.tagCommit && fossil_strcmp(zFromBranch, gg.zBranch)!=0 ){ blob_appendf(&record, "T *branch * %F\n", gg.zBranch); blob_appendf(&record, "T *sym-%F *\n", gg.zBranch); if( zFromBranch ){ blob_appendf(&record, "T -sym-%F *\n", zFromBranch); } } free(zFromBranch); if( gg.zFrom==0 ){ blob_appendf(&record, "T *sym-trunk *\n"); } db_multi_exec("INSERT INTO xbranch(tname, brnm) VALUES(%Q,%Q)", gg.zMark, gg.zBranch); blob_appendf(&record, "U %F\n", gg.zUser); md5sum_blob(&record, &cksum); blob_appendf(&record, "Z %b\n", &cksum); fast_insert_content(&record, gg.zMark, 1); blob_reset(&record); blob_reset(&cksum); /* The "git fast-export" command might output multiple "commit" lines ** that reference a tag using "refs/tags/TAGNAME". The tag should only ** be applied to the last commit that is output. The problem is we do not ** know at this time if the current commit is the last one to hold this ** tag or not. So make an entry in the XTAG table to record this tag ** but overwrite that entry if a later instance of the same tag appears. ** ** This behavior seems like a bug in git-fast-export, but it is easier ** to work around the problem than to fix git-fast-export. */ if( gg.tagCommit && gg.zDate && gg.zUser && gg.zFrom ){ blob_appendf(&record, "D %s\n", gg.zDate); blob_appendf(&record, "T +sym-%F %s\n", gg.zBranch, gg.zPrevCheckin); blob_appendf(&record, "U %F\n", gg.zUser); md5sum_blob(&record, &cksum); blob_appendf(&record, "Z %b\n", &cksum); db_multi_exec( "INSERT OR REPLACE INTO xtag(tname, tcontent)" " VALUES(%Q,%Q)", gg.zBranch, blob_str(&record) ); blob_reset(&record); blob_reset(&cksum); } fossil_free(gg.zPrevBranch); gg.zPrevBranch = gg.zBranch; gg.zBranch = 0; import_reset(0); } /* ** Turn the first \n in the input string into a \000 */ static void trim_newline(char *z){ while( z[0] && z[0]!='\n' ){ z++; } z[0] = 0; } /* ** Get a token from a line of text. Return a pointer to the first ** character of the token and zero-terminate the token. Make ** *pzIn point to the first character past the end of the zero ** terminator, or at the zero-terminator at EOL. */ static char *next_token(char **pzIn){ char *z = *pzIn; int i; if( z[0]==0 ) return z; for(i=0; z[i] && z[i]!=' ' && z[i]!='\n'; i++){} if( z[i] ){ z[i] = 0; *pzIn = &z[i+1]; }else{ *pzIn = &z[i]; } return z; } /* ** Return a token that is all text up to (but omitting) the next \n ** or \r\n. */ static char *rest_of_line(char **pzIn){ char *z = *pzIn; int i; if( z[0]==0 ) return z; for(i=0; z[i] && z[i]!='\r' && z[i]!='\n'; i++){} if( z[i] ){ if( z[i]=='\r' && z[i+1]=='\n' ){ z[i] = 0; i++; }else{ z[i] = 0; } *pzIn = &z[i+1]; }else{ *pzIn = &z[i]; } return z; } /* ** Convert a "mark" or "committish" into the UUID. 
*/ static char *resolve_committish(const char *zCommittish){ char *zRes; zRes = db_text(0, "SELECT tuuid FROM xmark WHERE tname=%Q", zCommittish); return zRes; } /* ** Create a new entry in the gg.aFile[] array */ static ImportFile *import_add_file(void){ ImportFile *pFile; if( gg.nFile>=gg.nFileAlloc ){ gg.nFileAlloc = gg.nFileAlloc*2 + 100; gg.aFile = fossil_realloc(gg.aFile, gg.nFileAlloc*sizeof(gg.aFile[0])); } pFile = &gg.aFile[gg.nFile++]; memset(pFile, 0, sizeof(*pFile)); return pFile; } /* ** Load all file information out of the gg.zFrom check-in */ static void import_prior_files(void){ Manifest *p; int rid; ManifestFile *pOld; ImportFile *pNew; if( gg.fromLoaded ) return; gg.fromLoaded = 1; if( gg.zFrom==0 && gg.zPrevCheckin!=0 && fossil_strcmp(gg.zBranch, gg.zPrevBranch)==0 ){ gg.zFrom = gg.zPrevCheckin; gg.zPrevCheckin = 0; } if( gg.zFrom==0 ) return; rid = fast_uuid_to_rid(gg.zFrom); if( rid==0 ) return; p = manifest_get(rid, CFTYPE_MANIFEST); if( p==0 ) return; manifest_file_rewind(p); while( (pOld = manifest_file_next(p, 0))!=0 ){ pNew = import_add_file(); pNew->zName = fossil_strdup(pOld->zName); pNew->isExe = pOld->zPerm && strstr(pOld->zPerm, "x")!=0; pNew->zUuid = fossil_strdup(pOld->zUuid); pNew->isFrom = 1; } manifest_destroy(p); } /* ** Locate a file in the gg.aFile[] array by its name. Begin the search ** with the *pI-th file. Update *pI to be one past the file found. ** Do not search past the mx-th file. */ static ImportFile *import_find_file(const char *zName, int *pI, int mx){ int i = *pI; int nName = strlen(zName); while( i<mx ){ const char *z = gg.aFile[i].zName; if( memcmp(zName, z, nName)==0 && (z[nName]==0 || z[nName]=='/') ){ *pI = i+1; return &gg.aFile[i]; } i++; } return 0; } /* ** Read the git-fast-import format from pIn and insert the corresponding ** content into the database. */ static void git_fast_import(FILE *pIn){ ImportFile *pFile, *pNew; int i, mx; char *z; char *zUuid; char *zName; char *zPerm; char *zFrom; char *zTo; char zLine[1000]; gg.xFinish = finish_noop; while( fgets(zLine, sizeof(zLine), pIn) ){ if( zLine[0]=='\n' || zLine[0]=='#' ) continue; if( memcmp(zLine, "blob", 4)==0 ){ gg.xFinish(); gg.xFinish = finish_blob; }else if( memcmp(zLine, "commit ", 7)==0 ){ gg.xFinish(); gg.xFinish = finish_commit; trim_newline(&zLine[7]); z = &zLine[7]; /* The argument to the "commit" line might match either of these ** patterns: ** ** (A) refs/heads/BRANCHNAME ** (B) refs/tags/TAGNAME ** ** If pattern A is used, then the branchname used is as shown. ** Except, the "master" branch which is the default branch name in ** Git is changed to "trunk" which is the default name in Fossil. ** If the pattern is B, then the new commit should be on the same ** branch as its parent. And, we might need to add the TAGNAME ** tag to the new commit. However, if there are multiple instances ** of pattern B with the same TAGNAME, then only put the tag on the ** last commit that holds that tag. ** ** None of the above is explained in the git-fast-export ** documentation. We had to figure it out via trial and error. 
*/ for(i=strlen(z)-1; i>=0 && z[i]!='/'; i--){} gg.tagCommit = memcmp(&z[i-4], "tags", 4)==0; /* True for pattern B */ if( z[i+1]!=0 ) z += i+1; if( fossil_strcmp(z, "master")==0 ) z = "trunk"; gg.zBranch = fossil_strdup(z); gg.fromLoaded = 0; }else if( memcmp(zLine, "tag ", 4)==0 ){ gg.xFinish(); gg.xFinish = finish_tag; trim_newline(&zLine[4]); gg.zTag = fossil_strdup(&zLine[4]); }else if( memcmp(zLine, "reset ", 4)==0 ){ gg.xFinish(); }else if( memcmp(zLine, "checkpoint", 10)==0 ){ gg.xFinish(); }else if( memcmp(zLine, "feature", 7)==0 ){ gg.xFinish(); }else if( memcmp(zLine, "option", 6)==0 ){ gg.xFinish(); }else if( memcmp(zLine, "progress ", 9)==0 ){ gg.xFinish(); trim_newline(&zLine[9]); printf("%s\n", &zLine[9]); fflush(stdout); }else if( memcmp(zLine, "data ", 5)==0 ){ fossil_free(gg.aData); gg.aData = 0; gg.nData = atoi(&zLine[5]); if( gg.nData ){ int got; gg.aData = fossil_malloc( gg.nData+1 ); got = fread(gg.aData, 1, gg.nData, pIn); if( got!=gg.nData ){ fossil_fatal("short read: got %d of %d bytes", got, gg.nData); } gg.aData[got] = 0; if( gg.zComment==0 && gg.xFinish==finish_commit ){ gg.zComment = gg.aData; gg.aData = 0; gg.nData = 0; } } }else if( memcmp(zLine, "author ", 7)==0 ){ /* No-op */ }else if( memcmp(zLine, "mark ", 5)==0 ){ trim_newline(&zLine[5]); fossil_free(gg.zMark); gg.zMark = fossil_strdup(&zLine[5]); }else if( memcmp(zLine, "tagger ", 7)==0 || memcmp(zLine, "committer ",10)==0 ){ sqlite3_int64 secSince1970; for(i=0; zLine[i] && zLine[i]!='<'; i++){} if( zLine[i]==0 ) goto malformed_line; z = &zLine[i+1]; for(i=i+1; zLine[i] && zLine[i]!='>'; i++){} if( zLine[i]==0 ) goto malformed_line; zLine[i] = 0; fossil_free(gg.zUser); gg.zUser = fossil_strdup(z); secSince1970 = 0; for(i=i+2; fossil_isdigit(zLine[i]); i++){ secSince1970 = secSince1970*10 + zLine[i] - '0'; } fossil_free(gg.zDate); gg.zDate = db_text(0, "SELECT datetime(%lld, 'unixepoch')", secSince1970); gg.zDate[10] = 'T'; }else if( memcmp(zLine, "from ", 5)==0 ){ trim_newline(&zLine[5]); fossil_free(gg.zFromMark); gg.zFromMark = fossil_strdup(&zLine[5]); fossil_free(gg.zFrom); gg.zFrom = resolve_committish(&zLine[5]); }else if( memcmp(zLine, "merge ", 6)==0 ){ trim_newline(&zLine[6]); if( gg.nMerge>=gg.nMergeAlloc ){ gg.nMergeAlloc = gg.nMergeAlloc*2 + 10; gg.azMerge = fossil_realloc(gg.azMerge, gg.nMergeAlloc*sizeof(char*)); } gg.azMerge[gg.nMerge] = resolve_committish(&zLine[6]); if( gg.azMerge[gg.nMerge] ) gg.nMerge++; }else if( memcmp(zLine, "M ", 2)==0 ){ import_prior_files(); z = &zLine[2]; zPerm = next_token(&z); zUuid = next_token(&z); zName = rest_of_line(&z); i = 0; pFile = import_find_file(zName, &i, gg.nFile); if( pFile==0 ){ pFile = import_add_file(); pFile->zName = fossil_strdup(zName); } pFile->isExe = (fossil_strcmp(zPerm, "100755")==0); fossil_free(pFile->zUuid); pFile->zUuid = resolve_committish(zUuid); pFile->isFrom = 0; }else if( memcmp(zLine, "D ", 2)==0 ){ import_prior_files(); z = &zLine[2]; zName = rest_of_line(&z); i = 0; while( (pFile = import_find_file(zName, &i, gg.nFile))!=0 ){ if( pFile->isFrom==0 ) continue; fossil_free(pFile->zName); fossil_free(pFile->zPrior); fossil_free(pFile->zUuid); *pFile = gg.aFile[--gg.nFile]; i--; } }else if( memcmp(zLine, "C ", 2)==0 ){ int nFrom; import_prior_files(); z = &zLine[2]; zFrom = next_token(&z); zTo = rest_of_line(&z); i = 0; mx = gg.nFile; nFrom = strlen(zFrom); while( (pFile = import_find_file(zFrom, &i, mx))!=0 ){ if( pFile->isFrom==0 ) continue; pNew = import_add_file(); pFile = &gg.aFile[i-1]; if( strlen(pFile->zName)>nFrom ){ 
pNew->zName = mprintf("%s%s", zTo, pFile->zName[nFrom]); }else{ pNew->zName = fossil_strdup(pFile->zName); } pNew->isExe = pFile->isExe; pNew->zUuid = fossil_strdup(pFile->zUuid); pNew->isFrom = 0; } }else if( memcmp(zLine, "R ", 2)==0 ){ int nFrom; import_prior_files(); z = &zLine[2]; zFrom = next_token(&z); zTo = rest_of_line(&z); i = 0; nFrom = strlen(zFrom); while( (pFile = import_find_file(zFrom, &i, gg.nFile))!=0 ){ if( pFile->isFrom==0 ) continue; pNew = import_add_file(); pFile = &gg.aFile[i-1]; if( strlen(pFile->zName)>nFrom ){ pNew->zName = mprintf("%s%s", zTo, pFile->zName[nFrom]); }else{ pNew->zName = fossil_strdup(pFile->zName); } pNew->zPrior = pFile->zName; pNew->isExe = pFile->isExe; pNew->zUuid = pFile->zUuid; pNew->isFrom = 0; gg.nFile--; *pFile = *pNew; memset(pNew, 0, sizeof(*pNew)); } fossil_fatal("cannot handle R records, use --full-tree"); }else if( memcmp(zLine, "deleteall", 9)==0 ){ gg.fromLoaded = 1; }else if( memcmp(zLine, "N ", 2)==0 ){ /* No-op */ }else { goto malformed_line; } } gg.xFinish(); import_reset(1); return; malformed_line: trim_newline(zLine); fossil_fatal("bad fast-import line: [%s]", zLine); return; } /* ** COMMAND: import ** ** Usage: %fossil import --git NEW-REPOSITORY ** ** Read text generated by the git-fast-export command and use it to ** construct a new Fossil repository named by the NEW-REPOSITORY ** argument. The git-fast-export text is read from standard input. ** ** The git-fast-export file format is currently the only VCS interchange ** format that is understood, though other interchange formats may be added ** in the future. ** ** The --incremental option allows an existing repository to be extended ** with new content. */ void git_import_cmd(void){ char *zPassword; FILE *pIn; Stmt q; int forceFlag = find_option("force", "f", 0)!=0; int incrFlag = find_option("incremental", "i", 0)!=0; find_option("git",0,0); /* Skip the --git option for now */ verify_all_options(); if( g.argc!=3 && g.argc!=4 ){ usage("REPOSITORY-NAME"); } if( g.argc==4 ){ pIn = fopen(g.argv[3], "rb"); }else{ pIn = stdin; fossil_binary_mode(pIn); } if( !incrFlag ){ if( forceFlag ) unlink(g.argv[2]); db_create_repository(g.argv[2]); } db_open_repository(g.argv[2]); db_open_config(0); /* The following temp-tables are used to hold information needed for ** the import. ** ** The XMARK table provides a mapping from fast-import "marks" and symbols ** into artifact ids (UUIDs - the 40-byte hex SHA1 hash of artifacts). ** Given any valid fast-import symbol, the corresponding fossil rid and ** uuid can found by searching against the xmark.tname field. ** ** The XBRANCH table maps commit marks and symbols into the branch those ** commits belong to. If xbranch.tname is a fast-import symbol for a ** checkin then xbranch.brnm is the branch that checkin is part of. ** ** The XTAG table records information about tags that need to be applied ** to various branches after the import finishes. The xtag.tcontent field ** contains the text of an artifact that will add a tag to a check-in. ** The git-fast-export file format might specify the same tag multiple ** times but only the last tag should be used. And we do not know which ** occurrence of the tag is the last until the import finishes. 
*/ db_multi_exec( "CREATE TEMP TABLE xmark(tname TEXT UNIQUE, trid INT, tuuid TEXT);" "CREATE TEMP TABLE xbranch(tname TEXT UNIQUE, brnm TEXT);" "CREATE TEMP TABLE xtag(tname TEXT UNIQUE, tcontent TEXT);" ); db_begin_transaction(); if( !incrFlag ) db_initial_setup(0, 0, 1); git_fast_import(pIn); db_prepare(&q, "SELECT tcontent FROM xtag"); while( db_step(&q)==SQLITE_ROW ){ Blob record; db_ephemeral_blob(&q, 0, &record); fast_insert_content(&record, 0, 0); import_reset(0); } db_finalize(&q); db_end_transaction(0); db_begin_transaction(); printf("Rebuilding repository meta-data...\n"); rebuild_db(0, 1, !incrFlag); verify_cancel(); db_end_transaction(0); printf("Vacuuming..."); fflush(stdout); db_multi_exec("VACUUM"); printf(" ok\n"); if( !incrFlag ){ printf("project-id: %s\n", db_get("project-code", 0)); printf("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); printf("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword); } } |
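A large part of the importer above is driven by git-fast-import "data <count>" records: read the byte count from the header line, then read exactly that many payload bytes and fail on a short read. That one step can be tried in isolation with the sketch below (names and messages are illustrative; like the code above, it handles only the counted form of "data", not the delimited form):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void){
  char zLine[1000];
  while( fgets(zLine, sizeof(zLine), stdin) ){
    if( memcmp(zLine, "data ", 5)==0 ){
      int nData = atoi(&zLine[5]);           /* byte count after "data " */
      char *aData = malloc( nData+1 );
      int got = (int)fread(aData, 1, nData, stdin);
      if( got!=nData ){
        fprintf(stderr, "short read: got %d of %d bytes\n", got, nData);
        free(aData);
        return 1;
      }
      aData[got] = 0;
      printf("payload of %d bytes\n", nData);
      free(aData);
    }
  }
  return 0;
}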
Changes to src/info.c.
︙ | ︙ | |||
47 48 49 50 51 52 53 | ** Print common information about a particular record. ** ** * The UUID ** * The record ID ** * mtime and ctime ** * who signed it */ | | > > > > > | > > > | | | | | | | | | | | | | | | | | | | | | | | | > | > > > | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | ** Print common information about a particular record. ** ** * The UUID ** * The record ID ** * mtime and ctime ** * who signed it */ void show_common_info( int rid, /* The rid for the check-in to display info for */ const char *zUuidName, /* Name of the UUID */ int showComment, /* True to show the check-in comment */ int showFamily /* True to show parents and children */ ){ Stmt q; char *zComment = 0; char *zTags; char *zDate; char *zUuid; zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); if( zUuid ){ zDate = db_text(0, "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", rid ); /* 01234567890123 */ printf("%-13s %s %s\n", zUuidName, zUuid, zDate ? zDate : ""); free(zUuid); free(zDate); } if( zUuid && showComment ){ zComment = db_text(0, "SELECT coalesce(ecomment,comment) || " " ' (user: ' || coalesce(euser,user,'?') || ')' " " FROM event WHERE objid=%d", rid ); } if( showFamily ){ db_prepare(&q, "SELECT uuid, pid FROM plink JOIN blob ON pid=rid " " WHERE cid=%d", rid); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); zDate = db_text("", "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", db_column_int(&q, 1) ); printf("parent: %s %s\n", zUuid, zDate); free(zDate); } db_finalize(&q); db_prepare(&q, "SELECT uuid, cid FROM plink JOIN blob ON cid=rid " " WHERE pid=%d", rid); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); zDate = db_text("", "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", db_column_int(&q, 1) ); printf("child: %s %s\n", zUuid, zDate); free(zDate); } db_finalize(&q); } zTags = info_tags_of_checkin(rid, 0); if( zTags && zTags[0] ){ printf("tags: %s\n", zTags); } free(zTags); if( zComment ){ printf("comment: "); comment_print(zComment, 14, 79); free(zComment); } } /* ** COMMAND: info ** ** Usage: %fossil info ?VERSION | REPOSITORY_FILENAME? ** ** With no arguments, provide information about the current tree. ** If an argument is specified, provide information about the object ** in the respository of the current tree that the argument refers ** to. Or if the argument is the name of a repository, show ** information about that repository. ** ** Use the "finfo" command to get information about a specific ** file in a checkout. */ void info_cmd(void){ i64 fsize; if( g.argc!=2 && g.argc!=3 ){ usage("?FILENAME|ARTIFACT-ID?"); } if( g.argc==3 && (fsize = file_size(g.argv[2]))>0 && (fsize&0x1ff)==0 ){ |
︙ | ︙ | |||
151 152 153 154 155 156 157 | #endif printf("project-code: %s\n", db_get("project-code", "")); printf("server-code: %s\n", db_get("server-code", "")); vid = db_lget_int("checkout", 0); if( vid==0 ){ printf("checkout: nil\n"); }else{ | | | | 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 | #endif printf("project-code: %s\n", db_get("project-code", "")); printf("server-code: %s\n", db_get("server-code", "")); vid = db_lget_int("checkout", 0); if( vid==0 ){ printf("checkout: nil\n"); }else{ show_common_info(vid, "checkout:", 1, 1); } }else{ int rid; rid = name_to_rid(g.argv[2]); if( rid==0 ){ fossil_panic("no such object: %s\n", g.argv[2]); } show_common_info(rid, "uuid:", 1, 1); } } /* ** Show information about all tags on a given node. */ static void showTags(int rid, const char *zNotGlob){ |
︙ | ︙ | |||
205 206 207 208 209 210 211 212 213 | if( tagtype==2 ){ if( zOrigUuid && zOrigUuid[0] ){ @ inherited from hyperlink_to_uuid(zOrigUuid); }else{ @ propagates to descendants } if( zValue && strcmp(zTagname,"branch")==0 ){ @ | > | > | 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 | if( tagtype==2 ){ if( zOrigUuid && zOrigUuid[0] ){ @ inherited from hyperlink_to_uuid(zOrigUuid); }else{ @ propagates to descendants } #if 0 if( zValue && strcmp(zTagname,"branch")==0 ){ @ @ <a href="%s(g.zTop)/timeline?r=%T(zValue)">branch timeline</a> } #endif } if( zSrcUuid && zSrcUuid[0] ){ if( tagtype==0 ){ @ by }else{ @ added by } |
︙ | ︙ | |||
264 265 266 267 268 269 270 | ** Write a line of web-page output that shows changes that have occurred ** to a file between two check-ins. */ static void append_file_change_line( const char *zName, /* Name of the file that has changed */ const char *zOld, /* blob.uuid before change. NULL for added files */ const char *zNew, /* blob.uuid after change. NULL for deletes */ | | > > > > | | | > > > > | 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 | ** Write a line of web-page output that shows changes that have occurred ** to a file between two check-ins. */ static void append_file_change_line( const char *zName, /* Name of the file that has changed */ const char *zOld, /* blob.uuid before change. NULL for added files */ const char *zNew, /* blob.uuid after change. NULL for deletes */ int showDiff, /* Show edit diffs if true */ int mperm /* EXE permission for zNew */ ){ if( !g.okHistory ){ if( zNew==0 ){ @ <p>Deleted %h(zName)</p> }else if( zOld==0 ){ @ <p>Added %h(zName)</p> }else if( fossil_strcmp(zNew, zOld)==0 ){ @ <p>Execute permission %s(mperm?"set":"cleared") for %h(zName)</p> }else{ @ <p>Changes to %h(zName)</p> } if( showDiff ){ @ <blockquote><pre> append_diff(zOld, zNew); @ </pre></blockquote> } }else{ if( zOld && zNew ){ if( fossil_strcmp(zOld, zNew)!=0 ){ @ <p>Modified <a href="%s(g.zTop)/finfo?name=%T(zName)">%h(zName)</a> @ from <a href="%s(g.zTop)/artifact/%s(zOld)">[%S(zOld)]</a> @ to <a href="%s(g.zTop)/artifact/%s(zNew)">[%S(zNew)].</a> }else{ @ <p>Execute permission %s(mperm?"set":"cleared") for @ <a href="%s(g.zTop)/finfo?name=%T(zName)">%h(zName)</a> } }else if( zOld ){ @ <p>Deleted <a href="%s(g.zTop)/finfo?name=%T(zName)">%h(zName)</a> @ version <a href="%s(g.zTop)/artifact/%s(zOld)">[%S(zOld)]</a> }else{ @ <p>Added <a href="%s(g.zTop)/finfo?name=%T(zName)">%h(zName)</a> @ version <a href="%s(g.zTop)/artifact/%s(zNew)">[%S(zNew)]</a> } |
︙ | ︙ | |||
324 325 326 327 328 329 330 | ** "show-version-diffs" setting is turned on. */ void ci_page(void){ Stmt q; int rid; int isLeaf; int showDiff; | | > > > > | > > > > | > > > > > > > | 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 | ** "show-version-diffs" setting is turned on. */ void ci_page(void){ Stmt q; int rid; int isLeaf; int showDiff; const char *zName; /* Name of the checkin to be displayed */ const char *zUuid; /* UUID of zName */ const char *zParent; /* UUID of the parent checkin (if any) */ login_check_credentials(); if( !g.okRead ){ login_needed(); return; } zName = P("name"); rid = name_to_rid_www("name"); if( rid==0 ){ style_header("Check-in Information Error"); @ No such object: %h(g.argv[2]) style_footer(); return; } zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zParent = db_text(0, "SELECT uuid FROM plink, blob" " WHERE plink.cid=%d AND blob.rid=plink.pid AND plink.isprim", rid ); isLeaf = is_a_leaf(rid); db_prepare(&q, "SELECT uuid, datetime(mtime, 'localtime'), user, comment," " datetime(omtime, 'localtime')" " FROM blob, event" " WHERE blob.rid=%d" " AND event.objid=%d", rid, rid ); if( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); char *zTitle = mprintf("Check-in [%.10s]", zUuid); char *zEUser, *zEComment; const char *zUser; const char *zComment; const char *zDate; const char *zOrigDate; style_header(zTitle); login_anonymous_available(); free(zTitle); zEUser = db_text(0, "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", TAG_USER, rid); zEComment = db_text(0, "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", TAG_COMMENT, rid); zUser = db_column_text(&q, 2); zComment = db_column_text(&q, 3); zDate = db_column_text(&q,1); zOrigDate = db_column_text(&q, 4); @ <div class="section">Overview</div> @ <table class="label-value"> @ <tr><th>SHA1 Hash:</th><td>%s(zUuid) if( g.okSetup ){ @ (Record ID: %d(rid)) } @ </td></tr> @ <tr><th>Date:</th><td> hyperlink_to_date(zDate, "</td></tr>"); if( zOrigDate && fossil_strcmp(zDate, zOrigDate)!=0 ){ @ <tr><th>Original Date:</th><td> hyperlink_to_date(zOrigDate, "</td></tr>"); } if( zEUser ){ @ <tr><th>Edited User:</th><td> hyperlink_to_user(zEUser,zDate,"</td></tr>"); @ <tr><th>Original User:</th><td> hyperlink_to_user(zUser,zDate,"</td></tr>"); }else{ @ <tr><th>User:</th><td> |
︙ | ︙ | |||
407 408 409 410 411 412 413 | @ <td>%h(zUser) @ %h(zIpAddr) on %s(zDate)</td></tr> } db_finalize(&q); } if( g.okHistory ){ const char *zProjName = db_get("project-name", "unnamed"); @ <tr><th>Timelines:</th><td> | > > | > > | > > | > > > > > > | | | | | | | | | | | | | | | | > > | | | | | | | | | | | > | | | | | > | 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 | @ <td>%h(zUser) @ %h(zIpAddr) on %s(zDate)</td></tr> } db_finalize(&q); } if( g.okHistory ){ const char *zProjName = db_get("project-name", "unnamed"); @ <tr><th>Timelines:</th><td> @ <a href="%s(g.zTop)/timeline?f=%S(zUuid)">family</a> if( zParent ){ @ | <a href="%s(g.zTop)/timeline?p=%S(zUuid)">ancestors</a> } if( !isLeaf ){ @ | <a href="%s(g.zTop)/timeline?d=%S(zUuid)">descendants</a> } if( zParent && !isLeaf ){ @ | <a href="%s(g.zTop)/timeline?d=%S(zUuid)&p=%S(zUuid)">both</a> } db_prepare(&q, "SELECT substr(tag.tagname,5) FROM tagxref, tag " " WHERE rid=%d AND tagtype>0 " " AND tag.tagid=tagxref.tagid " " AND +tag.tagname GLOB 'sym-*'", rid); while( db_step(&q)==SQLITE_ROW ){ const char *zTagName = db_column_text(&q, 0); @ | <a href="%s(g.zTop)/timeline?r=%T(zTagName)">%h(zTagName)</a> } db_finalize(&q); @ </td></tr> @ <tr><th>Other Links:</th> @ <td> @ <a href="%s(g.zTop)/dir?ci=%S(zUuid)">files</a> if( g.okZip ){ char *zUrl = mprintf("%s/tarball/%s-%S.tar.gz?uuid=%s", g.zTop, zProjName, zUuid, zUuid); @ | <a href="%s(zUrl)">Tarball</a> @ | <a href="%s(g.zTop)/zip/%s(zProjName)-%S(zUuid).zip?uuid=%s(zUuid)"> @ ZIP archive</a> fossil_free(zUrl); } @ | <a href="%s(g.zTop)/artifact/%S(zUuid)">manifest</a> if( g.okWrite ){ @ | <a href="%s(g.zTop)/ci_edit?r=%S(zUuid)">edit</a> } @ </td> @ </tr> } @ </table> }else{ style_header("Check-in Information"); login_anonymous_available(); } db_finalize(&q); showTags(rid, ""); if( zParent ){ @ <div class="section">Changes</div> showDiff = g.zPath[0]!='c'; if( db_get_boolean("show-version-diffs", 0)==0 ){ showDiff = !showDiff; if( showDiff ){ @ <a href="%s(g.zTop)/vinfo/%T(zName)">[hide diffs]</a> }else{ @ <a href="%s(g.zTop)/ci/%T(zName)">[show diffs]</a> } }else{ if( showDiff ){ @ <a href="%s(g.zTop)/ci/%T(zName)">[hide diffs]</a> }else{ @ <a href="%s(g.zTop)/vinfo/%T(zName)">[show diffs]</a> } } @ @ <a href="%s(g.zTop)/vpatch?from=%S(zParent)&to=%S(zUuid)">[patch]</a><br/> db_prepare(&q, "SELECT name, mperm," " (SELECT uuid FROM blob WHERE rid=mlink.pid)," " (SELECT uuid FROM blob WHERE rid=mlink.fid)" " FROM mlink JOIN filename ON filename.fnid=mlink.fnid" " WHERE mlink.mid=%d" " ORDER BY name", rid ); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q,0); int mperm = db_column_int(&q, 1); const char *zOld = db_column_text(&q,2); const char *zNew = db_column_text(&q,3); append_file_change_line(zName, zOld, zNew, showDiff, mperm); } db_finalize(&q); } style_footer(); } /* ** WEBPAGE: winfo ** URL: /winfo?name=RID ** |
︙ | ︙ | |||
528 529 530 531 532 533 534 | @ <tr><th>Record ID:</th><td>%d(rid)</td></tr> } @ <tr><th>Original User:</th><td> hyperlink_to_user(zUser, zDate, "</td></tr>"); if( g.okHistory ){ @ <tr><th>Commands:</th> @ <td> | | | | 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 | @ <tr><th>Record ID:</th><td>%d(rid)</td></tr> } @ <tr><th>Original User:</th><td> hyperlink_to_user(zUser, zDate, "</td></tr>"); if( g.okHistory ){ @ <tr><th>Commands:</th> @ <td> @ <a href="%s(g.zTop)/whistory?name=%t(zName)">history</a> @ | <a href="%s(g.zTop)/artifact/%S(zUuid)">raw-text</a> @ </td> @ </tr> } @ </table></p> }else{ style_header("Wiki Information"); rid = 0; |
︙ | ︙ | |||
656 657 658 659 660 661 662 | while( pFileFrom || pFileTo ){ int cmp; if( pFileFrom==0 ){ cmp = +1; }else if( pFileTo==0 ){ cmp = -1; }else{ | | | | > | | > | 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 | while( pFileFrom || pFileTo ){ int cmp; if( pFileFrom==0 ){ cmp = +1; }else if( pFileTo==0 ){ cmp = -1; }else{ cmp = fossil_strcmp(pFileFrom->zName, pFileTo->zName); } if( cmp<0 ){ append_file_change_line(pFileFrom->zName, pFileFrom->zUuid, 0, 0, 0); pFileFrom = manifest_file_next(pFrom, 0); }else if( cmp>0 ){ append_file_change_line(pFileTo->zName, 0, pFileTo->zUuid, 0, manifest_file_mperm(pFileTo)); pFileTo = manifest_file_next(pTo, 0); }else if( fossil_strcmp(pFileFrom->zUuid, pFileTo->zUuid)==0 ){ /* No changes */ pFileFrom = manifest_file_next(pFrom, 0); pFileTo = manifest_file_next(pTo, 0); }else{ append_file_change_line(pFileFrom->zName, pFileFrom->zUuid, pFileTo->zUuid, showDetail, manifest_file_mperm(pFileTo)); pFileFrom = manifest_file_next(pFrom, 0); pFileTo = manifest_file_next(pTo, 0); } } manifest_destroy(pFrom); manifest_destroy(pTo); |
︙ | ︙ | |||
734 735 736 737 738 739 740 | const char *zVers = db_column_text(&q, 4); if( cnt>0 ){ @ Also file }else{ @ File } if( g.okHistory ){ | | > > > > | 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 | const char *zVers = db_column_text(&q, 4); if( cnt>0 ){ @ Also file }else{ @ File } if( g.okHistory ){ @ <a href="%s(g.zTop)/finfo?name=%T(zName)">%h(zName)</a> }else{ @ %h(zName) } @ part of check-in hyperlink_to_uuid(zVers); @ - %w(zCom) by hyperlink_to_user(zUser,zDate," on"); hyperlink_to_date(zDate,"."); if( g.okHistory ){ @ <a href="%s(g.zTop)/annotate?checkin=%S(zVers)&filename=%T(zName)"> @ [annotate]</a> } cnt++; if( pDownloadName && blob_size(pDownloadName)==0 ){ blob_append(pDownloadName, zName, -1); } } db_finalize(&q); db_prepare(&q, |
︙ | ︙ | |||
769 770 771 772 773 774 775 | const char *zUser = db_column_text(&q, 2); if( cnt>0 ){ @ Also wiki page }else{ @ Wiki page } if( g.okHistory ){ | | | 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 | const char *zUser = db_column_text(&q, 2); if( cnt>0 ){ @ Also wiki page }else{ @ Wiki page } if( g.okHistory ){ @ [<a href="%s(g.zTop)/wiki?name=%t(zPagename)">%h(zPagename)</a>] }else{ @ [%h(zPagename)] } @ by hyperlink_to_user(zUser,zDate," on"); hyperlink_to_date(zDate,"."); nWiki++; |
︙ | ︙ | |||
871 872 873 874 875 876 877 | db_finalize(&q); if( cnt==0 ){ @ Control artifact. if( pDownloadName && blob_size(pDownloadName)==0 ){ blob_appendf(pDownloadName, "%.10s.txt", zUuid); } }else if( linkToView && g.okHistory ){ | | > | > | > | > > > > > > > > > > > > > > > > > > | > > | > | | | | | | | | | < < < < < < | | | | > | 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 | db_finalize(&q); if( cnt==0 ){ @ Control artifact. if( pDownloadName && blob_size(pDownloadName)==0 ){ blob_appendf(pDownloadName, "%.10s.txt", zUuid); } }else if( linkToView && g.okHistory ){ @ <a href="%s(g.zTop)/artifact/%S(zUuid)">[view]</a> } } /* ** WEBPAGE: fdiff ** URL: fdiff?v1=UUID&v2=UUID&patch ** ** Two arguments, v1 and v2, identify the files to be diffed. Show the ** difference between the two artifacts. Generate plaintext if "patch" ** is present. */ void diff_page(void){ int v1, v2; int isPatch; Blob c1, c2, diff, *pOut; char *zV1; char *zV2; login_check_credentials(); if( !g.okRead ){ login_needed(); return; } v1 = name_to_rid_www("v1"); v2 = name_to_rid_www("v2"); if( v1==0 || v2==0 ) fossil_redirect_home(); zV1 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v1); zV2 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v2); isPatch = P("patch")!=0; if( isPatch ){ pOut = cgi_output_blob(); cgi_set_content_type("text/plain"); }else{ blob_zero(&diff); pOut = &diff; } content_get(v1, &c1); content_get(v2, &c2); text_diff(&c1, &c2, pOut, 4, 1); blob_reset(&c1); blob_reset(&c2); if( !isPatch ){ style_header("Diff"); style_submenu_element("Patch", "Patch", "%s/fdiff?v1=%T&v2=%T&patch", g.zTop, P("v1"), P("v2")); @ <h2>Differences From @ Artifact <a href="%s(g.zTop)/artifact/%S(zV1)">[%S(zV1)]</a>:</h2> @ <blockquote><p> object_description(v1, 1, 0); @ </p></blockquote> @ <h2>To Artifact <a href="%s(g.zTop)/artifact/%S(zV2)">[%S(zV2)]</a>:</h2> @ <blockquote><p> object_description(v2, 1, 0); @ </p></blockquote> @ <hr /> @ <blockquote><pre> @ %h(blob_str(&diff)) @ </pre></blockquote> blob_reset(&diff); style_footer(); } } /* ** WEBPAGE: raw ** URL: /raw?name=ARTIFACTID&m=TYPE ** ** Return the uninterpreted content of an artifact. Used primarily |
︙ | ︙ | |||
954 955 956 957 958 959 960 | for(i=0; i<n; i+=16){ j = 0; zLine[0] = zHex[(i>>24)&0xf]; zLine[1] = zHex[(i>>16)&0xf]; zLine[2] = zHex[(i>>8)&0xf]; zLine[3] = zHex[i&0xf]; zLine[4] = ':'; | | | 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 | for(i=0; i<n; i+=16){ j = 0; zLine[0] = zHex[(i>>24)&0xf]; zLine[1] = zHex[(i>>16)&0xf]; zLine[2] = zHex[(i>>8)&0xf]; zLine[3] = zHex[i&0xf]; zLine[4] = ':'; sqlite3_snprintf(sizeof(zLine), zLine, "%04x: ", i); for(j=0; j<16; j++){ k = 5+j*3; zLine[k] = ' '; if( i+j<n ){ unsigned char c = x[i+j]; zLine[k+1] = zHex[c>>4]; zLine[k+2] = zHex[c&0xf]; |
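Each pass through this loop formats one 16-byte row: a four-digit hexadecimal offset, the byte values in hex, and (in the portion elided above) an ASCII column. A self-contained approximation of the same row layout:

  #include <stdio.h>
  #include <ctype.h>
  #include <string.h>

  /* Render data as 16-byte rows: "OFFS: xx xx ...  ascii" */
  static void hexdump(const unsigned char *x, int n){
    int i, j;
    for(i=0; i<n; i+=16){
      printf("%04x: ", i);
      for(j=0; j<16; j++){
        if( i+j<n ) printf("%02x ", x[i+j]);
        else        printf("   ");
      }
      printf(" ");
      for(j=0; j<16 && i+j<n; j++){
        unsigned char c = x[i+j];
        putchar(isprint(c) ? c : '.');
      }
      putchar('\n');
    }
  }

  int main(void){
    const char *z = "Fossil hexdump example\n";
    hexdump((const unsigned char*)z, (int)strlen(z));
    return 0;
  }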
︙ | ︙ | |||
1051 1052 1053 1054 1055 1056 1057 | zFilename = P("filename"); if( zFilename==0 ) return 0; cirid = name_to_rid_www("ci"); pManifest = manifest_get(cirid, CFTYPE_MANIFEST); if( pManifest==0 ) return 0; manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 | zFilename = P("filename"); if( zFilename==0 ) return 0; cirid = name_to_rid_www("ci"); pManifest = manifest_get(cirid, CFTYPE_MANIFEST); if( pManifest==0 ) return 0; manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ if( fossil_strcmp(zFilename, pFile->zName)==0 ){ int rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", pFile->zUuid); manifest_destroy(pManifest); return rid; } } return 0; } /* ** The "z" argument is a string that contains the text of a source code ** file. This routine appends that text to the HTTP reply with line numbering. ** ** zLn is the ?ln= parameter for the HTTP query. If there is an argument, ** then highlight that line number and scroll to it once the page loads. ** If there are two line numbers, highlight the range of lines. */ static void output_text_with_line_numbers( const char *z, const char *zLn ){ int iStart, iEnd; /* Start and end of region to highlight */ int n = 0; /* Current line number */ int i; /* Loop index */ int iTop = 0; /* Scroll so that this line is on top of screen. */ iStart = iEnd = atoi(zLn); if( iStart>0 ){ for(i=0; fossil_isdigit(zLn[i]); i++){} if( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ){ i++; while( zLn[i]=='.' ){ i++; } iEnd = atoi(&zLn[i]); } if( iEnd<iStart ) iEnd = iStart; iTop = iStart - 15 + (iEnd-iStart)/4; if( iTop>iStart - 2 ) iTop = iStart-2; } @ <pre> while( z[0] ){ n++; for(i=0; z[i] && z[i]!='\n'; i++){} if( n==iTop ) cgi_append_content("<span id=\"topln\">", -1); if( n==iStart ){ cgi_append_content("<div class=\"selectedText\">",-1); } cgi_printf("%6d ", n); if( i>0 ){ char *zHtml = htmlize(z, i); cgi_append_content(zHtml, -1); fossil_free(zHtml); } if( n==iStart-15 ) cgi_append_content("</span>", -1); if( n==iEnd ) cgi_append_content("</div>", -1); else cgi_append_content("\n", 1); z += i; if( z[0]=='\n' ) z++; } if( n<iEnd ) cgi_printf("</div>"); @ </pre> if( iStart ){ @ <script type="text/JavaScript"> @ /* <![CDATA[ */ @ document.getElementById('topln').scrollIntoView(true); @ /* ]]> */ @ </script> } } /* ** WEBPAGE: artifact ** URL: /artifact?name=ARTIFACTID ** URL: /artifact?ci=CHECKIN&filename=PATH ** |
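The ln= parameter accepts either a single line number or a range written as N-M, N,M or N..M; the scroll target is then placed roughly fifteen lines above the start of the highlighted range so the highlight lands comfortably on screen. The range parsing in isolation, mirroring the loop above:

  #include <stdio.h>
  #include <stdlib.h>
  #include <ctype.h>

  /* Parse "10", "10-20", "10,20" or "10..20" into a start/end pair. */
  static void parse_ln(const char *zLn, int *piStart, int *piEnd){
    int i;
    *piStart = *piEnd = atoi(zLn);
    if( *piStart<=0 ) return;
    for(i=0; isdigit((unsigned char)zLn[i]); i++){}
    if( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ){
      i++;
      while( zLn[i]=='.' ) i++;
      *piEnd = atoi(&zLn[i]);
    }
    if( *piEnd<*piStart ) *piEnd = *piStart;
  }

  int main(void){
    int iStart, iEnd;
    parse_ln("100-125", &iStart, &iEnd);
    printf("highlight %d..%d, scroll to about line %d\n",
           iStart, iEnd, iStart - 15 + (iEnd-iStart)/4);
    return 0;
  }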
︙ | ︙ | |||
1107 1108 1109 1110 1111 1112 1113 | @ <blockquote><p> blob_zero(&downloadName); object_description(rid, 0, &downloadName); style_submenu_element("Download", "Download", "%s/raw/%T?name=%s", g.zTop, blob_str(&downloadName), zUuid); zMime = mimetype_from_name(blob_str(&downloadName)); if( zMime ){ | | | > > > > > > | | | < > | < < | < | 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 | @ <blockquote><p> blob_zero(&downloadName); object_description(rid, 0, &downloadName); style_submenu_element("Download", "Download", "%s/raw/%T?name=%s", g.zTop, blob_str(&downloadName), zUuid); zMime = mimetype_from_name(blob_str(&downloadName)); if( zMime ){ if( fossil_strcmp(zMime, "text/html")==0 ){ if( P("txt") ){ style_submenu_element("Html", "Html", "%s/artifact?name=%s", g.zTop, zUuid); }else{ renderAsHtml = 1; style_submenu_element("Text", "Text", "%s/artifact?name=%s&txt=1", g.zTop, zUuid); } }else if( fossil_strcmp(zMime, "application/x-fossil-wiki")==0 ){ if( P("txt") ){ style_submenu_element("Wiki", "Wiki", "%s/artifact?name=%s", g.zTop, zUuid); }else{ renderAsWiki = 1; style_submenu_element("Text", "Text", "%s/artifact?name=%s&txt=1", g.zTop, zUuid); } } } @ </p></blockquote> @ <hr /> content_get(rid, &content); if( renderAsWiki ){ wiki_convert(&content, 0, 0); }else if( renderAsHtml ){ @ <div> cgi_append_content(blob_buffer(&content), blob_size(&content)); @ </div> }else{ style_submenu_element("Hex","Hex", "%s/hexdump?name=%s", g.zTop, zUuid); zMime = mimetype_from_content(&content); @ <blockquote> if( zMime==0 ){ const char *zLn = P("ln"); const char *z = blob_str(&content); if( zLn ){ output_text_with_line_numbers(z, zLn); }else{ @ <pre> @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ @ <img src="%s(g.zTop)/raw?name=%s(zUuid)&m=%s(zMime)"></img> }else{ @ <i>(file is %d(blob_size(&content)) bytes of binary data)</i> } @ </blockquote> } style_footer(); } /* |
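The /artifact page picks its rendering strategy from the MIME type implied by the file name, falling back to sniffing the content: HTML and Fossil-wiki artifacts get rendered views with a "Text" submenu escape hatch (the txt query parameter), images are embedded through /raw, and anything else binary is reported by size only. The dispatch reduced to its skeleton (the returned labels are descriptions, not Fossil values):

  #include <stdio.h>
  #include <string.h>

  /* Decide how an artifact should be displayed. */
  static const char *render_mode(const char *zMime, int forceText){
    if( zMime==0 || forceText ) return "plain text";
    if( strcmp(zMime, "text/html")==0 ) return "inline HTML";
    if( strcmp(zMime, "application/x-fossil-wiki")==0 ) return "wiki";
    if( strncmp(zMime, "image/", 6)==0 ) return "inline image";
    return "binary (size only)";
  }

  int main(void){
    printf("%s\n", render_mode("image/png", 0));
    printf("%s\n", render_mode("text/html", 1));   /* &txt=1 override */
    return 0;
  }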
︙ | ︙ | |||
1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 | ** ** Figure out what the artifact ID is and jump to it. */ void info_page(void){ const char *zName; Blob uuid; int rid; zName = P("name"); if( zName==0 ) fossil_redirect_home(); | > | | | | | > > > > > | > > > > > > > | > | 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 | ** ** Figure out what the artifact ID is and jump to it. */ void info_page(void){ const char *zName; Blob uuid; int rid; int rc; zName = P("name"); if( zName==0 ) fossil_redirect_home(); if( validate16(zName, strlen(zName)) ){ if( db_exists("SELECT 1 FROM ticket WHERE tkt_uuid GLOB '%q*'", zName) ){ tktview_page(); return; } if( db_exists("SELECT 1 FROM tag WHERE tagname GLOB 'event-%q*'", zName) ){ event_page(); return; } } blob_set(&uuid, zName); rc = name_to_uuid(&uuid, -1); if( rc==1 ){ style_header("No Such Object"); @ <p>No such object: %h(zName)</p> style_footer(); return; }else if( rc==2 ){ cgi_set_parameter("src","info"); ambiguous_page(); return; } zName = blob_str(&uuid); rid = db_int(0, "SELECT rid FROM blob WHERE uuid='%s'", zName); if( rid==0 ){ style_header("Broken Link"); @ <p>No such object: %h(zName)</p> style_footer(); |
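Name resolution on the /info page works by prefix: a hexadecimal name is first tried as a ticket hash prefix, then as an event tag, and only then as an ordinary artifact name, with rc==1 meaning "not found" and rc==2 meaning "ambiguous prefix". The GLOB-with-trailing-star prefix lookup can be tried on its own against any SQLite database (the obj table and hash below are stand-ins, not the Fossil schema):

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
       "CREATE TABLE obj(uuid TEXT);"
       "INSERT INTO obj VALUES('f1e2d3c4b5a6978877665544332211aabbccddee');",
       0, 0, 0);
    sqlite3_prepare_v2(db,
       "SELECT uuid FROM obj WHERE uuid GLOB ?1 || '*'", -1, &pStmt, 0);
    sqlite3_bind_text(pStmt, 1, "f1e2d3c4", -1, SQLITE_STATIC);
    if( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("prefix resolves to %s\n",
             (const char*)sqlite3_column_text(pStmt, 0));
    }
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }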
︙ | ︙ | |||
1286 1287 1288 1289 1290 1291 1292 | const char *zIdCustom /* ID of text box for custom color */ ){ static const struct SampleColors { const char *zCName; const char *zColor; } aColor[] = { { "(none)", "" }, | | < | > > > > > | | | | | | | > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | > > | | | | | | 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 | const char *zIdCustom /* ID of text box for custom color */ ){ static const struct SampleColors { const char *zCName; const char *zColor; } aColor[] = { { "(none)", "" }, { "#f2dcdc", 0 }, { "#bde5d6", 0 }, { "#a0a0a0", 0 }, { "#b0b0b0", 0 }, { "#c0c0c0", 0 }, { "#d0d0d0", 0 }, { "#e0e0e0", 0 }, { "#c0fff0", 0 }, { "#c0f0ff", 0 }, { "#d0c0ff", 0 }, { "#ffc0ff", 0 }, { "#ffc0d0", 0 }, { "#fff0c0", 0 }, { "#f0ffc0", 0 }, { "#c0ffc0", 0 }, { "#a8d3c0", 0 }, { "#a8c7d3", 0 }, { "#aaa8d3", 0 }, { "#cba8d3", 0 }, { "#d3a8bc", 0 }, { "#d3b5a8", 0 }, { "#d1d3a8", 0 }, { "#b1d3a8", 0 }, { "#8eb2a1", 0 }, { "#8ea7b2", 0 }, { "#8f8eb2", 0 }, { "#ab8eb2", 0 }, { "#b28e9e", 0 }, { "#b2988e", 0 }, { "#b0b28e", 0 }, { "#95b28e", 0 }, { "#80d6b0", 0 }, { "#80bbd6", 0 }, { "#8680d6", 0 }, { "#c680d6", 0 }, { "#d680a6", 0 }, { "#d69b80", 0 }, { "#d1d680", 0 }, { "#91d680", 0 }, { "custom", "##" }, }; int nColor = sizeof(aColor)/sizeof(aColor[0])-1; int stdClrFound = 0; int i; @ <table border="0" cellpadding="0" cellspacing="1"> if( zIdPropagate ){ @ <tr><td colspan="6" align="left"> if( fPropagate ){ @ <input type="checkbox" name="%s(zIdPropagate)" checked="checked" /> }else{ @ <input type="checkbox" name="%s(zIdPropagate)" /> } @ Propagate color to descendants</td></tr> } @ <tr> for(i=0; i<nColor; i++){ const char *zClr = aColor[i].zColor; if( zClr==0 ) zClr = aColor[i].zCName; if( zClr[0] ){ @ <td style="background-color: %h(zClr);"> }else{ @ <td> } if( fossil_strcmp(zDefaultColor, zClr)==0 ){ @ <input type="radio" name="%s(zId)" value="%h(zClr)" @ checked="checked" /> stdClrFound=1; }else{ @ <input type="radio" name="%s(zId)" value="%h(zClr)" /> } @ %h(aColor[i].zCName)</td> if( (i%8)==7 && i+1<nColor ){ @ </tr><tr> } } @ </tr><tr> if (stdClrFound){ @ <td colspan="6"> @ <input type="radio" name="%s(zId)" value="%h(aColor[nColor].zColor)" /> |
︙ | ︙ | |||
1375 1376 1377 1378 1379 1380 1381 | const char *zColor; const char *zNewColor; const char *zNewTagFlag; const char *zNewTag; const char *zNewBrFlag; const char *zNewBranch; const char *zCloseFlag; | | > | | | | | | > > > | | | < | | | > > | | | | | | | | 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 | const char *zColor; const char *zNewColor; const char *zNewTagFlag; const char *zNewTag; const char *zNewBrFlag; const char *zNewBranch; const char *zCloseFlag; int fPropagateColor; /* True if color propagates before edit */ int fNewPropagateColor; /* True if color propagates after edit */ char *zUuid; Blob comment; Stmt q; login_check_credentials(); if( !g.okWrite ){ login_needed(); return; } rid = name_to_rid(P("r")); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zComment = db_text(0, "SELECT coalesce(ecomment,comment)" " FROM event WHERE objid=%d", rid); if( zComment==0 ) fossil_redirect_home(); if( P("cancel") ){ cgi_redirectf("ci?name=%s", zUuid); } zNewComment = PD("c",zComment); zUser = db_text(0, "SELECT coalesce(euser,user)" " FROM event WHERE objid=%d", rid); if( zUser==0 ) fossil_redirect_home(); zNewUser = PDT("u",zUser); zDate = db_text(0, "SELECT datetime(mtime)" " FROM event WHERE objid=%d", rid); if( zDate==0 ) fossil_redirect_home(); zNewDate = PDT("dt",zDate); zColor = db_text("", "SELECT bgcolor" " FROM event WHERE objid=%d", rid); zNewColor = PDT("clr",zColor); if( fossil_strcmp(zNewColor,"##")==0 ){ zNewColor = PT("clrcust"); } fPropagateColor = db_int(0, "SELECT tagtype FROM tagxref" " WHERE rid=%d AND tagid=%d", rid, TAG_BGCOLOR)==2; fNewPropagateColor = P("clr")!=0 ? P("pclr")!=0 : fPropagateColor; zNewTagFlag = P("newtag") ? " checked" : ""; zNewTag = PDT("tagname",""); zNewBrFlag = P("newbr") ? " checked" : ""; zNewBranch = PDT("brname",""); zCloseFlag = P("close") ? 
" checked" : ""; if( P("apply") ){ Blob ctrl; char *zNow; int nChng = 0; login_verify_csrf_secret(); blob_zero(&ctrl); zNow = date_in_standard_format("now"); blob_appendf(&ctrl, "D %s\n", zNow); db_multi_exec("CREATE TEMP TABLE newtags(tag UNIQUE, prefix, value)"); if( zNewColor[0] && (fPropagateColor!=fNewPropagateColor || strcmp(zColor,zNewColor)!=0) ){ char *zPrefix = "+"; if( fNewPropagateColor ){ zPrefix = "*"; } db_multi_exec("REPLACE INTO newtags VALUES('bgcolor',%Q,%Q)", zPrefix, zNewColor); } if( zNewColor[0]==0 && zColor[0]!=0 ){ db_multi_exec("REPLACE INTO newtags VALUES('bgcolor','-',NULL)"); } if( fossil_strcmp(zComment,zNewComment)!=0 ){ db_multi_exec("REPLACE INTO newtags VALUES('comment','+',%Q)", zNewComment); } if( fossil_strcmp(zDate,zNewDate)!=0 ){ db_multi_exec("REPLACE INTO newtags VALUES('date','+',%Q)", zNewDate); } if( fossil_strcmp(zUser,zNewUser)!=0 ){ db_multi_exec("REPLACE INTO newtags VALUES('user','+',%Q)", zNewUser); } db_prepare(&q, "SELECT tag.tagid, tagname FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid", rid ); while( db_step(&q)==SQLITE_ROW ){ int tagid = db_column_int(&q, 0); const char *zTag = db_column_text(&q, 1); char zLabel[30]; sqlite3_snprintf(sizeof(zLabel), zLabel, "c%d", tagid); if( P(zLabel) ){ db_multi_exec("REPLACE INTO newtags VALUES(%Q,'-',NULL)", zTag); } } db_finalize(&q); if( zCloseFlag[0] ){ db_multi_exec("REPLACE INTO newtags VALUES('closed','+',NULL)"); } if( zNewTagFlag[0] && zNewTag[0] ){ db_multi_exec("REPLACE INTO newtags VALUES('sym-%q','+',NULL)", zNewTag); } if( zNewBrFlag[0] && zNewBranch[0] ){ db_multi_exec( "REPLACE INTO newtags " " SELECT tagname, '-', NULL FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagtype==2" " AND tagname GLOB 'sym-*'" " AND tag.tagid=tagxref.tagid", rid |
︙ | ︙ | |||
1500 1501 1502 1503 1504 1505 1506 | int nrid; Blob cksum; blob_appendf(&ctrl, "U %F\n", g.zLogin); md5sum_blob(&ctrl, &cksum); blob_appendf(&ctrl, "Z %b\n", &cksum); db_begin_transaction(); g.markPrivate = content_is_private(rid); | | > | 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 | int nrid; Blob cksum; blob_appendf(&ctrl, "U %F\n", g.zLogin); md5sum_blob(&ctrl, &cksum); blob_appendf(&ctrl, "Z %b\n", &cksum); db_begin_transaction(); g.markPrivate = content_is_private(rid); nrid = content_put(&ctrl); manifest_crosslink(nrid, &ctrl); assert( blob_is_reset(&ctrl) ); db_end_transaction(0); } cgi_redirectf("ci?name=%s", zUuid); } blob_zero(&comment); blob_append(&comment, zNewComment, -1); zUuid[10] = 0; |
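What gets stored here is a control artifact: a D card with the current time, one T card per row of the newtags table built in the elided section ("+" for a one-off tag, "*" for a propagating tag, "-" for a cancellation), a U card naming the editing user, and a Z card holding the MD5 of everything above it. Its shape is roughly the following; every value is invented for illustration:

  D 2011-05-19T12:00:00.000
  T *bgcolor <40-char-checkin-hash> #c0f0ff
  T +sym-stable <40-char-checkin-hash>
  U alice
  Z <md5-of-the-lines-above>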
︙ | ︙ | |||
1547 1548 1549 1550 1551 1552 1553 | @ </td></tr></table> @ </blockquote> @ <hr /> blob_reset(&suffix); } @ <p>Make changes to attributes of check-in @ [<a href="ci?name=%s(zUuid)">%s(zUuid)</a>]:</p> | | | 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 | @ </td></tr></table> @ </blockquote> @ <hr /> blob_reset(&suffix); } @ <p>Make changes to attributes of check-in @ [<a href="ci?name=%s(zUuid)">%s(zUuid)</a>]:</p> @ <form action="%s(g.zTop)/ci_edit" method="post"><div> login_insert_csrf_secret(); @ <input type="hidden" name="r" value="%S(zUuid)" /> @ <table border="0" cellspacing="10"> @ <tr><td align="right" valign="top"><b>User:</b></td> @ <td valign="top"> @ <input type="text" name="u" size="20" value="%h(zNewUser)" /> |
︙ | ︙ | |||
1569 1570 1571 1572 1573 1574 1575 | @ <tr><td align="right" valign="top"><b>Check-in Time:</b></td> @ <td valign="top"> @ <input type="text" name="dt" size="20" value="%h(zNewDate)" /> @ </td></tr> @ <tr><td align="right" valign="top"><b>Background Color:</b></td> @ <td valign="top"> | | | | 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 | @ <tr><td align="right" valign="top"><b>Check-in Time:</b></td> @ <td valign="top"> @ <input type="text" name="dt" size="20" value="%h(zNewDate)" /> @ </td></tr> @ <tr><td align="right" valign="top"><b>Background Color:</b></td> @ <td valign="top"> render_color_chooser(fNewPropagateColor, zNewColor, "pclr", "clr", "clrcust"); @ </td></tr> @ <tr><td align="right" valign="top"><b>Tags:</b></td> @ <td valign="top"> @ <input type="checkbox" name="newtag"%s(zNewTagFlag) /> @ Add the following new tag name to this check-in: @ <input type="text" style="width:15;" name="tagname" value="%h(zNewTag)" /> db_prepare(&q, "SELECT tag.tagid, tagname FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid" " ORDER BY CASE WHEN tagname GLOB 'sym-*' THEN substr(tagname,5)" " ELSE tagname END", rid ); while( db_step(&q)==SQLITE_ROW ){ int tagid = db_column_int(&q, 0); const char *zTagName = db_column_text(&q, 1); char zLabel[30]; sqlite3_snprintf(sizeof(zLabel), zLabel, "c%d", tagid); if( P(zLabel) ){ @ <br /><input type="checkbox" name="c%d(tagid)" checked="checked" /> }else{ @ <br /><input type="checkbox" name="c%d(tagid)" /> } if( strncmp(zTagName, "sym-", 4)==0 ){ @ Cancel tag <b>%h(&zTagName[4])</b> |
︙ | ︙ |
Added src/leaf.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 | /* ** Copyright (c) 2011 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to manage the "leaf" table of the ** repository. ** ** The LEAF table contains the rids for all leaves in the checkin DAG. ** A leaf is a checkin that has no children in the same branch. */ #include "config.h" #include "leaf.h" #include <assert.h> /* ** Return true if the check-in with RID=rid is a leaf. ** ** A leaf has no children in the same branch. */ int is_a_leaf(int rid){ int rc; static const char zSql[] = @ SELECT 1 FROM plink @ WHERE pid=%d @ AND coalesce((SELECT value FROM tagxref @ WHERE tagid=%d AND rid=plink.pid), 'trunk') @ =coalesce((SELECT value FROM tagxref @ WHERE tagid=%d AND rid=plink.cid), 'trunk') ; rc = db_int(0, zSql, rid, TAG_BRANCH, TAG_BRANCH); return rc==0; } /* ** Count the number of primary non-branch children for the given check-in. ** ** A primary child is one where the parent is the primary parent, not ** a merge parent. A "leaf" is a node that has zero children of any ** kind. This routine counts only primary children. ** ** A non-branch child is one which is on the same branch as the parent. */ int count_nonbranch_children(int pid){ int nNonBranch = 0; static Stmt q; static const char zSql[] = @ SELECT count(*) FROM plink @ WHERE pid=:pid AND isprim @ AND coalesce((SELECT value FROM tagxref @ WHERE tagid=%d AND rid=plink.pid), 'trunk') @ =coalesce((SELECT value FROM tagxref @ WHERE tagid=%d AND rid=plink.cid), 'trunk') ; db_static_prepare(&q, zSql, TAG_BRANCH, TAG_BRANCH); db_bind_int(&q, ":pid", pid); if( db_step(&q)==SQLITE_ROW ){ nNonBranch = db_column_int(&q, 0); } db_reset(&q); return nNonBranch; } /* ** Recompute the entire LEAF table. ** ** This can be expensive (5 seconds or so) for a really large repository. ** So it is only done for things like a rebuild. 
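The LEAF table gives pages such as the timeline a cheap way to find current leaves, and leaf_is_closed_sql() supplies the extra predicate that separates open leaves from closed ones. Combining the two, a query for all open leaves would look roughly like this (a sketch; the %s substitution of generated SQL follows the same style db_prepare() is used with elsewhere in the tree):

  Stmt q;
  char *zClosed = leaf_is_closed_sql("leaf.rid");
  db_prepare(&q,
    "SELECT blob.uuid FROM leaf JOIN blob ON blob.rid=leaf.rid"
    " WHERE NOT %s", zClosed
  );
  while( db_step(&q)==SQLITE_ROW ){
    /* db_column_text(&q, 0) is the hash of an open leaf check-in */
  }
  db_finalize(&q);
  fossil_free(zClosed);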
*/ void leaf_rebuild(void){ db_multi_exec( "DELETE FROM leaf;" "INSERT OR IGNORE INTO leaf" " SELECT cid FROM plink" " EXCEPT" " SELECT pid FROM plink" " WHERE coalesce((SELECT value FROM tagxref" " WHERE tagid=%d AND rid=plink.pid),'trunk')" " == coalesce((SELECT value FROM tagxref" " WHERE tagid=%d AND rid=plink.cid),'trunk')", TAG_BRANCH, TAG_BRANCH ); } /* ** A bag of checkins whose leaf status needs to be checked. */ static Bag needToCheck; /* ** Check to see if checkin "rid" is a leaf and either add it to the LEAF ** table if it is, or remove it if it is not. */ void leaf_check(int rid){ static Stmt checkIfLeaf; static Stmt addLeaf; static Stmt removeLeaf; int rc; db_static_prepare(&checkIfLeaf, "SELECT 1 FROM plink" " WHERE pid=:rid" " AND coalesce((SELECT value FROM tagxref" " WHERE tagid=%d AND rid=:rid),'trunk')" " == coalesce((SELECT value FROM tagxref" " WHERE tagid=%d AND rid=plink.cid),'trunk');", TAG_BRANCH, TAG_BRANCH ); db_bind_int(&checkIfLeaf, ":rid", rid); rc = db_step(&checkIfLeaf); db_reset(&checkIfLeaf); if( rc==SQLITE_ROW ){ db_static_prepare(&removeLeaf, "DELETE FROM leaf WHERE rid=:rid"); db_bind_int(&removeLeaf, ":rid", rid); db_step(&removeLeaf); db_reset(&removeLeaf); }else{ db_static_prepare(&addLeaf, "INSERT OR IGNORE INTO leaf VALUES(:rid)"); db_bind_int(&addLeaf, ":rid", rid); db_step(&addLeaf); db_reset(&addLeaf); } } /* ** Return an SQL expression (stored in memory obtained from fossil_malloc()) ** that is true if the SQL variable named "zVar" contains the rid with ** a CLOSED tag. In other words, return true if the leaf is closed. ** ** The result can be prefaced with a NOT operator to get all leaves that ** are open. */ char *leaf_is_closed_sql(const char *zVar){ return mprintf( "EXISTS(SELECT 1 FROM tagxref AS tx" " WHERE tx.rid=%s" " AND tx.tagid=%d" " AND tx.tagtype>0)", zVar, TAG_CLOSED ); } /* ** Schedule a leaf check for "rid" and its parents. */ void leaf_eventually_check(int rid){ static Stmt parentsOf; db_static_prepare(&parentsOf, "SELECT pid FROM plink WHERE cid=:rid AND pid>0" ); db_bind_int(&parentsOf, ":rid", rid); bag_insert(&needToCheck, rid); while( db_step(&parentsOf)==SQLITE_ROW ){ bag_insert(&needToCheck, db_column_int(&parentsOf, 0)); } db_reset(&parentsOf); } /* ** Do all pending leaf checks. */ void leaf_do_pending_checks(void){ int rid; for(rid=bag_first(&needToCheck); rid; rid=bag_next(&needToCheck,rid)){ leaf_check(rid); } bag_clear(&needToCheck); } |
Changes to src/login.c.
︙ | ︙ | |||
15 16 17 18 19 20 21 | ** ******************************************************************************* ** ** This file contains code for generating the login and logout screens. ** ** Notes: ** | | > > | > > | 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | ** ******************************************************************************* ** ** This file contains code for generating the login and logout screens. ** ** Notes: ** ** There are four special-case user-ids: "anonymous", "nobody", ** "developer" and "reader". ** ** The capabilities of the nobody user are available to anyone, ** regardless of whether or not they are logged in. The capabilities ** of anonymous are only available after logging in, but the login ** screen displays the password for the anonymous login, so this ** should not prevent a human user from doing so. The capabilities ** of developer and reader are inherited by any user that has the ** "v" and "u" capabilities, respectively. ** ** The nobody user has capabilities that you want spiders to have. ** The anonymous user has capabilities that you want people without ** logins to have. ** ** Of course, a sophisticated spider could easily circumvent the ** anonymous login requirement and walk the website. But that is |
︙ | ︙ | |||
44 45 46 47 48 49 50 | # if defined(__MINGW32__) || defined(_MSC_VER) # define sleep Sleep /* windows does not have sleep, but Sleep */ # endif #endif #include <time.h> /* | > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > < < < | > > > > > | < | | | | > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | # if defined(__MINGW32__) || defined(_MSC_VER) # define sleep Sleep /* windows does not have sleep, but Sleep */ # endif #endif #include <time.h> /* ** Return the login-group name. Or return 0 if this repository is ** not a member of a login-group. */ const char *login_group_name(void){ static const char *zGroup = 0; static int once = 1; if( once ){ zGroup = db_get("login-group-name", 0); once = 0; } return zGroup; } /* ** Return a path appropriate for setting a cookie. ** ** The path is g.zTop for single-repo cookies. It is "/" for ** cookies of a login-group. */ static const char *login_cookie_path(void){ if( login_group_name()==0 ){ return g.zTop; }else{ return "/"; } } /* ** Return the name of the login cookie. ** ** The login cookie name is always of the form: fossil-XXXXXXXXXXXXXXXX ** where the Xs are the first 16 characters of the login-group-code or ** of the project-code if we are not a member of any login-group. */ static char *login_cookie_name(void){ static char *zCookieName = 0; if( zCookieName==0 ){ zCookieName = db_text(0, "SELECT 'fossil-' || substr(value,1,16)" " FROM config" " WHERE name IN ('project-code','login-group-code')" " ORDER BY name;" ); } return zCookieName; } /* ** Redirect to the page specified by the "g" query parameter. ** Or if there is no "g" query parameter, redirect to the homepage. */ static void redirect_to_g(void){ const char *zGoto = P("g"); if( zGoto ){ cgi_redirect(zGoto); }else{ fossil_redirect_home(); } } /* ** The IP address of the client is stored as part of login cookies. ** But some clients are behind firewalls that shift the IP address ** with each HTTP request. To allow such (broken) clients to log in, ** extract just a prefix of the IP address. */ static char *ipPrefix(const char *zIP){ int i, j; for(i=j=0; zIP[i]; i++){ if( zIP[i]=='.' ){ j++; if( j==2 ) break; } } return mprintf("%.*s", i, zIP); } /* ** Return an abbreviated project code. The abbreviation is the first ** 16 characters of the project code. ** ** Memory is obtained from malloc. */ static char *abbreviated_project_code(const char *zFullCode){ return mprintf("%.16s", zFullCode); } /* ** Check to see if the anonymous login is valid. If it is valid, return ** the userid of the anonymous user. */ static int isValidAnonymousLogin( const char *zUsername, /* The username. 
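ipPrefix() keeps only the first two octets of the client address, so a cookie minted for 192.168.1.10 still validates when a proxy pool later presents the request from 192.168.7.22. The truncation as a stand-alone function:

  #include <stdio.h>

  /* Keep everything before the second '.' of an IPv4 address. */
  static void ip_prefix(const char *zIP, char *zOut, size_t nOut){
    int i, nDot = 0;
    for(i=0; zIP[i]; i++){
      if( zIP[i]=='.' && ++nDot==2 ) break;
    }
    snprintf(zOut, nOut, "%.*s", i, zIP);
  }

  int main(void){
    char zPrefix[64];
    ip_prefix("192.168.1.10", zPrefix, sizeof(zPrefix));
    printf("%s\n", zPrefix);   /* prints "192.168" */
    return 0;
  }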
Must be "anonymous" */ const char *zPassword /* The supplied password */ ){ const char *zCS; /* The captcha seed value */ const char *zPw; /* The correct password shown in the captcha */ int uid; /* The user ID of anonymous */ if( zUsername==0 ) return 0; if( zPassword==0 ) return 0; if( strcmp(zUsername,"anonymous")!=0 ) return 0; zCS = P("cs"); /* The "cs" parameter is the "captcha seed" */ if( zCS==0 ) return 0; zPw = captcha_decode((unsigned int)atoi(zCS)); if( fossil_stricmp(zPw, zPassword)!=0 ) return 0; uid = db_int(0, "SELECT uid FROM user WHERE login='anonymous'" " AND length(pw)>0 AND length(cap)>0"); return uid; } /* ** Make sure the accesslog table exists. Create it if it does not */ void create_accesslog_table(void){ db_multi_exec( "CREATE TABLE IF NOT EXISTS %s.accesslog(" " uname TEXT," " ipaddr TEXT," " success BOOLEAN," " mtime TIMESTAMP" ");", db_name("repository") ); } /* ** Make a record of a login attempt, if login record keeping is enabled. */ static void record_login_attempt( const char *zUsername, /* Name of user logging in */ const char *zIpAddr, /* IP address from which they logged in */ int bSuccess /* True if the attempt was a success */ ){ if( !db_get_boolean("access-log", 0) ) return; create_accesslog_table(); db_multi_exec( "INSERT INTO accesslog(uname,ipaddr,success,mtime)" "VALUES(%Q,%Q,%d,julianday('now'));", zUsername, zIpAddr, bSuccess ); } /* ** WEBPAGE: login ** WEBPAGE: logout ** WEBPAGE: my ** ** Generate the login page. |
︙ | ︙ | |||
133 134 135 136 137 138 139 140 141 142 143 144 145 146 | const char *zUsername, *zPasswd; const char *zNew1, *zNew2; const char *zAnonPw = 0; int anonFlag; char *zErrMsg = ""; int uid; /* User id loged in user */ char *zSha1Pw; login_check_credentials(); zUsername = P("u"); zPasswd = P("p"); anonFlag = P("anon")!=0; if( P("out")!=0 ){ const char *zCookieName = login_cookie_name(); | > > > | > | | | > > > > > > > > > > > > > > | | | | > > > > > > > > > > > < < | | | > > > | > > > > > > > > > > | | | > | > | | 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | const char *zUsername, *zPasswd; const char *zNew1, *zNew2; const char *zAnonPw = 0; int anonFlag; char *zErrMsg = ""; int uid; /* User id loged in user */ char *zSha1Pw; const char *zIpAddr; /* IP address of requestor */ char *zRemoteAddr; /* Abbreviated IP address of requestor */ login_check_credentials(); zUsername = P("u"); zPasswd = P("p"); anonFlag = P("anon")!=0; if( P("out")!=0 ){ /* To logout, change the cookie value to an empty string */ const char *zCookieName = login_cookie_name(); cgi_set_cookie(zCookieName, "", login_cookie_path(), -86400); redirect_to_g(); } if( g.okPassword && zPasswd && (zNew1 = P("n1"))!=0 && (zNew2 = P("n2"))!=0 ){ /* The user requests a password change */ zSha1Pw = sha1_shared_secret(zPasswd, g.zLogin, 0); if( db_int(1, "SELECT 0 FROM user" " WHERE uid=%d AND (pw=%Q OR pw=%Q)", g.userUid, zPasswd, zSha1Pw) ){ sleep(1); zErrMsg = @ <p><span class="loginError"> @ You entered an incorrect old password while attempting to change @ your password. Your password is unchanged. @ </span></p> ; }else if( fossil_strcmp(zNew1,zNew2)!=0 ){ zErrMsg = @ <p><span class="loginError"> @ The two copies of your new passwords do not match. @ Your password is unchanged. @ </span></p> ; }else{ char *zNewPw = sha1_shared_secret(zNew1, g.zLogin, 0); char *zChngPw; char *zErr; db_multi_exec( "UPDATE user SET pw=%Q WHERE uid=%d", zNewPw, g.userUid ); fossil_free(zNewPw); zChngPw = mprintf( "UPDATE user" " SET pw=shared_secret(%Q,%Q," " (SELECT value FROM config WHERE name='project-code'))" " WHERE login=%Q", zNew1, g.zLogin, g.zLogin ); if( login_group_sql(zChngPw, "<p>", "</p>\n", &zErr) ){ zErrMsg = mprintf("<span class=\"loginError\">%s</span>", zErr); fossil_free(zErr); }else{ redirect_to_g(); return; } } } zIpAddr = PD("REMOTE_ADDR","nil"); /* Complete IP address for logging */ zRemoteAddr = ipPrefix(zIpAddr); /* Abbreviated IP address */ uid = isValidAnonymousLogin(zUsername, zPasswd); if( uid>0 ){ /* Successful login as anonymous. Set a cookie that looks like ** this: ** ** HASH/TIME/anonymous ** ** Where HASH is the sha1sum of TIME/IPADDR/SECRET, in which IPADDR ** is the abbreviated IP address and SECRET is captcha-secret. 
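The anonymous cookie is self-authenticating: nothing is stored server side; the server simply re-derives SHA1("TIME/IPPREFIX/captcha-secret") when the cookie comes back, compares it with the HASH part, and checks that TIME is recent (see login_check_credentials() below). A sketch of the derivation, using OpenSSL's SHA1() as a stand-in for Fossil's internal SHA-1 helpers:

  #include <stdio.h>
  #include <string.h>
  #include <openssl/sha.h>           /* SHA1(); link with -lcrypto */

  /* Build a cookie of the form HASH/TIME/anonymous, where
  ** HASH = hex(SHA1("TIME/IPPREFIX/captcha-secret")).        */
  static void make_anon_cookie(const char *zJulianNow,  /* e.g. "2455700.5" */
                               const char *zIpPrefix,   /* e.g. "192.168"   */
                               const char *zSecret,
                               char *zCookie, size_t nCookie){
    char zPlain[256];
    unsigned char md[SHA_DIGEST_LENGTH];
    char zHex[2*SHA_DIGEST_LENGTH+1];
    int i;
    snprintf(zPlain, sizeof(zPlain), "%s/%s/%s",
             zJulianNow, zIpPrefix, zSecret);
    SHA1((const unsigned char*)zPlain, strlen(zPlain), md);
    for(i=0; i<SHA_DIGEST_LENGTH; i++) sprintf(&zHex[2*i], "%02x", md[i]);
    snprintf(zCookie, nCookie, "%s/%s/anonymous", zHex, zJulianNow);
  }

  int main(void){
    char zCookie[128];
    make_anon_cookie("2455700.5", "192.168", "example-secret",
                     zCookie, sizeof(zCookie));
    printf("%s\n", zCookie);
    return 0;
  }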
*/ char *zNow; /* Current time (julian day number) */ char *zCookie; /* The login cookie */ const char *zCookieName; /* Name of the login cookie */ Blob b; /* Blob used during cookie construction */ zCookieName = login_cookie_name(); zNow = db_text("0", "SELECT julianday('now')"); blob_init(&b, zNow, -1); blob_appendf(&b, "/%s/%s", zRemoteAddr, db_get("captcha-secret","")); sha1sum_blob(&b, &b); zCookie = sqlite3_mprintf("%s/%s/anonymous", blob_buffer(&b), zNow); blob_reset(&b); free(zNow); cgi_set_cookie(zCookieName, zCookie, login_cookie_path(), 6*3600); record_login_attempt("anonymous", zIpAddr, 1); redirect_to_g(); } if( zUsername!=0 && zPasswd!=0 && zPasswd[0]!=0 ){ /* Attempting to log in as a user other than anonymous. */ zSha1Pw = sha1_shared_secret(zPasswd, zUsername, 0); uid = db_int(0, "SELECT uid FROM user" " WHERE login=%Q" " AND length(cap)>0 AND length(pw)>0" " AND login NOT IN ('anonymous','nobody','developer','reader')" " AND (pw=%Q OR pw=%Q)", zUsername, zPasswd, zSha1Pw ); if( uid<=0 ){ sleep(1); zErrMsg = @ <p><span class="loginError"> @ You entered an unknown user or an incorrect password. @ </span></p> ; record_login_attempt(zUsername, zIpAddr, 0); }else{ /* Non-anonymous login is successful. Set a cookie of the form: ** ** HASH/PROJECT/LOGIN ** ** where HASH is a random hex number, PROJECT is either project ** code prefix, and LOGIN is the user name. */ char *zCookie; const char *zCookieName = login_cookie_name(); const char *zExpire = db_get("cookie-expire","8766"); int expires = atoi(zExpire)*3600; char *zCode = abbreviated_project_code(db_get("project-code","")); char *zHash; zHash = db_text(0, "SELECT hex(randomblob(25))"); zCookie = mprintf("%s/%s/%s", zHash, zCode, zUsername); cgi_set_cookie(zCookieName, zCookie, login_cookie_path(), expires); record_login_attempt(zUsername, zIpAddr, 1); db_multi_exec( "UPDATE user SET cookie=%Q, ipaddr=%Q, " " cexpire=julianday('now')+%d/86400.0 WHERE uid=%d", zHash, zRemoteAddr, expires, uid ); redirect_to_g(); } } style_header("Login/Logout"); @ %s(zErrMsg) @ <form action="login" method="post"> |
︙ | ︙ | |||
250 251 252 253 254 255 256 | if( g.zLogin==0 ){ zAnonPw = db_text(0, "SELECT pw FROM user" " WHERE login='anonymous'" " AND cap!=''"); } @ <tr> @ <td></td> | | > | > > > > > > > > > > > > > > > > > | 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 | if( g.zLogin==0 ){ zAnonPw = db_text(0, "SELECT pw FROM user" " WHERE login='anonymous'" " AND cap!=''"); } @ <tr> @ <td></td> @ <td><input type="submit" name="in" value="Login" @ onClick="chngAction(this.form)" /></td> @ </tr> @ </table> @ <script type="text/JavaScript"> @ document.getElementById('u').focus() @ function chngAction(form){ if( g.sslNotAvailable==0 && memcmp(g.zBaseURL,"https:",6)!=0 && db_get_boolean("https-login",0) ){ char *zSSL = mprintf("https:%s", &g.zBaseURL[5]); @ if( form.u.value!="anonymous" ){ @ form.action = "%h(zSSL)/login"; @ } } @ } @ </script> if( g.zLogin==0 ){ @ <p>Enter }else{ @ <p>You are currently logged in as <b>%h(g.zLogin)</b></p> @ <p>To change your login to a different user, enter } @ your user-id and password at the left and press the @ "Login" button. Your user name will be stored in a browser cookie. @ You must configure your web browser to accept cookies in order for @ the login to take.</p> if( db_get_boolean("self-register", 0) ){ @ <p>If you do not have an account, you can @ <a href="%s(g.zTop)/register?g=%T(P("G"))">create one</a>. } if( zAnonPw ){ unsigned int uSeed = captcha_seed(); char const *zDecoded = captcha_decode(uSeed); int bAutoCaptcha = db_get_boolean("auto-captcha", 1); char *zCaptcha = captcha_render(zDecoded); @ <p><input type="hidden" name="cs" value="%u(uSeed)" /> |
︙ | ︙ | |||
312 313 314 315 316 317 318 319 320 321 322 323 | @ <td><input type="submit" value="Change Password" /></td></tr> @ </table> @ </form> } style_footer(); } /* ** This routine examines the login cookie to see if it exists and ** and is valid. If the login cookie checks out, it then sets | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > | > < > > | | > | | | | > | | > > > > > > > > > > > | < | < < < < | | | | < < < < < | | | > | | | | | < | | > > > > > > > > > > | > | | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 | @ <td><input type="submit" value="Change Password" /></td></tr> @ </table> @ </form> } style_footer(); } /* ** Attempt to find login credentials for user zLogin on a peer repository ** with project code zCode. Transfer those credentials to the local ** repository. ** ** Return true if a transfer was made and false if not. */ static int login_transfer_credentials( const char *zLogin, /* Login we are looking for */ const char *zCode, /* Project code of peer repository */ const char *zHash, /* HASH from login cookie HASH/CODE/LOGIN */ const char *zRemoteAddr /* Request comes from here */ ){ sqlite3 *pOther = 0; /* The other repository */ sqlite3_stmt *pStmt; /* Query against the other repository */ char *zSQL; /* SQL of the query against other repo */ char *zOtherRepo; /* Filename of the other repository */ int rc; /* Result code from SQLite library functions */ int nXfer = 0; /* Number of credentials transferred */ zOtherRepo = db_text(0, "SELECT value FROM config WHERE name='peer-repo-%q'", zCode ); if( zOtherRepo==0 ) return 0; /* No such peer repository */ rc = sqlite3_open(zOtherRepo, &pOther); if( rc==SQLITE_OK ){ zSQL = mprintf( "SELECT cexpire FROM user" " WHERE cookie=%Q" " AND ipaddr=%Q" " AND login=%Q" " AND length(cap)>0" " AND length(pw)>0" " AND cexpire>julianday('now')", zHash, zRemoteAddr, zLogin ); pStmt = 0; rc = sqlite3_prepare_v2(pOther, zSQL, -1, &pStmt, 0); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ db_multi_exec( "UPDATE user SET cookie=%Q, ipaddr=%Q, cexpire=%.17g" " WHERE login=%Q", zHash, zRemoteAddr, sqlite3_column_double(pStmt, 0), zLogin ); nXfer++; } sqlite3_finalize(pStmt); } sqlite3_close(pOther); fossil_free(zOtherRepo); return nXfer; } /* ** Lookup the uid for a user with zLogin and zCookie and zRemoteAddr. ** Return 0 if not found. 
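login_check_credentials() splits the cookie value on its first two '/' characters into HASH, ARG and USER, then decides whether ARG is a timestamp (anonymous cookie) or a project code (named-user cookie). The split in isolation:

  #include <stdio.h>
  #include <string.h>

  /* Split "HASH/ARG/USER" in place on the first two '/' characters. */
  static int split_cookie(char *zCookie, char **pzHash,
                          char **pzArg, char **pzUser){
    char *zArg = strchr(zCookie, '/');
    char *zUser;
    if( zArg==0 ) return 0;
    *zArg++ = 0;
    zUser = strchr(zArg, '/');
    if( zUser==0 ) return 0;
    *zUser++ = 0;
    *pzHash = zCookie; *pzArg = zArg; *pzUser = zUser;
    return 1;
  }

  int main(void){
    char zCookie[] = "deadbeef0123/2455700.5/anonymous";
    char *zHash, *zArg, *zUser;
    if( split_cookie(zCookie, &zHash, &zArg, &zUser) ){
      printf("hash=%s arg=%s user=%s\n", zHash, zArg, zUser);
    }
    return 0;
  }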
*/ static int login_find_user( const char *zLogin, /* User name */ const char *zCookie, /* Login cookie value */ const char *zRemoteAddr /* Abbreviated IP address for valid login */ ){ int uid; if( fossil_strcmp(zLogin, "anonymous")==0 ) return 0; if( fossil_strcmp(zLogin, "nobody")==0 ) return 0; if( fossil_strcmp(zLogin, "developer")==0 ) return 0; if( fossil_strcmp(zLogin, "reader")==0 ) return 0; uid = db_int(0, "SELECT uid FROM user" " WHERE login=%Q" " AND cookie=%Q" " AND ipaddr=%Q" " AND cexpire>julianday('now')" " AND length(cap)>0" " AND length(pw)>0", zLogin, zCookie, zRemoteAddr ); return uid; } /* ** This routine examines the login cookie to see if it exists and ** and is valid. If the login cookie checks out, it then sets ** global variables appropriately. Global variables set include ** g.userUid and g.zLogin and of the g.okRead family of permission ** booleans. ** */ void login_check_credentials(void){ int uid = 0; /* User id */ const char *zCookie; /* Text of the login cookie */ const char *zIpAddr; /* Raw IP address of the requestor */ char *zRemoteAddr; /* Abbreviated IP address of the requestor */ const char *zCap = 0; /* Capability string */ /* Only run this check once. */ if( g.userUid!=0 ) return; /* If the HTTP connection is coming over 127.0.0.1 and if ** local login is disabled and if we are using HTTP and not HTTPS, ** then there is no need to check user credentials. ** ** This feature allows the "fossil ui" command to give the user ** full access rights without having to log in. */ zRemoteAddr = ipPrefix(zIpAddr = PD("REMOTE_ADDR","nil")); if( strcmp(zIpAddr, "127.0.0.1")==0 && g.useLocalauth && db_get_int("localauth",0)==0 && P("HTTPS")==0 ){ uid = db_int(0, "SELECT uid FROM user WHERE cap LIKE '%%s%%'"); g.zLogin = db_text("?", "SELECT login FROM user WHERE uid=%d", uid); zCap = "sx"; g.noPswd = 1; sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "localhost"); } /* Check the login cookie to see if it matches a known valid user. */ if( uid==0 && (zCookie = P(login_cookie_name()))!=0 ){ /* Parse the cookie value up into HASH/ARG/USER */ char *zHash = fossil_strdup(zCookie); char *zArg = 0; char *zUser = 0; int i, c; for(i=0; (c = zHash[i])!=0; i++){ if( c=='/' ){ zHash[i++] = 0; if( zArg==0 ){ zArg = &zHash[i]; }else{ zUser = &zHash[i]; break; } } } if( zUser==0 ){ /* Invalid cookie */ }else if( strcmp(zUser, "anonymous")==0 ){ /* Cookies of the form "HASH/TIME/anonymous". The TIME must not be ** too old and the sha1 hash of TIME/IPADDR/SECRET must match HASH. ** SECRET is the "captcha-secret" value in the repository. */ double rTime = atof(zArg); Blob b; blob_zero(&b); blob_appendf(&b, "%s/%s/%s", zArg, zRemoteAddr, db_get("captcha-secret","")); sha1sum_blob(&b, &b); if( fossil_strcmp(zHash, blob_str(&b))==0 ){ uid = db_int(0, "SELECT uid FROM user WHERE login='anonymous'" " AND length(cap)>0" " AND length(pw)>0" " AND %.17g+0.25>julianday('now')", rTime ); } blob_reset(&b); }else{ /* Cookies of the form "HASH/CODE/USER". Search first in the ** local user table, then the user table for project CODE if we ** are part of a login-group. */ uid = login_find_user(zUser, zHash, zRemoteAddr); if( uid==0 && login_transfer_credentials(zUser,zArg,zHash,zRemoteAddr) ){ uid = login_find_user(zUser, zHash, zRemoteAddr); if( uid ) record_login_attempt(zUser, zIpAddr, 1); } } sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "%.10s", zHash); } /* If no user found and the REMOTE_USER environment variable is set, ** the accept the value of REMOTE_USER as the user. 
*/ if( uid==0 ){ const char *zRemoteUser = P("REMOTE_USER"); |
︙ | ︙ | |||
409 410 411 412 413 414 415 | if( uid==0 ){ uid = db_int(0, "SELECT uid FROM user WHERE login='nobody'"); if( uid==0 ){ /* If there is no user "nobody", then make one up - with no privileges */ uid = -1; zCap = ""; } | | | 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 | if( uid==0 ){ uid = db_int(0, "SELECT uid FROM user WHERE login='nobody'"); if( uid==0 ){ /* If there is no user "nobody", then make one up - with no privileges */ uid = -1; zCap = ""; } sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "none"); } /* At this point, we know that uid!=0. Find the privileges associated ** with user uid. */ assert( uid!=0 ); if( zCap==0 ){ |
︙ | ︙ | |||
436 437 438 439 440 441 442 | fprintf(stderr, "# login: [%s] with capabilities [%s]\n", g.zLogin, zCap); } /* Set the global variables recording the userid and login. The ** "nobody" user is a special case in that g.zLogin==0. */ g.userUid = uid; | | | > > > > > < | | | | | > > > > > > > > | < < | 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 | fprintf(stderr, "# login: [%s] with capabilities [%s]\n", g.zLogin, zCap); } /* Set the global variables recording the userid and login. The ** "nobody" user is a special case in that g.zLogin==0. */ g.userUid = uid; if( fossil_strcmp(g.zLogin,"nobody")==0 ){ g.zLogin = 0; } /* Set the capabilities */ login_set_capabilities(zCap, 0); login_set_anon_nobody_capabilities(); } /* ** Memory of settings */ static int login_anon_once = 1; /* ** Add the default privileges of users "nobody" and "anonymous" as appropriate ** for the user g.zLogin. */ void login_set_anon_nobody_capabilities(void){ if( g.zLogin && login_anon_once ){ const char *zCap; /* All logged-in users inherit privileges from "nobody" */ zCap = db_text("", "SELECT cap FROM user WHERE login = 'nobody'"); login_set_capabilities(zCap, 0); if( fossil_strcmp(g.zLogin, "nobody")!=0 ){ /* All logged-in users inherit privileges from "anonymous" */ zCap = db_text("", "SELECT cap FROM user WHERE login = 'anonymous'"); login_set_capabilities(zCap, 0); } login_anon_once = 0; } } /* ** Flags passed into the 2nd argument of login_set_capabilities(). */ #if INTERFACE #define LOGIN_IGNORE_U 0x01 /* Ignore "u" */ #define LOGIN_IGNORE_V 0x01 /* Ignore "v" */ #endif /* ** Set the global capability flags based on a capability string. */ void login_set_capabilities(const char *zCap, unsigned flags){ int i; for(i=0; zCap[i]; i++){ switch( zCap[i] ){ case 's': g.okSetup = 1; /* Fall thru into Admin */ case 'a': g.okAdmin = g.okRdTkt = g.okWrTkt = g.okZip = g.okRdWiki = g.okWrWiki = g.okNewWiki = g.okApndWiki = g.okHistory = g.okClone = |
︙ | ︙ | |||
503 504 505 506 507 508 509 510 511 512 513 | case 'r': g.okRdTkt = 1; break; case 'n': g.okNewTkt = 1; break; case 'w': g.okWrTkt = g.okRdTkt = g.okNewTkt = g.okApndTkt = 1; break; case 'c': g.okApndTkt = 1; break; case 't': g.okTktFmt = 1; break; case 'b': g.okAttach = 1; break; /* The "u" privileges is a little different. It recursively ** inherits all privileges of the user named "reader" */ case 'u': { | > > | | > | | | 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 | case 'r': g.okRdTkt = 1; break; case 'n': g.okNewTkt = 1; break; case 'w': g.okWrTkt = g.okRdTkt = g.okNewTkt = g.okApndTkt = 1; break; case 'c': g.okApndTkt = 1; break; case 't': g.okTktFmt = 1; break; case 'b': g.okAttach = 1; break; case 'x': g.okPrivate = 1; break; /* The "u" privileges is a little different. It recursively ** inherits all privileges of the user named "reader" */ case 'u': { if( (flags & LOGIN_IGNORE_U)==0 ){ const char *zUser; zUser = db_text("", "SELECT cap FROM user WHERE login='reader'"); login_set_capabilities(zUser, flags | LOGIN_IGNORE_U); } break; } /* The "v" privileges is a little different. It recursively ** inherits all privileges of the user named "developer" */ case 'v': { if( (flags & LOGIN_IGNORE_V)==0 ){ const char *zDev; zDev = db_text("", "SELECT cap FROM user WHERE login='developer'"); login_set_capabilities(zDev, flags | LOGIN_IGNORE_V); } break; } } } } |
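Capabilities 'u' and 'v' expand into whatever the built-in "reader" and "developer" users have been granted, and the LOGIN_IGNORE_U / LOGIN_IGNORE_V flags keep that expansion from recursing if those users themselves carry 'u' or 'v'. A toy expansion showing the guard at work (the flag values, capability letters and strings below are invented):

  #include <stdio.h>
  #include <string.h>

  #define IGNORE_U 0x01
  #define IGNORE_V 0x02

  static const char *zReaderCap    = "kptwv";   /* invented */
  static const char *zDeveloperCap = "dei";     /* invented */

  /* Expand 'u' and 'v' into the letters they inherit, without looping. */
  static void expand(const char *zCap, unsigned flags, char *zOut){
    int i;
    for(i=0; zCap[i]; i++){
      switch( zCap[i] ){
        case 'u':
          if( !(flags & IGNORE_U) ) expand(zReaderCap, flags|IGNORE_U, zOut);
          break;
        case 'v':
          if( !(flags & IGNORE_V) ) expand(zDeveloperCap, flags|IGNORE_V, zOut);
          break;
        default:
          if( strchr(zOut, zCap[i])==0 ){
            size_t n = strlen(zOut);
            zOut[n] = zCap[i];
            zOut[n+1] = 0;
          }
      }
    }
  }

  int main(void){
    char zOut[64] = "";
    expand("u", 0, zOut);   /* reader's 'v' pulls in developer letters too */
    printf("effective: %s\n", zOut);
    return 0;
  }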
︙ | ︙ | |||
561 562 563 564 565 566 567 | /* case 'q': */ case 'r': rc = g.okRdTkt; break; case 's': rc = g.okSetup; break; case 't': rc = g.okTktFmt; break; /* case 'u': READER */ /* case 'v': DEVELOPER */ case 'w': rc = g.okWrTkt; break; | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 | /* case 'q': */ case 'r': rc = g.okRdTkt; break; case 's': rc = g.okSetup; break; case 't': rc = g.okTktFmt; break; /* case 'u': READER */ /* case 'v': DEVELOPER */ case 'w': rc = g.okWrTkt; break; case 'x': rc = g.okPrivate; break; /* case 'y': */ case 'z': rc = g.okZip; break; default: rc = 0; break; } } return rc; } /* ** Change the login to zUser. */ void login_as_user(const char *zUser){ char *zCap = ""; /* New capabilities */ /* Turn off all capabilities from prior logins */ g.okSetup = 0; g.okAdmin = 0; g.okDelete = 0; g.okPassword = 0; g.okQuery = 0; g.okWrite = 0; g.okRead = 0; g.okHistory = 0; g.okClone = 0; g.okRdWiki = 0; g.okNewWiki = 0; g.okApndWiki = 0; g.okWrWiki = 0; g.okRdTkt = 0; g.okNewTkt = 0; g.okApndTkt = 0; g.okWrTkt = 0; g.okAttach = 0; g.okTktFmt = 0; g.okRdAddr = 0; g.okZip = 0; g.okPrivate = 0; /* Set the global variables recording the userid and login. The ** "nobody" user is a special case in that g.zLogin==0. */ g.userUid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zUser); if( g.userUid==0 ){ zUser = 0; g.userUid = db_int(0, "SELECT uid FROM user WHERE login='nobody'"); } if( g.userUid ){ zCap = db_text("", "SELECT cap FROM user WHERE uid=%d", g.userUid); } if( fossil_strcmp(zUser,"nobody")==0 ) zUser = 0; g.zLogin = fossil_strdup(zUser); /* Set the capabilities */ login_set_capabilities(zCap, 0); login_anon_once = 1; login_set_anon_nobody_capabilities(); } /* ** Call this routine when the credential check fails. It causes ** a redirect to the "login" page. */ void login_needed(void){ const char *zUrl = PD("REQUEST_URI", "index"); |
︙ | ︙ | |||
614 615 616 617 618 619 620 | /* ** Before using the results of a form, first call this routine to verify ** that ths Anti-CSRF token is present and is valid. If the Anti-CSRF token ** is missing or is incorrect, that indicates a cross-site scripting attach ** so emits an error message and abort. */ void login_verify_csrf_secret(void){ | < | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 | /* ** Before using the results of a form, first call this routine to verify ** that ths Anti-CSRF token is present and is valid. If the Anti-CSRF token ** is missing or is incorrect, that indicates a cross-site scripting attach ** so emits an error message and abort. */ void login_verify_csrf_secret(void){ if( g.okCsrf ) return; if( fossil_strcmp(P("csrf"), g.zCsrfToken)==0 ){ g.okCsrf = 1; return; } fossil_fatal("Cross-site request forgery attempt"); } /* ** WEBPAGE: register ** ** Generate the register page. 
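A login group is held together entirely by CONFIG rows: every member stores a peer-repo-CODE and peer-name-CODE entry for each of the others (CODE being the 16-character abbreviated project code) plus a shared login-group-code, which in turn fixes the common cookie name. login_group_sql() walks the peer-repo-* rows and replays the same SQL in every member database, which is how the password change near the top of this file fans out to the whole group. A hedged usage sketch, meant to sit inside Fossil itself; the setting broadcast here is only an example:

  char *zErr = 0;
  int nErr = login_group_sql(
     "REPLACE INTO config(name,value,mtime)"
     " VALUES('cookie-expire','8766',now());",
     "<li> ", "</li>", &zErr);
  if( nErr ){
    @ <ul>%s(zErr)</ul>
    fossil_free(zErr);
  }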
** */ void register_page(void){ const char *zUsername, *zPasswd, *zConfirm, *zContact, *zCS, *zPw, *zCap; unsigned int uSeed; char const *zDecoded; char *zCaptcha; if( !db_get_boolean("self-register", 0) ){ style_header("Registration not possible"); @ <p>This project does not allow user self-registration. Please contact the @ project administrator to obtain an account.</p> style_footer(); return; } style_header("Register"); zUsername = P("u"); zPasswd = P("p"); zConfirm = P("cp"); zContact = P("c"); zCap = P("cap"); zCS = P("cs"); /* Captcha Secret */ /* Try to make any sense from user input. */ if( P("new") ){ if( zCS==0 ) fossil_redirect_home(); /* Forged request */ zPw = captcha_decode((unsigned int)atoi(zCS)); if( !(zUsername && zPasswd && zConfirm && zContact) ){ @ <p><span class="loginError"> @ All fields are obligatory. @ </span></p> }else if( strlen(zPasswd) < 6){ @ <p><span class="loginError"> @ Password too weak. @ </span></p> }else if( strcmp(zPasswd,zConfirm)!=0 ){ @ <p><span class="loginError"> @ The two copies of your new passwords do not match. @ </span></p> }else if( fossil_stricmp(zPw, zCap)!=0 ){ @ <p><span class="loginError"> @ Captcha text invalid. @ </span></p> }else{ /* This almost is stupid copy-paste of code from user.c:user_cmd(). */ Blob passwd, login, caps, contact; blob_init(&login, zUsername, -1); blob_init(&contact, zContact, -1); blob_init(&caps, db_get("default-perms", "u"), -1); blob_init(&passwd, zPasswd, -1); if( db_exists("SELECT 1 FROM user WHERE login=%B", &login) ){ /* Here lies the reason I don't use zErrMsg - it would not substitute * this %s(zUsername), or at least I don't know how to force it to.*/ @ <p><span class="loginError"> @ %s(zUsername) already exists. @ </span></p> }else{ char *zPw = sha1_shared_secret(blob_str(&passwd), blob_str(&login), 0); int uid; char *zCookie; const char *zCookieName; const char *zExpire; int expires; const char *zIpAddr; db_multi_exec( "INSERT INTO user(login,pw,cap,info)" "VALUES(%B,%Q,%B,%B)", &login, zPw, &caps, &contact ); free(zPw); /* The user is registered, now just log him in. */ uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zUsername); zCookieName = login_cookie_name(); zExpire = db_get("cookie-expire","8766"); expires = atoi(zExpire)*3600; zIpAddr = PD("REMOTE_ADDR","nil"); zCookie = db_text(0, "SELECT '%d/' || hex(randomblob(25))", uid); cgi_set_cookie(zCookieName, zCookie, login_cookie_path(), expires); record_login_attempt(zUsername, zIpAddr, 1); db_multi_exec( "UPDATE user SET cookie=%Q, ipaddr=%Q, " " cexpire=julianday('now')+%d/86400.0 WHERE uid=%d", zCookie, ipPrefix(zIpAddr), expires, uid ); redirect_to_g(); } } } /* Prepare the captcha. */ uSeed = captcha_seed(); zDecoded = captcha_decode(uSeed); zCaptcha = captcha_render(zDecoded); /* Print out the registration form. 
*/ @ <form action="register" method="post"> if( P("g") ){ @ <input type="hidden" name="g" value="%h(P("g"))" /> } @ <p><input type="hidden" name="cs" value="%u(uSeed)" /> @ <table class="login_out"> @ <tr> @ <td class="login_out_label" align="right">User ID:</td> @ <td><input type="text" id="u" name="u" value="" size="30" /></td> @ </tr> @ <tr> @ <td class="login_out_label" align="right">Password:</td> @ <td><input type="password" id="p" name="p" value="" size="30" /></td> @ </tr> @ <tr> @ <td class="login_out_label" align="right">Confirm password:</td> @ <td><input type="password" id="cp" name="cp" value="" size="30" /></td> @ </tr> @ <tr> @ <td class="login_out_label" align="right">Contact info:</td> @ <td><input type="text" id="c" name="c" value="" size="30" /></td> @ </tr> @ <tr> @ <td class="login_out_label" align="right">Captcha text (below):</td> @ <td><input type="text" id="cap" name="cap" value="" size="30" /></td> @ </tr> @ <tr><td></td> @ <td><input type="submit" name="new" value="Register" /></td></tr> @ </table> @ <div class="captcha"><table class="captcha"><tr><td><pre> @ %s(zCaptcha) @ </pre></td></tr></table> @ </form> style_footer(); free(zCaptcha); } /* ** Run SQL on the repository database for every repository in our ** login group. The SQL is run in a separate database connection. ** ** Any members of the login group whose repository database file ** cannot be found is silently removed from the group. ** ** Error messages accumulate and are returned in *pzErrorMsg. The ** memory used to hold these messages should be freed using ** fossil_free() if one desired to avoid a memory leak. The ** zPrefix and zSuffix strings surround each error message. ** ** Return the number of errors. */ int login_group_sql( const char *zSql, /* The SQL to run */ const char *zPrefix, /* Prefix to each error message */ const char *zSuffix, /* Suffix to each error message */ char **pzErrorMsg /* Write error message here, if not NULL */ ){ sqlite3 *pPeer; /* Connection to another database */ int nErr = 0; /* Number of errors seen so far */ int rc; /* Result code from subroutine calls */ char *zErr; /* SQLite error text */ char *zSelfCode; /* Project code for ourself */ Blob err; /* Accumulate errors here */ Stmt q; /* Query of all peer-* entries in CONFIG */ if( zPrefix==0 ) zPrefix = ""; if( zSuffix==0 ) zSuffix = ""; if( pzErrorMsg ) *pzErrorMsg = 0; zSelfCode = abbreviated_project_code(db_get("project-code", "x")); blob_zero(&err); db_prepare(&q, "SELECT name, value FROM config" " WHERE name GLOB 'peer-repo-*'" " AND name <> 'peer-repo-%q'" " ORDER BY +value", zSelfCode ); while( db_step(&q)==SQLITE_ROW ){ const char *zRepoName = db_column_text(&q, 1); if( file_size(zRepoName)<0 ){ /* Silently remove non-existant repositories from the login group. 
*/ const char *zLabel = db_column_text(&q, 0); db_multi_exec( "DELETE FROM config WHERE name GLOB 'peer-*-%q'", &zLabel[10] ); continue; } rc = sqlite3_open_v2(zRepoName, &pPeer, SQLITE_OPEN_READWRITE, 0); if( rc!=SQLITE_OK ){ blob_appendf(&err, "%s%s: %s%s", zPrefix, zRepoName, sqlite3_errmsg(pPeer), zSuffix); nErr++; sqlite3_close(pPeer); continue; } sqlite3_create_function(pPeer, "shared_secret", 3, SQLITE_UTF8, 0, sha1_shared_secret_sql_function, 0, 0); zErr = 0; rc = sqlite3_exec(pPeer, zSql, 0, 0, &zErr); if( zErr ){ blob_appendf(&err, "%s%s: %s%s", zPrefix, zRepoName, zErr, zSuffix); sqlite3_free(zErr); nErr++; }else if( rc!=SQLITE_OK ){ blob_appendf(&err, "%s%s: %s%s", zPrefix, zRepoName, sqlite3_errmsg(pPeer), zSuffix); nErr++; } sqlite3_close(pPeer); } db_finalize(&q); if( pzErrorMsg && blob_size(&err)>0 ){ *pzErrorMsg = fossil_strdup(blob_str(&err)); } blob_reset(&err); fossil_free(zSelfCode); return nErr; } /* ** Attempt to join a login-group. ** ** If problems arise, leave an error message in *pzErrMsg. */ void login_group_join( const char *zRepo, /* Repository file in the login group */ const char *zLogin, /* Login name for the other repo */ const char *zPassword, /* Password to prove we are authorized to join */ const char *zNewName, /* Name of new login group if making a new one */ char **pzErrMsg /* Leave an error message here */ ){ Blob fullName; /* Blob for finding full pathnames */ sqlite3 *pOther; /* The other repository */ int rc; /* Return code from sqlite3 functions */ char *zOtherProjCode; /* Project code for pOther */ char *zPwHash; /* Password hash on pOther */ char *zSelfRepo; /* Name of our repository */ char *zSelfLabel; /* Project-name for our repository */ char *zSelfProjCode; /* Our project-code */ char *zSql; /* SQL to run on all peers */ const char *zSelf; /* The ATTACH name of our repository */ *pzErrMsg = 0; /* Default to no errors */ zSelf = db_name("repository"); /* Get the full pathname of the other repository */ file_canonical_name(zRepo, &fullName); zRepo = mprintf(blob_str(&fullName)); blob_reset(&fullName); /* Get the full pathname for our repository. Also the project code ** and project name for ourself. */ file_canonical_name(g.zRepositoryName, &fullName); zSelfRepo = mprintf(blob_str(&fullName)); blob_reset(&fullName); zSelfProjCode = db_get("project-code", "unknown"); zSelfLabel = db_get("project-name", 0); if( zSelfLabel==0 ){ zSelfLabel = zSelfProjCode; } /* Make sure we are not trying to join ourselves */ if( strcmp(zRepo, zSelfRepo)==0 ){ *pzErrMsg = mprintf("The \"other\" repository is the same as this one."); return; } /* Make sure the other repository is a valid Fossil database */ if( file_size(zRepo)<0 ){ *pzErrMsg = mprintf("repository file \"%s\" does not exist", zRepo); return; } rc = sqlite3_open(zRepo, &pOther); if( rc!=SQLITE_OK ){ *pzErrMsg = mprintf(sqlite3_errmsg(pOther)); }else{ rc = sqlite3_exec(pOther, "SELECT count(*) FROM user", 0, 0, pzErrMsg); } sqlite3_close(pOther); if( rc ) return; /* Attach the other respository. Make sure the username/password is ** valid and has Setup permission. 
*/ db_multi_exec("ATTACH %Q AS other", zRepo); zOtherProjCode = db_text("x", "SELECT value FROM other.config" " WHERE name='project-code'"); zPwHash = sha1_shared_secret(zPassword, zLogin, zOtherProjCode); if( !db_exists( "SELECT 1 FROM other.user" " WHERE login=%Q AND cap GLOB '*s*'" " AND (pw=%Q OR pw=%Q)", zLogin, zPassword, zPwHash) ){ db_multi_exec("DETACH other"); *pzErrMsg = "The supplied username/password does not correspond to a" " user Setup permission on the other repository."; return; } /* Create all the necessary CONFIG table entries on both the ** other repository and on our own repository. */ zSelfProjCode = abbreviated_project_code(zSelfProjCode); zOtherProjCode = abbreviated_project_code(zOtherProjCode); db_begin_transaction(); db_multi_exec( "DELETE FROM %s.config WHERE name GLOB 'peer-*';" "INSERT INTO %s.config(name,value) VALUES('peer-repo-%s',%Q);" "INSERT INTO %s.config(name,value) " " SELECT 'peer-name-%q', value FROM other.config" " WHERE name='project-name';", zSelf, zSelf, zOtherProjCode, zRepo, zSelf, zOtherProjCode ); db_multi_exec( "INSERT OR IGNORE INTO other.config(name,value)" " VALUES('login-group-name',%Q);" "INSERT OR IGNORE INTO other.config(name,value)" " VALUES('login-group-code',lower(hex(randomblob(8))));", zNewName ); db_multi_exec( "REPLACE INTO %s.config(name,value)" " SELECT name, value FROM other.config" " WHERE name GLOB 'peer-*' OR name GLOB 'login-group-*'", zSelf ); db_end_transaction(0); db_multi_exec("DETACH other"); /* Propagate the changes to all other members of the login-group */ zSql = mprintf( "BEGIN;" "REPLACE INTO config(name,value,mtime) VALUES('peer-name-%q',%Q,now());" "REPLACE INTO config(name,value,mtime) VALUES('peer-repo-%q',%Q,now());" "COMMIT;", zSelfProjCode, zSelfLabel, zSelfProjCode, zSelfRepo ); login_group_sql(zSql, "<li> ", "</li>", pzErrMsg); fossil_free(zSql); } /* ** Leave the login group that we are currently part of. */ void login_group_leave(char **pzErrMsg){ char *zProjCode; char *zSql; *pzErrMsg = 0; zProjCode = abbreviated_project_code(db_get("project-code","x")); zSql = mprintf( "DELETE FROM config WHERE name GLOB 'peer-*-%q';" "DELETE FROM config" " WHERE name='login-group-name'" " AND (SELECT count(*) FROM config WHERE name GLOB 'peer-*')==0;", zProjCode ); fossil_free(zProjCode); login_group_sql(zSql, "<li> ", "</li>", pzErrMsg); fossil_free(zSql); db_multi_exec( "DELETE FROM config " " WHERE name GLOB 'peer-*'" " OR name GLOB 'login-group-*';" ); } |
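(Aside, not part of this check-in: a minimal sketch of how a caller might use login_group_sql() to push a change to every peer repository and report failures. The SQL statement and the "index-page" value are hypothetical; the function signature and the fossil_warning()/fossil_free() helpers come from the code in this check-in.)

    /* Hypothetical helper, sketch only: propagate a settings change to every
    ** peer repository in the login group and report any failures. */
    static void push_setting_to_peers(void){
      char *zErrMsg = 0;
      int nErr = login_group_sql(
        "REPLACE INTO config(name,value,mtime)"
        " VALUES('index-page','/timeline',now());",   /* hypothetical statement */
        "<li> ", "</li>",
        &zErrMsg
      );
      if( nErr>0 ){
        fossil_warning("%d login-group peer(s) could not be updated:\n%s",
                       nErr, zErrMsg);
        fossil_free(zErrMsg);  /* error text is freed with fossil_free(), per the comment above */
      }
    }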
Changes to src/main.c.
︙
50 51 52 53 54 55 56 | struct Global { int argc; char **argv; /* Command-line arguments to the program */ int isConst; /* True if the output is unchanging */ sqlite3 *db; /* The connection to the databases */ sqlite3 *dbConfig; /* Separate connection for global_config table */ int useAttach; /* True if global_config is attached to repository */ int configOpen; /* True if the config database is open */ | | | | > > | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 | struct Global { int argc; char **argv; /* Command-line arguments to the program */ int isConst; /* True if the output is unchanging */ sqlite3 *db; /* The connection to the databases */ sqlite3 *dbConfig; /* Separate connection for global_config table */ int useAttach; /* True if global_config is attached to repository */ int configOpen; /* True if the config database is open */ sqlite3_int64 now; /* Seconds since 1970 */ int repositoryOpen; /* True if the main repository database is open */ char *zRepositoryName; /* Name of the repository database */ const char *zMainDbType;/* "configdb", "localdb", or "repository" */ const char *zHome; /* Name of user home directory */ int localOpen; /* True if the local database is open */ char *zLocalRoot; /* The directory holding the local database */ int minPrefix; /* Number of digits needed for a distinct UUID */ int fSqlTrace; /* True if --sqltrace flag is present */ int fSqlStats; /* True if --sqltrace or --sqlstats are present */ int fSqlPrint; /* True if -sqlprint flag is present */ int fQuiet; /* True if -quiet flag is present */ int fHttpTrace; /* Trace outbound HTTP requests */ int fNoSync; /* Do not do an autosync even. --nosync */ char *zPath; /* Name of webpage being served */ char *zExtra; /* Extra path information past the webpage name */ char *zBaseURL; /* Full text of the URL being served */ char *zTop; /* Parent directory of zPath */ const char *zContentType; /* The content type of the input HTTP request */ int iErrPriority; /* Priority of current error message */ char *zErrMsg; /* Text of an error message */ int sslNotAvailable; /* SSL is not available. Do not redirect to https: */ Blob cgiIn; /* Input to an xfer www method */ int cgiOutput; /* Write error and status messages to CGI */ int xferPanic; /* Write error messages in XFER protocol */ int fullHttpReply; /* True for full HTTP reply. False for CGI reply */ Th_Interp *interp; /* The TH1 interpreter */ FILE *httpIn; /* Accept HTTP input from here */ FILE *httpOut; /* Send HTTP output here */ |
︙
100 101 102 103 104 105 106 107 108 109 110 111 112 113 | char *urlPasswd; /* Password for http: */ char *urlCanonical; /* Canonical representation of the URL */ char *urlProxyAuth; /* Proxy-Authorizer: string */ char *urlFossil; /* The path of the ?fossil=path suffix on ssh: */ int dontKeepUrl; /* Do not persist the URL */ const char *zLogin; /* Login name. "" if not logged in. */ int noPswd; /* Logged in without password (on 127.0.0.1) */ int userUid; /* Integer user id */ /* Information used to populate the RCVFROM table */ int rcvid; /* The rcvid. 0 if not yet defined. */ char *zIpAddr; /* The remote IP address */ char *zNonce; /* The nonce used for login */ | > | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | char *urlPasswd; /* Password for http: */ char *urlCanonical; /* Canonical representation of the URL */ char *urlProxyAuth; /* Proxy-Authorizer: string */ char *urlFossil; /* The path of the ?fossil=path suffix on ssh: */ int dontKeepUrl; /* Do not persist the URL */ const char *zLogin; /* Login name. "" if not logged in. */ int useLocalauth; /* No login required if from 127.0.0.1 */ int noPswd; /* Logged in without password (on 127.0.0.1) */ int userUid; /* Integer user id */ /* Information used to populate the RCVFROM table */ int rcvid; /* The rcvid. 0 if not yet defined. */ char *zIpAddr; /* The remote IP address */ char *zNonce; /* The nonce used for login */ |
︙
130 131 132 133 134 135 136 137 138 139 140 141 142 143 | int okNewTkt; /* n: create new tickets */ int okApndTkt; /* c: append to tickets via the web */ int okWrTkt; /* w: make changes to tickets via web */ int okAttach; /* b: add attachments */ int okTktFmt; /* t: create new ticket report formats */ int okRdAddr; /* e: read email addresses or other private data */ int okZip; /* z: download zipped artifact via /zip URL */ /* For defense against Cross-site Request Forgery attacks */ char zCsrfToken[12]; /* Value of the anti-CSRF token */ int okCsrf; /* Anti-CSRF token is present and valid */ FILE *fDebug; /* Write debug information here, if the file exists */ int thTrace; /* True to enable TH1 debugging output */ | > | 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 | int okNewTkt; /* n: create new tickets */ int okApndTkt; /* c: append to tickets via the web */ int okWrTkt; /* w: make changes to tickets via web */ int okAttach; /* b: add attachments */ int okTktFmt; /* t: create new ticket report formats */ int okRdAddr; /* e: read email addresses or other private data */ int okZip; /* z: download zipped artifact via /zip URL */ int okPrivate; /* x: can send and receive private content */ /* For defense against Cross-site Request Forgery attacks */ char zCsrfToken[12]; /* Value of the anti-CSRF token */ int okCsrf; /* Anti-CSRF token is present and valid */ FILE *fDebug; /* Write debug information here, if the file exists */ int thTrace; /* True to enable TH1 debugging output */ |
︙
237 238 239 240 241 242 243 244 245 246 | "\"%s help\" for a list of available commands\n" "\"%s help COMMAND\" for specific details\n", argv[0], argv[0], argv[0]); fossil_exit(1); }else{ g.fQuiet = find_option("quiet", 0, 0)!=0; g.fSqlTrace = find_option("sqltrace", 0, 0)!=0; g.fSqlPrint = find_option("sqlprint", 0, 0)!=0; g.fHttpTrace = find_option("httptrace", 0, 0)!=0; g.zLogin = find_option("user", "U", 1); | > > > > > > > > > > > > > > | > > > > > > > > > > | > > > > > > > > > > > | | | | 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 | "\"%s help\" for a list of available commands\n" "\"%s help COMMAND\" for specific details\n", argv[0], argv[0], argv[0]); fossil_exit(1); }else{ g.fQuiet = find_option("quiet", 0, 0)!=0; g.fSqlTrace = find_option("sqltrace", 0, 0)!=0; g.fSqlStats = find_option("sqlstats", 0, 0)!=0; if( g.fSqlTrace ) g.fSqlStats = 1; g.fSqlPrint = find_option("sqlprint", 0, 0)!=0; g.fHttpTrace = find_option("httptrace", 0, 0)!=0; g.zLogin = find_option("user", "U", 1); if( find_option("help",0,0)!=0 ){ /* --help anywhere on the command line is translated into ** "fossil help argv[1] argv[2]..." */ int i; char **zNewArgv = fossil_malloc( sizeof(char*)*(g.argc+2) ); for(i=1; i<g.argc; i++) zNewArgv[i+1] = argv[i]; zNewArgv[i+1] = 0; zNewArgv[0] = argv[0]; zNewArgv[1] = "help"; g.argc++; g.argv = zNewArgv; } zCmdName = g.argv[1]; } rc = name_search(zCmdName, aCommand, count(aCommand), &idx); if( rc==1 ){ fprintf(stderr,"%s: unknown command: %s\n" "%s: use \"help\" for more information\n", argv[0], zCmdName, argv[0]); fossil_exit(1); }else if( rc==2 ){ int i, n; Blob couldbe; blob_zero(&couldbe); n = strlen(zCmdName); for(i=0; i<count(aCommand); i++){ if( memcmp(zCmdName, aCommand[i].zName, n)==0 ){ blob_appendf(&couldbe, " %s", aCommand[i].zName); } } fprintf(stderr,"%s: ambiguous command prefix: %s\n" "%s: could be any of:%s\n" "%s: use \"help\" for more information\n", argv[0], zCmdName, argv[0], blob_str(&couldbe), argv[0]); fossil_exit(1); } aCommand[idx].xFunc(); fossil_exit(0); /*NOT_REACHED*/ return 0; } /* ** The following variable becomes true while processing a fatal error ** or a panic. If additional "recursive-fatal" errors occur while ** shutting down, the recursive errors are silently ignored. */ static int mainInFatalError = 0; /* ** Return the name of the current executable. */ const char *fossil_nameofexe(void){ #ifdef _WIN32 return _pgmptr; #else return g.argv[0]; #endif } /* ** Exit. Take care to close the database first. */ void fossil_exit(int rc){ db_close(1); exit(rc); } /* ** Print an error message, rollback all databases, and quit. These ** routines never return. 
*/ void fossil_panic(const char *zFormat, ...){ char *z; va_list ap; static int once = 1; mainInFatalError = 1; va_start(ap, zFormat); z = vmprintf(zFormat, ap); va_end(ap); if( g.cgiOutput && once ){ once = 0; cgi_printf("<p class=\"generalError\">%h</p>", z); cgi_reply(); }else{ fprintf(stderr, "%s: %s\n", fossil_nameofexe(), z); } db_force_rollback(); fossil_exit(1); } void fossil_fatal(const char *zFormat, ...){ char *z; va_list ap; mainInFatalError = 1; va_start(ap, zFormat); z = vmprintf(zFormat, ap); va_end(ap); if( g.cgiOutput ){ g.cgiOutput = 0; cgi_printf("<p class=\"generalError\">%h</p>", z); cgi_reply(); }else{ fprintf(stderr, "\r%s: %s\n", fossil_nameofexe(), z); } db_force_rollback(); fossil_exit(1); } /* This routine works like fossil_fatal() except that if called ** recursively, the recursive call is a no-op. |
︙
337 338 339 340 341 342 343 | z = vmprintf(zFormat, ap); va_end(ap); if( g.cgiOutput ){ g.cgiOutput = 0; cgi_printf("<p class=\"generalError\">%h</p>", z); cgi_reply(); }else{ | | | | | 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 | z = vmprintf(zFormat, ap); va_end(ap); if( g.cgiOutput ){ g.cgiOutput = 0; cgi_printf("<p class=\"generalError\">%h</p>", z); cgi_reply(); }else{ fprintf(stderr, "\r%s: %s\n", fossil_nameofexe(), z); } db_force_rollback(); fossil_exit(1); } /* Print a warning message */ void fossil_warning(const char *zFormat, ...){ char *z; va_list ap; va_start(ap, zFormat); z = vmprintf(zFormat, ap); va_end(ap); if( g.cgiOutput ){ cgi_printf("<p class=\"generalError\">%h</p>", z); }else{ fprintf(stderr, "\r%s: %s\n", fossil_nameofexe(), z); } } /* ** Malloc and free routines that cannot fail */ void *fossil_malloc(size_t n){ void *p = malloc(n==0 ? 1 : n); if( p==0 ) fossil_panic("out of memory"); return p; } void fossil_free(void *p){ free(p); } void *fossil_realloc(void *p, size_t n){ |
︙
394 395 396 397 398 399 400 401 402 403 404 405 406 407 | #else /* On unix, evaluate the command directly. */ rc = system(zOrigCmd); #endif return rc; } /* ** Return a name for an SQLite error code */ static const char *sqlite_error_code_name(int iCode){ | > > > > > > > > > > > > > > > > > > > > > > > > > | 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 | #else /* On unix, evaluate the command directly. */ rc = system(zOrigCmd); #endif return rc; } /* ** Like strcmp() except that it accepts NULL pointers. NULL sorts before ** all non-NULL string pointers. */ int fossil_strcmp(const char *zA, const char *zB){ if( zA==0 ){ if( zB==0 ) return 0; return -1; }else if( zB==0 ){ return +1; }else{ return strcmp(zA,zB); } } /* ** Turn off any NL to CRNL translation on the stream given as an ** argument. This is a no-op on unix but is necessary on windows. */ void fossil_binary_mode(FILE *p){ #if defined(_WIN32) _setmode(_fileno(p), _O_BINARY); #endif } /* ** Return a name for an SQLite error code */ static const char *sqlite_error_code_name(int iCode){ |
︙
441 442 443 444 445 446 447 | fossil_warning("%s: %s", sqlite_error_code_name(iCode), zErrmsg); } /* ** Print a usage comment and quit */ void usage(const char *zFormat){ | | | 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 | fossil_warning("%s: %s", sqlite_error_code_name(iCode), zErrmsg); } /* ** Print a usage comment and quit */ void usage(const char *zFormat){ fprintf(stderr, "Usage: %s %s %s\n", fossil_nameofexe(), g.argv[1], zFormat); fossil_exit(1); } /* ** Remove n elements from g.argv beginning with the i-th element. */ void remove_from_argv(int i, int n){ |
︙
471 472 473 474 475 476 477 | */ const char *find_option(const char *zLong, const char *zShort, int hasArg){ int i; int nLong; const char *zReturn = 0; assert( hasArg==0 || hasArg==1 ); nLong = strlen(zLong); | | | 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 | */ const char *find_option(const char *zLong, const char *zShort, int hasArg){ int i; int nLong; const char *zReturn = 0; assert( hasArg==0 || hasArg==1 ); nLong = strlen(zLong); for(i=1; i<g.argc; i++){ char *z; if (i+hasArg >= g.argc) break; z = g.argv[i]; if( z[0]!='-' ) continue; z++; if( z[0]=='-' ){ if( z[1]==0 ){ |
︙
494 495 496 497 498 499 500 | remove_from_argv(i, 1); break; }else if( z[nLong]==0 ){ zReturn = g.argv[i+hasArg]; remove_from_argv(i, 1+hasArg); break; } | | | 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 | remove_from_argv(i, 1); break; }else if( z[nLong]==0 ){ zReturn = g.argv[i+hasArg]; remove_from_argv(i, 1+hasArg); break; } }else if( fossil_strcmp(z,zShort)==0 ){ zReturn = g.argv[i+hasArg]; remove_from_argv(i, 1+hasArg); break; } } return zReturn; } |
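(Aside, not from the check-in: the usual pattern for consuming find_option() in a command handler. The "verbose" option and cmd_example() are invented for illustration; "port"/"P" matches the option parsed by cmd_webserver() further down.)

    /* Sketch of a hypothetical command handler built on the helpers above. */
    void cmd_example(void){
      const char *zPort = find_option("port", "P", 1);    /* option that takes a value; may be NULL */
      int fVerbose = find_option("verbose", "v", 0)!=0;   /* boolean option (hypothetical) */
      /* find_option() removes matched options from g.argv, so g.argc now
      ** counts only the command name and positional arguments. */
      if( g.argc!=3 ) usage("REPOSITORY");
      if( fVerbose ) printf("port: %s\n", zPort ? zPort : "(default)");
    }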
︙
543 544 545 546 547 548 549 | zSpacer = " "; } printf("\n"); } } /* | | < < < | > > | | < | 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 | zSpacer = " "; } printf("\n"); } } /* ** List of commands starting with zPrefix, or all commands if zPrefix is NULL. */ static void cmd_cmd_list(const char *zPrefix){ int i, nCmd; int nPrefix = zPrefix ? strlen(zPrefix) : 0; const char *aCmd[count(aCommand)]; for(i=nCmd=0; i<count(aCommand); i++){ const char *z = aCommand[i].zName; if( memcmp(z,"test",4)==0 ) continue; if( zPrefix && memcmp(zPrefix, z, nPrefix)!=0 ) continue; aCmd[nCmd++] = aCommand[i].zName; } multi_column_list(aCmd, nCmd); } /* ** COMMAND: test-commands ** ** Usage: %fossil test-commands ** ** List all commands used for testing and debugging. */ void cmd_test_cmd_list(void){ int i, nCmd; const char *aCmd[count(aCommand)]; for(i=nCmd=0; i<count(aCommand); i++){ if( strncmp(aCommand[i].zName,"test",4)!=0 ) continue; aCmd[nCmd++] = aCommand[i].zName; } multi_column_list(aCmd, nCmd); } /* |
︙
600 601 602 603 604 605 606 | ** Usage: %fossil help COMMAND ** ** Display information on how to use COMMAND */ void help_cmd(void){ int rc, idx; const char *z; | | | > | | > > | > > > | | | | > | | | | > | | | | | | | | | | | | | | | | | | | | > > | | | | < < | | | | > | > | > > > | | > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > | | | | | > > > > | | | 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 | ** Usage: %fossil help COMMAND ** ** Display information on how to use COMMAND */ void help_cmd(void){ int rc, idx; const char *z; if( g.argc<3 ){ printf("Usage: %s help COMMAND.\nAvailable COMMANDs:\n", fossil_nameofexe()); cmd_cmd_list(0); version_cmd(); return; } rc = name_search(g.argv[2], aCommand, count(aCommand), &idx); if( rc==1 ){ fossil_print("unknown command: %s\nAvailable commands:\n", g.argv[2]); cmd_cmd_list(0); fossil_exit(1); }else if( rc==2 ){ fossil_print("ambiguous command prefix: %s\nMatching commands:\n", g.argv[2]); cmd_cmd_list(g.argv[2]); fossil_exit(1); } z = aCmdHelp[idx]; if( z==0 ){ fossil_fatal("no help available for the %s command", aCommand[idx].zName); } while( *z ){ if( *z=='%' && strncmp(z, "%fossil", 7)==0 ){ printf("%s", fossil_nameofexe()); z += 7; }else{ putchar(*z); z++; } } putchar('\n'); } /* ** WEBPAGE: help ** URL: /help/CMD */ void help_page(void){ const char * zCmd = P("cmd"); if( zCmd==0 ) zCmd = P("name"); style_header("Command-line Help"); if( zCmd ){ int rc, idx; char *z, *s, *d; style_submenu_element("Command-List", "Command-List", "%s/help", g.zTop); @ <h1>The "%s(zCmd)" command:</h1> rc = name_search(zCmd, aCommand, count(aCommand), &idx); if( rc==1 ){ @ unknown command: %s(zCmd) }else if( rc==2 ){ @ ambiguous command prefix: %s(zCmd) }else{ z = (char*)aCmdHelp[idx]; if( z==0 ){ @ no help available for the %s(aCommand[idx].zName) command }else{ z=s=d=mprintf("%s",z); while( *s ){ if( *s=='%' && strncmp(s, "%fossil", 7)==0 ){ s++; }else{ *d++ = *s++; } } *d = 0; @ <blockquote><pre> @ %h(z) @ </pre></blockquote> free(z); } } }else{ int i, j, n; @ <h1>Available commands:</h1> @ <table border="0"><tr> for(i=j=0; i<count(aCommand); i++){ const char *z = aCommand[i].zName; if( strncmp(z,"test",4)==0 ) continue; j++; } n = (j+6)/7; for(i=j=0; i<count(aCommand); i++){ const char *z = aCommand[i].zName; if( strncmp(z,"test",4)==0 ) continue; if( j==0 ){ @ <td valign="top"><ul> } @ <li><a href="%s(g.zTop)/help?cmd=%s(z)">%s(z)</a> j++; if( j>=n ){ @ </ul></td> j = 0; } } if( j>0 ){ @ </ul></td> } @ </tr></table> } style_footer(); } /* ** WEBPAGE: test-all-help ** ** Show all help text on a single page. Useful for proof-reading. 
*/ void test_all_help_page(void){ int i; style_header("Testpage: All Help Text"); for(i=0; i<count(aCommand); i++){ if( memcmp(aCommand[i].zName, "test", 4)==0 ) continue; @ <h2>%s(aCommand[i].zName):</h2> @ <blockquote><pre> @ %h(aCmdHelp[i]) @ </pre></blockquote> } style_footer(); } /* ** Set the g.zBaseURL value to the full URL for the toplevel of ** the fossil tree. Set g.zTop to g.zBaseURL without the ** leading "http://" and the host and port. */ void set_base_url(void){ int i; const char *zHost; const char *zMode; const char *zCur; if( g.zBaseURL!=0 ) return; zHost = PD("HTTP_HOST",""); zMode = PD("HTTPS","off"); zCur = PD("SCRIPT_NAME","/"); i = strlen(zCur); while( i>0 && zCur[i-1]=='/' ) i--; if( fossil_stricmp(zMode,"on")==0 ){ g.zBaseURL = mprintf("https://%s%.*s", zHost, i, zCur); g.zTop = &g.zBaseURL[8+strlen(zHost)]; }else{ g.zBaseURL = mprintf("http://%s%.*s", zHost, i, zCur); g.zTop = &g.zBaseURL[7+strlen(zHost)]; } } /* ** Send an HTTP redirect back to the designated Index Page. */ void fossil_redirect_home(void){ cgi_redirectf("%s%s", g.zTop, db_get("index-page", "/index")); } /* ** If running as root, chroot to the directory containing the ** repository zRepo and then drop root privileges. Return the ** new repository name. ** |
︙
732 733 734 735 736 737 738 | struct stat sStat; Blob dir; char *zDir; file_canonical_name(zRepo, &dir); zDir = blob_str(&dir); if( file_isdir(zDir)==1 ){ | | | > | | > | | 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 | struct stat sStat; Blob dir; char *zDir; file_canonical_name(zRepo, &dir); zDir = blob_str(&dir); if( file_isdir(zDir)==1 ){ if( chdir(zDir) || chroot(zDir) || chdir("/") ){ fossil_fatal("unable to chroot into %s", zDir); } zRepo = "/"; }else{ for(i=strlen(zDir)-1; i>0 && zDir[i]!='/'; i--){} if( zDir[i]!='/' ) fossil_panic("bad repository name: %s", zRepo); zDir[i] = 0; if( chdir(zDir) || chroot(zDir) || chdir("/") ){ fossil_fatal("unable to chroot into %s", zDir); } zDir[i] = '/'; zRepo = &zDir[i]; } if( stat(zRepo, &sStat)!=0 ){ fossil_fatal("cannot stat() repository: %s", zRepo); } setgid(sStat.st_gid); setuid(sStat.st_uid); if( g.db!=0 ){ db_close(1); db_open_repository(zRepo); } } #endif return zRepo; } |
︙
779 780 781 782 783 784 785 | char *zPath = NULL; int idx; int i; /* If the repository has not been opened already, then find the ** repository based on the first element of PATH_INFO and open it. */ | | | > | > | | | | | | | > | > | | > > > > > > > > > > > > | | | | | | | | > > | 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 | char *zPath = NULL; int idx; int i; /* If the repository has not been opened already, then find the ** repository based on the first element of PATH_INFO and open it. */ zPathInfo = PD("PATH_INFO",""); if( !g.repositoryOpen ){ char *zRepo, *zToFree; const char *zOldScript = PD("SCRIPT_NAME", ""); char *zNewScript; int j, k; i64 szFile; i = zPathInfo[0]!=0; while( 1 ){ while( zPathInfo[i] && zPathInfo[i]!='/' ){ i++; } zRepo = zToFree = mprintf("%s%.*s.fossil",g.zRepositoryName,i,zPathInfo); /* To avoid mischief, make sure the repository basename contains no ** characters other than alphanumerics, "-", "/", and "_". */ for(j=strlen(g.zRepositoryName)+1, k=0; zRepo[j] && k<i-1; j++, k++){ if( !fossil_isalnum(zRepo[j]) && zRepo[j]!='-' && zRepo[j]!='/' ){ zRepo[j] = '_'; } } if( zRepo[0]=='/' && zRepo[1]=='/' ){ zRepo++; j--; } szFile = file_size(zRepo); if( zPathInfo[i]=='/' && szFile<0 ){ assert( strcmp(&zRepo[j], ".fossil")==0 ); zRepo[j] = 0; if( file_isdir(zRepo)==1 ){ fossil_free(zToFree); i++; continue; } zRepo[j] = '.'; } if( szFile<1024 ){ if( zNotFound ){ cgi_redirect(zNotFound); }else{ @ <h1>Not Found</h1> cgi_set_status(404, "not found"); cgi_reply(); } return; } break; } zNewScript = mprintf("%s%.*s", zOldScript, i, zPathInfo); cgi_replace_parameter("PATH_INFO", &zPathInfo[i+1]); zPathInfo += i; cgi_replace_parameter("SCRIPT_NAME", zNewScript); db_open_repository(zRepo); if( g.fHttpTrace ){ |
︙
836 837 838 839 840 841 842 | if( zPathInfo==0 || zPathInfo[0]==0 || (zPathInfo[0]=='/' && zPathInfo[1]==0) ){ fossil_redirect_home(); }else{ zPath = mprintf("%s", zPathInfo); } | | > > > | | | | | > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > | > > | > > > > | 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 | if( zPathInfo==0 || zPathInfo[0]==0 || (zPathInfo[0]=='/' && zPathInfo[1]==0) ){ fossil_redirect_home(); }else{ zPath = mprintf("%s", zPathInfo); } /* Make g.zPath point to the first element of the path. Make ** g.zExtra point to everything past that point. */ while(1){ char *zAltRepo = 0; g.zPath = &zPath[1]; for(i=1; zPath[i] && zPath[i]!='/'; i++){} if( zPath[i]=='/' ){ zPath[i] = 0; g.zExtra = &zPath[i+1]; /* Look for sub-repositories. A sub-repository is another repository ** that accepts the login credentials of the current repository. A ** subrepository is identified by a CONFIG table entry "subrepo:NAME" ** where NAME is the first component of the path. The value of the ** the CONFIG entries is the string "USER:FILENAME" where USER is the ** USER name to log in as in the subrepository and FILENAME is the ** repository filename. */ zAltRepo = db_text(0, "SELECT value FROM config WHERE name='subrepo:%q'", g.zPath); if( zAltRepo ){ int nHost; int jj; char *zUser = zAltRepo; login_check_credentials(); for(jj=0; zAltRepo[jj] && zAltRepo[jj]!=':'; jj++){} if( zAltRepo[jj]==':' ){ zAltRepo[jj] = 0; zAltRepo += jj+1; }else{ zUser = "nobody"; } if( g.zLogin==0 ) zUser = "nobody"; if( zAltRepo[0]!='/' ){ zAltRepo = mprintf("%s/../%s", g.zRepositoryName, zAltRepo); file_simplify_name(zAltRepo, -1); } db_close(1); db_open_repository(zAltRepo); login_as_user(zUser); g.okPassword = 0; zPath += i; nHost = g.zTop - g.zBaseURL; g.zBaseURL = mprintf("%z/%s", g.zBaseURL, g.zPath); g.zTop = g.zBaseURL + nHost; continue; } }else{ g.zExtra = 0; } break; } if( g.zExtra ){ /* CGI parameters get this treatment elsewhere, but places like getfile ** will use g.zExtra directly. */ dehttpize(g.zExtra); cgi_set_parameter_nocopy("name", g.zExtra); } /* Locate the method specified by the path and execute the function ** that implements that method. */ if( name_search(g.zPath, aWebpage, count(aWebpage), &idx) && name_search("not_found", aWebpage, count(aWebpage), &idx) ){ cgi_set_status(404,"Not Found"); @ <h1>Not Found</h1> @ <p>Page not found: %h(g.zPath)</p> }else if( aWebpage[idx].xFunc!=page_xfer && db_schema_is_outofdate() ){ @ <h1>Server Configuration Error</h1> @ <p>The database schema on the server is out-of-date. Please ask @ the administrator to run <b>fossil rebuild</b>.</p> }else{ aWebpage[idx].xFunc(); } /* Return the result. */ cgi_reply(); |
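(Aside, not part of the diff: the "subrepo:NAME" CONFIG entries described in the comment above could be created along these lines. The names "doc", "reader" and "doc.fossil" are invented for illustration; db_multi_exec() and the config(name,value,mtime) columns appear elsewhere in this check-in.)

    /* Hypothetical setup, run once against the parent repository: serve the
    ** doc.fossil file that sits in the same directory as the parent repository
    ** under the /doc path element, mapping visitors who are logged in on the
    ** parent to its "reader" account. A value without a "USER:" prefix falls
    ** back to the "nobody" user, as does any visitor who is not logged in. */
    db_multi_exec(
      "REPLACE INTO config(name,value,mtime)"
      " VALUES('subrepo:doc','reader:doc.fossil',now());"
    );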
︙
892 893 894 895 896 897 898 | ** The second line defines the name of the repository. After locating ** the repository, fossil will generate a webpage on stdout based on ** the values of standard CGI environment variables. */ void cmd_cgi(void){ const char *zFile; const char *zNotFound = 0; | > > | | | 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 | ** The second line defines the name of the repository. After locating ** the repository, fossil will generate a webpage on stdout based on ** the values of standard CGI environment variables. */ void cmd_cgi(void){ const char *zFile; const char *zNotFound = 0; char **azRedirect = 0; /* List of repositories to redirect to */ int nRedirect = 0; /* Number of entries in azRedirect */ Blob config, line, key, value, value2; if( g.argc==3 && fossil_strcmp(g.argv[1],"cgi")==0 ){ zFile = g.argv[2]; }else{ zFile = g.argv[1]; } g.httpOut = stdout; g.httpIn = stdin; #if defined(_WIN32) |
︙
932 933 934 935 936 937 938 | } if( blob_eq(&key, "repository:") && blob_token(&line, &value) ){ db_open_repository(blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "directory:") && blob_token(&line, &value) ){ | | > > > | > > > > > > > > > > > | > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 | } if( blob_eq(&key, "repository:") && blob_token(&line, &value) ){ db_open_repository(blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "directory:") && blob_token(&line, &value) ){ db_close(1); g.zRepositoryName = mprintf("%s", blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "notfound:") && blob_token(&line, &value) ){ zNotFound = mprintf("%s", blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "localauth") ){ g.useLocalauth = 1; continue; } if( blob_eq(&key, "redirect:") && blob_token(&line, &value) && blob_token(&line, &value2) ){ nRedirect++; azRedirect = fossil_realloc(azRedirect, 2*nRedirect*sizeof(char*)); azRedirect[nRedirect*2-2] = mprintf("%s", blob_str(&value)); azRedirect[nRedirect*2-1] = mprintf("%s", blob_str(&value2)); blob_reset(&value); blob_reset(&value2); continue; } } blob_reset(&config); if( g.db==0 && g.zRepositoryName==0 && nRedirect==0 ){ cgi_panic("Unable to find or open the project repository"); } cgi_init(); if( nRedirect ){ redirect_web_page(nRedirect, azRedirect); }else{ process_one_web_page(zNotFound); } } /* If the CGI program contains one or more lines of the form ** ** redirect: repository-filename http://hostname/path/%s ** ** then control jumps here. Search each repository for an artifact ID ** that matches the "name" CGI parameter and for the first match, ** redirect to the corresponding URL with the "name" CGI parameter ** inserted. Paint an error page if no match is found. ** ** If there is a line of the form: ** ** redirect: * URL ** ** Then a redirect is made to URL if no match is found. Otherwise a ** very primative error message is returned. */ void redirect_web_page(int nRedirect, char **azRedirect){ int i; /* Loop counter */ const char *zNotFound = 0; /* Not found URL */ const char *zName = P("name"); set_base_url(); if( zName==0 ){ zName = P("SCRIPT_NAME"); if( zName && zName[0]=='/' ) zName++; } if( zName && validate16(zName, strlen(zName)) ){ for(i=0; i<nRedirect; i++){ if( strcmp(azRedirect[i*2],"*")==0 ){ zNotFound = azRedirect[i*2+1]; continue; } db_open_repository(azRedirect[i*2]); if( db_exists("SELECT 1 FROM blob WHERE uuid GLOB '%s*'", zName) ){ cgi_redirectf(azRedirect[i*2+1], zName); return; } db_close(1); } } if( zNotFound ){ cgi_redirectf(zNotFound, zName); }else{ @ <html> @ <head><title>No Such Object</title></head> @ <body> @ <p>No such object: <b>%h(zName)</b></p> @ </body> cgi_reply(); } } /* ** If g.argv[2] exists then it is either the name of a repository ** that will be used by a server, or else it is a directory that ** contains multiple repositories that can be served. 
If g.argv[2] is a directory, the repositories it contains must be named "*.fossil".
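(Aside: a hypothetical control file of the kind cmd_cgi() a few hunks above parses. The interpreter path, directory and URL are placeholders; only the "repository:"/"directory:", "notfound:", "localauth" and "redirect:" keywords come from the code.)

    #!/usr/bin/fossil
    directory: /home/www/repos
    notfound: http://www.example.com/missing
    localauth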
︙
989 990 991 992 993 994 995 | ** ** fossil http REPOSITORY INFILE OUTFILE IPADDR ** ** The argv==6 form is used by the win32 server only. ** ** COMMAND: http ** | | > > > > > > > > > > > > > > > > > > > | 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 | ** ** fossil http REPOSITORY INFILE OUTFILE IPADDR ** ** The argv==6 form is used by the win32 server only. ** ** COMMAND: http ** ** Usage: %fossil http REPOSITORY [--notfound URL] [--host HOSTNAME] [--https] ** ** Handle a single HTTP request appearing on stdin. The resulting webpage ** is delivered on stdout. This method is used to launch an HTTP request ** handler from inetd, for example. The argument is the name of the ** repository. ** ** If REPOSITORY is a directory that contains one or more respositories ** with names of the form "*.fossil" then the first element of the URL ** pathname selects among the various repositories. If the pathname does ** not select a valid repository and the --notfound option is available, ** then the server redirects (HTTP code 302) to the URL of --notfound. ** ** The --host option can be used to specify the hostname for the server. ** The --https option indicates that the request came from HTTPS rather ** than HTTP. ** ** Other options: ** ** --localauth Password signin is not required if this is true and ** the input comes from 127.0.0.1 and the "localauth" ** setting is not disabled. ** ** --nossl SSL connections are not available so do not ** redirect from http: to https:. */ void cmd_http(void){ const char *zIpAddr; const char *zNotFound; const char *zHost; zNotFound = find_option("notfound", 0, 1); g.useLocalauth = find_option("localauth", 0, 0)!=0; g.sslNotAvailable = find_option("nossl", 0, 0)!=0; if( find_option("https",0,0)!=0 ) cgi_replace_parameter("HTTPS","on"); zHost = find_option("host", 0, 1); if( zHost ) cgi_replace_parameter("HTTP_HOST",zHost); g.cgiOutput = 1; if( g.argc!=2 && g.argc!=3 && g.argc!=6 ){ fossil_fatal("no repository specified"); } g.fullHttpReply = 1; if( g.argc==6 ){ g.httpIn = fopen(g.argv[3], "rb"); |
︙
1033 1034 1035 1036 1037 1038 1039 | /* ** Note that the following command is used by ssh:// processing. ** ** COMMAND: test-http ** Works like the http command but gives setup permission to all users. */ void cmd_test_http(void){ | | | 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 | /* ** Note that the following command is used by ssh:// processing. ** ** COMMAND: test-http ** Works like the http command but gives setup permission to all users. */ void cmd_test_http(void){ login_set_capabilities("s", 0); g.httpIn = stdin; g.httpOut = stdout; find_server_repository(0); g.cgiOutput = 1; g.fullHttpReply = 1; cgi_handle_http_request(0); process_one_web_page(0); |
︙
1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 | ** the web server. The "ui" command also binds to 127.0.0.1 and so will ** only process HTTP traffic from the local machine. ** ** In the "server" command, the REPOSITORY can be a directory (aka folder) ** that contains one or more respositories with names ending in ".fossil". ** In that case, the first element of the URL is used to select among the ** various repositories. */ void cmd_webserver(void){ int iPort, mxPort; /* Range of TCP ports allowed */ const char *zPort; /* Value of the --port option */ char *zBrowser; /* Name of web browser program */ char *zBrowserCmd = 0; /* Command to launch the web browser */ int isUiCmd; /* True if command is "ui", not "server' */ const char *zNotFound; /* The --notfound option or NULL */ int flags = 0; /* Server flags */ #if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ zStopperFile = find_option("stopper", 0, 1); #endif g.thTrace = find_option("th-trace", 0, 0)!=0; if( g.thTrace ){ blob_zero(&g.thLog); } zPort = find_option("port", "P", 1); zNotFound = find_option("notfound", 0, 1); if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); isUiCmd = g.argv[1][0]=='u'; | > > > > > > > > | > > | 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 | ** the web server. The "ui" command also binds to 127.0.0.1 and so will ** only process HTTP traffic from the local machine. ** ** In the "server" command, the REPOSITORY can be a directory (aka folder) ** that contains one or more respositories with names ending in ".fossil". ** In that case, the first element of the URL is used to select among the ** various repositories. ** ** By default, the "ui" command provides full administrative access without ** having to log in. This can be disabled by setting turning off the ** "localauth" setting. Automatic login for the "server" command is available ** if the --localauth option is present and the "localauth" setting is off ** and the connection is from localhost. */ void cmd_webserver(void){ int iPort, mxPort; /* Range of TCP ports allowed */ const char *zPort; /* Value of the --port option */ char *zBrowser; /* Name of web browser program */ char *zBrowserCmd = 0; /* Command to launch the web browser */ int isUiCmd; /* True if command is "ui", not "server' */ const char *zNotFound; /* The --notfound option or NULL */ int flags = 0; /* Server flags */ #if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ zStopperFile = find_option("stopper", 0, 1); #endif g.thTrace = find_option("th-trace", 0, 0)!=0; g.useLocalauth = find_option("localauth", 0, 0)!=0; if( g.thTrace ){ blob_zero(&g.thLog); } zPort = find_option("port", "P", 1); zNotFound = find_option("notfound", 0, 1); if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); isUiCmd = g.argv[1][0]=='u'; if( isUiCmd ){ flags |= HTTP_SERVER_LOCALHOST; g.useLocalauth = 1; } find_server_repository(isUiCmd); if( zPort ){ iPort = mxPort = atoi(zPort); }else{ iPort = db_get_int("http-port", 8080); mxPort = iPort+100; } |
︙
1141 1142 1143 1144 1145 1146 1147 | } } #else zBrowser = db_get("web-browser", "open"); #endif zBrowserCmd = mprintf("%s http://localhost:%%d/ &", zBrowser); } | | > | | < < | < < < < < < > | | | > | | < < < < < < < < < | 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 | } } #else zBrowser = db_get("web-browser", "open"); #endif zBrowserCmd = mprintf("%s http://localhost:%%d/ &", zBrowser); } db_close(1); if( cgi_http_server(iPort, mxPort, zBrowserCmd, flags) ){ fossil_fatal("unable to listen on TCP socket %d", iPort); } g.sslNotAvailable = 1; g.httpIn = stdin; g.httpOut = stdout; if( g.fHttpTrace || g.fSqlTrace ){ fprintf(stderr, "====== SERVER pid %d =======\n", getpid()); } g.cgiOutput = 1; find_server_repository(isUiCmd); g.zRepositoryName = enter_chroot_jail(g.zRepositoryName); cgi_handle_http_request(0); process_one_web_page(zNotFound); #else /* Win32 implementation */ if( isUiCmd ){ zBrowser = db_get("web-browser", "start"); zBrowserCmd = mprintf("%s http://127.0.0.1:%%d/", zBrowser); } db_close(1); win32_http_server(iPort, mxPort, zBrowserCmd, zStopperFile, zNotFound, flags); #endif } /* ** COMMAND: test-echo ** ** Echo all command-line arguments (enclosed in [...]) to the screen so that ** wildcard expansion behavior of the host shell can be investigated. */ void test_echo_cmd(void){ int i; for(i=0; i<g.argc; i++){ printf("argv[%d] = [%s]\n", i, g.argv[i]); } } |
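(Aside, not part of the check-in: two hypothetical invocations tying the options above together; paths and port numbers are made up. "fossil ui" binds to 127.0.0.1, launches a web browser and implies --localauth, while "fossil server" pointed at a directory serves every contained "*.fossil" file; when started as root it also passes through enter_chroot_jail() to drop privileges.)

    fossil ui ~/src/project.fossil --port 8081
    fossil server /home/www/repos --port 8080 --notfound http://www.example.com/missing --localauth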
Changes to src/main.mk.
1 2 3 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this | | | < < > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ $(SRCDIR)/bag.c \ $(SRCDIR)/bisect.c \ $(SRCDIR)/blob.c \ $(SRCDIR)/branch.c \ $(SRCDIR)/browse.c \ $(SRCDIR)/captcha.c \ $(SRCDIR)/cgi.c \ $(SRCDIR)/checkin.c \ $(SRCDIR)/checkout.c \ |
︙
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 | $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ $(SRCDIR)/http_transport.c \ $(SRCDIR)/import.c \ $(SRCDIR)/info.c \ $(SRCDIR)/login.c \ $(SRCDIR)/main.c \ $(SRCDIR)/manifest.c \ $(SRCDIR)/md5.c \ $(SRCDIR)/merge.c \ $(SRCDIR)/merge3.c \ $(SRCDIR)/name.c \ $(SRCDIR)/pivot.c \ $(SRCDIR)/popen.c \ $(SRCDIR)/pqueue.c \ $(SRCDIR)/printf.c \ $(SRCDIR)/rebuild.c \ $(SRCDIR)/report.c \ $(SRCDIR)/rss.c \ $(SRCDIR)/schema.c \ $(SRCDIR)/search.c \ $(SRCDIR)/setup.c \ $(SRCDIR)/sha1.c \ $(SRCDIR)/shun.c \ $(SRCDIR)/skins.c \ $(SRCDIR)/stat.c \ $(SRCDIR)/style.c \ $(SRCDIR)/sync.c \ $(SRCDIR)/tag.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ $(SRCDIR)/wiki.c \ $(SRCDIR)/wikiformat.c \ $(SRCDIR)/winhttp.c \ $(SRCDIR)/xfer.c \ $(SRCDIR)/zip.c TRANS_SRC = \ | > > > > > > > | | | | > | | | | | | | | | | | | | | | | | | | | | | | | > | > | | | | | | > | | | | | | | > | | | | | | | | | | | | | > > | | | | > | | | | | | | | | | | | | | | > | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 | $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ $(SRCDIR)/http_transport.c \ $(SRCDIR)/import.c \ $(SRCDIR)/info.c \ $(SRCDIR)/leaf.c \ $(SRCDIR)/login.c \ $(SRCDIR)/main.c \ $(SRCDIR)/manifest.c \ $(SRCDIR)/md5.c \ $(SRCDIR)/merge.c \ $(SRCDIR)/merge3.c \ $(SRCDIR)/name.c \ $(SRCDIR)/path.c \ $(SRCDIR)/pivot.c \ $(SRCDIR)/popen.c \ $(SRCDIR)/pqueue.c \ $(SRCDIR)/printf.c \ $(SRCDIR)/rebuild.c \ $(SRCDIR)/report.c \ $(SRCDIR)/rss.c \ $(SRCDIR)/schema.c \ $(SRCDIR)/search.c \ $(SRCDIR)/setup.c \ $(SRCDIR)/sha1.c \ $(SRCDIR)/shun.c \ $(SRCDIR)/skins.c \ $(SRCDIR)/sqlcmd.c \ $(SRCDIR)/stash.c \ $(SRCDIR)/stat.c \ $(SRCDIR)/style.c \ $(SRCDIR)/sync.c \ $(SRCDIR)/tag.c \ $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ $(SRCDIR)/wiki.c \ $(SRCDIR)/wikiformat.c \ $(SRCDIR)/winhttp.c \ $(SRCDIR)/xfer.c \ $(SRCDIR)/zip.c TRANS_SRC = \ $(OBJDIR)/add_.c \ $(OBJDIR)/allrepo_.c \ $(OBJDIR)/attach_.c \ $(OBJDIR)/bag_.c \ $(OBJDIR)/bisect_.c \ $(OBJDIR)/blob_.c \ $(OBJDIR)/branch_.c \ $(OBJDIR)/browse_.c \ $(OBJDIR)/captcha_.c \ $(OBJDIR)/cgi_.c \ $(OBJDIR)/checkin_.c \ $(OBJDIR)/checkout_.c \ $(OBJDIR)/clearsign_.c \ $(OBJDIR)/clone_.c \ 
$(OBJDIR)/comformat_.c \ $(OBJDIR)/configure_.c \ $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ $(OBJDIR)/http_transport_.c \ $(OBJDIR)/import_.c \ $(OBJDIR)/info_.c \ $(OBJDIR)/leaf_.c \ $(OBJDIR)/login_.c \ $(OBJDIR)/main_.c \ $(OBJDIR)/manifest_.c \ $(OBJDIR)/md5_.c \ $(OBJDIR)/merge_.c \ $(OBJDIR)/merge3_.c \ $(OBJDIR)/name_.c \ $(OBJDIR)/path_.c \ $(OBJDIR)/pivot_.c \ $(OBJDIR)/popen_.c \ $(OBJDIR)/pqueue_.c \ $(OBJDIR)/printf_.c \ $(OBJDIR)/rebuild_.c \ $(OBJDIR)/report_.c \ $(OBJDIR)/rss_.c \ $(OBJDIR)/schema_.c \ $(OBJDIR)/search_.c \ $(OBJDIR)/setup_.c \ $(OBJDIR)/sha1_.c \ $(OBJDIR)/shun_.c \ $(OBJDIR)/skins_.c \ $(OBJDIR)/sqlcmd_.c \ $(OBJDIR)/stash_.c \ $(OBJDIR)/stat_.c \ $(OBJDIR)/style_.c \ $(OBJDIR)/sync_.c \ $(OBJDIR)/tag_.c \ $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ $(OBJDIR)/wiki_.c \ $(OBJDIR)/wikiformat_.c \ $(OBJDIR)/winhttp_.c \ $(OBJDIR)/xfer_.c \ $(OBJDIR)/zip_.c OBJ = \ $(OBJDIR)/add.o \ $(OBJDIR)/allrepo.o \ $(OBJDIR)/attach.o \ $(OBJDIR)/bag.o \ $(OBJDIR)/bisect.o \ $(OBJDIR)/blob.o \ $(OBJDIR)/branch.o \ $(OBJDIR)/browse.o \ $(OBJDIR)/captcha.o \ $(OBJDIR)/cgi.o \ $(OBJDIR)/checkin.o \ $(OBJDIR)/checkout.o \ |
︙
189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 | $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ $(OBJDIR)/http_transport.o \ $(OBJDIR)/import.o \ $(OBJDIR)/info.o \ $(OBJDIR)/login.o \ $(OBJDIR)/main.o \ $(OBJDIR)/manifest.o \ $(OBJDIR)/md5.o \ $(OBJDIR)/merge.o \ $(OBJDIR)/merge3.o \ $(OBJDIR)/name.o \ $(OBJDIR)/pivot.o \ $(OBJDIR)/popen.o \ $(OBJDIR)/pqueue.o \ $(OBJDIR)/printf.o \ $(OBJDIR)/rebuild.o \ $(OBJDIR)/report.o \ $(OBJDIR)/rss.o \ $(OBJDIR)/schema.o \ $(OBJDIR)/search.o \ $(OBJDIR)/setup.o \ $(OBJDIR)/sha1.o \ $(OBJDIR)/shun.o \ $(OBJDIR)/skins.o \ $(OBJDIR)/stat.o \ $(OBJDIR)/style.o \ $(OBJDIR)/sync.o \ $(OBJDIR)/tag.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ | > > > > > > > | 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 | $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ $(OBJDIR)/http_transport.o \ $(OBJDIR)/import.o \ $(OBJDIR)/info.o \ $(OBJDIR)/leaf.o \ $(OBJDIR)/login.o \ $(OBJDIR)/main.o \ $(OBJDIR)/manifest.o \ $(OBJDIR)/md5.o \ $(OBJDIR)/merge.o \ $(OBJDIR)/merge3.o \ $(OBJDIR)/name.o \ $(OBJDIR)/path.o \ $(OBJDIR)/pivot.o \ $(OBJDIR)/popen.o \ $(OBJDIR)/pqueue.o \ $(OBJDIR)/printf.o \ $(OBJDIR)/rebuild.o \ $(OBJDIR)/report.o \ $(OBJDIR)/rss.o \ $(OBJDIR)/schema.o \ $(OBJDIR)/search.o \ $(OBJDIR)/setup.o \ $(OBJDIR)/sha1.o \ $(OBJDIR)/shun.o \ $(OBJDIR)/skins.o \ $(OBJDIR)/sqlcmd.o \ $(OBJDIR)/stash.o \ $(OBJDIR)/stat.o \ $(OBJDIR)/style.o \ $(OBJDIR)/sync.o \ $(OBJDIR)/tag.o \ $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ |
︙
248 249 250 251 252 253 254 | install: $(APPNAME) mv $(APPNAME) $(INSTALLDIR) $(OBJDIR): -mkdir $(OBJDIR) | | | | | | | | | | | | | | < | | | < > | | | | | | | | | | | | | | | | | | | | | > > > | > > > > | | | | | | | | | | | | | | | | | | | | | | | | | | | < < < > > > | | | | | | | | | | | | | | | | | | | | | | | < > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > > > > > > > | | | | | | | | | | | | | | | | | | | | | | | > | > > > > > > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > > > | > > > > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > > > > > > | > | | | | | | | | | | | | | | | | | | | | | | > | < | | | | | | | | | | | < | > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > > > > > > > | | | | > > > | > > > > | | | | | | | | | | | | | | | | | | | > > > > > > > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 
902 903 904 | install: $(APPNAME) mv $(APPNAME) $(INSTALLDIR) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(BCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(BCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(BCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c # WARNING. DANGER. Running the testsuite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. test: $(APPNAME) $(TCLSH) test/tester.tcl $(APPNAME) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest awk '{ printf "#define MANIFEST_UUID \"%s\"\n", $$1}' $(SRCDIR)/../manifest.uuid >$(OBJDIR)/VERSION.h awk '{ printf "#define MANIFEST_VERSION \"[%.10s]\"\n", $$1}' $(SRCDIR)/../manifest.uuid >>$(OBJDIR)/VERSION.h awk '$$1=="D"{printf "#define MANIFEST_DATE \"%s %s\"\n", substr($$2,1,10),substr($$2,12,8)}' $(SRCDIR)/../manifest >>$(OBJDIR)/VERSION.h EXTRAOBJ = $(OBJDIR)/sqlite3.o $(OBJDIR)/shell.o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o $(APPNAME): $(OBJDIR)/headers $(OBJ) $(EXTRAOBJ) $(TCC) -o $(APPNAME) $(OBJ) $(EXTRAOBJ) $(LIB) # This rule prevents make from using its default rules to try build # an executable named "manifest" out of the file named "manifest.c" # $(SRCDIR)/../manifest: # noop clean: rm -rf $(OBJDIR)/* $(APPNAME) $(OBJDIR)/page_index.h: $(TRANS_SRC) $(OBJDIR)/mkindex $(OBJDIR)/mkindex $(TRANS_SRC) >$@ $(OBJDIR)/headers: $(OBJDIR)/page_index.h $(OBJDIR)/makeheaders $(OBJDIR)/VERSION.h $(OBJDIR)/makeheaders $(OBJDIR)/add_.c:$(OBJDIR)/add.h $(OBJDIR)/allrepo_.c:$(OBJDIR)/allrepo.h $(OBJDIR)/attach_.c:$(OBJDIR)/attach.h $(OBJDIR)/bag_.c:$(OBJDIR)/bag.h $(OBJDIR)/bisect_.c:$(OBJDIR)/bisect.h $(OBJDIR)/blob_.c:$(OBJDIR)/blob.h $(OBJDIR)/branch_.c:$(OBJDIR)/branch.h $(OBJDIR)/browse_.c:$(OBJDIR)/browse.h $(OBJDIR)/captcha_.c:$(OBJDIR)/captcha.h $(OBJDIR)/cgi_.c:$(OBJDIR)/cgi.h $(OBJDIR)/checkin_.c:$(OBJDIR)/checkin.h $(OBJDIR)/checkout_.c:$(OBJDIR)/checkout.h $(OBJDIR)/clearsign_.c:$(OBJDIR)/clearsign.h $(OBJDIR)/clone_.c:$(OBJDIR)/clone.h $(OBJDIR)/comformat_.c:$(OBJDIR)/comformat.h $(OBJDIR)/configure_.c:$(OBJDIR)/configure.h $(OBJDIR)/content_.c:$(OBJDIR)/content.h $(OBJDIR)/db_.c:$(OBJDIR)/db.h $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h $(OBJDIR)/event_.c:$(OBJDIR)/event.h $(OBJDIR)/export_.c:$(OBJDIR)/export.h $(OBJDIR)/file_.c:$(OBJDIR)/file.h $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h $(OBJDIR)/http_.c:$(OBJDIR)/http.h $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h $(OBJDIR)/http_transport_.c:$(OBJDIR)/http_transport.h $(OBJDIR)/import_.c:$(OBJDIR)/import.h $(OBJDIR)/info_.c:$(OBJDIR)/info.h $(OBJDIR)/leaf_.c:$(OBJDIR)/leaf.h $(OBJDIR)/login_.c:$(OBJDIR)/login.h $(OBJDIR)/main_.c:$(OBJDIR)/main.h $(OBJDIR)/manifest_.c:$(OBJDIR)/manifest.h $(OBJDIR)/md5_.c:$(OBJDIR)/md5.h $(OBJDIR)/merge_.c:$(OBJDIR)/merge.h $(OBJDIR)/merge3_.c:$(OBJDIR)/merge3.h $(OBJDIR)/name_.c:$(OBJDIR)/name.h $(OBJDIR)/path_.c:$(OBJDIR)/path.h $(OBJDIR)/pivot_.c:$(OBJDIR)/pivot.h $(OBJDIR)/popen_.c:$(OBJDIR)/popen.h $(OBJDIR)/pqueue_.c:$(OBJDIR)/pqueue.h 
$(OBJDIR)/printf_.c:$(OBJDIR)/printf.h $(OBJDIR)/rebuild_.c:$(OBJDIR)/rebuild.h $(OBJDIR)/report_.c:$(OBJDIR)/report.h $(OBJDIR)/rss_.c:$(OBJDIR)/rss.h $(OBJDIR)/schema_.c:$(OBJDIR)/schema.h $(OBJDIR)/search_.c:$(OBJDIR)/search.h $(OBJDIR)/setup_.c:$(OBJDIR)/setup.h $(OBJDIR)/sha1_.c:$(OBJDIR)/sha1.h $(OBJDIR)/shun_.c:$(OBJDIR)/shun.h $(OBJDIR)/skins_.c:$(OBJDIR)/skins.h $(OBJDIR)/sqlcmd_.c:$(OBJDIR)/sqlcmd.h $(OBJDIR)/stash_.c:$(OBJDIR)/stash.h $(OBJDIR)/stat_.c:$(OBJDIR)/stat.h $(OBJDIR)/style_.c:$(OBJDIR)/style.h $(OBJDIR)/sync_.c:$(OBJDIR)/sync.h $(OBJDIR)/tag_.c:$(OBJDIR)/tag.h $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h $(OBJDIR)/update_.c:$(OBJDIR)/update.h $(OBJDIR)/url_.c:$(OBJDIR)/url.h $(OBJDIR)/user_.c:$(OBJDIR)/user.h $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h $(OBJDIR)/wiki_.c:$(OBJDIR)/wiki.h $(OBJDIR)/wikiformat_.c:$(OBJDIR)/wikiformat.h $(OBJDIR)/winhttp_.c:$(OBJDIR)/winhttp.h $(OBJDIR)/xfer_.c:$(OBJDIR)/xfer.h $(OBJDIR)/zip_.c:$(OBJDIR)/zip.h $(SRCDIR)/sqlite3.h $(SRCDIR)/th.h $(OBJDIR)/VERSION.h touch $(OBJDIR)/headers $(OBJDIR)/headers: Makefile Makefile: $(OBJDIR)/add_.c: $(SRCDIR)/add.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/add.c >$(OBJDIR)/add_.c $(OBJDIR)/add.o: $(OBJDIR)/add_.c $(OBJDIR)/add.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/add.o -c $(OBJDIR)/add_.c $(OBJDIR)/add.h: $(OBJDIR)/headers $(OBJDIR)/allrepo_.c: $(SRCDIR)/allrepo.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/allrepo.c >$(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.o: $(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/allrepo.o -c $(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.h: $(OBJDIR)/headers $(OBJDIR)/attach_.c: $(SRCDIR)/attach.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/attach.c >$(OBJDIR)/attach_.c $(OBJDIR)/attach.o: $(OBJDIR)/attach_.c $(OBJDIR)/attach.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/attach.o -c $(OBJDIR)/attach_.c $(OBJDIR)/attach.h: $(OBJDIR)/headers $(OBJDIR)/bag_.c: $(SRCDIR)/bag.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/bag.c >$(OBJDIR)/bag_.c $(OBJDIR)/bag.o: $(OBJDIR)/bag_.c $(OBJDIR)/bag.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/bag.o -c $(OBJDIR)/bag_.c $(OBJDIR)/bag.h: $(OBJDIR)/headers $(OBJDIR)/bisect_.c: $(SRCDIR)/bisect.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/bisect.c >$(OBJDIR)/bisect_.c $(OBJDIR)/bisect.o: $(OBJDIR)/bisect_.c $(OBJDIR)/bisect.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/bisect.o -c $(OBJDIR)/bisect_.c $(OBJDIR)/bisect.h: $(OBJDIR)/headers $(OBJDIR)/blob_.c: $(SRCDIR)/blob.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/blob.c >$(OBJDIR)/blob_.c $(OBJDIR)/blob.o: $(OBJDIR)/blob_.c $(OBJDIR)/blob.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/blob.o -c $(OBJDIR)/blob_.c $(OBJDIR)/blob.h: $(OBJDIR)/headers $(OBJDIR)/branch_.c: $(SRCDIR)/branch.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/branch.c >$(OBJDIR)/branch_.c $(OBJDIR)/branch.o: $(OBJDIR)/branch_.c $(OBJDIR)/branch.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/branch.o -c $(OBJDIR)/branch_.c $(OBJDIR)/branch.h: $(OBJDIR)/headers $(OBJDIR)/browse_.c: $(SRCDIR)/browse.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/browse.c >$(OBJDIR)/browse_.c $(OBJDIR)/browse.o: $(OBJDIR)/browse_.c $(OBJDIR)/browse.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/browse.o -c $(OBJDIR)/browse_.c $(OBJDIR)/browse.h: $(OBJDIR)/headers 
$(OBJDIR)/captcha_.c: $(SRCDIR)/captcha.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/captcha.c >$(OBJDIR)/captcha_.c $(OBJDIR)/captcha.o: $(OBJDIR)/captcha_.c $(OBJDIR)/captcha.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/captcha.o -c $(OBJDIR)/captcha_.c $(OBJDIR)/captcha.h: $(OBJDIR)/headers $(OBJDIR)/cgi_.c: $(SRCDIR)/cgi.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/cgi.c >$(OBJDIR)/cgi_.c $(OBJDIR)/cgi.o: $(OBJDIR)/cgi_.c $(OBJDIR)/cgi.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/cgi.o -c $(OBJDIR)/cgi_.c $(OBJDIR)/cgi.h: $(OBJDIR)/headers $(OBJDIR)/checkin_.c: $(SRCDIR)/checkin.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/checkin.c >$(OBJDIR)/checkin_.c $(OBJDIR)/checkin.o: $(OBJDIR)/checkin_.c $(OBJDIR)/checkin.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/checkin.o -c $(OBJDIR)/checkin_.c $(OBJDIR)/checkin.h: $(OBJDIR)/headers $(OBJDIR)/checkout_.c: $(SRCDIR)/checkout.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/checkout.c >$(OBJDIR)/checkout_.c $(OBJDIR)/checkout.o: $(OBJDIR)/checkout_.c $(OBJDIR)/checkout.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/checkout.o -c $(OBJDIR)/checkout_.c $(OBJDIR)/checkout.h: $(OBJDIR)/headers $(OBJDIR)/clearsign_.c: $(SRCDIR)/clearsign.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/clearsign.c >$(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.o: $(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/clearsign.o -c $(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.h: $(OBJDIR)/headers $(OBJDIR)/clone_.c: $(SRCDIR)/clone.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/clone.c >$(OBJDIR)/clone_.c $(OBJDIR)/clone.o: $(OBJDIR)/clone_.c $(OBJDIR)/clone.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/clone.o -c $(OBJDIR)/clone_.c $(OBJDIR)/clone.h: $(OBJDIR)/headers $(OBJDIR)/comformat_.c: $(SRCDIR)/comformat.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/comformat.c >$(OBJDIR)/comformat_.c $(OBJDIR)/comformat.o: $(OBJDIR)/comformat_.c $(OBJDIR)/comformat.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/comformat.o -c $(OBJDIR)/comformat_.c $(OBJDIR)/comformat.h: $(OBJDIR)/headers $(OBJDIR)/configure_.c: $(SRCDIR)/configure.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/configure.c >$(OBJDIR)/configure_.c $(OBJDIR)/configure.o: $(OBJDIR)/configure_.c $(OBJDIR)/configure.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/configure.o -c $(OBJDIR)/configure_.c $(OBJDIR)/configure.h: $(OBJDIR)/headers $(OBJDIR)/content_.c: $(SRCDIR)/content.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/content.c >$(OBJDIR)/content_.c $(OBJDIR)/content.o: $(OBJDIR)/content_.c $(OBJDIR)/content.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/content.o -c $(OBJDIR)/content_.c $(OBJDIR)/content.h: $(OBJDIR)/headers $(OBJDIR)/db_.c: $(SRCDIR)/db.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/db.c >$(OBJDIR)/db_.c $(OBJDIR)/db.o: $(OBJDIR)/db_.c $(OBJDIR)/db.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/db.o -c $(OBJDIR)/db_.c $(OBJDIR)/db.h: $(OBJDIR)/headers $(OBJDIR)/delta_.c: $(SRCDIR)/delta.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/delta.c >$(OBJDIR)/delta_.c $(OBJDIR)/delta.o: $(OBJDIR)/delta_.c $(OBJDIR)/delta.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/delta.o -c $(OBJDIR)/delta_.c $(OBJDIR)/delta.h: $(OBJDIR)/headers $(OBJDIR)/deltacmd_.c: $(SRCDIR)/deltacmd.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/deltacmd.c >$(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.o: $(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/deltacmd.o -c $(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.h: $(OBJDIR)/headers 
$(OBJDIR)/descendants_.c: $(SRCDIR)/descendants.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/descendants.c >$(OBJDIR)/descendants_.c $(OBJDIR)/descendants.o: $(OBJDIR)/descendants_.c $(OBJDIR)/descendants.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/descendants.o -c $(OBJDIR)/descendants_.c $(OBJDIR)/descendants.h: $(OBJDIR)/headers $(OBJDIR)/diff_.c: $(SRCDIR)/diff.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/diff.c >$(OBJDIR)/diff_.c $(OBJDIR)/diff.o: $(OBJDIR)/diff_.c $(OBJDIR)/diff.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diff.o -c $(OBJDIR)/diff_.c $(OBJDIR)/diff.h: $(OBJDIR)/headers $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/diffcmd.c >$(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/doc.c >$(OBJDIR)/doc_.c $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c $(OBJDIR)/doc.h: $(OBJDIR)/headers $(OBJDIR)/encode_.c: $(SRCDIR)/encode.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/encode.c >$(OBJDIR)/encode_.c $(OBJDIR)/encode.o: $(OBJDIR)/encode_.c $(OBJDIR)/encode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/encode.o -c $(OBJDIR)/encode_.c $(OBJDIR)/encode.h: $(OBJDIR)/headers $(OBJDIR)/event_.c: $(SRCDIR)/event.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/event.c >$(OBJDIR)/event_.c $(OBJDIR)/event.o: $(OBJDIR)/event_.c $(OBJDIR)/event.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/event.o -c $(OBJDIR)/event_.c $(OBJDIR)/event.h: $(OBJDIR)/headers $(OBJDIR)/export_.c: $(SRCDIR)/export.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/export.c >$(OBJDIR)/export_.c $(OBJDIR)/export.o: $(OBJDIR)/export_.c $(OBJDIR)/export.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/export.o -c $(OBJDIR)/export_.c $(OBJDIR)/export.h: $(OBJDIR)/headers $(OBJDIR)/file_.c: $(SRCDIR)/file.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/file.c >$(OBJDIR)/file_.c $(OBJDIR)/file.o: $(OBJDIR)/file_.c $(OBJDIR)/file.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/file.o -c $(OBJDIR)/file_.c $(OBJDIR)/file.h: $(OBJDIR)/headers $(OBJDIR)/finfo_.c: $(SRCDIR)/finfo.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/finfo.c >$(OBJDIR)/finfo_.c $(OBJDIR)/finfo.o: $(OBJDIR)/finfo_.c $(OBJDIR)/finfo.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/finfo.o -c $(OBJDIR)/finfo_.c $(OBJDIR)/finfo.h: $(OBJDIR)/headers $(OBJDIR)/glob_.c: $(SRCDIR)/glob.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/glob.c >$(OBJDIR)/glob_.c $(OBJDIR)/glob.o: $(OBJDIR)/glob_.c $(OBJDIR)/glob.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/glob.o -c $(OBJDIR)/glob_.c $(OBJDIR)/glob.h: $(OBJDIR)/headers $(OBJDIR)/graph_.c: $(SRCDIR)/graph.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/graph.c >$(OBJDIR)/graph_.c $(OBJDIR)/graph.o: $(OBJDIR)/graph_.c $(OBJDIR)/graph.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/graph.o -c $(OBJDIR)/graph_.c $(OBJDIR)/graph.h: $(OBJDIR)/headers $(OBJDIR)/gzip_.c: $(SRCDIR)/gzip.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/gzip.c >$(OBJDIR)/gzip_.c $(OBJDIR)/gzip.o: $(OBJDIR)/gzip_.c $(OBJDIR)/gzip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/gzip.o -c $(OBJDIR)/gzip_.c $(OBJDIR)/gzip.h: $(OBJDIR)/headers $(OBJDIR)/http_.c: $(SRCDIR)/http.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/http.c >$(OBJDIR)/http_.c $(OBJDIR)/http.o: $(OBJDIR)/http_.c $(OBJDIR)/http.h 
$(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http.o -c $(OBJDIR)/http_.c $(OBJDIR)/http.h: $(OBJDIR)/headers $(OBJDIR)/http_socket_.c: $(SRCDIR)/http_socket.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/http_socket.c >$(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.o: $(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http_socket.o -c $(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.h: $(OBJDIR)/headers $(OBJDIR)/http_ssl_.c: $(SRCDIR)/http_ssl.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/http_ssl.c >$(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.o: $(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http_ssl.o -c $(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.h: $(OBJDIR)/headers $(OBJDIR)/http_transport_.c: $(SRCDIR)/http_transport.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/http_transport.c >$(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.o: $(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http_transport.o -c $(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.h: $(OBJDIR)/headers $(OBJDIR)/import_.c: $(SRCDIR)/import.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/import.c >$(OBJDIR)/import_.c $(OBJDIR)/import.o: $(OBJDIR)/import_.c $(OBJDIR)/import.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/import.o -c $(OBJDIR)/import_.c $(OBJDIR)/import.h: $(OBJDIR)/headers $(OBJDIR)/info_.c: $(SRCDIR)/info.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/info.c >$(OBJDIR)/info_.c $(OBJDIR)/info.o: $(OBJDIR)/info_.c $(OBJDIR)/info.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/info.o -c $(OBJDIR)/info_.c $(OBJDIR)/info.h: $(OBJDIR)/headers $(OBJDIR)/leaf_.c: $(SRCDIR)/leaf.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/leaf.c >$(OBJDIR)/leaf_.c $(OBJDIR)/leaf.o: $(OBJDIR)/leaf_.c $(OBJDIR)/leaf.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/leaf.o -c $(OBJDIR)/leaf_.c $(OBJDIR)/leaf.h: $(OBJDIR)/headers $(OBJDIR)/login_.c: $(SRCDIR)/login.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/login.c >$(OBJDIR)/login_.c $(OBJDIR)/login.o: $(OBJDIR)/login_.c $(OBJDIR)/login.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/login.o -c $(OBJDIR)/login_.c $(OBJDIR)/login.h: $(OBJDIR)/headers $(OBJDIR)/main_.c: $(SRCDIR)/main.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/main.c >$(OBJDIR)/main_.c $(OBJDIR)/main.o: $(OBJDIR)/main_.c $(OBJDIR)/main.h $(OBJDIR)/page_index.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/main.o -c $(OBJDIR)/main_.c $(OBJDIR)/main.h: $(OBJDIR)/headers $(OBJDIR)/manifest_.c: $(SRCDIR)/manifest.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/manifest.c >$(OBJDIR)/manifest_.c $(OBJDIR)/manifest.o: $(OBJDIR)/manifest_.c $(OBJDIR)/manifest.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/manifest.o -c $(OBJDIR)/manifest_.c $(OBJDIR)/manifest.h: $(OBJDIR)/headers $(OBJDIR)/md5_.c: $(SRCDIR)/md5.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/md5.c >$(OBJDIR)/md5_.c $(OBJDIR)/md5.o: $(OBJDIR)/md5_.c $(OBJDIR)/md5.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/md5.o -c $(OBJDIR)/md5_.c $(OBJDIR)/md5.h: $(OBJDIR)/headers $(OBJDIR)/merge_.c: $(SRCDIR)/merge.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/merge.c >$(OBJDIR)/merge_.c $(OBJDIR)/merge.o: $(OBJDIR)/merge_.c $(OBJDIR)/merge.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/merge.o -c $(OBJDIR)/merge_.c $(OBJDIR)/merge.h: $(OBJDIR)/headers $(OBJDIR)/merge3_.c: $(SRCDIR)/merge3.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/merge3.c >$(OBJDIR)/merge3_.c $(OBJDIR)/merge3.o: $(OBJDIR)/merge3_.c $(OBJDIR)/merge3.h 
$(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/merge3.o -c $(OBJDIR)/merge3_.c $(OBJDIR)/merge3.h: $(OBJDIR)/headers $(OBJDIR)/name_.c: $(SRCDIR)/name.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/name.c >$(OBJDIR)/name_.c $(OBJDIR)/name.o: $(OBJDIR)/name_.c $(OBJDIR)/name.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/name.o -c $(OBJDIR)/name_.c $(OBJDIR)/name.h: $(OBJDIR)/headers $(OBJDIR)/path_.c: $(SRCDIR)/path.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/path.c >$(OBJDIR)/path_.c $(OBJDIR)/path.o: $(OBJDIR)/path_.c $(OBJDIR)/path.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/path.o -c $(OBJDIR)/path_.c $(OBJDIR)/path.h: $(OBJDIR)/headers $(OBJDIR)/pivot_.c: $(SRCDIR)/pivot.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/pivot.c >$(OBJDIR)/pivot_.c $(OBJDIR)/pivot.o: $(OBJDIR)/pivot_.c $(OBJDIR)/pivot.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/pivot.o -c $(OBJDIR)/pivot_.c $(OBJDIR)/pivot.h: $(OBJDIR)/headers $(OBJDIR)/popen_.c: $(SRCDIR)/popen.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/popen.c >$(OBJDIR)/popen_.c $(OBJDIR)/popen.o: $(OBJDIR)/popen_.c $(OBJDIR)/popen.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/popen.o -c $(OBJDIR)/popen_.c $(OBJDIR)/popen.h: $(OBJDIR)/headers $(OBJDIR)/pqueue_.c: $(SRCDIR)/pqueue.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/pqueue.c >$(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.o: $(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/pqueue.o -c $(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.h: $(OBJDIR)/headers $(OBJDIR)/printf_.c: $(SRCDIR)/printf.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/printf.c >$(OBJDIR)/printf_.c $(OBJDIR)/printf.o: $(OBJDIR)/printf_.c $(OBJDIR)/printf.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/printf.o -c $(OBJDIR)/printf_.c $(OBJDIR)/printf.h: $(OBJDIR)/headers $(OBJDIR)/rebuild_.c: $(SRCDIR)/rebuild.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/rebuild.c >$(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.o: $(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/rebuild.o -c $(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.h: $(OBJDIR)/headers $(OBJDIR)/report_.c: $(SRCDIR)/report.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/report.c >$(OBJDIR)/report_.c $(OBJDIR)/report.o: $(OBJDIR)/report_.c $(OBJDIR)/report.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/report.o -c $(OBJDIR)/report_.c $(OBJDIR)/report.h: $(OBJDIR)/headers $(OBJDIR)/rss_.c: $(SRCDIR)/rss.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/rss.c >$(OBJDIR)/rss_.c $(OBJDIR)/rss.o: $(OBJDIR)/rss_.c $(OBJDIR)/rss.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/rss.o -c $(OBJDIR)/rss_.c $(OBJDIR)/rss.h: $(OBJDIR)/headers $(OBJDIR)/schema_.c: $(SRCDIR)/schema.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/schema.c >$(OBJDIR)/schema_.c $(OBJDIR)/schema.o: $(OBJDIR)/schema_.c $(OBJDIR)/schema.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/schema.o -c $(OBJDIR)/schema_.c $(OBJDIR)/schema.h: $(OBJDIR)/headers $(OBJDIR)/search_.c: $(SRCDIR)/search.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/search.c >$(OBJDIR)/search_.c $(OBJDIR)/search.o: $(OBJDIR)/search_.c $(OBJDIR)/search.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/search.o -c $(OBJDIR)/search_.c $(OBJDIR)/search.h: $(OBJDIR)/headers $(OBJDIR)/setup_.c: $(SRCDIR)/setup.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/setup.c >$(OBJDIR)/setup_.c $(OBJDIR)/setup.o: $(OBJDIR)/setup_.c $(OBJDIR)/setup.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/setup.o -c $(OBJDIR)/setup_.c $(OBJDIR)/setup.h: $(OBJDIR)/headers $(OBJDIR)/sha1_.c: $(SRCDIR)/sha1.c $(OBJDIR)/translate 
$(OBJDIR)/translate $(SRCDIR)/sha1.c >$(OBJDIR)/sha1_.c $(OBJDIR)/sha1.o: $(OBJDIR)/sha1_.c $(OBJDIR)/sha1.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/sha1.o -c $(OBJDIR)/sha1_.c $(OBJDIR)/sha1.h: $(OBJDIR)/headers $(OBJDIR)/shun_.c: $(SRCDIR)/shun.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/shun.c >$(OBJDIR)/shun_.c $(OBJDIR)/shun.o: $(OBJDIR)/shun_.c $(OBJDIR)/shun.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/shun.o -c $(OBJDIR)/shun_.c $(OBJDIR)/shun.h: $(OBJDIR)/headers $(OBJDIR)/skins_.c: $(SRCDIR)/skins.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/skins.c >$(OBJDIR)/skins_.c $(OBJDIR)/skins.o: $(OBJDIR)/skins_.c $(OBJDIR)/skins.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/skins.o -c $(OBJDIR)/skins_.c $(OBJDIR)/skins.h: $(OBJDIR)/headers $(OBJDIR)/sqlcmd_.c: $(SRCDIR)/sqlcmd.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/sqlcmd.c >$(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.o: $(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/sqlcmd.o -c $(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.h: $(OBJDIR)/headers $(OBJDIR)/stash_.c: $(SRCDIR)/stash.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/stash.c >$(OBJDIR)/stash_.c $(OBJDIR)/stash.o: $(OBJDIR)/stash_.c $(OBJDIR)/stash.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/stash.o -c $(OBJDIR)/stash_.c $(OBJDIR)/stash.h: $(OBJDIR)/headers $(OBJDIR)/stat_.c: $(SRCDIR)/stat.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/stat.c >$(OBJDIR)/stat_.c $(OBJDIR)/stat.o: $(OBJDIR)/stat_.c $(OBJDIR)/stat.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/stat.o -c $(OBJDIR)/stat_.c $(OBJDIR)/stat.h: $(OBJDIR)/headers $(OBJDIR)/style_.c: $(SRCDIR)/style.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/style.c >$(OBJDIR)/style_.c $(OBJDIR)/style.o: $(OBJDIR)/style_.c $(OBJDIR)/style.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/style.o -c $(OBJDIR)/style_.c $(OBJDIR)/style.h: $(OBJDIR)/headers $(OBJDIR)/sync_.c: $(SRCDIR)/sync.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/sync.c >$(OBJDIR)/sync_.c $(OBJDIR)/sync.o: $(OBJDIR)/sync_.c $(OBJDIR)/sync.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/sync.o -c $(OBJDIR)/sync_.c $(OBJDIR)/sync.h: $(OBJDIR)/headers $(OBJDIR)/tag_.c: $(SRCDIR)/tag.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/tag.c >$(OBJDIR)/tag_.c $(OBJDIR)/tag.o: $(OBJDIR)/tag_.c $(OBJDIR)/tag.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tag.o -c $(OBJDIR)/tag_.c $(OBJDIR)/tag.h: $(OBJDIR)/headers $(OBJDIR)/tar_.c: $(SRCDIR)/tar.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/tar.c >$(OBJDIR)/tar_.c $(OBJDIR)/tar.o: $(OBJDIR)/tar_.c $(OBJDIR)/tar.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tar.o -c $(OBJDIR)/tar_.c $(OBJDIR)/tar.h: $(OBJDIR)/headers $(OBJDIR)/th_main_.c: $(SRCDIR)/th_main.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/th_main.c >$(OBJDIR)/th_main_.c $(OBJDIR)/th_main.o: $(OBJDIR)/th_main_.c $(OBJDIR)/th_main.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/th_main.o -c $(OBJDIR)/th_main_.c $(OBJDIR)/th_main.h: $(OBJDIR)/headers $(OBJDIR)/timeline_.c: $(SRCDIR)/timeline.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/timeline.c >$(OBJDIR)/timeline_.c $(OBJDIR)/timeline.o: $(OBJDIR)/timeline_.c $(OBJDIR)/timeline.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/timeline.o -c $(OBJDIR)/timeline_.c $(OBJDIR)/timeline.h: $(OBJDIR)/headers $(OBJDIR)/tkt_.c: $(SRCDIR)/tkt.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/tkt.c >$(OBJDIR)/tkt_.c $(OBJDIR)/tkt.o: $(OBJDIR)/tkt_.c $(OBJDIR)/tkt.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tkt.o -c $(OBJDIR)/tkt_.c $(OBJDIR)/tkt.h: $(OBJDIR)/headers $(OBJDIR)/tktsetup_.c: 
$(SRCDIR)/tktsetup.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/tktsetup.c >$(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.o: $(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tktsetup.o -c $(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.h: $(OBJDIR)/headers $(OBJDIR)/undo_.c: $(SRCDIR)/undo.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/undo.c >$(OBJDIR)/undo_.c $(OBJDIR)/undo.o: $(OBJDIR)/undo_.c $(OBJDIR)/undo.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/undo.o -c $(OBJDIR)/undo_.c $(OBJDIR)/undo.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/update.c >$(OBJDIR)/update_.c $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c $(OBJDIR)/update.h: $(OBJDIR)/headers $(OBJDIR)/url_.c: $(SRCDIR)/url.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/url.c >$(OBJDIR)/url_.c $(OBJDIR)/url.o: $(OBJDIR)/url_.c $(OBJDIR)/url.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/url.o -c $(OBJDIR)/url_.c $(OBJDIR)/url.h: $(OBJDIR)/headers $(OBJDIR)/user_.c: $(SRCDIR)/user.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/user.c >$(OBJDIR)/user_.c $(OBJDIR)/user.o: $(OBJDIR)/user_.c $(OBJDIR)/user.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/user.o -c $(OBJDIR)/user_.c $(OBJDIR)/user.h: $(OBJDIR)/headers $(OBJDIR)/verify_.c: $(SRCDIR)/verify.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/verify.c >$(OBJDIR)/verify_.c $(OBJDIR)/verify.o: $(OBJDIR)/verify_.c $(OBJDIR)/verify.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/verify.o -c $(OBJDIR)/verify_.c $(OBJDIR)/verify.h: $(OBJDIR)/headers $(OBJDIR)/vfile_.c: $(SRCDIR)/vfile.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/vfile.c >$(OBJDIR)/vfile_.c $(OBJDIR)/vfile.o: $(OBJDIR)/vfile_.c $(OBJDIR)/vfile.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/vfile.o -c $(OBJDIR)/vfile_.c $(OBJDIR)/vfile.h: $(OBJDIR)/headers $(OBJDIR)/wiki_.c: $(SRCDIR)/wiki.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/wiki.c >$(OBJDIR)/wiki_.c $(OBJDIR)/wiki.o: $(OBJDIR)/wiki_.c $(OBJDIR)/wiki.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/wiki.o -c $(OBJDIR)/wiki_.c $(OBJDIR)/wiki.h: $(OBJDIR)/headers $(OBJDIR)/wikiformat_.c: $(SRCDIR)/wikiformat.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/wikiformat.c >$(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.o: $(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/wikiformat.o -c $(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.h: $(OBJDIR)/headers $(OBJDIR)/winhttp_.c: $(SRCDIR)/winhttp.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/winhttp.c >$(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.o: $(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/winhttp.o -c $(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.h: $(OBJDIR)/headers $(OBJDIR)/xfer_.c: $(SRCDIR)/xfer.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/xfer.c >$(OBJDIR)/xfer_.c $(OBJDIR)/xfer.o: $(OBJDIR)/xfer_.c $(OBJDIR)/xfer.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/xfer.o -c $(OBJDIR)/xfer_.c $(OBJDIR)/xfer.h: $(OBJDIR)/headers $(OBJDIR)/zip_.c: $(SRCDIR)/zip.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/zip.c >$(OBJDIR)/zip_.c $(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers $(OBJDIR)/sqlite3.o: $(SRCDIR)/sqlite3.c $(XTCC) -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_ENABLE_STAT2 -Dlocaltime=fossil_localtime 
-DSQLITE_ENABLE_LOCKING_STYLE=0 -c $(SRCDIR)/sqlite3.c -o $(OBJDIR)/sqlite3.o $(OBJDIR)/shell.o: $(SRCDIR)/shell.c $(XTCC) -Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 -c $(SRCDIR)/shell.c -o $(OBJDIR)/shell.o $(OBJDIR)/th.o: $(SRCDIR)/th.c $(XTCC) -I$(SRCDIR) -c $(SRCDIR)/th.c -o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c $(XTCC) -I$(SRCDIR) -c $(SRCDIR)/th_lang.c -o $(OBJDIR)/th_lang.o |
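The three awk commands in the $(OBJDIR)/VERSION.h rule near the top of this makefile each emit a single #define: the full check-in UUID taken from manifest.uuid, its first ten characters in brackets, and the date and time taken from the line of the manifest that begins with "D". As a rough sketch of the result (the hash and timestamp below are made-up placeholders, not values from this check-in), the generated header looks something like:

/* $(OBJDIR)/VERSION.h -- illustrative sketch only; the UUID and date shown
** here are hypothetical placeholders, not taken from any real manifest. */
#define MANIFEST_UUID "0123456789abcdef0123456789abcdef01234567"
#define MANIFEST_VERSION "[0123456789]"
#define MANIFEST_DATE "2011-01-01 00:00:00"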
Changes to src/makeheaders.c.
| < | > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | /* ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** ** Copyright 1993 D. Richard Hipp. All rights reserved. ** ** Redistribution and use in source and binary forms, with or ** without modification, are permitted provided that the following ** conditions are met: ** ** 1. Redistributions of source code must retain the above copyright ** notice, this list of conditions and the following disclaimer. ** ** 2. Redistributions in binary form must reproduce the above copyright ** notice, this list of conditions and the following disclaimer in ** the documentation and/or other materials provided with the ** distribution. ** ** This software is provided "as is" and any express or implied warranties, ** including, but not limited to, the implied warranties of merchantability ** and fitness for a particular purpose are disclaimed. In no event shall ** the author or contributors be liable for any direct, indirect, incidental, ** special, exemplary, or consequential damages (including, but not limited ** to, procurement of substitute goods or services; loss of use, data or ** profits; or business interruption) however caused and on any theory of ** liability, whether in contract, strict liability, or tort (including ** negligence or otherwise) arising in any way out of the use of this ** software, even if advised of the possibility of such damage. ** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** appropriate header files. */ #include <stdio.h> #include <stdlib.h> |
︙ | ︙ | |||
2318 2319 2320 2321 2322 2323 2324 | flags |= PS_Extern; } pStart = pList; } break; case 'i': | | > > | 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 | flags |= PS_Extern; } pStart = pList; } break; case 'i': if( pList->nText==6 && strncmp(pList->zText,"inline",6)==0 && (flags & PS_Static)==0 ){ nErr += ProcessInlineProc(pList,flags,&resetFlag); } break; case 'L': if( pList->nText==5 && strncmp(pList->zText,"LOCAL",5)==0 ){ flags |= PS_Local2; |
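The test shown above only calls ProcessInlineProc() when the token is exactly "inline" and the PS_Static flag has not already been set by an earlier "static" keyword. A hypothetical input fragment (not taken from the Fossil sources) that exercises both sides of the check:

/* Illustration only: the first definition reaches ProcessInlineProc();
** the second is skipped because "static" has already set PS_Static. */
inline int square(int x){ return x*x; }
static inline int cube(int x){ return x*x*x; }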
︙ | ︙ | |||
3008 3009 3010 3011 3012 3013 3014 | static InFile *CreateInFile(char *zArg, int *pnErr){ int nSrc; char *zSrc; InFile *pFile; int i; /* | | > > > > > > | | 3034 3035 3036 3037 3038 3039 3040 3041 3042 3043 3044 3045 3046 3047 3048 3049 3050 3051 3052 3053 3054 3055 3056 3057 | static InFile *CreateInFile(char *zArg, int *pnErr){ int nSrc; char *zSrc; InFile *pFile; int i; /* ** Get the name of the input file to be scanned. The input file is ** everything before the first ':' or the whole file if no ':' is seen. ** ** Except, on windows, ignore any ':' that occurs as the second character ** since it might be part of the drive specifier. So really, the ":' has ** to be the 3rd or later character in the name. This precludes 1-character ** file names, which really should not be a problem. */ zSrc = zArg; for(nSrc=2; zSrc[nSrc] && zArg[nSrc]!=':'; nSrc++){} pFile = SafeMalloc( sizeof(InFile) ); memset(pFile,0,sizeof(InFile)); pFile->zSrc = StrDup(zSrc,nSrc); /* Figure out if we are dealing with C or C++ code. Assume any ** file with ".c" or ".h" is C code and all else is C++. */ |
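Because the scan above starts at index 2, a ':' that appears as the second character of the argument (a Windows drive letter) can never terminate the source-file name, at the cost of ruling out 1-character file names as the comment notes. A self-contained sketch of the same splitting rule, run on two hypothetical argument strings, behaves like this:

#include <stdio.h>

/* Sketch of the CreateInFile() name-splitting rule: the source name is
** everything before the first ':' found at index 2 or later. */
static void SplitArg(const char *zArg){
  int nSrc;
  for(nSrc=2; zArg[nSrc] && zArg[nSrc]!=':'; nSrc++){}
  printf("source=\"%.*s\" remainder=\"%s\"\n",
         nSrc, zArg, zArg[nSrc] ? &zArg[nSrc+1] : "");
}

int main(void){
  SplitArg("main.c:main.h");          /* source="main.c", remainder="main.h" */
  SplitArg("c:\\src\\main.c:main.h"); /* the drive-letter ':' is skipped     */
  return 0;
}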
︙ | ︙ |
Changes to src/makeheaders.html.
︙ | ︙ | |||
41 42 43 44 45 46 47 | <li><a href=makeheaders.html#H0014>3.8 Caveats</a> </ul> <li><a href=makeheaders.html#H0015>4.0 Using Makeheaders To Generate Documentation</a> <li><a href=makeheaders.html#H0016>5.0 Compiling The Makeheaders Program</a> <li><a href=makeheaders.html#H0017>6.0 Summary And Conclusion</a> | | | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 | <li><a href=makeheaders.html#H0014>3.8 Caveats</a> </ul> <li><a href=makeheaders.html#H0015>4.0 Using Makeheaders To Generate Documentation</a> <li><a href=makeheaders.html#H0016>5.0 Compiling The Makeheaders Program</a> <li><a href=makeheaders.html#H0017>6.0 Summary And Conclusion</a> </ul><a name="H0002"></a> <h2>1.0 Background</h2> <p> A piece of C source code can be one of two things: a <em>declaration</em> or a <em>definition</em>. A declaration is source text that gives information to the compiler but doesn't directly result in any code being generated. |
︙ | ︙ | |||
96 97 98 99 100 101 102 | The .c files contain ``<code>#include</code>'' preprocessor statements that cause the contents of .h files to be included as part of the source code when the .c file is compiled. In this way, the .h files define the interface to a subsystem and the .c files define how the subsystem is implemented. </p> | | | 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 | The .c files contain ``<code>#include</code>'' preprocessor statements that cause the contents of .h files to be included as part of the source code when the .c file is compiled. In this way, the .h files define the interface to a subsystem and the .c files define how the subsystem is implemented. </p> <a name="H0003"></a> <h3>1.1 Problems With The Traditional Approach</h3> <p> As the art of computer programming continues to advance, and the size and complexity of programs continues to swell, the traditional C approach of placing declarations and definitions in separate files begins to present the programmer with logistics and |
︙ | ︙ | |||
150 151 152 153 154 155 156 | In a program with complex, interwoven data structures, the correct declaration order can become very difficult to determine manually, especially when the declarations involved are spread out over several files. </ol> </p> | | | 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | In a program with complex, interwoven data structures, the correct declaration order can become very difficult to determine manually, especially when the declarations involved are spread out over several files. </ol> </p> <a name="H0004"></a> <h3>1.2 The Makeheaders Solution</h3> <p> The makeheaders program is designed to ameliorate the problems associated with the traditional C programming model by automatically generating the interface information in the .h files from interface information contained in other .h files and |
︙ | ︙ | |||
213 214 215 216 217 218 219 | so that makeheaders will be run automatically whenever the project is rebuilt. And the burden of running makeheaders is light. It will easily process tens of thousands of lines of source code per second. </p> | | | 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 | so that makeheaders will be run automatically whenever the project is rebuilt. And the burden of running makeheaders is light. It will easily process tens of thousands of lines of source code per second. </p> <a name="H0005"></a> <h2>2.0 Running The Makeheaders Program</h2> <p> The makeheaders program is very easy to run. If you have a collection of C source code and include files in the working directory, then you can run makeheaders to generate appropriate .h files using the following command: |
︙ | ︙ | |||
361 362 363 364 365 366 367 | you can prepend a ``./'' to its name in order to get it accepted by the command line parser. Or, you can insert the special option ``--'' on the command line to cause all subsequent command line arguments to be treated as filenames even if their names begin with ``-''. </p> | | | | 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 | you can prepend a ``./'' to its name in order to get it accepted by the command line parser. Or, you can insert the special option ``--'' on the command line to cause all subsequent command line arguments to be treated as filenames even if their names begin with ``-''. </p> <a name="H0006"></a> <h2>3.0 Preparing Source Files For Use With Makeheaders</h2> <p> Very little has to be done to prepare source files for use with makeheaders since makeheaders will read and understand ordinary C code. But it is important that you structure your files in a way that makes sense in the makeheaders context. This section will describe several typical uses of makeheaders. </p> <a name="H0007"></a> <h3>3.1 The Basic Setup</h3> <p> The simplest way to use makeheaders is to put all definitions in one or more .c files and all structure and type declarations in separate .h files. The only restriction is that you should take care to choose basenames
︙ | ︙ | |||
472 473 474 475 476 477 478 | those entered manually by the programmer and others generated automatically by a prior run of makeheaders. But that is not a problem. The makeheaders program will recognize and ignore any files it has previously generated that show up on its input list. </p> | | | 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 | those entered manually by the programmer and others generated automatically by a prior run of makeheaders. But that is not a problem. The makeheaders program will recognize and ignore any files it has previously generated that show up on its input list. </p> <a name="H0008"></a> <h3>3.2 What Declarations Get Copied</h3> <p> The following list details all of the code constructs that makeheaders will extract and place in the automatically generated .h files: </p>
︙ | ︙ | |||
575 576 577 578 579 580 581 | As a final note, we observe that automatically generated declarations are ordered as required by the ANSI-C programming language. If the declaration of some structure ``X'' requires a prior declaration of another structure ``Y'', then Y will appear first in the generated headers. </p> | | | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 | As a final note, we observe that automatically generated declarations are ordered as required by the ANSI-C programming language. If the declaration of some structure ``X'' requires a prior declaration of another structure ``Y'', then Y will appear first in the generated headers. </p> <a name="H0009"></a> <h3>3.3 How To Avoid Having To Write Any Header Files</h3> <p> In my experience, large projects work better if all of the manually written code is placed in .c files and all .h files are generated automatically. This is slightly different from the traditional C method of placing
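As a concrete, hypothetical instance of the ordering rule restated at the top of this hunk: given the two declarations below, any generated header that needs Line will list Point first, because Line embeds Point by value and cannot be declared without it.

/* Illustration only; these types are not part of Fossil. */
struct Point { int x, y; };

struct Line {
  struct Point a;   /* requires the full declaration of struct Point */
  struct Point b;
};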
︙ | ︙ | |||
642 643 644 645 646 647 648 | ``#if INTERFACE'' regions of .c files. Makeheaders treats all declarations alike, no matter where they come from. You should also note that a single .c file can contain as many ``#if INTERFACE'' regions as desired. </p> | | | 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 | ``#if INTERFACE'' regions of .c files. Makeheaders treats all declarations alike, no matter where they come from. You should also note that a single .c file can contain as many ``#if INTERFACE'' regions as desired. </p> <a name="H0010"></a> <h3>3.4 Designating Declarations For Export</h3> <p> In a large project, one will often construct a hierarchy of interfaces. For example, you may have a group of 20 or so files that form a library used in several other parts of the system. |
︙ | ︙ | |||
731 732 733 734 735 736 737 | The ``#if EXPORT_INTERFACE'' mechanism can be used in either .c or .h files. (The ``#if INTERFACE'' can also be used in both .h and .c files, but since its use in a .h file would be redundant, we haven't mentioned it before.) </p> | | | 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 | The ``#if EXPORT_INTERFACE'' mechanism can be used in either .c or .h files. (The ``#if INTERFACE'' can also be used in both .h and .c files, but since its use in a .h file would be redundant, we haven't mentioned it before.) </p> <a name="H0011"></a> <h3>3.5 Local declarations processed by makeheaders</h3> <p> Structure declarations and typedefs that appear in .c files are normally ignored by makeheaders. Such declarations are only intended for use by the source file in which they appear and so makeheaders doesn't need to copy them into any
︙ | ︙ | |||
769 770 771 772 773 774 775 | A ``LOCAL_INTERFACE'' block works very much like the ``INTERFACE'' and ``EXPORT_INTERFACE'' blocks described above, except that makeheaders insures that the objects declared in a LOCAL_INTERFACE are only visible to the file containing the LOCAL_INTERFACE. </p> | | | 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 | A ``LOCAL_INTERFACE'' block works very much like the ``INTERFACE'' and ``EXPORT_INTERFACE'' blocks described above, except that makeheaders insures that the objects declared in a LOCAL_INTERFACE are only visible to the file containing the LOCAL_INTERFACE. </p> <a name="H0012"></a> <h3>3.6 Using Makeheaders With C++ Code</h3> <p> You can use makeheaders to generate header files for C++ code, in addition to C. Makeheaders will recognize and copy both ``class'' declarations and inline function definitions, and it knows not to try to generate |
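Taken together, sections 3.3 through 3.5 let a single .c file mix the three kinds of blocks. The sketch below uses hypothetical types, and it assumes the LOCAL_INTERFACE block is spelled with the same "#if" convention as the INTERFACE and EXPORT_INTERFACE blocks quoted earlier; each label only controls which generated header the enclosed declarations are copied into.

#if INTERFACE
/* Copied into this file's generated header (section 3.3). */
typedef struct Tracker Tracker;
struct Tracker { int nHit; int nMiss; };
#endif

#if EXPORT_INTERFACE
/* Designated for the exported interface header (section 3.4). */
typedef struct TrackerStats TrackerStats;
struct TrackerStats { int nTotal; };
#endif

#if LOCAL_INTERFACE
/* Visible only to the generated header of this one source file (section 3.5). */
struct TrackerPrivate { int iSecret; };
#endif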
︙ | ︙ | |||
868 869 870 871 872 873 874 | <p> Makeheaders does not understand more recent C++ syntax such as templates and namespaces. Perhaps these issues will be addressed in future revisions. </p> | | | 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 | <p> Makeheaders does not understand more recent C++ syntax such as templates and namespaces. Perhaps these issues will be addressed in future revisions. </p> <a name="H0013"></a> <h3>3.7 Conditional Compilation</h3> <p> The makeheaders program understands and tracks the conditional compilation constructs in the source code files it scans. Hence, if the following code appears in a source file <pre>
︙ | ︙ | |||
901 902 903 904 905 906 907 | <pre> #if 0 #endif </pre> and treats the enclosed text as a comment. </p> | | | 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 | <pre> #if 0 #endif </pre> and treats the enclosed text as a comment. </p> <a name="H0014"></a> <h3>3.8 Caveats</h3> <p> The makeheaders system is designed to be robust but it is possible for a devious programmer to fool the system, usually with unhelpful consequences. This subsection is a guide to helping you avoid trouble. |
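A minimal, hypothetical illustration of the rule just quoted: everything between the #if 0 and its #endif is treated as a comment, so no prototype for ObsoleteRoutine() is emitted into any generated header.

#if 0
/* Ignored by makeheaders exactly as if it were commented out. */
int ObsoleteRoutine(int x){ return x; }
#endif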
︙ | ︙ | |||
971 972 973 974 975 976 977 | For most projects the code constructs that makeheaders cannot handle are very rare. As long as you avoid excessive cleverness, makeheaders will probably be able to figure out what you want and will do the right thing. </p> | | | 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 | For most projects the code constructs that makeheaders cannot handle are very rare. As long as you avoid excessive cleverness, makeheaders will probably be able to figure out what you want and will do the right thing. </p> <a name="H0015"></a> <h2>4.0 Using Makeheaders To Generate Documentation</h2> <p> Many people have observed the advantages of generating program documentation directly from the source code: <ul> <li> Less effort is involved. It is easier to write a program than |
︙ | ︙ | |||
1035 1036 1037 1038 1039 1040 1041 | <li> The complete text of a declaration for the object. </ul> The exact output format will not be described here. It is simple to understand and parse and should be obvious to anyone who inspects some sample output. </p> | | | | 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 | <li> The complete text of a declaration for the object. </ul> The exact output format will not be described here. It is simple to understand and parse and should be obvious to anyone who inspects some sample output. </p> <a name="H0016"></a> <h2>5.0 Compiling The Makeheaders Program</h2> <p> The source code for makeheaders is a single file of ANSI-C code, less than 3000 lines in length. The program makes only modest demands of the system and C library and should compile without alteration on most ANSI C compilers and on most operating systems. It is known to compile using several variations of GCC for Unix as well as Cygwin32 and MSVC 5.0 for Win32. </p> <a name="H0017"></a> <h2>6.0 Summary And Conclusion</h2> <p> The makeheaders program will automatically generate a minimal header file for each of a set of C source and header files, and will generate a composite header file for the entire source file suite, for either internal or external use. |
︙ | ︙ |
Changes to src/makemake.tcl.
1 2 | #!/usr/bin/tclsh # | | > > > > | > > > > > > | > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 | #!/usr/bin/tclsh # # Run this TCL script to generate the various makefiles for a variety # of platforms. Files generated include: # # src/main.mk # makefile for all unix systems # win/Makefile.mingw # makefile for mingw on windows # win/Makefile.* # makefiles for other windows compilers # # Run this script while in the "src" subdirectory. Like this: # # tclsh makemake.tcl # ############################################################################# # Basenames of all source files that get preprocessed using # "translate" and "makeheaders". To add new source files to the # project, simply add the basename to this list and rerun this script. # set src { add allrepo attach bag bisect blob branch browse captcha cgi checkin checkout |
︙ | ︙ | |||
31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 | diffcmd doc encode event export file finfo graph http http_socket http_transport import info login main manifest md5 merge merge3 name pivot popen pqueue printf rebuild report rss schema search setup sha1 shun skins stat style sync tag th_main timeline tkt tktsetup undo update url | > > > > > > > | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | diffcmd doc encode event export file finfo glob graph gzip http http_socket http_transport import info leaf login main manifest md5 merge merge3 name path pivot popen pqueue printf rebuild report rss schema search setup sha1 shun skins sqlcmd stash stat style sync tag tar th_main timeline tkt tktsetup undo update url |
︙ | ︙ | |||
82 83 84 85 86 87 88 | zip http_ssl } # Name of the final application # set name fossil | | > > > > > > > > > > > > > > > > > > > > > > > > | | | < < | | | | | | | | | | | | | | | | | | | | | | | | | < > | > | > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | | > | | | | | | | | < < | | < < < < | | | | | | | | | > > > > > > | | > > > > | | < < < < | | > | | | | | | > | > > > > > | | | | | | | > > | > > > | | 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 | zip http_ssl } # Name of the final application # set name fossil # The "writeln" command sends output to the target makefile. # proc writeln {args} { global output_file if {[lindex $args 0]=="-nonewline"} { puts -nonewline $output_file [lindex $args 1] } else { puts $output_file [lindex $args 0] } } # STOP HERE. # Unless the build procedures changes, you should not have to edit anything # below this line. ############################################################################## ############################################################################## ############################################################################## # Start by generating the "main.mk" makefile used for all unix systems. 
# puts "building main.mk" set output_file [open main.mk w] fconfigure $output_file -translation binary writeln {# DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } writeln "\n" writeln -nonewline "TRANS_SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(OBJDIR)/${s}_.c" } writeln "\n" writeln -nonewline "OBJ =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(OBJDIR)/$s.o" } writeln "\n" writeln "APPNAME = $name\$(E)" writeln "\n" writeln { all: $(OBJDIR) $(APPNAME) install: $(APPNAME) mv $(APPNAME) $(INSTALLDIR) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(BCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(BCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(BCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c # WARNING. DANGER. Running the testsuite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. test: $(APPNAME) $(TCLSH) test/tester.tcl $(APPNAME) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest awk '{ printf "#define MANIFEST_UUID \"%s\"\n", $$1}' \ $(SRCDIR)/../manifest.uuid >$(OBJDIR)/VERSION.h awk '{ printf "#define MANIFEST_VERSION \"[%.10s]\"\n", $$1}' \ $(SRCDIR)/../manifest.uuid >>$(OBJDIR)/VERSION.h awk '$$1=="D"{printf "#define MANIFEST_DATE \"%s %s\"\n",\ substr($$2,1,10),substr($$2,12,8)}' \ $(SRCDIR)/../manifest >>$(OBJDIR)/VERSION.h EXTRAOBJ = \ $(OBJDIR)/sqlite3.o \ $(OBJDIR)/shell.o \ $(OBJDIR)/th.o \ $(OBJDIR)/th_lang.o $(APPNAME): $(OBJDIR)/headers $(OBJ) $(EXTRAOBJ) $(TCC) -o $(APPNAME) $(OBJ) $(EXTRAOBJ) $(LIB) # This rule prevents make from using its default rules to try build # an executable named "manifest" out of the file named "manifest.c" # $(SRCDIR)/../manifest: # noop clean: rm -rf $(OBJDIR)/* $(APPNAME) } set mhargs {} foreach s [lsort $src] { append mhargs " \$(OBJDIR)/${s}_.c:\$(OBJDIR)/$s.h" set extra_h($s) {} } append mhargs " \$(SRCDIR)/sqlite3.h" append mhargs " \$(SRCDIR)/th.h" append mhargs " \$(OBJDIR)/VERSION.h" writeln "\$(OBJDIR)/page_index.h: \$(TRANS_SRC) \$(OBJDIR)/mkindex" writeln "\t\$(OBJDIR)/mkindex \$(TRANS_SRC) >$@" writeln "\$(OBJDIR)/headers:\t\$(OBJDIR)/page_index.h \$(OBJDIR)/makeheaders \$(OBJDIR)/VERSION.h" writeln "\t\$(OBJDIR)/makeheaders $mhargs" writeln "\ttouch \$(OBJDIR)/headers" writeln "\$(OBJDIR)/headers: Makefile" writeln "Makefile:" set extra_h(main) \$(OBJDIR)/page_index.h foreach s [lsort $src] { writeln "\$(OBJDIR)/${s}_.c:\t\$(SRCDIR)/$s.c \$(OBJDIR)/translate" writeln "\t\$(OBJDIR)/translate \$(SRCDIR)/$s.c >\$(OBJDIR)/${s}_.c\n" writeln "\$(OBJDIR)/$s.o:\t\$(OBJDIR)/${s}_.c \$(OBJDIR)/$s.h $extra_h($s) \$(SRCDIR)/config.h" writeln "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c \$(OBJDIR)/${s}_.c\n" writeln "\$(OBJDIR)/$s.h:\t\$(OBJDIR)/headers" } writeln "\$(OBJDIR)/sqlite3.o:\t\$(SRCDIR)/sqlite3.c" set opt {-DSQLITE_OMIT_LOAD_EXTENSION=1} append opt " -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4" #append opt " -DSQLITE_ENABLE_FTS3=1" append opt " -DSQLITE_ENABLE_STAT2" append opt " -Dlocaltime=fossil_localtime" append opt " -DSQLITE_ENABLE_LOCKING_STYLE=0" set SQLITE_OPTIONS $opt writeln 
"\t\$(XTCC) $opt -c \$(SRCDIR)/sqlite3.c -o \$(OBJDIR)/sqlite3.o\n" writeln "\$(OBJDIR)/shell.o:\t\$(SRCDIR)/shell.c" set opt {-Dmain=sqlite3_shell} append opt " -DSQLITE_OMIT_LOAD_EXTENSION=1" writeln "\t\$(XTCC) $opt -c \$(SRCDIR)/shell.c -o \$(OBJDIR)/shell.o\n" writeln "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" writeln "\t\$(XTCC) -I\$(SRCDIR) -c \$(SRCDIR)/th.c -o \$(OBJDIR)/th.o\n" writeln "\$(OBJDIR)/th_lang.o:\t\$(SRCDIR)/th_lang.c" writeln "\t\$(XTCC) -I\$(SRCDIR) -c \$(SRCDIR)/th_lang.c -o \$(OBJDIR)/th_lang.o\n" close $output_file # # End of the main.mk output ############################################################################## ############################################################################## ############################################################################## # Begin win/Makefile.mingw # puts "building ../win/Makefile.mingw" set output_file [open ../win/Makefile.mingw w] fconfigure $output_file -translation binary writeln {#!/usr/bin/make # # This is a makefile for us on windows using mingw. # #### The toplevel directory of the source tree. Fossil can be built # in a directory that is separate from the source tree. Just change # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # # OBJDIR = wbld #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # BCC = gcc #### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) # # FOSSIL_ENABLE_SSL=1 #### The directory in which the zlib compression library is installed. # # ZLIBDIR = /programs/gnuwin32 #### C Compile and options for use in building executables that # will run on the target platform. This is usually the same # as BCC, unless you are cross-compiling. This C compiler builds # the finished binary for fossil. The BCC compiler above is used # for building intermediate code-generator tools. # TCC = gcc -Os -Wall -L$(ZLIBDIR)/lib -I$(ZLIBDIR)/include # With HTTPS support ifdef FOSSIL_ENABLE_SSL TCC += -static -DFOSSIL_ENABLE_SSL=1 endif #### Extra arguments for linking the finished binary. Fossil needs # to link against the Z-Lib compression library. There are no # other dependencies. We sometimes add the -static option here # so that we can build a static executable that will run in a # chroot jail. # #LIB = -lz -lws2_32 # OpenSSL: ifdef FOSSIL_ENABLE_SSL LIB += -lssl -lcrypto -lgdi32 endif LIB += -lmingwex -lz -lws2_32 #### Tcl shell for use in running the fossil testsuite. This is only # used for testing. If you do not run # TCLSH = tclsh #### Nullsoft installer makensis location # MAKENSIS = "c:\Program Files\NSIS\makensis.exe" #### Include a configuration file that can override any one of these settings. # -include config.w32 # STOP HERE # You should not need to change anything below this line #-------------------------------------------------------- XTCC = $(TCC) $(CFLAGS) -I. 
-I$(SRCDIR) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } writeln "\n" writeln -nonewline "TRANS_SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(OBJDIR)/${s}_.c" } writeln "\n" writeln -nonewline "OBJ =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(OBJDIR)/$s.o" } writeln "\n" writeln "APPNAME = ${name}.exe" writeln {TRANSLATE = $(subst /,\\,$(OBJDIR)/translate.exe) MAKEHEADERS = $(subst /,\\,$(OBJDIR)/makeheaders.exe) MKINDEX = $(subst /,\\,$(OBJDIR)/mkindex.exe) VERSION = $(subst /,\\,$(OBJDIR)/version.exe) } writeln { all: $(OBJDIR) $(APPNAME) $(OBJDIR)/icon.o: $(SRCDIR)/../win/icon.rc cp $(SRCDIR)/../win/icon.rc $(OBJDIR) windres $(OBJDIR)/icon.rc -o $(OBJDIR)/icon.o install: $(APPNAME) mv $(APPNAME) $(INSTALLDIR) $(OBJDIR): mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(BCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(BCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(BCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c $(VERSION): $(SRCDIR)/../win/version.c $(BCC) -o $(OBJDIR)/version $(SRCDIR)/../win/version.c # WARNING. DANGER. Running the testsuite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. test: $(APPNAME) $(TCLSH) test/tester.tcl $(APPNAME) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(VERSION) $(VERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest >$(OBJDIR)/VERSION.h EXTRAOBJ = \ $(OBJDIR)/sqlite3.o \ $(OBJDIR)/shell.o \ $(OBJDIR)/th.o \ $(OBJDIR)/th_lang.o $(APPNAME): $(OBJDIR)/headers $(OBJ) $(EXTRAOBJ) $(OBJDIR)/icon.o $(TCC) -o $(APPNAME) $(OBJ) $(EXTRAOBJ) $(LIB) $(OBJDIR)/icon.o # This rule prevents make from using its default rules to try build # an executable named "manifest" out of the file named "manifest.c" # $(SRCDIR)/../manifest: # noop # Requires msys to be installed in addition to the mingw, for the "rm" # command. "del" will not work here because it is not a separate command # but a MSDOS-shell builtin. 
# clean: rm -rf $(OBJDIR) $(APPNAME) setup: $(OBJDIR) $(APPNAME) $(MAKENSIS) ./fossil.nsi } set mhargs {} foreach s [lsort $src] { append mhargs " \$(OBJDIR)/${s}_.c:\$(OBJDIR)/$s.h" set extra_h($s) {} } append mhargs " \$(SRCDIR)/sqlite3.h" append mhargs " \$(SRCDIR)/th.h" append mhargs " \$(OBJDIR)/VERSION.h" writeln "\$(OBJDIR)/page_index.h: \$(TRANS_SRC) \$(OBJDIR)/mkindex" writeln "\t\$(MKINDEX) \$(TRANS_SRC) >$@" writeln "\$(OBJDIR)/headers:\t\$(OBJDIR)/page_index.h \$(OBJDIR)/makeheaders \$(OBJDIR)/VERSION.h" writeln "\t\$(MAKEHEADERS) $mhargs" writeln "\techo Done >\$(OBJDIR)/headers" writeln "" writeln "\$(OBJDIR)/headers: Makefile" writeln "Makefile:" set extra_h(main) \$(OBJDIR)/page_index.h foreach s [lsort $src] { writeln "\$(OBJDIR)/${s}_.c:\t\$(SRCDIR)/$s.c \$(OBJDIR)/translate" writeln "\t\$(TRANSLATE) \$(SRCDIR)/$s.c >\$(OBJDIR)/${s}_.c\n" writeln "\$(OBJDIR)/$s.o:\t\$(OBJDIR)/${s}_.c \$(OBJDIR)/$s.h $extra_h($s) \$(SRCDIR)/config.h" writeln "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c \$(OBJDIR)/${s}_.c\n" writeln "$s.h:\t\$(OBJDIR)/headers" } writeln "\$(OBJDIR)/sqlite3.o:\t\$(SRCDIR)/sqlite3.c" set opt $SQLITE_OPTIONS writeln "\t\$(XTCC) $opt -c \$(SRCDIR)/sqlite3.c -o \$(OBJDIR)/sqlite3.o\n" writeln "\$(OBJDIR)/shell.o:\t\$(SRCDIR)/shell.c" set opt {-Dmain=sqlite3_shell} append opt " -DSQLITE_OMIT_LOAD_EXTENSION=1" writeln "\t\$(XTCC) $opt -c \$(SRCDIR)/shell.c -o \$(OBJDIR)/shell.o\n" writeln "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" writeln "\t\$(XTCC) -I\$(SRCDIR) -c \$(SRCDIR)/th.c -o \$(OBJDIR)/th.o\n" writeln "\$(OBJDIR)/th_lang.o:\t\$(SRCDIR)/th_lang.c" writeln "\t\$(XTCC) -I\$(SRCDIR) -c \$(SRCDIR)/th_lang.c -o \$(OBJDIR)/th_lang.o\n" close $output_file # # End of the main.mk output ############################################################################## ############################################################################## ############################################################################## # Begin win/Makefile.dmc # puts "building ../win/Makefile.dmc" set output_file [open ../win/Makefile.dmc w] fconfigure $output_file -translation binary writeln {# DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh src/makemake.tcl" # to regenerate this file. B = .. SRCDIR = $B\src OBJDIR = . O = .obj E = .exe # Maybe DMDIR, SSL or INCL needs adjustment DMDIR = c:\DM INCL = -I. 
-I$(SRCDIR) -I$B\win\include -I$(DMDIR)\extra\include #SSL = -DFOSSIL_ENABLE_SSL=1 SSL = CFLAGS = -o BCC = $(DMDIR)\bin\dmc $(CFLAGS) TCC = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL) LIBS = $(DMDIR)\extra\lib\ zlib wsock32 } writeln "SQLITE_OPTIONS = $SQLITE_OPTIONS\n" writeln -nonewline "SRC = " foreach s [lsort $src] { writeln -nonewline "${s}_.c " } writeln "\n" writeln -nonewline "OBJ = " foreach s [lsort $src] { writeln -nonewline "\$(OBJDIR)\\$s\$O " } writeln "\$(OBJDIR)\\shell\$O \$(OBJDIR)\\sqlite3\$O \$(OBJDIR)\\th\$O \$(OBJDIR)\\th_lang\$O " writeln { RC=$(DMDIR)\bin\rcc RCFLAGS=-32 -w1 -I$(SRCDIR) /D__DMC__ APPNAME = $(OBJDIR)\fossil$(E) all: $(APPNAME) $(APPNAME) : translate$E mkindex$E headers $(OBJ) $(OBJDIR)\link cd $(OBJDIR) $(DMDIR)\bin\link @link $(OBJDIR)\fossil.res: $B\win\fossil.rc $(RC) $(RCFLAGS) -o$@ $** $(OBJDIR)\link: $B\win\Makefile.dmc $(OBJDIR)\fossil.res} writeln -nonewline "\t+echo " foreach s [lsort $src] { writeln -nonewline "$s " } writeln "shell sqlite3 th th_lang > \$@" writeln "\t+echo fossil >> \$@" writeln "\t+echo fossil >> \$@" writeln "\t+echo \$(LIBS) >> \$@" writeln "\t+echo. >> \$@" writeln "\t+echo fossil >> \$@" writeln { translate$E: $(SRCDIR)\translate.c $(BCC) -o$@ $** makeheaders$E: $(SRCDIR)\makeheaders.c $(BCC) -o$@ $** mkindex$E: $(SRCDIR)\mkindex.c $(BCC) -o$@ $** version$E: $B\win\version.c $(BCC) -o$@ $** $(OBJDIR)\shell$O : $(SRCDIR)\shell.c $(TCC) -o$@ -c -Dmain=sqlite3_shell $(SQLITE_OPTIONS) $** $(OBJDIR)\sqlite3$O : $(SRCDIR)\sqlite3.c $(TCC) -o$@ -c $(SQLITE_OPTIONS) $** $(OBJDIR)\th$O : $(SRCDIR)\th.c $(TCC) -o$@ -c $** $(OBJDIR)\th_lang$O : $(SRCDIR)\th_lang.c $(TCC) -o$@ -c $** |
︙ | ︙
316 317 318 319 320 321 322 | -del *.obj *_.c *.h *.map realclean: -del $(APPNAME) translate$E mkindex$E makeheaders$E version$E } foreach s [lsort $src] { | | | | | | | | | < | > | > > > > > > | > > | | < > | 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 | -del *.obj *_.c *.h *.map realclean: -del $(APPNAME) translate$E mkindex$E makeheaders$E version$E } foreach s [lsort $src] { writeln "\$(OBJDIR)\\$s\$O : ${s}_.c ${s}.h" writeln "\t\$(TCC) -o\$@ -c ${s}_.c\n" writeln "${s}_.c : \$(SRCDIR)\\$s.c" writeln "\t+translate\$E \$** > \$@\n" } writeln -nonewline "headers: makeheaders\$E page_index.h VERSION.h\n\t +makeheaders\$E " foreach s [lsort $src] { writeln -nonewline "${s}_.c:$s.h " } writeln "\$(SRCDIR)\\sqlite3.h \$(SRCDIR)\\th.h VERSION.h" writeln "\t@copy /Y nul: headers" close $output_file # # End of the win/Makefile.dmc output ############################################################################## ############################################################################## ############################################################################## # Begin win/Makefile.msc # puts "building ../win/Makefile.msc" set output_file [open ../win/Makefile.msc w] fconfigure $output_file -translation binary writeln {# DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh src/makemake.tcl" # to regenerate this file. B = .. SRCDIR = $B\src OBJDIR = . OX = . O = .obj E = .exe # Maybe MSCDIR, SSL, ZLIB, or INCL needs adjustment MSCDIR = c:\msc # Uncomment below for SSL support |
︙ | ︙
365 366 367 368 369 370 371 | #ZLIB = zdll.lib ZINCDIR = $(MSCDIR)\extra\include ZLIBDIR = $(MSCDIR)\extra\lib ZLIB = zlib.lib INCL = -I. -I$(SRCDIR) -I$B\win\include -I$(MSCDIR)\extra\include -I$(ZINCDIR) | < < < | > > | | | | | | | | | | | | | | | | | | > > > | | | | | | | | | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 | #ZLIB = zdll.lib ZINCDIR = $(MSCDIR)\extra\include ZLIBDIR = $(MSCDIR)\extra\lib ZLIB = zlib.lib INCL = -I. 
-I$(SRCDIR) -I$B\win\include -I$(MSCDIR)\extra\include -I$(ZINCDIR) CFLAGS = -nologo -MT -O2 BCC = $(CC) $(CFLAGS) TCC = $(CC) -c $(CFLAGS) $(MSCDEF) $(SSL) $(INCL) LIBS = $(ZLIB) ws2_32.lib $(SSLLIB) LIBDIR = -LIBPATH:$(MSCDIR)\extra\lib -LIBPATH:$(ZLIBDIR) } regsub -all {[-]D} $SQLITE_OPTIONS {/D} MSC_SQLITE_OPTIONS writeln "SQLITE_OPTIONS = $MSC_SQLITE_OPTIONS\n" writeln -nonewline "SRC = " foreach s [lsort $src] { writeln -nonewline "${s}_.c " } writeln "\n" writeln -nonewline "OBJ = " foreach s [lsort $src] { writeln -nonewline "\$(OX)\\$s\$O " } writeln "\$(OX)\\shell\$O \$(OX)\\sqlite3\$O \$(OX)\\th\$O \$(OX)\\th_lang\$O " writeln { APPNAME = $(OX)\fossil$(E) all: $(OX) $(APPNAME) $(APPNAME) : translate$E mkindex$E headers $(OBJ) $(OX)\linkopts cd $(OX) link -LINK -OUT:$@ $(LIBDIR) @linkopts $(OX)\linkopts: $B\win\Makefile.msc} writeln -nonewline "\techo " foreach s [lsort $src] { writeln -nonewline "$s " } writeln "sqlite3 th th_lang > \$@" writeln "\techo \$(LIBS) >> \$@\n\n" writeln { $(OX): @-mkdir $@ translate$E: $(SRCDIR)\translate.c $(BCC) $** makeheaders$E: $(SRCDIR)\makeheaders.c $(BCC) $** mkindex$E: $(SRCDIR)\mkindex.c $(BCC) $** version$E: $B\win\version.c $(BCC) $** $(OX)\shell$O : $(SRCDIR)\shell.c $(TCC) /Fo$@ /Dmain=sqlite3_shell $(SQLITE_OPTIONS) -c shell_.c $(OX)\sqlite3$O : $(SRCDIR)\sqlite3.c $(TCC) /Fo$@ -c $(SQLITE_OPTIONS) $** $(OX)\th$O : $(SRCDIR)\th.c $(TCC) /Fo$@ -c $** $(OX)\th_lang$O : $(SRCDIR)\th_lang.c $(TCC) /Fo$@ -c $** VERSION.h : version$E $B\manifest.uuid $B\manifest $** > $@ page_index.h: mkindex$E $(SRC) $** > $@ clean: -del $(OX)\*.obj -del *.obj *_.c *.h *.map -del headers linkopts realclean: -del $(APPNAME) translate$E mkindex$E makeheaders$E version$E } foreach s [lsort $src] { writeln "\$(OX)\\$s\$O : ${s}_.c ${s}.h" writeln "\t\$(TCC) /Fo\$@ -c ${s}_.c\n" writeln "${s}_.c : \$(SRCDIR)\\$s.c" writeln "\ttranslate\$E \$** > \$@\n" } writeln -nonewline "headers: makeheaders\$E page_index.h VERSION.h\n\tmakeheaders\$E " foreach s [lsort $src] { writeln -nonewline "${s}_.c:$s.h " } writeln "\$(SRCDIR)\\sqlite3.h \$(SRCDIR)\\th.h VERSION.h" writeln "\t@copy /Y nul: headers" close $output_file # # End of the win/Makefile.msc output ############################################################################## ############################################################################## ############################################################################## # Begin win/Makefile.PellesCGMake # puts "building ../win/Makefile.PellesCGMake" set output_file [open ../win/Makefile.PellesCGMake w] fconfigure $output_file -translation binary writeln {# DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh src/makemake.tcl" # to regenerate this file. # # HowTo # ----- # # This is a Makefile to compile fossil with PellesC from # http://www.smorgasbordet.com/pellesc/index.htm # In addition to the Compiler envrionment, you need # gmake from http://sourceforge.net/projects/unxutils/, Pelles make version # couldn't handle the complex dependencies in this build # zlib sources # Then you do # 1. create a directory PellesC in the project root directory # 2. Change the variables PellesCDir/ZLIBSRCDIR to the path of your installation # 3. open a dos prompt window and change working directory into PellesC (step 1) # 4. 
run gmake -f ..\win\Makefile.PellesCGMake # # this file is tested with # PellesC 5.00.13 # gmake 3.80 # zlib sources 1.2.5 # Windows XP SP 2 # and # PellesC 6.00.4 # gmake 3.80 # zlib sources 1.2.5 # Windows 7 Home Premium # # PellesCDir=c:\Programme\PellesC # Select between 32/64 bit code, default is 32 bit #TARGETVERSION=64 ifeq ($(TARGETVERSION),64) # 64 bit version TARGETMACHINE_CC=amd64 TARGETMACHINE_LN=amd64 TARGETEXTEND=64 else # 32 bit version TARGETMACHINE_CC=x86 TARGETMACHINE_LN=ix86 TARGETEXTEND= endif # define the project directories B=.. SRCDIR=$(B)/src/ WINDIR=$(B)/win/ ZLIBSRCDIR=../../zlib/ # define linker command and options LINK=$(PellesCDir)/bin/polink.exe LINKFLAGS=-subsystem:console -machine:$(TARGETMACHINE_LN) /LIBPATH:$(PellesCDir)\lib\win$(TARGETEXTEND) /LIBPATH:$(PellesCDir)\lib kernel32.lib advapi32.lib delayimp$(TARGETEXTEND).lib Wsock32.lib Crtmt$(TARGETEXTEND).lib # define standard C-compiler and flags, used to compile # the fossil binary. Some special definitions follow for # special files follow CC=$(PellesCDir)\bin\pocc.exe DEFINES=-D_pgmptr=g.argv[0] CCFLAGS=-T$(TARGETMACHINE_CC)-coff -Ot -W2 -Gd -Go -Ze -MT $(DEFINES) INCLUDE=/I $(PellesCDir)\Include\Win /I $(PellesCDir)\Include /I $(ZLIBSRCDIR) /I $(SRCDIR) # define commands for building the windows resource files RESOURCE=fossil.res RC=$(PellesCDir)\bin\porc.exe RCFLAGS=$(INCLUDE) -D__POCC__=1 -D_M_X$(TARGETVERSION) # define the special utilities files, needed to generate # the automatically generated source files UTILS=translate.exe mkindex.exe makeheaders.exe UTILS_OBJ=$(UTILS:.exe=.obj) UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c)) # define the sqlite files, which need special flags on compile SQLITESRC=sqlite3.c ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf)) SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj)) SQLITEDEFINES=-DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -Dlocaltime=fossil_localtime -DSQLITE_ENABLE_LOCKING_STYLE=0 # define the sqlite shell files, which need special flags on compile SQLITESHELLSRC=shell.c ORIGSQLITESHELLSRC=$(foreach sf,$(SQLITESHELLSRC),$(SRCDIR)$(sf)) SQLITESHELLOBJ=$(foreach sf,$(SQLITESHELLSRC),$(sf:.c=.obj)) SQLITESHELLDEFINES=-Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 # define the th scripting files, which need special flags on compile THSRC=th.c th_lang.c ORIGTHSRC=$(foreach sf,$(THSRC),$(SRCDIR)$(sf)) THOBJ=$(foreach sf,$(THSRC),$(sf:.c=.obj)) # define the zlib files, needed by this compile ZLIBSRC=adler32.c compress.c crc32.c deflate.c gzclose.c gzlib.c gzread.c gzwrite.c infback.c inffast.c inflate.c inftrees.c trees.c uncompr.c zutil.c ORIGZLIBSRC=$(foreach sf,$(ZLIBSRC),$(ZLIBSRCDIR)$(sf)) ZLIBOBJ=$(foreach sf,$(ZLIBSRC),$(sf:.c=.obj)) # define all fossil sources, using the standard compile and # source generation. 
These are all files in SRCDIR, which are not # mentioned as special files above: ORIGSRC=$(filter-out $(UTILS_SRC) $(ORIGTHSRC) $(ORIGSQLITESRC) $(ORIGSQLITESHELLSRC),$(wildcard $(SRCDIR)*.c)) SRC=$(subst $(SRCDIR),,$(ORIGSRC)) TRANSLATEDSRC=$(SRC:.c=_.c) TRANSLATEDOBJ=$(TRANSLATEDSRC:.c=.obj) # main target file is the application APPLICATION=fossil.exe # define the standard make target .PHONY: default default: page_index.h headers $(APPLICATION) # symbolic target to generate the source generate utils .PHONY: utils utils: $(UTILS) # link utils $(UTILS) version.exe: %.exe: %.obj $(LINK) $(LINKFLAGS) -out:"$@" $< # compiling standard fossil utils $(UTILS_OBJ): %.obj: $(SRCDIR)%.c $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" # compile special windows utils version.obj: $(WINDIR)version.c $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" # generate the translated c-source files $(TRANSLATEDSRC): %_.c: $(SRCDIR)%.c translate.exe translate.exe $< >$@ # generate the index source, containing all web references,.. page_index.h: $(TRANSLATEDSRC) mkindex.exe mkindex.exe $(TRANSLATEDSRC) >$@ # extracting version info from manifest VERSION.h: version.exe ..\manifest.uuid ..\manifest version.exe ..\manifest.uuid ..\manifest > $@ # generate the simplified headers headers: makeheaders.exe page_index.h VERSION.h ../src/sqlite3.h ../src/th.h VERSION.h makeheaders.exe $(foreach ts,$(TRANSLATEDSRC),$(ts):$(ts:_.c=.h)) ../src/sqlite3.h ../src/th.h VERSION.h echo Done >$@ # compile C sources with relevant options $(TRANSLATEDOBJ): %_.obj: %_.c %.h $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" $(SQLITEOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)%.h $(CC) $(CCFLAGS) $(SQLITEDEFINES) $(INCLUDE) "$<" -Fo"$@" $(SQLITESHELLOBJ): %.obj: $(SRCDIR)%.c $(CC) $(CCFLAGS) $(SQLITESHELLDEFINES) $(INCLUDE) "$<" -Fo"$@" $(THOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)th.h $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" $(ZLIBOBJ): %.obj: $(ZLIBSRCDIR)%.c $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" # create the windows resource with icon and version info $(RESOURCE): %.res: ../win/%.rc ../win/*.ico $(RC) $(RCFLAGS) $< -Fo"$@" # link the application $(APPLICATION): $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) headers $(RESOURCE) $(LINK) $(LINKFLAGS) -out:"$@" $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) $(RESOURCE) # cleanup .PHONY: clean clean: del /F $(TRANSLATEDOBJ) $(SQLITEOBJ) $(THOBJ) $(ZLIBOBJ) $(UTILS_OBJ) version.obj del /F $(TRANSLATEDSRC) del /F *.h headers del /F $(RESOURCE) .PHONY: clobber clobber: clean del /F *.exe } |
Changes to src/manifest.c.
︙ | ︙
72 73 74 75 76 77 78 79 80 81 82 83 84 85 | int nFile; /* Number of F cards */ int nFileAlloc; /* Slots allocated in aFile[] */ int iFile; /* Index of current file in iterator */ ManifestFile *aFile; /* One entry for each F-card */ int nParent; /* Number of parents. */ int nParentAlloc; /* Slots allocated in azParent[] */ char **azParent; /* UUIDs of parents. One for each P card argument */ int nCChild; /* Number of cluster children */ int nCChildAlloc; /* Number of closts allocated in azCChild[] */ char **azCChild; /* UUIDs of referenced objects in a cluster. M cards */ int nTag; /* Number of T Cards */ int nTagAlloc; /* Slots allocated in aTag[] */ struct { char *zName; /* Name of the tag */ | > > > > > | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | int nFile; /* Number of F cards */ int nFileAlloc; /* Slots allocated in aFile[] */ int iFile; /* Index of current file in iterator */ ManifestFile *aFile; /* One entry for each F-card */ int nParent; /* Number of parents. */ int nParentAlloc; /* Slots allocated in azParent[] */ char **azParent; /* UUIDs of parents. One for each P card argument */ int nCherrypick; /* Number of entries in aCherrypick[] */ struct { char *zCPTarget; /* UUID of cherry-picked version w/ +|- prefix */ char *zCPBase; /* UUID of cherry-pick baseline. NULL for singletons */ } *aCherrypick; int nCChild; /* Number of cluster children */ int nCChildAlloc; /* Number of closts allocated in azCChild[] */ char **azCChild; /* UUIDs of referenced objects in a cluster. M cards */ int nTag; /* Number of T Cards */ int nTagAlloc; /* Slots allocated in aTag[] */ struct { char *zName; /* Name of the tag */ |
︙ | ︙
102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | #define MX_MANIFEST_CACHE 6 static struct { int nxAge; int aAge[MX_MANIFEST_CACHE]; Manifest *apManifest[MX_MANIFEST_CACHE]; } manifestCache; /* ** Clear the memory allocated in a manifest object */ void manifest_destroy(Manifest *p){ if( p ){ blob_reset(&p->content); free(p->aFile); free(p->azParent); free(p->azCChild); free(p->aTag); free(p->aField); if( p->pBaseline ) manifest_destroy(p->pBaseline); fossil_free(p); } } /* ** Add an element to the manifest cache using LRU replacement. */ | > > > > > > > | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 | #define MX_MANIFEST_CACHE 6 static struct { int nxAge; int aAge[MX_MANIFEST_CACHE]; Manifest *apManifest[MX_MANIFEST_CACHE]; } manifestCache; /* ** True if manifest_crosslink_begin() has been called but ** manifest_crosslink_end() is still pending. */ static int manifest_crosslink_busy = 0; /* ** Clear the memory allocated in a manifest object */ void manifest_destroy(Manifest *p){ if( p ){ blob_reset(&p->content); free(p->aFile); free(p->azParent); free(p->azCChild); free(p->aTag); free(p->aField); free(p->aCherrypick); if( p->pBaseline ) manifest_destroy(p->pBaseline); memset(p, 0, sizeof(*p)); fossil_free(p); } } /* ** Add an element to the manifest cache using LRU replacement. */ |
︙ | ︙
504 505 506 507 508 509 510 | p->nFileAlloc*sizeof(p->aFile[0]) ); } i = p->nFile++; p->aFile[i].zName = zName; p->aFile[i].zUuid = zUuid; p->aFile[i].zPerm = zPerm; p->aFile[i].zPrior = zPriorName; | | | 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 | p->nFileAlloc*sizeof(p->aFile[0]) ); } i = p->nFile++; p->aFile[i].zName = zName; p->aFile[i].zUuid = zUuid; p->aFile[i].zPerm = zPerm; p->aFile[i].zPrior = zPriorName; if( i>0 && fossil_strcmp(p->aFile[i-1].zName, zName)>=0 ){ goto manifest_syntax_error; } break; } /* ** J <name> ?<value>? |
︙ | ︙
533 534 535 536 537 538 539 | p->nFieldAlloc = p->nFieldAlloc*2 + 10; p->aField = fossil_realloc(p->aField, p->nFieldAlloc*sizeof(p->aField[0]) ); } i = p->nField++; p->aField[i].zName = zName; p->aField[i].zValue = zValue; | | | 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 | p->nFieldAlloc = p->nFieldAlloc*2 + 10; p->aField = fossil_realloc(p->aField, p->nFieldAlloc*sizeof(p->aField[0]) ); } i = p->nField++; p->aField[i].zName = zName; p->aField[i].zValue = zValue; if( i>0 && fossil_strcmp(p->aField[i-1].zName, zName)>=0 ){ goto manifest_syntax_error; } break; } /* |
︙ | ︙
589 590 591 592 593 594 595 | if( p->nCChild>=p->nCChildAlloc ){ p->nCChildAlloc = p->nCChildAlloc*2 + 10; p->azCChild = fossil_realloc(p->azCChild , p->nCChildAlloc*sizeof(p->azCChild[0]) ); } i = p->nCChild++; p->azCChild[i] = zUuid; | | | 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 | if( p->nCChild>=p->nCChildAlloc ){ p->nCChildAlloc = p->nCChildAlloc*2 + 10; p->azCChild = fossil_realloc(p->azCChild , p->nCChildAlloc*sizeof(p->azCChild[0]) ); } i = p->nCChild++; p->azCChild[i] = zUuid; if( i>0 && fossil_strcmp(p->azCChild[i-1], zUuid)>=0 ){ goto manifest_syntax_error; } break; } /* ** P <uuid> ... |
︙ | ︙
616 617 618 619 620 621 622 623 624 625 626 627 628 629 | p->nParentAlloc*sizeof(char*)); } i = p->nParent++; p->azParent[i] = zUuid; } break; } /* ** R <md5sum> ** ** Specify the MD5 checksum over the name and content of all files ** in the manifest. */ | > > > > > > > > > > > > > > > > > > > > > > > > | 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 | p->nParentAlloc*sizeof(char*)); } i = p->nParent++; p->azParent[i] = zUuid; } break; } /* ** Q (+|-)<uuid> ?<uuid>? ** ** Specify one or a range of checkins that are cherrypicked into ** this checkin ("+") or backed out of this checkin ("-"). */ case 'Q': { if( (zUuid = next_token(&x, &sz))==0 ) goto manifest_syntax_error; if( sz!=UUID_SIZE+1 ) goto manifest_syntax_error; if( zUuid[0]!='+' && zUuid[0]!='-' ) goto manifest_syntax_error; if( !validate16(&zUuid[1], UUID_SIZE) ) goto manifest_syntax_error; n = p->nCherrypick; p->nCherrypick++; p->aCherrypick = fossil_realloc(p->aCherrypick, p->nCherrypick*sizeof(p->aCherrypick[0])); p->aCherrypick[n].zCPTarget = zUuid; p->aCherrypick[n].zCPBase = zUuid = next_token(&x, &sz); if( zUuid ){ if( sz!=UUID_SIZE ) goto manifest_syntax_error; if( !validate16(zUuid, UUID_SIZE) ) goto manifest_syntax_error; } break; } /* ** R <md5sum> ** ** Specify the MD5 checksum over the name and content of all files ** in the manifest. */ |
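The hunk above adds parsing for a new "Q" card, which records cherrypick merges ("+") and backouts ("-") directly in a check-in manifest; the parsed values land in the nCherrypick/aCherrypick fields added to the Manifest structure earlier in this file. As a minimal sketch of how later code could walk those records (print_cherrypicks is a hypothetical helper, not part of this check-in, and assumes it sits inside manifest.c where the Manifest type and stdio are already visible):

    static void print_cherrypicks(Manifest *p){
      int i;
      for(i=0; i<p->nCherrypick; i++){
        const char *zTarget = p->aCherrypick[i].zCPTarget;  /* "+UUID" or "-UUID" */
        const char *zBase   = p->aCherrypick[i].zCPBase;    /* NULL unless a baseline was given */
        printf("%s %s", zTarget[0]=='+' ? "cherrypick" : "backout", &zTarget[1]);
        if( zBase ) printf(" (relative to %s)", zBase);
        printf("\n");
      }
    }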
︙ | ︙
677 678 679 680 681 682 683 | p->nTagAlloc = p->nTagAlloc*2 + 10; p->aTag = fossil_realloc(p->aTag, p->nTagAlloc*sizeof(p->aTag[0]) ); } i = p->nTag++; p->aTag[i].zName = zName; p->aTag[i].zUuid = zUuid; p->aTag[i].zValue = zValue; | | | 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 | p->nTagAlloc = p->nTagAlloc*2 + 10; p->aTag = fossil_realloc(p->aTag, p->nTagAlloc*sizeof(p->aTag[0]) ); } i = p->nTag++; p->aTag[i].zName = zName; p->aTag[i].zUuid = zUuid; p->aTag[i].zValue = zValue; if( i>0 && fossil_strcmp(p->aTag[i-1].zName, zName)>=0 ){ goto manifest_syntax_error; } break; } /* ** U ?<login>? |
︙ | ︙
833 834 835 836 837 838 839 | if( p->zTicketUuid ) goto manifest_syntax_error; if( p->zWikiTitle ) goto manifest_syntax_error; if( !seenZ ) goto manifest_syntax_error; p->type = CFTYPE_ATTACHMENT; }else{ if( p->nCChild>0 ) goto manifest_syntax_error; if( p->rDate<=0.0 ) goto manifest_syntax_error; | < | 869 870 871 872 873 874 875 876 877 878 879 880 881 882 | if( p->zTicketUuid ) goto manifest_syntax_error; if( p->zWikiTitle ) goto manifest_syntax_error; if( !seenZ ) goto manifest_syntax_error; p->type = CFTYPE_ATTACHMENT; }else{ if( p->nCChild>0 ) goto manifest_syntax_error; if( p->rDate<=0.0 ) goto manifest_syntax_error; if( p->nField>0 ) goto manifest_syntax_error; if( p->zTicketUuid ) goto manifest_syntax_error; if( p->zWikiTitle ) goto manifest_syntax_error; if( p->zTicketUuid ) goto manifest_syntax_error; p->type = CFTYPE_MANIFEST; } md5sum_init(); |
︙ | ︙
857 858 859 860 861 862 863 864 865 866 867 868 869 870 | /* ** Get a manifest given the rid for the control artifact. Return ** a pointer to the manifest on success or NULL if there is a failure. */ Manifest *manifest_get(int rid, int cfType){ Blob content; Manifest *p; p = manifest_cache_find(rid); if( p ){ if( cfType!=CFTYPE_ANY && cfType!=p->type ){ manifest_cache_insert(p); p = 0; } return p; | > | 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 | /* ** Get a manifest given the rid for the control artifact. Return ** a pointer to the manifest on success or NULL if there is a failure. */ Manifest *manifest_get(int rid, int cfType){ Blob content; Manifest *p; if( !rid ) return 0; p = manifest_cache_find(rid); if( p ){ if( cfType!=CFTYPE_ANY && cfType!=p->type ){ manifest_cache_insert(p); p = 0; } return p; |
︙ | ︙
925 926 927 928 929 930 931 | } /* ** Fetch the baseline associated with the delta-manifest p. ** Return 0 on success. If unable to parse the baseline, ** throw an error. If the baseline is a manifest, throw an ** error if throwError is true, or record that p is an orphan | | | < < < < < < < < | | | 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 | } /* ** Fetch the baseline associated with the delta-manifest p. ** Return 0 on success. If unable to parse the baseline, ** throw an error. If the baseline is a manifest, throw an ** error if throwError is true, or record that p is an orphan ** and return 1 if throwError is false. */ static int fetch_baseline(Manifest *p, int throwError){ if( p->zBaseline!=0 && p->pBaseline==0 ){ int rid = uuid_to_rid(p->zBaseline, 1); p->pBaseline = manifest_get(rid, CFTYPE_MANIFEST); if( p->pBaseline==0 ){ if( !throwError ){ db_multi_exec( "INSERT OR IGNORE INTO orphan(rid, baseline) VALUES(%d,%d)", p->rid, rid ); return 1; } fossil_fatal("cannot access baseline manifest %S", p->zBaseline); } } return 0; |
︙ | ︙
997 998 999 1000 1001 1002 1003 | if( p->iFile<p->nFile ) pOut = &p->aFile[p->iFile++]; break; }else if( p->iFile>=p->nFile ){ /* We have used all entries from the delta. Return the next ** entry from the baseline. */ if( pB->iFile<pB->nFile ) pOut = &pB->aFile[pB->iFile++]; break; | | | 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 | if( p->iFile<p->nFile ) pOut = &p->aFile[p->iFile++]; break; }else if( p->iFile>=p->nFile ){ /* We have used all entries from the delta. Return the next ** entry from the baseline. */ if( pB->iFile<pB->nFile ) pOut = &pB->aFile[pB->iFile++]; break; }else if( (cmp = fossil_strcmp(pB->aFile[pB->iFile].zName, p->aFile[p->iFile].zName)) < 0 ){ /* The next baseline entry comes before the next delta entry. ** So return the baseline entry. */ pOut = &pB->aFile[pB->iFile++]; break; }else if( cmp>0 ){ /* The next delta entry comes before the next baseline |
︙ | ︙
1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 | db_static_prepare(&s1, "INSERT INTO filename(name) VALUES(:fn)"); db_bind_text(&s1, ":fn", zFilename); db_exec(&s1); fnid = db_last_insert_rowid(); } return fnid; } /* ** Add a single entry to the mlink table. Also add the filename to ** the filename table if it is not there already. */ static void add_one_mlink( int mid, /* The record ID of the manifest */ const char *zFromUuid, /* UUID for the mlink.pid. "" to add file */ const char *zToUuid, /* UUID for the mlink.fid. "" to delele */ const char *zFilename, /* Filename */ | > > > > > > > > > > > > | > > > | | > | 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 | db_static_prepare(&s1, "INSERT INTO filename(name) VALUES(:fn)"); db_bind_text(&s1, ":fn", zFilename); db_exec(&s1); fnid = db_last_insert_rowid(); } return fnid; } /* ** Compute an appropriate mlink.mperm integer for the permission string ** of a file. */ int manifest_file_mperm(ManifestFile *pFile){ int mperm = 0; if( pFile && pFile->zPerm && strstr(pFile->zPerm,"x")!=0 ){ mperm = 1; } return mperm; } /* ** Add a single entry to the mlink table. Also add the filename to ** the filename table if it is not there already. */ static void add_one_mlink( int mid, /* The record ID of the manifest */ const char *zFromUuid, /* UUID for the mlink.pid. "" to add file */ const char *zToUuid, /* UUID for the mlink.fid. "" to delele */ const char *zFilename, /* Filename */ const char *zPrior, /* Previous filename. NULL if unchanged */ int isPublic, /* True if mid is not a private manifest */ int mperm /* 1: exec */ ){ int fnid, pfnid, pid, fid; static Stmt s1; fnid = filename_to_fnid(zFilename); if( zPrior==0 ){ pfnid = 0; }else{ pfnid = filename_to_fnid(zPrior); } if( zFromUuid==0 || zFromUuid[0]==0 ){ pid = 0; }else{ pid = uuid_to_rid(zFromUuid, 1); } if( zToUuid==0 || zToUuid[0]==0 ){ fid = 0; }else{ fid = uuid_to_rid(zToUuid, 1); if( isPublic ) content_make_public(fid); } db_static_prepare(&s1, "INSERT INTO mlink(mid,pid,fid,fnid,pfnid,mperm)" "VALUES(:m,:p,:f,:n,:pfn,:mp)" ); db_bind_int(&s1, ":m", mid); db_bind_int(&s1, ":p", pid); db_bind_int(&s1, ":f", fid); db_bind_int(&s1, ":n", fnid); db_bind_int(&s1, ":pfn", pfnid); db_bind_int(&s1, ":mp", mperm); db_exec(&s1); if( pid && fid ){ content_deltify(pid, fid, 0); } } /* |
︙ | ︙
1111 1112 1113 1114 1115 1116 1117 | static ManifestFile *manifest_file_seek_base(Manifest *p, const char *zName){ int lwr, upr; int c; int i; lwr = 0; upr = p->nFile - 1; if( p->iFile>=lwr && p->iFile<upr ){ | | | | 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 | static ManifestFile *manifest_file_seek_base(Manifest *p, const char *zName){ int lwr, upr; int c; int i; lwr = 0; upr = p->nFile - 1; if( p->iFile>=lwr && p->iFile<upr ){ c = fossil_strcmp(p->aFile[p->iFile+1].zName, zName); if( c==0 ){ return &p->aFile[++p->iFile]; }else if( c>0 ){ upr = p->iFile; }else{ lwr = p->iFile+1; } } while( lwr<=upr ){ i = (lwr+upr)/2; c = fossil_strcmp(p->aFile[i].zName, zName); if( c<0 ){ lwr = i+1; }else if( c>0 ){ upr = i-1; }else{ p->iFile = i; return &p->aFile[i]; |
︙ | ︙
1158 1159 1160 1161 1162 1163 1164 | fetch_baseline(p, 1); pFile = manifest_file_seek_base(p->pBaseline, zName); } return pFile; } /* | < < < < < < < < < < < < < < | | > | | > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > | > > > > > > | > | > > | > > > | > > > > > > > > > > > > > < < < < < < > | > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 | fetch_baseline(p, 1); pFile = manifest_file_seek_base(p->pBaseline, zName); } return pFile; } /* ** Add mlink table entries associated with manifest cid, pChild. The ** parent manifest is pid, pParent. One of either pChild or pParent ** will be NULL and it will be computed based on cid/pid. ** ** A single mlink entry is added for every file that changed content, ** name, and/or permissions going from pid to cid. ** ** Deleted files have mlink.fid=0. ** Added files have mlink.pid=0. ** Edited files have both mlink.pid!=0 and mlink.fid!=0 */ static void add_mlink(int pid, Manifest *pParent, int cid, Manifest *pChild){ Blob otherContent; int otherRid; int i, rc; ManifestFile *pChildFile, *pParentFile; Manifest **ppOther; static Stmt eq; int isPublic; /* True if pChild is non-private */ /* If mlink table entires are already set for cid, then abort early ** doing no work. */ db_static_prepare(&eq, "SELECT 1 FROM mlink WHERE mid=:mid"); db_bind_int(&eq, ":mid", cid); rc = db_step(&eq); db_reset(&eq); if( rc==SQLITE_ROW ) return; /* Compute the value of the missing pParent or pChild parameter. ** Fetch the baseline checkins for both. */ assert( pParent==0 || pChild==0 ); if( pParent==0 ){ ppOther = &pParent; otherRid = pid; }else{ ppOther = &pChild; otherRid = cid; } if( (*ppOther = manifest_cache_find(otherRid))==0 ){ content_get(otherRid, &otherContent); if( blob_size(&otherContent)==0 ) return; *ppOther = manifest_parse(&otherContent, otherRid); if( *ppOther==0 ) return; } if( fetch_baseline(pParent, 0) || fetch_baseline(pChild, 0) ){ manifest_destroy(*ppOther); return; } isPublic = !content_is_private(cid); /* Try to make the parent manifest a delta from the child, if that ** is an appropriate thing to do. For a new baseline, make the ** previoius baseline a delta from the current baseline. 
*/ if( (pParent->zBaseline==0)==(pChild->zBaseline==0) ){ content_deltify(pid, cid, 0); }else if( pChild->zBaseline==0 && pParent->zBaseline!=0 ){ content_deltify(pParent->pBaseline->rid, cid, 0); } /* Remember all children less than a few seconds younger than their parent, ** as we might want to fudge the times for those children. */ if( pChild->rDate<pParent->rDate+AGE_FUDGE_WINDOW && manifest_crosslink_busy ){ db_multi_exec( "INSERT OR REPLACE INTO time_fudge VALUES(%d, %.17g, %d, %.17g);", pParent->rid, pParent->rDate, pChild->rid, pChild->rDate ); } /* First look at all files in pChild, ignoring its baseline. This ** is where most of the changes will be found. */ for(i=0, pChildFile=pChild->aFile; i<pChild->nFile; i++, pChildFile++){ int mperm = manifest_file_mperm(pChildFile); if( pChildFile->zPrior ){ pParentFile = manifest_file_seek(pParent, pChildFile->zPrior); if( pParentFile ){ /* File with name change */ add_one_mlink(cid, pParentFile->zUuid, pChildFile->zUuid, pChildFile->zName, pChildFile->zPrior, isPublic, mperm); }else{ /* File name changed, but the old name is not found in the parent! ** Treat this like a new file. */ add_one_mlink(cid, 0, pChildFile->zUuid, pChildFile->zName, 0, isPublic, mperm); } }else{ pParentFile = manifest_file_seek(pParent, pChildFile->zName); if( pParentFile==0 ){ if( pChildFile->zUuid ){ /* A new file */ add_one_mlink(cid, 0, pChildFile->zUuid, pChildFile->zName, 0, isPublic, mperm); } }else if( fossil_strcmp(pChildFile->zUuid, pParentFile->zUuid)!=0 || manifest_file_mperm(pParentFile)!=mperm ){ /* Changes in file content or permissions */ add_one_mlink(cid, pParentFile->zUuid, pChildFile->zUuid, pChildFile->zName, 0, isPublic, mperm); } } } if( pParent->zBaseline && pChild->zBaseline ){ /* Both parent and child are delta manifests. Look for files that ** are marked as deleted in the parent but which reappear in the child ** and show such files as being added in the child. */ for(i=0, pParentFile=pParent->aFile; i<pParent->nFile; i++, pParentFile++){ if( pParentFile->zUuid ) continue; pChildFile = manifest_file_seek(pChild, pParentFile->zName); if( pChildFile ){ add_one_mlink(cid, 0, pChildFile->zUuid, pChildFile->zName, 0, isPublic, manifest_file_mperm(pChildFile)); } } }else if( pChild->zBaseline==0 ){ /* Parent is a delta but pChild is a baseline. Look for files that are ** present in pParent but which are missing from pChild and mark them ** has having been deleted. */ manifest_file_rewind(pParent); while( (pParentFile = manifest_file_next(pParent,0))!=0 ){ pChildFile = manifest_file_seek(pChild, pParentFile->zName); if( pChildFile==0 && pParentFile->zUuid!=0 ){ add_one_mlink(cid, pParentFile->zUuid, 0, pParentFile->zName, 0, isPublic, 0); } } } manifest_cache_insert(*ppOther); } /* ** Setup to do multiple manifest_crosslink() calls. ** This is only required if processing ticket changes. */ void manifest_crosslink_begin(void){ assert( manifest_crosslink_busy==0 ); manifest_crosslink_busy = 1; db_begin_transaction(); db_multi_exec( "CREATE TEMP TABLE pending_tkt(uuid TEXT UNIQUE);" "CREATE TEMP TABLE time_fudge(" " mid INTEGER PRIMARY KEY," /* The rid of a manifest */ " m1 REAL," /* The timestamp on mid */ " cid INTEGER," /* A child or mid */ " m2 REAL" /* Timestamp on the child */ ");" ); } #if INTERFACE /* Timestamps might be adjusted slightly to ensure that checkins appear ** on the timeline in chronological order. This is the maximum amount ** of the adjustment window, in days. 
*/ #define AGE_FUDGE_WINDOW (2.0/86400.0) /* 2 seconds */ /* This is increment (in days) by which timestamps are adjusted for ** use on the timeline. */ #define AGE_ADJUST_INCREMENT (25.0/86400000.0) /* 25 milliseconds */ #endif /* LOCAL_INTERFACE */ /* ** Finish up a sequence of manifest_crosslink calls. */ void manifest_crosslink_end(void){ Stmt q, u; int i; assert( manifest_crosslink_busy==1 ); db_prepare(&q, "SELECT uuid FROM pending_tkt"); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); ticket_rebuild_entry(zUuid); } db_finalize(&q); db_multi_exec("DROP TABLE pending_tkt"); /* If multiple check-ins happen close together in time, adjust their ** times by a few milliseconds to make sure they appear in chronological ** order. */ db_prepare(&q, "UPDATE time_fudge SET m1=m2-:incr WHERE m1>=m2 AND m1<m2+:window" ); db_bind_double(&q, ":incr", AGE_ADJUST_INCREMENT); db_bind_double(&q, ":window", AGE_FUDGE_WINDOW); db_prepare(&u, "UPDATE time_fudge SET m2=" "(SELECT x.m1 FROM time_fudge AS x WHERE x.mid=time_fudge.cid)" ); for(i=0; i<30; i++){ db_step(&q); db_reset(&q); if( sqlite3_changes(g.db)==0 ) break; db_step(&u); db_reset(&u); } db_finalize(&q); db_finalize(&u); db_multi_exec( "UPDATE event SET mtime=(SELECT m1 FROM time_fudge WHERE mid=objid)" " WHERE objid IN (SELECT mid FROM time_fudge);" "DROP TABLE time_fudge;" ); db_end_transaction(0); manifest_crosslink_busy = 0; } /* ** Make an entry in the event table for a ticket change artifact. */ |
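Converted out of Julian-day fractions, the two constants above amount to a 2-second window and a 25-millisecond increment. The actual fix-up in manifest_crosslink_end() is done in SQL over the temporary time_fudge table; the loop below is only an illustrative restatement of that logic on a toy in-memory copy of the table, not code from this check-in:

    typedef struct { int mid; double m1; int cid; double m2; } Fudge;  /* one time_fudge row */

    static void fudge_times(Fudge *a, int n){
      const double window = 2.0/86400.0;      /* AGE_FUDGE_WINDOW: 2 seconds */
      const double incr   = 25.0/86400000.0;  /* AGE_ADJUST_INCREMENT: 25 ms */
      int pass, i, j, changed;
      for(pass=0; pass<30; pass++){
        changed = 0;
        for(i=0; i<n; i++){
          /* A parent stamped at or after its child, but within the 2-second
          ** window, is pulled back to 25 ms before the child. */
          if( a[i].m1>=a[i].m2 && a[i].m1<a[i].m2+window ){
            a[i].m1 = a[i].m2 - incr;
            changed = 1;
          }
        }
        if( !changed ) break;
        /* Propagate: refresh each row's child time from that child's own row. */
        for(i=0; i<n; i++){
          for(j=0; j<n; j++){
            if( a[j].mid==a[i].cid ){ a[i].m2 = a[j].m1; break; }
          }
        }
      }
    }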
︙ | ︙
1316 1317 1318 1319 1320 1321 1322 | } zTitle = db_text("unknown", "SELECT %s FROM ticket WHERE tkt_uuid='%s'", zTitleExpr, pManifest->zTicketUuid ); if( !isNew ){ for(i=0; i<pManifest->nField; i++){ | | | 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 | } zTitle = db_text("unknown", "SELECT %s FROM ticket WHERE tkt_uuid='%s'", zTitleExpr, pManifest->zTicketUuid ); if( !isNew ){ for(i=0; i<pManifest->nField; i++){ if( fossil_strcmp(pManifest->aField[i].zName, zStatusColumn)==0 ){ zNewStatus = pManifest->aField[i].zValue; } } if( zNewStatus ){ blob_appendf(&comment, "%h ticket [%.10s]: <i>%s</i>", zNewStatus, pManifest->zTicketUuid, zTitle ); |
︙ | ︙
1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 | ** ** If the input is a control artifact, then make appropriate entries ** in the auxiliary tables of the database in order to crosslink the ** artifact. ** ** If global variable g.xlinkClusterOnly is true, then ignore all ** control artifacts other than clusters. ** ** Historical note: This routine original processed manifests only. ** Processing for other control artifacts was added later. The name ** of the routine, "manifest_crosslink", and the name of this source ** file, is a legacy of its original use. */ int manifest_crosslink(int rid, Blob *pContent){ int i; Manifest *p; Stmt q; int parentid = 0; if( (p = manifest_cache_find(rid))!=0 ){ blob_reset(pContent); }else if( (p = manifest_parse(pContent, rid))==0 ){ return 0; } if( g.xlinkClusterOnly && p->type!=CFTYPE_CLUSTER ){ manifest_destroy(p); return 0; } if( p->type==CFTYPE_MANIFEST && fetch_baseline(p, 0) ){ manifest_destroy(p); return 0; } db_begin_transaction(); if( p->type==CFTYPE_MANIFEST ){ if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d", rid) ){ char *zCom; for(i=0; i<p->nParent; i++){ | > > > > > | 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 | ** ** If the input is a control artifact, then make appropriate entries ** in the auxiliary tables of the database in order to crosslink the ** artifact. ** ** If global variable g.xlinkClusterOnly is true, then ignore all ** control artifacts other than clusters. ** ** This routine always resets the pContent blob before returning. ** ** Historical note: This routine original processed manifests only. ** Processing for other control artifacts was added later. The name ** of the routine, "manifest_crosslink", and the name of this source ** file, is a legacy of its original use. */ int manifest_crosslink(int rid, Blob *pContent){ int i; Manifest *p; Stmt q; int parentid = 0; if( (p = manifest_cache_find(rid))!=0 ){ blob_reset(pContent); }else if( (p = manifest_parse(pContent, rid))==0 ){ assert( blob_is_reset(pContent) || pContent==0 ); return 0; } if( g.xlinkClusterOnly && p->type!=CFTYPE_CLUSTER ){ manifest_destroy(p); assert( blob_is_reset(pContent) ); return 0; } if( p->type==CFTYPE_MANIFEST && fetch_baseline(p, 0) ){ manifest_destroy(p); assert( blob_is_reset(pContent) ); return 0; } db_begin_transaction(); if( p->type==CFTYPE_MANIFEST ){ if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d", rid) ){ char *zCom; for(i=0; i<p->nParent; i++){ |
︙ | ︙
1425 1426 1427 1428 1429 1430 1431 1432 1433 | } db_prepare(&q, "SELECT cid FROM plink WHERE pid=%d AND isprim", rid); while( db_step(&q)==SQLITE_ROW ){ int cid = db_column_int(&q, 0); add_mlink(rid, p, cid, 0); } db_finalize(&q); db_multi_exec( "REPLACE INTO event(type,mtime,objid,user,comment," | > > > > > > > > > | | | | 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 | } db_prepare(&q, "SELECT cid FROM plink WHERE pid=%d AND isprim", rid); while( db_step(&q)==SQLITE_ROW ){ int cid = db_column_int(&q, 0); add_mlink(rid, p, cid, 0); } db_finalize(&q); if( p->nParent==0 ){ /* For root files (files without parents) add mlink entries ** showing all content as new. */ int isPublic = !content_is_private(rid); for(i=0; i<p->nFile; i++){ add_one_mlink(rid, 0, p->aFile[i].zUuid, p->aFile[i].zName, 0, isPublic, manifest_file_mperm(&p->aFile[i])); } } db_multi_exec( "REPLACE INTO event(type,mtime,objid,user,comment," "bgcolor,euser,ecomment,omtime)" "VALUES('ci'," " coalesce(" " (SELECT julianday(value) FROM tagxref WHERE tagid=%d AND rid=%d)," " %.17g" " )," " %d,%Q,%Q," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d AND tagtype>0)," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d)," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d),%.17g);", TAG_DATE, rid, p->rDate, rid, p->zUser, p->zComment, TAG_BGCOLOR, rid, TAG_USER, rid, TAG_COMMENT, rid, p->rDate ); zCom = db_text(0, "SELECT coalesce(ecomment, comment) FROM event" " WHERE rowid=last_insert_rowid()"); wiki_extract_links(zCom, rid, 0, p->rDate, 1, WIKI_INLINE); free(zCom); /* If this is a delta-manifest, record the fact that this repository |
︙ | ︙
1558 1559 1560 1561 1562 1563 1564 | while( fossil_isspace(p->zWiki[0]) ) p->zWiki++; nWiki = strlen(p->zWiki); sqlite3_snprintf(sizeof(zLength), zLength, "%d", nWiki); tag_insert(zTag, 1, zLength, rid, p->rDate, rid); free(zTag); prior = db_int(0, "SELECT rid FROM tagxref" | | | > > > > > > > | | | | | | | | < < < < < < > | 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 | while( fossil_isspace(p->zWiki[0]) ) p->zWiki++; nWiki = strlen(p->zWiki); sqlite3_snprintf(sizeof(zLength), zLength, "%d", nWiki); tag_insert(zTag, 1, zLength, rid, p->rDate, rid); free(zTag); prior = db_int(0, "SELECT rid FROM tagxref" " WHERE tagid=%d AND mtime<%.17g AND rid!=%d" " ORDER BY mtime DESC", tagid, p->rDate, rid ); subsequent = db_int(0, "SELECT rid FROM tagxref" " WHERE tagid=%d AND mtime>=%.17g AND rid!=%d" " ORDER BY mtime", tagid, p->rDate, rid ); if( prior ){ content_deltify(prior, rid, 0); if( !subsequent ){ db_multi_exec( "DELETE FROM event" " WHERE type='e'" " AND tagid=%d" " AND objid IN (SELECT rid FROM tagxref WHERE tagid=%d)", tagid, tagid ); } } if( subsequent ){ content_deltify(rid, subsequent, 0); }else{ db_multi_exec( "REPLACE INTO event(type,mtime,objid,tagid,user,comment,bgcolor)" "VALUES('e',%.17g,%d,%d,%Q,%Q," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d));", |
︙ | ︙
1656 1657 1658 1659 1660 1661 1662 1663 1664 | } db_end_transaction(0); if( p->type==CFTYPE_MANIFEST ){ manifest_cache_insert(p); }else{ manifest_destroy(p); } return 1; } | > | 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 | } db_end_transaction(0); if( p->type==CFTYPE_MANIFEST ){ manifest_cache_insert(p); }else{ manifest_destroy(p); } assert( blob_is_reset(pContent) ); return 1; } |
Changes to src/merge.c.
︙ | ︙
24 25 26 27 28 29 30 | /* ** COMMAND: merge ** ** Usage: %fossil merge [--cherrypick] [--backout] VERSION ** | | | | > > > > > > > > | | | | > > > > > > > > > > > > > > > > > > > | > > | | | > > > > | < < < < < > > > > > | | | > > > < < | > > > > > | | > | < > > > > > > | | | > | | | > | > > > | > > | > > > | | > | < > | < > > > > > | | > | | | > > | > > | > > > | > > > > > > > > > > > > > > > > | > > > | | > | | | | | < | < > > > | | | > | | | | | > | | | > | | | > | | | | | > | | | | | < > | | > > < | < < | > | > > > > > < < | | | > > > > > > > > | | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 | /* ** COMMAND: merge ** ** Usage: %fossil merge [--cherrypick] [--backout] VERSION ** ** The argument VERSION is a version that should be merged into the ** current checkout. All changes from VERSION back to the nearest ** common ancestor are merged. Except, if either of the --cherrypick or ** --backout options are used only the changes associated with the ** single check-in VERSION are merged. The --backout option causes ** the changes associated with VERSION to be removed from the current ** checkout rather than added. ** ** Only file content is merged. The result continues to use the ** file and directory names from the current checkout even if those ** names might have been changed in the branch being merged in. ** ** Other options: ** ** --baseline BASELINE Use BASELINE as the "pivot" of the merge instead ** of the nearest common ancestor. This allows ** a sequence of changes in a branch to be merged ** without having to merge the entire branch. ** ** --detail Show additional details of the merge ** ** --binary GLOBPATTERN Treat files that match GLOBPATTERN as binary ** and do not try to merge parallel changes. This ** option overrides the "binary-glob" setting. 
** ** --nochange | -n Dryrun: do not actually make any changes; just ** show what would have happened. */ void merge_cmd(void){ int vid; /* Current version "V" */ int mid; /* Version we are merging from "M" */ int pid; /* The pivot version - most recent common ancestor P */ int detailFlag; /* True if the --detail option is present */ int pickFlag; /* True if the --cherrypick option is present */ int backoutFlag; /* True if the --backout option is present */ int nochangeFlag; /* True if the --nochange or -n option is present */ const char *zBinGlob; /* The value of --binary */ const char *zPivot; /* The value of --baseline */ int debugFlag; /* True if --debug is present */ int nChng; /* Number of file name changes */ int *aChng; /* An array of file name changes */ int i; /* Loop counter */ int nConflict = 0; /* Number of conflicts seen */ Stmt q; /* Notation: ** ** V The current checkout ** M The version being merged in ** P The "pivot" - the most recent common ancestor of V and M. */ undo_capture_command_line(); detailFlag = find_option("detail",0,0)!=0; pickFlag = find_option("cherrypick",0,0)!=0; backoutFlag = find_option("backout",0,0)!=0; debugFlag = find_option("debug",0,0)!=0; zBinGlob = find_option("binary",0,1); nochangeFlag = find_option("nochange","n",0)!=0; zPivot = find_option("baseline",0,1); if( g.argc!=3 ){ usage("VERSION"); } db_must_be_within_tree(); if( zBinGlob==0 ) zBinGlob = db_get("binary-glob",0); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_fatal("nothing is checked out"); } mid = name_to_rid(g.argv[2]); if( mid==0 || !is_a_version(mid) ){ fossil_fatal("not a version: %s", g.argv[2]); } if( zPivot ){ pid = name_to_rid(zPivot); if( pid==0 || !is_a_version(pid) ){ fossil_fatal("not a version: %s", zPivot); } if( pickFlag ){ fossil_fatal("incompatible options: --cherrypick & --baseline"); } /* pickFlag = 1; // Using --baseline is really like doing a cherrypick */ }else if( pickFlag || backoutFlag ){ pid = db_int(0, "SELECT pid FROM plink WHERE cid=%d AND isprim", mid); if( pid<=0 ){ fossil_fatal("cannot find an ancestor for %s", g.argv[2]); } }else{ pivot_set_primary(mid); pivot_set_secondary(vid); db_prepare(&q, "SELECT merge FROM vmerge WHERE id=0"); while( db_step(&q)==SQLITE_ROW ){ pivot_set_secondary(db_column_int(&q,0)); } db_finalize(&q); pid = pivot_find(); if( pid<=0 ){ fossil_fatal("cannot find a common ancestor between the current " "checkout and %s", g.argv[2]); } } if( backoutFlag ){ int t = pid; pid = mid; mid = t; } if( !is_a_version(pid) ){ fossil_fatal("not a version: record #%d", pid); } vfile_check_signature(vid, 1, 0); db_begin_transaction(); if( !nochangeFlag ) undo_begin(); load_vfile_from_rid(mid); load_vfile_from_rid(pid); /* ** The vfile.pathname field is used to match files against each other. The ** FV table contains one row for each each unique filename in ** in the current checkout, the pivot, and the version being merged. 
*/ db_multi_exec( "DROP TABLE IF EXISTS fv;" "CREATE TEMP TABLE fv(" " fn TEXT PRIMARY KEY," /* The filename */ " idv INTEGER," /* VFILE entry for current version */ " idp INTEGER," /* VFILE entry for the pivot */ " idm INTEGER," /* VFILE entry for version merging in */ " chnged BOOLEAN," /* True if current version has been edited */ " ridv INTEGER," /* Record ID for current version */ " ridp INTEGER," /* Record ID for pivot */ " ridm INTEGER," /* Record ID for merge */ " isexe BOOLEAN," /* Execute permission enabled */ " fnp TEXT," /* The filename in the pivot */ " fnm TEXT" /* the filename in the merged version */ ");" ); /* Add files found in V */ db_multi_exec( "INSERT OR IGNORE" " INTO fv(fn,fnp,fnm,idv,idp,idm,ridv,ridp,ridm,isexe,chnged)" " SELECT pathname, pathname, pathname, id, 0, 0, rid, 0, 0, isexe, chnged " " FROM vfile WHERE vid=%d", vid ); /* ** Compute name changes from P->V */ find_filename_changes(vid, pid, &nChng, &aChng); if( nChng ){ for(i=0; i<nChng; i++){ char *z; z = db_text(0, "SELECT name FROM filename WHERE fnid=%d", aChng[i*2+1]); db_multi_exec( "UPDATE fv SET fnp=%Q, fnm=%Q" " WHERE fn=(SELECT name FROM filename WHERE fnid=%d)", z, z, aChng[i*2] ); free(z); } fossil_free(aChng); db_multi_exec("UPDATE fv SET fnm=fnp WHERE fnp!=fn"); } /* Add files found in P but not in V */ db_multi_exec( "INSERT OR IGNORE" " INTO fv(fn,fnp,fnm,idv,idp,idm,ridv,ridp,ridm,isexe,chnged)" " SELECT pathname, pathname, pathname, 0, 0, 0, 0, 0, 0, isexe, 0 " " FROM vfile" " WHERE vid=%d AND pathname NOT IN (SELECT fnp FROM fv)", pid ); /* ** Compute name changes from P->M */ find_filename_changes(pid, mid, &nChng, &aChng); if( nChng ){ if( nChng>4 ) db_multi_exec("CREATE INDEX fv_fnp ON fv(fnp)"); for(i=0; i<nChng; i++){ db_multi_exec( "UPDATE fv SET fnm=(SELECT name FROM filename WHERE fnid=%d)" " WHERE fnp=(SELECT name FROM filename WHERE fnid=%d)", aChng[i*2+1], aChng[i*2] ); } fossil_free(aChng); } /* Add files found in M but not in P or V. */ db_multi_exec( "INSERT OR IGNORE" " INTO fv(fn,fnp,fnm,idv,idp,idm,ridv,ridp,ridm,isexe,chnged)" " SELECT pathname, pathname, pathname, 0, 0, 0, 0, 0, 0, isexe, 0 " " FROM vfile" " WHERE vid=%d" " AND pathname NOT IN (SELECT fnp FROM fv UNION SELECT fnm FROM fv)", mid ); /* ** Compute the file version ids for P and M. */ db_multi_exec( "UPDATE fv SET" " idp=coalesce((SELECT id FROM vfile WHERE vid=%d AND pathname=fnp),0)," " ridp=coalesce((SELECT rid FROM vfile WHERE vid=%d AND pathname=fnp),0)," " idm=coalesce((SELECT id FROM vfile WHERE vid=%d AND pathname=fnm),0)," " ridm=coalesce((SELECT rid FROM vfile WHERE vid=%d AND pathname=fnm),0)", pid, pid, mid, mid ); if( debugFlag ){ db_prepare(&q, "SELECT rowid, fn, fnp, fnm, chnged, ridv, ridp, ridm, isexe FROM fv" ); while( db_step(&q)==SQLITE_ROW ){ printf("%3d: ridv=%-4d ridp=%-4d ridm=%-4d chnged=%d isexe=%d\n", db_column_int(&q, 0), db_column_int(&q, 5), db_column_int(&q, 6), db_column_int(&q, 7), db_column_int(&q, 4), db_column_int(&q, 8)); printf(" fn = [%s]\n", db_column_text(&q, 1)); printf(" fnp = [%s]\n", db_column_text(&q, 2)); printf(" fnm = [%s]\n", db_column_text(&q, 3)); } db_finalize(&q); } /* ** Find files in M and V but not in P and report conflicts. ** The file in M will be ignored. It will be treated as if it ** does not exist. 
*/ db_prepare(&q, "SELECT idm FROM fv WHERE idp=0 AND idv>0 AND idm>0" ); while( db_step(&q)==SQLITE_ROW ){ int idm = db_column_int(&q, 0); char *zName = db_text(0, "SELECT pathname FROM vfile WHERE id=%d", idm); printf("WARNING - no common ancestor: %s\n", zName); free(zName); db_multi_exec("UPDATE fv SET idm=0 WHERE idm=%d", idm); } db_finalize(&q); /* ** Add to V files that are not in V or P but are in M */ db_prepare(&q, "SELECT idm, rowid, fnm FROM fv AS x" " WHERE idp=0 AND idv=0 AND idm>0" ); while( db_step(&q)==SQLITE_ROW ){ int idm = db_column_int(&q, 0); int rowid = db_column_int(&q, 1); int idv; const char *zName; db_multi_exec( "INSERT INTO vfile(vid,chnged,deleted,rid,mrid,isexe,pathname)" " SELECT %d,3,0,rid,mrid,isexe,pathname FROM vfile WHERE id=%d", vid, idm ); idv = db_last_insert_rowid(); db_multi_exec("UPDATE fv SET idv=%d WHERE rowid=%d", idv, rowid); zName = db_column_text(&q, 2); printf("ADDED %s\n", zName); if( !nochangeFlag ){ undo_save(zName); vfile_to_disk(0, idm, 0, 0); } } db_finalize(&q); /* ** Find files that have changed from P->M but not P->V. ** Copy the M content over into V. */ db_prepare(&q, "SELECT idv, ridm, fn FROM fv" " WHERE idp>0 AND idv>0 AND idm>0" " AND ridm!=ridp AND ridv=ridp AND NOT chnged" ); while( db_step(&q)==SQLITE_ROW ){ int idv = db_column_int(&q, 0); int ridm = db_column_int(&q, 1); const char *zName = db_column_text(&q, 2); /* Copy content from idm over into idv. Overwrite idv. */ printf("UPDATE %s\n", zName); if( !nochangeFlag ){ undo_save(zName); db_multi_exec( "UPDATE vfile SET mtime=0, mrid=%d, chnged=2 WHERE id=%d", ridm, idv ); vfile_to_disk(0, idv, 0, 0); } } db_finalize(&q); /* ** Do a three-way merge on files that have changes on both P->M and P->V. */ db_prepare(&q, "SELECT ridm, idv, ridp, ridv, %s, fn, isexe FROM fv" " WHERE idp>0 AND idv>0 AND idm>0" " AND ridm!=ridp AND (ridv!=ridp OR chnged)", glob_expr("fv.fn", zBinGlob) ); while( db_step(&q)==SQLITE_ROW ){ int ridm = db_column_int(&q, 0); int idv = db_column_int(&q, 1); int ridp = db_column_int(&q, 2); int ridv = db_column_int(&q, 3); int isBinary = db_column_int(&q, 4); const char *zName = db_column_text(&q, 5); int isExe = db_column_int(&q, 6); int rc; char *zFullPath; Blob m, p, r; /* Do a 3-way merge of idp->idm into idp->idv. The results go into idv. 
*/ if( detailFlag ){ printf("MERGE %s (pivot=%d v1=%d v2=%d)\n", zName, ridp, ridm, ridv); }else{ printf("MERGE %s\n", zName); } undo_save(zName); zFullPath = mprintf("%s/%s", g.zLocalRoot, zName); content_get(ridp, &p); content_get(ridm, &m); if( isBinary ){ rc = -1; blob_zero(&r); }else{ rc = merge_3way(&p, zFullPath, &m, &r); } if( rc>=0 ){ if( !nochangeFlag ){ blob_write_to_file(&r, zFullPath); file_setexe(zFullPath, isExe); } db_multi_exec("UPDATE vfile SET mtime=0 WHERE id=%d", idv); if( rc>0 ){ printf("***** %d merge conflicts in %s\n", rc, zName); nConflict++; } }else{ printf("***** Cannot merge binary file %s\n", zName); nConflict++; } blob_reset(&p); blob_reset(&m); blob_reset(&r); db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(%d,%d)", idv,ridm); } db_finalize(&q); /* ** Drop files that are in P and V but not in M */ db_prepare(&q, "SELECT idv, fn, chnged FROM fv" " WHERE idp>0 AND idv>0 AND idm=0" ); while( db_step(&q)==SQLITE_ROW ){ int idv = db_column_int(&q, 0); const char *zName = db_column_text(&q, 1); int chnged = db_column_int(&q, 2); /* Delete the file idv */ printf("DELETE %s\n", zName); if( chnged ){ printf("WARNING: local edits lost for %s\n", zName); nConflict++; } undo_save(zName); db_multi_exec( "UPDATE vfile SET deleted=1 WHERE id=%d", idv ); if( !nochangeFlag ){ char *zFullPath = mprintf("%s%s", g.zLocalRoot, zName); unlink(zFullPath); free(zFullPath); } } db_finalize(&q); /* ** Rename files that have taken a rename on P->M but which keep the same ** name o P->V. If a file is renamed on P->V only or on both P->V and ** P->M then we retain the V name of the file. */ db_prepare(&q, "SELECT idv, fnp, fnm FROM fv" " WHERE idv>0 AND idp>0 AND idm>0 AND fnp=fn AND fnm!=fnp" ); while( db_step(&q)==SQLITE_ROW ){ int idv = db_column_int(&q, 0); const char *zOldName = db_column_text(&q, 1); const char *zNewName = db_column_text(&q, 2); printf("RENAME %s -> %s\n", zOldName, zNewName); undo_save(zOldName); undo_save(zNewName); db_multi_exec( "UPDATE vfile SET pathname=%Q, origname=coalesce(origname,pathname)" " WHERE id=%d AND vid=%d", zNewName, idv, vid ); if( !nochangeFlag ){ char *zFullOldPath = mprintf("%s%s", g.zLocalRoot, zOldName); char *zFullNewPath = mprintf("%s%s", g.zLocalRoot, zNewName); file_copy(zFullOldPath, zFullNewPath); unlink(zFullOldPath); free(zFullNewPath); free(zFullOldPath); } } db_finalize(&q); /* Report on conflicts */ if( nConflict && !nochangeFlag ){ printf("WARNING: merge conflicts - see messages above for details.\n"); } /* ** Clean up the mid and pid VFILE entries. Then commit the changes. */ db_multi_exec("DELETE FROM vfile WHERE vid!=%d", vid); if( !pickFlag ){ db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(0,%d)", mid); } undo_finish(); db_end_transaction(nochangeFlag); } |
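Putting the new options together, typical invocations of the revised merge command look like the following (the branch name and check-in prefix are placeholders, not taken from this repository):

    fossil merge featurebranch                      # ordinary merge against the common ancestor
    fossil merge --cherrypick 1a2b3c4d              # apply only that one check-in's changes
    fossil merge --backout 1a2b3c4d                 # remove that check-in's changes instead
    fossil merge --baseline 1a2b3c4d featurebranch  # merge only the changes made since the given pivot
    fossil merge -n featurebranch                   # dry run: report what would change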
Changes to src/merge3.c.
︙ | ︙ | |||
37 38 39 40 41 42 43 44 45 46 47 | ** lines are different. ** ** The cursors on both pV1 and pV2 is unchanged by this comparison. */ static int sameLines(Blob *pV1, Blob *pV2, int N){ char *z1, *z2; int i; if( N==0 ) return 1; z1 = &blob_buffer(pV1)[blob_tell(pV1)]; z2 = &blob_buffer(pV2)[blob_tell(pV2)]; | > | | | | 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | ** lines are different. ** ** The cursors on both pV1 and pV2 is unchanged by this comparison. */ static int sameLines(Blob *pV1, Blob *pV2, int N){ char *z1, *z2; int i; char c; if( N==0 ) return 1; z1 = &blob_buffer(pV1)[blob_tell(pV1)]; z2 = &blob_buffer(pV2)[blob_tell(pV2)]; for(i=0; (c=z1[i])==z2[i]; i++){ if( c=='\n' || c==0 ){ N--; if( N==0 || c==0 ) return 1; } } return 0; } /* ** Look at the next edit triple in both aC1 and aC2. (An "edit triple" is |
︙ | ︙ | |||
142 143 144 145 146 147 148 | ** to pV2. ** ** The return is 0 upon complete success. If any input file is binary, ** -1 is returned and pOut is unmodified. If there are merge ** conflicts, the merge proceeds as best as it can and the number ** of conflicts is returned */ | | | > | > > > | > | | | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | ** to pV2. ** ** The return is 0 upon complete success. If any input file is binary, ** -1 is returned and pOut is unmodified. If there are merge ** conflicts, the merge proceeds as best as it can and the number ** of conflicts is returned */ static int blob_merge(Blob *pPivot, Blob *pV1, Blob *pV2, Blob *pOut){ int *aC1; /* Changes from pPivot to pV1 */ int *aC2; /* Changes from pPivot to pV2 */ int i1, i2; /* Index into aC1[] and aC2[] */ int nCpy, nDel, nIns; /* Number of lines to copy, delete, or insert */ int limit1, limit2; /* Sizes of aC1[] and aC2[] */ int nConflict = 0; /* Number of merge conflicts seen so far */ static const char zBegin[] = "<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<\n"; static const char zMid1[] = "======= COMMON ANCESTOR content follows ============================\n"; static const char zMid2[] = "======= MERGED IN content follows ==================================\n"; static const char zEnd[] = ">>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"; blob_zero(pOut); /* Merge results stored in pOut */ /* Compute the edits that occur from pPivot => pV1 (into aC1) ** and pPivot => pV2 (into aC2). Each of the aC1 and aC2 arrays is ** an array of integer triples. Within each triple, the first integer ** is the number of lines of text to copy directly from the pivot, ** the second integer is the number of lines of text to omit from the ** pivot, and the third integer is the number of lines of text that are ** inserted. The edit array ends with a triple of 0,0,0. */ aC1 = text_diff(pPivot, pV1, 0, 0, 0); aC2 = text_diff(pPivot, pV2, 0, 0, 0); if( aC1==0 || aC2==0 ){ free(aC1); free(aC2); return -1; } blob_rewind(pV1); /* Rewind inputs: Needed to reconstruct output */
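The (copy, delete, insert) triple format described in that comment is simple to consume. As a small editor's sketch (not code from this check-in), a routine that tallies an edit array of that shape only needs to walk to the 0,0,0 terminator:

    #include <stdio.h>

    /* Tally a (copy, delete, insert) edit-triple array of the kind
    ** text_diff() is described above as producing.  The array ends
    ** with the triple 0,0,0. */
    static void count_triples(const int *aC){
      int i, nCopy = 0, nDel = 0, nIns = 0;
      for(i=0; aC[i] || aC[i+1] || aC[i+2]; i+=3){
        nCopy += aC[i];     /* lines taken unchanged from the pivot */
        nDel  += aC[i+1];   /* pivot lines omitted in the derived version */
        nIns  += aC[i+2];   /* lines present only in the derived version */
      }
      printf("%d copied, %d deleted, %d inserted\n", nCopy, nDel, nIns);
    }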
︙ | ︙ | |||
186 187 188 189 190 191 192 | limit2 = i2; DEBUG( for(i1=0; i1<limit1; i1+=3){ printf("c1: %4d %4d %4d\n", aC1[i1], aC1[i1+1], aC1[i1+2]); } for(i2=0; i2<limit2; i2+=3){ | | | 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 | limit2 = i2; DEBUG( for(i1=0; i1<limit1; i1+=3){ printf("c1: %4d %4d %4d\n", aC1[i1], aC1[i1+1], aC1[i1+2]); } for(i2=0; i2<limit2; i2+=3){ printf("c2: %4d %4d %4d\n", aC2[i2], aC2[i2+1], aC2[i2+2]); } ) /* Loop over the two edit vectors and use them to compute merged text ** which is written into pOut. i1 and i2 are multiples of 3 which are ** indices into aC1[] and aC2[] to the edit triple currently being ** processed |
︙ | ︙ | |||
258 259 260 261 262 263 264 | nConflict++; while( !ends_at_CPY(&aC1[i1], sz) || !ends_at_CPY(&aC2[i2], sz) ){ sz++; } DEBUG( printf("CONFLICT %d\n", sz); ) blob_appendf(pOut, zBegin); i1 = output_one_side(pOut, pV1, aC1, i1, sz); | | > > < | | 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 | nConflict++; while( !ends_at_CPY(&aC1[i1], sz) || !ends_at_CPY(&aC2[i2], sz) ){ sz++; } DEBUG( printf("CONFLICT %d\n", sz); ) blob_appendf(pOut, zBegin); i1 = output_one_side(pOut, pV1, aC1, i1, sz); blob_appendf(pOut, zMid1); blob_copy_lines(pOut, pPivot, sz); blob_appendf(pOut, zMid2); i2 = output_one_side(pOut, pV2, aC2, i2, sz); blob_appendf(pOut, zEnd); } /* If we are finished with an edit triple, advance to the next ** triple. */ if( i1<limit1 && aC1[i1]==0 && aC1[i1+1]==0 && aC1[i1+2]==0 ) i1+=3; if( i2<limit2 && aC2[i2]==0 && aC2[i2+1]==0 && aC2[i2+2]==0 ) i2+=3; } |
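With the zMid1/zMid2 separators and the pivot lines copied in between, a conflicted region in the merged output now has four marker lines and three content sections (the content lines below are placeholders):

    <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<
    ...lines from the local file (pV1)...
    ======= COMMON ANCESTOR content follows ============================
    ...lines from the common ancestor (pPivot)...
    ======= MERGED IN content follows ==================================
    ...lines from the version being merged in (pV2)...
    >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>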
︙ | ︙ | |||
293 294 295 296 297 298 299 300 301 302 303 304 305 306 | free(aC1); free(aC2); return nConflict; } /* ** COMMAND: test-3-way-merge ** ** Combine change in going from PIVOT->VERSION1 with the change going ** from PIVOT->VERSION2 and write the combined changes into MERGED. */ void delta_3waymerge_cmd(void){ Blob pivot, v1, v2, merged; if( g.argc!=6 ){ | > > | < | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 | free(aC1); free(aC2); return nConflict; } /* ** COMMAND: test-3-way-merge ** ** Usage: %fossil test-3-way-merge PIVOT V1 V2 MERGED ** ** Combine change in going from PIVOT->VERSION1 with the change going ** from PIVOT->VERSION2 and write the combined changes into MERGED. */ void delta_3waymerge_cmd(void){ Blob pivot, v1, v2, merged; if( g.argc!=6 ){ usage("PIVOT V1 V2 MERGED"); } if( blob_read_from_file(&pivot, g.argv[2])<0 ){ fprintf(stderr,"cannot read %s\n", g.argv[2]); fossil_exit(1); } if( blob_read_from_file(&v1, g.argv[3])<0 ){ fprintf(stderr,"cannot read %s\n", g.argv[3]); |
︙ | ︙ | |||
325 326 327 328 329 330 331 | fossil_exit(1); } blob_reset(&pivot); blob_reset(&v1); blob_reset(&v2); blob_reset(&merged); } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 | fossil_exit(1); } blob_reset(&pivot); blob_reset(&v1); blob_reset(&v2); blob_reset(&merged); } /* ** aSubst is an array of string pairs. The first element of each pair is ** a string that begins with %. The second element is a replacement for that ** string. ** ** This routine makes a copy of zInput into memory obtained from malloc and ** performance all applicable substitutions on that string. */ char *string_subst(const char *zInput, int nSubst, const char **azSubst){ Blob x; int i, j; blob_zero(&x); while( zInput[0] ){ for(i=0; zInput[i] && zInput[i]!='%'; i++){} if( i>0 ){ blob_append(&x, zInput, i); zInput += i; } if( zInput[0]==0 ) break; for(j=0; j<nSubst; j+=2){ int n = strlen(azSubst[j]); if( strncmp(zInput, azSubst[j], n)==0 ){ blob_append(&x, azSubst[j+1], -1); zInput += n; break; } } if( j>=nSubst ){ blob_append(&x, "%", 1); zInput++; } } return blob_str(&x); } /* ** This routine is a wrapper around blob_merge() with the following ** enhancements: ** ** (1) If the merge-command is defined, then use the external merging ** program specified instead of the built-in blob-merge to do the ** merging. Panic if the external merger fails. ** ** Not currently implemented ** ** ** (2) If gmerge-command is defined and there are merge conflicts in ** blob_merge() then invoke the external graphical merger to resolve ** the conflicts. ** ** (3) If a merge conflict occurs and gmerge-command is not defined, ** then write the pivot, original, and merge-in files to the ** filesystem. 
*/ int merge_3way( Blob *pPivot, /* Common ancestor (older) */ const char *zV1, /* Name of file for version merging into (mine) */ Blob *pV2, /* Version merging from (yours) */ Blob *pOut /* Output written here */ ){ Blob v1; /* Content of zV1 */ int rc; /* Return code of subroutines and this routine */ blob_read_from_file(&v1, zV1); rc = blob_merge(pPivot, &v1, pV2, pOut); if( rc>0 ){ char *zPivot; /* Name of the pivot file */ char *zOrig; /* Name of the original content file */ char *zOther; /* Name of the merge file */ const char *zGMerge; /* Name of the gmerge command */ zPivot = file_newname(zV1, "baseline", 1); blob_write_to_file(pPivot, zPivot); zOrig = file_newname(zV1, "original", 1); blob_write_to_file(&v1, zOrig); zOther = file_newname(zV1, "merge", 1); blob_write_to_file(pV2, zOther); zGMerge = db_get("gmerge-command", 0); if( zGMerge && zGMerge[0] ){ char *zOut; /* Temporary output file */ char *zCmd; /* Command to invoke */ const char *azSubst[8]; /* Strings to be substituted */ zOut = file_newname(zV1, "output", 1); azSubst[0] = "%baseline"; azSubst[1] = zPivot; azSubst[2] = "%original"; azSubst[3] = zOrig; azSubst[4] = "%merge"; azSubst[5] = zOther; azSubst[6] = "%output"; azSubst[7] = zOut; zCmd = string_subst(zGMerge, 8, azSubst); printf("%s\n", zCmd); fflush(stdout); fossil_system(zCmd); if( file_size(zOut)>=0 ){ blob_read_from_file(pOut, zOut); unlink(zPivot); unlink(zOrig); unlink(zOther); unlink(zOut); } fossil_free(zCmd); fossil_free(zOut); } fossil_free(zPivot); fossil_free(zOrig); fossil_free(zOther); } blob_reset(&v1); return rc; } |
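Before moving on, a concrete illustration of the %-substitution that merge_3way() relies on above. The substitution keys come from the code; the merger command line and file names are made up for this sketch:

    const char *azSubst[] = {
      "%baseline", "f.c-baseline",   /* common ancestor written out by merge_3way */
      "%original", "f.c-original",   /* local version */
      "%merge",    "f.c-merge",      /* version being merged in */
      "%output",   "f.c-output"      /* where the merger should leave its result */
    };
    char *zCmd = string_subst("kdiff3 %baseline %original %merge -o %output",
                              8, azSubst);
    /* zCmd now reads:
    **   kdiff3 f.c-baseline f.c-original f.c-merge -o f.c-output
    */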
Changes to src/mkindex.c.
︙ | ︙ | |||
11 12 13 14 15 16 17 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** | > > > | | > > > > > > > > > > | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This program scans Fossil source code files looking for special ** comments that indicate a command-line command or a webpage. This ** routine collects information about these entry points and then ** generates (on standard output) C code used by Fossil to dispatch ** to those entry points. ** ** The source code is scanned for comment lines of the form: ** ** WEBPAGE: /abc/xyz ** ** This comment should be followed by a function definition of the ** form: ** ** void function_name(void){ ** ** This routine creates C source code for a constant table that maps ** webpage names into pointers to the function. ** ** We also scan for comment lines of this form: ** ** COMMAND: cmdname ** ** These entries build a constant table used to map command names into ** functions. ** ** Comment text following COMMAND: through the end of the comment is ** understood to be help text for the command specified. This help ** text is accumulated and a table containing the text for each command ** is generated. That table is used to implement the "fossil help" command ** and the "/help" HTTP method. ** ** Multiple occurrences of WEBPAGE: or COMMAND: (but not both) can appear ** before each function name. In this way, webpages and commands can ** have aliases. */ #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <assert.h> #include <string.h>
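For reference, a minimal made-up example of the comment-and-function pair this scanner looks for (the command name here is invented for illustration):

    /*
    ** COMMAND: test-hello
    **
    ** Usage: %fossil test-hello
    **
    ** Print a greeting.  Everything in this comment from the COMMAND:
    ** line to the end of the comment becomes the help text shown by
    ** "fossil help".
    */
    void test_hello_cmd(void){
      printf("hello\n");
    }

mkindex would then emit a table entry mapping the name "test-hello" to test_hello_cmd() and capture the comment body as its help text.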
︙ | ︙ | |||
57 58 59 60 61 62 63 | ** Maximum number of entries */ #define N_ENTRY 500 /* ** Maximum size of a help message */ | | | 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 | ** Maximum number of entries */ #define N_ENTRY 500 /* ** Maximum size of a help message */ #define MX_HELP 25000 /* ** Table of entries */ Entry aEntry[N_ENTRY]; /* |
︙ | ︙ | |||
126 127 128 129 130 131 132 | /* ** Scan a line for a function that implements a web page or command. */ void scan_for_func(char *zLine){ int i,j,k; char *z; if( nUsed<=nFixed ) return; | | > | > > > | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | /* ** Scan a line for a function that implements a web page or command. */ void scan_for_func(char *zLine){ int i,j,k; char *z; if( nUsed<=nFixed ) return; if( strncmp(zLine, "**", 2)==0 && isspace(zLine[2]) && strlen(zLine)<sizeof(zHelp)-nHelp-1 && nUsed>nFixed && memcmp(zLine,"** COMMAND:",11)!=0 ){ if( zLine[2]=='\n' ){ zHelp[nHelp++] = '\n'; }else{ if( strncmp(&zLine[3], "Usage: ", 6)==0 ) nHelp = 0; strcpy(&zHelp[nHelp], &zLine[3]); nHelp += strlen(&zHelp[nHelp]); } |
︙ | ︙ |
Changes to src/name.c.
︙ | ︙ | |||
140 141 142 143 144 145 146 147 148 149 150 | ** ** An input of "tip" returns the most recent check-in. ** ** Memory to hold the returned string comes from malloc() and needs to ** be freed by the caller. */ char *tag_to_uuid(const char *zTag){ char *zUuid = db_text(0, "SELECT blob.uuid" " FROM tag, tagxref, event, blob" | > | | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | ** ** An input of "tip" returns the most recent check-in. ** ** Memory to hold the returned string comes from malloc() and needs to ** be freed by the caller. */ char *tag_to_uuid(const char *zTag){ int vid; char *zUuid = db_text(0, "SELECT blob.uuid" " FROM tag, tagxref, event, blob" " WHERE tag.tagname='sym-%q' " " AND tagxref.tagid=tag.tagid AND tagxref.tagtype>0 " " AND event.objid=tagxref.rid " " AND blob.rid=event.objid " " ORDER BY event.mtime DESC ", zTag ); if( zUuid==0 ){ |
︙ | ︙ | |||
168 169 170 171 172 173 174 | nDate -= 3; zDate[nDate] = 0; useUtc = 1; } zUuid = db_text(0, "SELECT blob.uuid" " FROM tag, tagxref, event, blob" | | | > > > > > > > > > > > > > > > | 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 | nDate -= 3; zDate[nDate] = 0; useUtc = 1; } zUuid = db_text(0, "SELECT blob.uuid" " FROM tag, tagxref, event, blob" " WHERE tag.tagname='sym-%q' " " AND tagxref.tagid=tag.tagid AND tagxref.tagtype>0 " " AND event.objid=tagxref.rid " " AND blob.rid=event.objid " " AND event.mtime<=julianday(%Q %s)" " ORDER BY event.mtime DESC ", zTagBase, zDate, (useUtc ? "" : ",'utc'") ); break; } } if( zUuid==0 && fossil_strcmp(zTag, "tip")==0 ){ zUuid = db_text(0, "SELECT blob.uuid" " FROM event, blob" " WHERE event.type='ci'" " AND blob.rid=event.objid" " ORDER BY event.mtime DESC" ); } if( zUuid==0 && g.localOpen && (vid=db_lget_int("checkout",0))!=0 ){ if( fossil_strcmp(zTag, "current")==0 ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); }else if( fossil_strcmp(zTag, "prev")==0 || fossil_strcmp(zTag, "previous")==0 ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=" "(SELECT pid FROM plink WHERE cid=%d AND isprim)", vid); }else if( fossil_strcmp(zTag, "next")==0 ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=" "(SELECT cid FROM plink WHERE pid=%d" " ORDER BY isprim DESC, mtime DESC)", vid); } } } return zUuid; } /* ** Convert a date/time string into a UUID. |
︙ | ︙ | |||
320 321 322 323 324 325 326 | @ <ol> z = mprintf("%s", zName); canonical16(z, strlen(z)); db_prepare(&q, "SELECT uuid, rid FROM blob WHERE uuid GLOB '%q*'", z); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); int rid = db_column_int(&q, 1); | | | 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 | @ <ol> z = mprintf("%s", zName); canonical16(z, strlen(z)); db_prepare(&q, "SELECT uuid, rid FROM blob WHERE uuid GLOB '%q*'", z); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); int rid = db_column_int(&q, 1); @ <li><p><a href="%s(g.zTop)/%T(zSrc)/%S(zUuid)"> @ %S(zUuid)</a> - object_description(rid, 0, 0); @ </p></li> } @ </ol> style_footer(); } |
︙ | ︙ |
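The new symbolic names handled by tag_to_uuid() in the name.c changes above can be exercised with a few lines of C. This is an editor's sketch, not code from the check-in; "prev" and "next" only resolve when a local checkout is open, as the g.localOpen test in the code requires, and the returned strings are malloc()ed and owned by the caller:

    char *zTip  = tag_to_uuid("tip");    /* most recent check-in */
    char *zPrev = tag_to_uuid("prev");   /* primary parent of the current checkout */
    if( zTip ){  printf("tip:  %s\n", zTip);   free(zTip);  }
    if( zPrev ){ printf("prev: %s\n", zPrev);  free(zPrev); }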
Added src/path.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 | /* ** Copyright (c) 2011 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@sqlite.org ** ******************************************************************************* ** ** This file contains code used to trace paths of through the ** directed acyclic graph (DAG) of checkins. */ #include "config.h" #include "path.h" #include <assert.h> #if INTERFACE /* Nodes for the paths through the DAG. 
*/ struct PathNode { int rid; /* ID for this node */ u8 fromIsParent; /* True if pFrom is the parent of rid */ u8 isPrim; /* True if primary side of common ancestor */ PathNode *pFrom; /* Node we came from */ union { PathNode *pPeer; /* List of nodes of the same generation */ PathNode *pTo; /* Next on path from beginning to end */ } u; PathNode *pAll; /* List of all nodes */ }; #endif /* ** Local variables for this module */ static struct { PathNode *pCurrent; /* Current generation of nodes */ PathNode *pAll; /* All nodes */ Bag seen; /* Nodes seen before */ int nStep; /* Number of steps from first to last */ PathNode *pStart; /* Earliest node */ PathNode *pPivot; /* Common ancestor of pStart and pEnd */ PathNode *pEnd; /* Most recent */ } path; /* ** Return the first (last) element of the computed path. */ PathNode *path_first(void){ return path.pStart; } PathNode *path_last(void){ return path.pEnd; } /* ** Return the number of steps in the computed path. */ int path_length(void){ return path.nStep; } /* ** Create a new node */ static PathNode *path_new_node(int rid, PathNode *pFrom, int isParent){ PathNode *p; p = fossil_malloc( sizeof(*p) ); memset(p, 0, sizeof(*p)); p->rid = rid; p->fromIsParent = isParent; p->pFrom = pFrom; p->u.pPeer = path.pCurrent; path.pCurrent = p; p->pAll = path.pAll; path.pAll = p; bag_insert(&path.seen, rid); return p; } /* ** Reset memory used by the shortest path algorithm. */ void path_reset(void){ PathNode *p; while( path.pAll ){ p = path.pAll; path.pAll = p->pAll; fossil_free(p); } bag_clear(&path.seen); path.pCurrent = 0; path.pAll = 0; path.pEnd = 0; path.nStep = 0; } /* ** Construct the path from path.pStart to path.pEnd in the u.pTo fields. */ void path_reverse_path(void){ PathNode *p; for(p=path.pEnd; p && p->pFrom; p = p->pFrom){ p->pFrom->u.pTo = p; } path.pEnd->u.pTo = 0; assert( p==path.pStart ); } /* ** Compute the shortest path from iFrom to iTo ** ** If directOnly is true, then use only the "primary" links from parent to ** child. In other words, ignore merges. ** ** Return a pointer to the beginning of the path (the iFrom node). ** Elements of the path can be traversed by following the PathNode.u.pTo ** pointer chain. ** ** Return NULL if no path is found. */ PathNode *path_shortest(int iFrom, int iTo, int directOnly){ Stmt s; PathNode *pPrev; PathNode *p; path_reset(); path.pStart = path_new_node(iFrom, 0, 0); if( iTo==iFrom ){ path.pEnd = path.pStart; return path.pStart; } if( directOnly ){ db_prepare(&s, "SELECT cid, 1 FROM plink WHERE pid=:pid AND isprim " "UNION ALL " "SELECT pid, 0 FROM plink WHERE cid=:pid AND isprim" ); }else{ db_prepare(&s, "SELECT cid, 1 FROM plink WHERE pid=:pid " "UNION ALL " "SELECT pid, 0 FROM plink WHERE cid=:pid" ); } while( path.pCurrent ){ path.nStep++; pPrev = path.pCurrent; path.pCurrent = 0; while( pPrev ){ db_bind_int(&s, ":pid", pPrev->rid); while( db_step(&s)==SQLITE_ROW ){ int cid = db_column_int(&s, 0); int isParent = db_column_int(&s, 1); if( bag_find(&path.seen, cid) ) continue; p = path_new_node(cid, pPrev, isParent); if( cid==iTo ){ db_finalize(&s); path.pEnd = p; path_reverse_path(); return path.pStart; } } db_reset(&s); pPrev = pPrev->u.pPeer; } } db_finalize(&s); path_reset(); return 0; } /* ** Find the mid-point of the path. If the path contains fewer than ** 2 steps, return 0. 
*/ PathNode *path_midpoint(void){ PathNode *p; int i; if( path.nStep<2 ) return 0; for(p=path.pEnd, i=0; p && i<path.nStep/2; p=p->pFrom, i++){} return p; } /* ** COMMAND: test-shortest-path ** ** Usage: %fossil test-shortest-path ?--no-merge? VERSION1 VERSION2 ** ** Report the shortest path between two checkins. If the --no-merge flag ** is used, follow only direct parent-child paths and omit merge links. */ void shortest_path_test_cmd(void){ int iFrom; int iTo; PathNode *p; int n; int directOnly; db_find_and_open_repository(0,0); directOnly = find_option("no-merge",0,0)!=0; if( g.argc!=4 ) usage("VERSION1 VERSION2"); iFrom = name_to_rid(g.argv[2]); iTo = name_to_rid(g.argv[3]); p = path_shortest(iFrom, iTo, directOnly); if( p==0 ){ fossil_fatal("no path from %s to %s", g.argv[1], g.argv[2]); } for(n=1, p=path.pStart; p; p=p->u.pTo, n++){ char *z; z = db_text(0, "SELECT substr(uuid,1,12) || ' ' || datetime(mtime)" " FROM blob, event" " WHERE blob.rid=%d AND event.objid=%d AND event.type='ci'", p->rid, p->rid); printf("%4d: %s", n, z); fossil_free(z); if( p->u.pTo ){ printf(" is a %s of\n", p->u.pTo->fromIsParent ? "parent" : "child"); }else{ printf("\n"); } } } /* ** Find the closest common ancestor of two nodes. "Closest" means the ** fewest number of arcs. */ int path_common_ancestor(int iMe, int iYou){ Stmt s; PathNode *pPrev; PathNode *p; Bag me, you; if( iMe==iYou ) return iMe; if( iMe==0 || iYou==0 ) return 0; path_reset(); path.pStart = path_new_node(iMe, 0, 0); path.pStart->isPrim = 1; path.pEnd = path_new_node(iYou, 0, 0); db_prepare(&s, "SELECT pid FROM plink WHERE cid=:cid"); bag_init(&me); bag_insert(&me, iMe); bag_init(&you); bag_insert(&you, iYou); while( path.pCurrent ){ pPrev = path.pCurrent; path.pCurrent = 0; while( pPrev ){ db_bind_int(&s, ":cid", pPrev->rid); while( db_step(&s)==SQLITE_ROW ){ int pid = db_column_int(&s, 0); if( bag_find(pPrev->isPrim ? &you : &me, pid) ){ /* pid is the common ancestor */ PathNode *pNext; for(p=path.pAll; p && p->rid!=pid; p=p->pAll){} assert( p!=0 ); pNext = p; while( pNext ){ pNext = p->pFrom; p->pFrom = pPrev; pPrev = p; p = pNext; } if( pPrev==path.pStart ) path.pStart = path.pEnd; path.pEnd = pPrev; path_reverse_path(); db_finalize(&s); return pid; }else if( bag_find(&path.seen, pid) ){ /* pid is just an alternative path on one of the legs */ continue; } p = path_new_node(pid, pPrev, 0); p->isPrim = pPrev->isPrim; bag_insert(pPrev->isPrim ? &me : &you, pid); } db_reset(&s); pPrev = pPrev->u.pPeer; } } db_finalize(&s); path_reset(); return 0; } /* ** COMMAND: test-ancestor-path ** ** Usage: %fossil test-ancestor-path VERSION1 VERSION2 ** ** Report the path from VERSION1 to VERSION2 through their most recent ** common ancestor. */ void ancestor_path_test_cmd(void){ int iFrom; int iTo; int iPivot; PathNode *p; int n; db_find_and_open_repository(0,0); if( g.argc!=4 ) usage("VERSION1 VERSION2"); iFrom = name_to_rid(g.argv[2]); iTo = name_to_rid(g.argv[3]); iPivot = path_common_ancestor(iFrom, iTo); for(n=1, p=path.pStart; p; p=p->u.pTo, n++){ char *z; z = db_text(0, "SELECT substr(uuid,1,12) || ' ' || datetime(mtime)" " FROM blob, event" " WHERE blob.rid=%d AND event.objid=%d AND event.type='ci'", p->rid, p->rid); printf("%4d: %s", n, z); fossil_free(z); if( p->rid==iFrom ) printf(" VERSION1"); if( p->rid==iTo ) printf(" VERSION2"); if( p->rid==iPivot ) printf(" PIVOT"); printf("\n"); } } /* ** A record of a file rename operation. 
*/ typedef struct NameChange NameChange; struct NameChange { int origName; /* Original name of file */ int curName; /* Current name of the file */ int newName; /* Name of file in next version */ NameChange *pNext; /* List of all name changes */ }; /* ** Compute all file name changes that occur going from checkin iFrom ** to checkin iTo. ** ** The number of name changes is written into *pnChng. For each name ** change, two integers are allocated for *piChng. The first is the ** filename.fnid for the original name and the second is for new name. ** Space to hold *piChng is obtained from fossil_malloc() and should ** be released by the caller. ** ** This routine really has nothing to do with pathion. It is located ** in this path.c module in order to leverage some of the path ** infrastructure. */ void find_filename_changes( int iFrom, int iTo, int *pnChng, int **aiChng ){ PathNode *p; /* For looping over path from iFrom to iTo */ NameChange *pAll = 0; /* List of all name changes seen so far */ NameChange *pChng; /* For looping through the name change list */ int nChng = 0; /* Number of files whose names have changed */ int *aChng; /* Two integers per name change */ int i; /* Loop counter */ Stmt q1; /* Query of name changes */ *pnChng = 0; *aiChng = 0; if( iFrom==iTo ) return; path_reset(); p = path_shortest(iFrom, iTo, 0); if( p==0 ) return; path_reverse_path(); db_prepare(&q1, "SELECT pfnid, fnid FROM mlink WHERE mid=:mid AND pfnid>0" ); for(p=path.pStart; p; p=p->u.pTo){ int fnid, pfnid; if( !p->fromIsParent && (p->u.pTo==0 || p->u.pTo->fromIsParent) ){ /* Skip nodes where the parent is not on the path */ continue; } db_bind_int(&q1, ":mid", p->rid); while( db_step(&q1)==SQLITE_ROW ){ if( p->fromIsParent ){ fnid = db_column_int(&q1, 1); pfnid = db_column_int(&q1, 0); }else{ fnid = db_column_int(&q1, 0); pfnid = db_column_int(&q1, 1); } for(pChng=pAll; pChng; pChng=pChng->pNext){ if( pChng->curName==pfnid ){ pChng->newName = fnid; break; } } if( pChng==0 ){ pChng = fossil_malloc( sizeof(*pChng) ); pChng->pNext = pAll; pAll = pChng; pChng->origName = pfnid; pChng->curName = pfnid; pChng->newName = fnid; nChng++; } } for(pChng=pAll; pChng; pChng=pChng->pNext) pChng->curName = pChng->newName; db_reset(&q1); } db_finalize(&q1); if( nChng ){ *pnChng = nChng; aChng = *aiChng = fossil_malloc( nChng*2*sizeof(int) ); for(pChng=pAll, i=0; pChng; pChng=pChng->pNext, i+=2){ aChng[i] = pChng->origName; aChng[i+1] = pChng->newName; } while( pAll ){ pChng = pAll; pAll = pAll->pNext; fossil_free(pChng); } } } /* ** COMMAND: test-name-changes ** ** Usage: %fossil test-name-changes VERSION1 VERSION2 ** ** Show all filename changes that occur going from VERSION1 to VERSION2 */ void test_name_change(void){ int iFrom; int iTo; int *aChng; int nChng; int i; db_find_and_open_repository(0,0); if( g.argc!=4 ) usage("VERSION1 VERSION2"); iFrom = name_to_rid(g.argv[2]); iTo = name_to_rid(g.argv[3]); find_filename_changes(iFrom, iTo, &nChng, &aChng); for(i=0; i<nChng; i++){ char *zFrom, *zTo; zFrom = db_text(0, "SELECT name FROM filename WHERE fnid=%d", aChng[i*2]); zTo = db_text(0, "SELECT name FROM filename WHERE fnid=%d", aChng[i*2+1]); printf("[%s] -> [%s]\n", zFrom, zTo); fossil_free(zFrom); fossil_free(zTo); } fossil_free(aChng); } |
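A sketch of how the new path module is meant to be driven, following the test-shortest-path command above (not from the check-in; the two names passed to name_to_rid() are placeholders):

    int iFrom = name_to_rid("version-1");     /* placeholder check-in names */
    int iTo   = name_to_rid("version-2");
    PathNode *p;
    if( path_shortest(iFrom, iTo, 0)!=0 ){    /* 0 = merge links allowed */
      printf("%d step(s)\n", path_length());
      for(p=path_first(); p; p=p->u.pTo){     /* walk from start to end */
        printf("  rid %d\n", p->rid);
      }
    }
    path_reset();                             /* release all PathNode memory */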
Changes to src/popen.c.
︙ | ︙ | |||
27 28 29 30 31 32 33 34 35 36 37 38 39 40 | ** Print a fatal error and quit. */ static void win32_fatal_error(const char *zMsg){ fossil_fatal("%s"); } #endif #ifdef _WIN32 /* ** On windows, create a child process and specify the stdin, stdout, ** and stderr channels for that process to use. ** | > > > > > > > > > > > > > > > > > > > > > > > > > > | 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | ** Print a fatal error and quit. */ static void win32_fatal_error(const char *zMsg){ fossil_fatal("%s"); } #endif /* ** The following macros are used to cast pointers to integers and ** integers to pointers. The way you do this varies from one compiler ** to the next, so we have developed the following set of #if statements ** to generate appropriate macros for a wide range of compilers. ** ** The correct "ANSI" way to do this is to use the intptr_t type. ** Unfortunately, that typedef is not available on all compilers, or ** if it is available, it requires an #include of specific headers ** that vary from one machine to the next. ** ** This code is copied out of SQLite. */ #if defined(__PTRDIFF_TYPE__) /* This case should work for GCC */ # define INT_TO_PTR(X) ((void*)(__PTRDIFF_TYPE__)(X)) # define PTR_TO_INT(X) ((int)(__PTRDIFF_TYPE__)(X)) #elif !defined(__GNUC__) /* Works for compilers other than LLVM */ # define INT_TO_PTR(X) ((void*)&((char*)0)[X]) # define PTR_TO_INT(X) ((int)(((char*)X)-(char*)0)) #elif defined(HAVE_STDINT_H) /* Use this case if we have ANSI headers */ # define INT_TO_PTR(X) ((void*)(intptr_t)(X)) # define PTR_TO_INT(X) ((int)(intptr_t)(X)) #else /* Generates a warning - but it always works */ # define INT_TO_PTR(X) ((void*)(X)) # define PTR_TO_INT(X) ((int)(X)) #endif #ifdef _WIN32 /* ** On windows, create a child process and specify the stdin, stdout, ** and stderr channels for that process to use. ** |
︙ | ︙ | |||
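The effect of the two macros defined above is simply a round trip between an integer and a pointer-sized value; the next hunk shows them applied to the pipe HANDLEs passed to _open_osfhandle(). A trivial illustration (not from the check-in):

    void *p = INT_TO_PTR(1234);   /* integer carried in a pointer-sized slot */
    int   i = PTR_TO_INT(p);      /* i is 1234 again */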
114 115 116 117 118 119 120 | win32_fatal_error("cannot create pipe for stdin"); } SetHandleInformation( hStdinWr, HANDLE_FLAG_INHERIT, FALSE); win32_create_child_process((char*)zCmd, hStdinRd, hStdoutWr, hStderr,&childPid); *pChildPid = childPid; | | | | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | win32_fatal_error("cannot create pipe for stdin"); } SetHandleInformation( hStdinWr, HANDLE_FLAG_INHERIT, FALSE); win32_create_child_process((char*)zCmd, hStdinRd, hStdoutWr, hStderr,&childPid); *pChildPid = childPid; *pfdIn = _open_osfhandle(PTR_TO_INT(hStdoutRd), 0); fd = _open_osfhandle(PTR_TO_INT(hStdinWr), 0); *ppOut = _fdopen(fd, "w"); CloseHandle(hStdinRd); CloseHandle(hStdoutWr); return 0; #else int pin[2], pout[2]; *pfdIn = 0; |
︙ | ︙ |
Changes to src/pqueue.c.
︙ | ︙ | |||
36 37 38 39 40 41 42 43 44 45 46 47 48 49 | ** Integers must be positive. */ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ double value; /* Value of element. Kept in ascending order */ } *a; }; #endif /* ** Initialize a PQueue structure | > | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | ** Integers must be positive. */ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ void *p; /* Content pointer */ double value; /* Value of element. Kept in ascending order */ } *a; }; #endif /* ** Initialize a PQueue structure |
︙ | ︙ | |||
67 68 69 70 71 72 73 | p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ | | > | > > | 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ void pqueue_insert(PQueue *p, int e, double v, void *pData){ int i, j; if( p->cnt+1>p->sz ){ pqueue_resize(p, p->cnt+5); } for(i=0; i<p->cnt; i++){ if( p->a[i].value>v ){ for(j=p->cnt; j>i; j--){ p->a[j] = p->a[j-1]; } break; } } p->a[i].id = e; p->a[i].p = pData; p->a[i].value = v; p->cnt++; } /* ** Extract the first element from the queue (the element with ** the smallest value) and return its ID. Return 0 if the queue ** is empty. */ int pqueue_extract(PQueue *p, void **pp){ int e, i; if( p->cnt==0 ){ if( pp ) *pp = 0; return 0; } e = p->a[0].id; if( pp ) *pp = p->a[0].p; for(i=0; i<p->cnt-1; i++){ p->a[i] = p->a[i+1]; } p->cnt--; return e; } |
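With the new content-pointer argument, typical use of the queue looks like the sketch below. This is an editor's illustration, not code from the check-in; pqueue_init() and pqueue_clear() are assumed to be the setup and teardown routines implied by the rest of this file:

    PQueue q;
    void *pData;
    int id;
    pqueue_init(&q);                            /* assumed initializer */
    pqueue_insert(&q, 7, 2.0, (void*)"seven");
    pqueue_insert(&q, 3, 1.0, (void*)"three");
    while( (id = pqueue_extract(&q, &pData))!=0 ){
      /* values come out in ascending order: 3 "three", then 7 "seven" */
      printf("%d %s\n", id, (const char*)pData);
    }
    pqueue_clear(&q);                           /* assumed teardown */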
Changes to src/printf.c.
︙ | ︙ | |||
807 808 809 810 811 812 813 | */ void fossil_print(const char *zFormat, ...){ va_list ap; va_start(ap, zFormat); if( g.cgiOutput ){ cgi_vprintf(zFormat, ap); }else{ | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 | */ void fossil_print(const char *zFormat, ...){ va_list ap; va_start(ap, zFormat); if( g.cgiOutput ){ cgi_vprintf(zFormat, ap); }else{ Blob b = empty_blob; vxprintf(&b, zFormat, ap); fwrite(blob_buffer(&b), 1, blob_size(&b), stdout); blob_reset(&b); } } /* ** Case insensitive string comparison. */ int fossil_strnicmp(const char *zA, const char *zB, int nByte){ if( zA==0 ){ if( zB==0 ) return 0; return -1; }else if( zB==0 ){ return +1; } if( nByte<0 ) nByte = strlen(zB); return sqlite3_strnicmp(zA, zB, nByte); } int fossil_stricmp(const char *zA, const char *zB){ int nByte; int rc; if( zA==0 ){ if( zB==0 ) return 0; return -1; }else if( zB==0 ){ return +1; } nByte = strlen(zB); rc = sqlite3_strnicmp(zA, zB, nByte); if( rc==0 && zA[nByte] ) rc = 1; return rc; } |
Changes to src/rebuild.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 | #include "config.h" #include "rebuild.h" #include <assert.h> #include <dirent.h> #include <errno.h> /* | | > > | | > > > > < < < < < < < < < < < | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | #include "config.h" #include "rebuild.h" #include <assert.h> #include <dirent.h> #include <errno.h> /* ** Make changes to the stable part of the schema (the part that is not ** simply deleted and reconstructed on a rebuild) to bring the schema ** up to the latest. */ static const char zSchemaUpdates1[] = @ -- Index on the delta table @ -- @ CREATE INDEX IF NOT EXISTS delta_i1 ON delta(srcid); @ @ -- Artifacts that should not be processed are identified in the @ -- "shun" table. Artifacts that are control-file forgeries or @ -- spam or artifacts whose contents violate administrative policy @ -- can be shunned in order to prevent them from contaminating @ -- the repository. @ -- @ -- Shunned artifacts do not exist in the blob table. Hence they @ -- have not artifact ID (rid) and we thus must store their full @ -- UUID. @ -- @ CREATE TABLE IF NOT EXISTS shun( @ uuid UNIQUE, -- UUID of artifact to be shunned. Canonical form @ mtime INTEGER, -- When added. Seconds since 1970 @ scom TEXT -- Optional text explaining why the shun occurred @ ); @ @ -- Artifacts that should not be pushed are stored in the "private" @ -- table. @ -- @ CREATE TABLE IF NOT EXISTS private(rid INTEGER PRIMARY KEY); @ @ -- Some ticket content (such as the originators email address or contact @ -- information) needs to be obscured to protect privacy. This is achieved @ -- by storing an SHA1 hash of the content. For display, the hash is @ -- mapped back into the original text using this table. @ -- @ -- This table contains sensitive information and should not be shared @ -- with unauthorized users. @ -- @ CREATE TABLE IF NOT EXISTS concealed( @ hash TEXT PRIMARY KEY, -- The SHA1 hash of content @ mtime INTEGER, -- Time created. Seconds since 1970 @ content TEXT -- Content intended to be concealed @ ); ; static const char zSchemaUpdates2[] = @ -- An entry in this table describes a database query that generates a @ -- table of tickets. @ -- @ CREATE TABLE IF NOT EXISTS reportfmt( @ rn INTEGER PRIMARY KEY, -- Report number @ owner TEXT, -- Owner of this report format (not used) @ title TEXT UNIQUE, -- Title of this report @ mtime INTEGER, -- Time last modified. 
Seconds since 1970 @ cols TEXT, -- A color-key specification @ sqlcode TEXT -- An SQL SELECT statement for this report @ ); ; static void rebuild_update_schema(void){ int rc; db_multi_exec(zSchemaUpdates1); db_multi_exec(zSchemaUpdates2); rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='user' AND sql GLOB '* mtime *'"); if( rc==0 ){ db_multi_exec( "CREATE TEMP TABLE temp_user AS SELECT * FROM user;" "DROP TABLE user;" "CREATE TABLE user(\n" " uid INTEGER PRIMARY KEY,\n" " login TEXT UNIQUE,\n" " pw TEXT,\n" " cap TEXT,\n" " cookie TEXT,\n" " ipaddr TEXT,\n" " cexpire DATETIME,\n" " info TEXT,\n" " mtime DATE,\n" " photo BLOB\n" ");" "INSERT OR IGNORE INTO user" " SELECT uid, login, pw, cap, cookie," " ipaddr, cexpire, info, now(), photo FROM temp_user;" "DROP TABLE temp_user;" ); } rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='config' AND sql GLOB '* mtime *'"); if( rc==0 ){ db_multi_exec( "ALTER TABLE config ADD COLUMN mtime INTEGER;" "UPDATE config SET mtime=now();" ); } rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='shun' AND sql GLOB '* mtime *'"); if( rc==0 ){ db_multi_exec( "ALTER TABLE shun ADD COLUMN mtime INTEGER;" "ALTER TABLE shun ADD COLUMN scom TEXT;" "UPDATE shun SET mtime=now();" ); } rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='reportfmt' AND sql GLOB '* mtime *'"); if( rc==0 ){ db_multi_exec( "CREATE TEMP TABLE old_fmt AS SELECT * FROM reportfmt;" "DROP TABLE reportfmt;" ); db_multi_exec(zSchemaUpdates2); db_multi_exec( "INSERT OR IGNORE INTO reportfmt(rn,owner,title,cols,sqlcode,mtime)" " SELECT rn, owner, title, cols, sqlcode, now() FROM old_fmt;" "INSERT OR IGNORE INTO reportfmt(rn,owner,title,cols,sqlcode,mtime)" " SELECT rn, owner, title || ' (' || rn || ')', cols, sqlcode, now()" " FROM old_fmt;" ); } rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='concealed' AND sql GLOB '* mtime *'"); if( rc==0 ){ db_multi_exec( "ALTER TABLE concealed ADD COLUMN mtime INTEGER;" "UPDATE concealed SET mtime=now();" ); } } /* ** Variables used to store state information about an on-going "rebuild" ** or "deconstruct". */ static int totalSize; /* Total number of artifacts to process */ static int processCnt; /* Number processed so far */ |
︙ | ︙ | |||
172 173 174 175 176 177 178 179 | }else{ /* We are doing "fossil deconstruct" */ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); char *zFile = mprintf(zFNameFormat, zUuid, zUuid+prefixLength); blob_write_to_file(pUse,zFile); free(zFile); free(zUuid); } | > | | 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 | }else{ /* We are doing "fossil deconstruct" */ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); char *zFile = mprintf(zFNameFormat, zUuid, zUuid+prefixLength); blob_write_to_file(pUse,zFile); free(zFile); free(zUuid); blob_reset(pUse); } assert( blob_is_reset(pUse) ); rebuild_step_done(rid); /* Call all children recursively */ rid = 0; for(cid=bag_first(&children), i=1; cid; cid=bag_next(&children, cid), i++){ static Stmt q2; int sz; |
︙ | ︙ | |||
241 242 243 244 245 246 247 | ** from scratch. ** ** If the randomize parameter is true, then the BLOBs are deliberately ** extracted in a random order. This feature is used to test the ** ability of fossil to accept records in any order and still ** construct a sane repository. */ | | | | | 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 | ** from scratch. ** ** If the randomize parameter is true, then the BLOBs are deliberately ** extracted in a random order. This feature is used to test the ** ability of fossil to accept records in any order and still ** construct a sane repository. */ int rebuild_db(int randomize, int doOut, int doClustering){ Stmt s; int errCnt = 0; char *zTable; int incrSize; bag_init(&bagDone); ttyOutput = doOut; processCnt = 0; if (!g.fQuiet) { percent_complete(0); } rebuild_update_schema(); for(;;){ zTable = db_text(0, "SELECT name FROM sqlite_master /*scan*/" " WHERE type='table'" " AND name NOT IN ('blob','delta','rcvfrom','user'," "'config','shun','private','reportfmt'," "'concealed','accesslog')" " AND name NOT GLOB 'sqlite_*'" ); if( zTable==0 ) break; db_multi_exec("DROP TABLE %Q", zTable); free(zTable); } db_multi_exec(zRepositorySchema2); |
︙ | ︙ | |||
326 327 328 329 330 331 332 | db_finalize(&s); manifest_crosslink_end(); rebuild_tag_trunk(); if( !g.fQuiet && totalSize>0 ){ processCnt += incrSize; percent_complete((processCnt*1000)/totalSize); } | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > > > > > > > > > > > > > > > > > > > > > > > > > | | | 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 | db_finalize(&s); manifest_crosslink_end(); rebuild_tag_trunk(); if( !g.fQuiet && totalSize>0 ){ processCnt += incrSize; percent_complete((processCnt*1000)/totalSize); } if( doClustering ) create_cluster(); if( !g.fQuiet && totalSize>0 ){ processCnt += incrSize; percent_complete((processCnt*1000)/totalSize); } if(!g.fQuiet && ttyOutput ){ printf("\n"); } return errCnt; } /* ** Attempt to convert more full-text blobs into delta-blobs for ** storage efficiency. */ static void extra_deltification(void){ Stmt q; int topid, previd, rid; int prevfnid, fnid; db_begin_transaction(); db_prepare(&q, "SELECT rid FROM event, blob" " WHERE blob.rid=event.objid" " AND event.type='ci'" " AND NOT EXISTS(SELECT 1 FROM delta WHERE rid=blob.rid)" " ORDER BY event.mtime DESC" ); topid = previd = 0; while( db_step(&q)==SQLITE_ROW ){ rid = db_column_int(&q, 0); if( topid==0 ){ topid = previd = rid; }else{ if( content_deltify(rid, previd, 0)==0 && previd!=topid ){ content_deltify(rid, topid, 0); } previd = rid; } } db_finalize(&q); db_prepare(&q, "SELECT blob.rid, mlink.fnid FROM blob, mlink, plink" " WHERE NOT EXISTS(SELECT 1 FROM delta WHERE rid=blob.rid)" " AND mlink.fid=blob.rid" " AND mlink.mid=plink.cid" " AND plink.cid=mlink.mid" " ORDER BY mlink.fnid, plink.mtime DESC" ); prevfnid = 0; while( db_step(&q)==SQLITE_ROW ){ rid = db_column_int(&q, 0); fnid = db_column_int(&q, 1); if( prevfnid!=fnid ){ prevfnid = fnid; topid = previd = rid; }else{ if( content_deltify(rid, previd, 0)==0 && previd!=topid ){ content_deltify(rid, topid, 0); } previd = rid; } } db_finalize(&q); db_end_transaction(0); } /* ** COMMAND: rebuild ** ** Usage: %fossil rebuild ?REPOSITORY? ** ** Reconstruct the named repository database from the core ** records. Run this command after updating the fossil ** executable in a way that changes the database schema. ** ** Options: ** ** --noverify Skip the verification of changes to the BLOB table ** --force Force the rebuild to complete even if errors are seen ** --randomize Scan artifacts in a random order ** --cluster Compute clusters for unclustered artifacts ** --pagesize N Set the database pagesize to N. 
(512..65536 and power of 2) ** --wal Set Write-Ahead-Log journalling mode on the database ** --compress Strive to make the database as small as possible ** --vacuum Run VACUUM on the database after rebuilding */ void rebuild_database(void){ int forceFlag; int randomizeFlag; int errCnt; int omitVerify; int doClustering; const char *zPagesize; int newPagesize = 0; int activateWal; int runVacuum; int runCompress; omitVerify = find_option("noverify",0,0)!=0; forceFlag = find_option("force","f",0)!=0; randomizeFlag = find_option("randomize", 0, 0)!=0; doClustering = find_option("cluster", 0, 0)!=0; runVacuum = find_option("vacuum",0,0)!=0; runCompress = find_option("compress",0,0)!=0; zPagesize = find_option("pagesize",0,1); if( zPagesize ){ newPagesize = atoi(zPagesize); if( newPagesize<512 || newPagesize>65536 || (newPagesize&(newPagesize-1))!=0 ){ fossil_fatal("page size must be a power of two between 512 and 65536"); } } activateWal = find_option("wal",0,0)!=0; if( g.argc==3 ){ db_open_repository(g.argv[2]); }else{ db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); if( g.argc!=2 ){ usage("?REPOSITORY-FILENAME?"); } db_close(1); db_open_repository(g.zRepositoryName); } db_begin_transaction(); ttyOutput = 1; errCnt = rebuild_db(randomizeFlag, 1, doClustering); db_multi_exec( "REPLACE INTO config(name,value,mtime) VALUES('content-schema','%s',now());" "REPLACE INTO config(name,value,mtime) VALUES('aux-schema','%s',now());", CONTENT_SCHEMA, AUX_SCHEMA ); if( errCnt && !forceFlag ){ printf("%d errors. Rolling back changes. Use --force to force a commit.\n", errCnt); db_end_transaction(1); }else{ if( runCompress ){ printf("Extra delta compression... "); fflush(stdout); extra_deltification(); runVacuum = 1; } if( omitVerify ) verify_cancel(); db_end_transaction(0); if( runCompress ) printf("done\n"); db_close(0); db_open_repository(g.zRepositoryName); if( newPagesize ){ db_multi_exec("PRAGMA page_size=%d", newPagesize); runVacuum = 1; } if( runVacuum ){ printf("Vacuuming the database... "); fflush(stdout); db_multi_exec("VACUUM"); printf("done\n"); } if( activateWal ){ db_multi_exec("PRAGMA journal_mode=WAL;"); } } } /* ** COMMAND: test-detach ?REPOSITORY? ** ** Change the project-code and make other changes in order to prevent ** the repository from ever again pushing or pulling to other ** repositories. Used to create a "test" repository for development ** testing by cloning a working project repository. */ void test_detach_cmd(void){ db_find_and_open_repository(0, 2); db_begin_transaction(); db_multi_exec( "DELETE FROM config WHERE name='last-sync-url';" "UPDATE config SET value=lower(hex(randomblob(20)))" " WHERE name='project-code';" "UPDATE config SET value='detached-' || value" " WHERE name='project-name' AND value NOT GLOB 'detached-*';" |
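Putting the new options together, a typical space-optimizing rebuild might be invoked as (illustrative; the repository name is made up):

    fossil rebuild repo.fossil --compress --pagesize 4096 --wal

Note that --compress, like a --pagesize change, ends up running VACUUM as well, since the code above sets runVacuum in both cases.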
︙ | ︙ | |||
409 410 411 412 413 414 415 | ** Create clusters for all unclustered artifacts if the number of unclustered ** artifacts exceeds the current clustering threshold. */ void test_createcluster_cmd(void){ if( g.argc==3 ){ db_open_repository(g.argv[2]); }else{ | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | > > > > > > | > | < | > > | > > > > > > > | | | | < | | | | < | > | | 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 | ** Create clusters for all unclustered artifacts if the number of unclustered ** artifacts exceeds the current clustering threshold. */ void test_createcluster_cmd(void){ if( g.argc==3 ){ db_open_repository(g.argv[2]); }else{ db_find_and_open_repository(0, 0); if( g.argc!=2 ){ usage("?REPOSITORY-FILENAME?"); } db_close(1); db_open_repository(g.zRepositoryName); } db_begin_transaction(); create_cluster(); db_end_transaction(0); } /* ** COMMAND: test-clusters ** ** Verify that all non-private and non-shunned artifacts are accessible ** through the cluster chain. */ void test_clusters_cmd(void){ Bag pending; Stmt q; int n; db_find_and_open_repository(0, 2); bag_init(&pending); db_multi_exec( "CREATE TEMP TABLE xdone(x INTEGER PRIMARY KEY);" "INSERT INTO xdone SELECT rid FROM unclustered;" "INSERT OR IGNORE INTO xdone SELECT rid FROM private;" "INSERT OR IGNORE INTO xdone" " SELECT blob.rid FROM shun JOIN blob USING(uuid);" ); db_prepare(&q, "SELECT rid FROM unclustered WHERE rid IN" " (SELECT rid FROM tagxref WHERE tagid=%d)", TAG_CLUSTER ); while( db_step(&q)==SQLITE_ROW ){ bag_insert(&pending, db_column_int(&q, 0)); } db_finalize(&q); while( bag_count(&pending)>0 ){ Manifest *p; int rid = bag_first(&pending); int i; bag_remove(&pending, rid); p = manifest_get(rid, CFTYPE_CLUSTER); if( p==0 ){ fossil_fatal("bad cluster: rid=%d", rid); } for(i=0; i<p->nCChild; i++){ const char *zUuid = p->azCChild[i]; int crid = name_to_rid(zUuid); if( crid==0 ){ fossil_warning("cluster (rid=%d) references unknown artifact %s", rid, zUuid); continue; } db_multi_exec("INSERT OR IGNORE INTO xdone VALUES(%d)", crid); if( db_exists("SELECT 1 FROM tagxref WHERE tagid=%d AND rid=%d", TAG_CLUSTER, crid) ){ bag_insert(&pending, crid); } } manifest_destroy(p); } n = db_int(0, "SELECT count(*) FROM /*scan*/" " (SELECT rid FROM blob EXCEPT SELECT x FROM xdone)"); if( n==0 ){ printf("all artifacts reachable through clusters\n"); }else{ printf("%d unreachable artifacts:\n", n); db_prepare(&q, "SELECT rid, uuid FROM blob WHERE rid NOT IN xdone"); while( db_step(&q)==SQLITE_ROW ){ printf(" %3d %s\n", db_column_int(&q,0), db_column_text(&q,1)); } db_finalize(&q); } } /* ** COMMAND: scrub ** %fossil scrub [--verily] [--force] [--private] [REPOSITORY] ** ** The command removes sensitive information (such as passwords) from a ** repository so that the respository can be sent to an untrusted reader. 
** ** By default, only passwords are removed. However, if the --verily option ** is added, then private branches, concealed email addresses, IP ** addresses of correspondents, and similar privacy-sensitive fields ** are also purged. If the --private option is used, then only private ** branches are removed and all other information is left intact. ** ** This command permanently deletes the scrubbed information. The effects ** of this command are irreversible. Use with caution. ** ** The user is prompted to confirm the scrub unless the --force option ** is used. */ void scrub_cmd(void){ int bVerily = find_option("verily",0,0)!=0; int bForce = find_option("force", "f", 0)!=0; int privateOnly = find_option("private",0,0)!=0; int bNeedRebuild = 0; if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); if( g.argc==2 ){ db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); if( g.argc!=2 ){ usage("?REPOSITORY-FILENAME?"); } db_close(1); db_open_repository(g.zRepositoryName); }else{ db_open_repository(g.argv[2]); } if( !bForce ){ Blob ans; blob_zero(&ans); prompt_user("Scrubbing the repository will permanently delete information.\n" "Changes cannot be undone. Continue (y/N)? ", &ans); if( blob_str(&ans)[0]!='y' ){ fossil_exit(1); } } db_begin_transaction(); if( privateOnly || bVerily ){ bNeedRebuild = db_exists("SELECT 1 FROM private"); db_multi_exec( "DELETE FROM blob WHERE rid IN private;" "DELETE FROM delta WHERE rid IN private;" "DELETE FROM private;" ); } if( !privateOnly ){ db_multi_exec( "UPDATE user SET pw='';" "DELETE FROM config WHERE name GLOB 'last-sync-*';" ); if( bVerily ){ db_multi_exec( "DELETE FROM concealed;" "UPDATE rcvfrom SET ipaddr='unknown';" "UPDATE user SET photo=NULL, info='';" ); } } if( !bNeedRebuild ){ db_end_transaction(0); db_multi_exec("VACUUM;"); }else{ rebuild_db(0, 1, 0); db_end_transaction(0); } } /* ** Recursively read all files from the directory zPath and install ** every file read as a new artifact in the repository.
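The three scrub levels described in that help text correspond to invocations like the following (illustrative; the repository name is made up):

    fossil scrub repo.fossil                    # default: passwords (and saved sync URLs)
    fossil scrub --private repo.fossil          # private branches only
    fossil scrub --verily --force repo.fossil   # full scrub, skipping the confirmation prompt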
︙ | ︙ | |||
511 512 513 514 515 516 517 | } blob_init(&path, 0, 0); blob_appendf(&path, "%s", zSubpath); if( blob_read_from_file(&aContent, blob_str(&path))==-1 ){ fossil_panic("some unknown error occurred while reading \"%s\"", blob_str(&path)); } | | > | 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 | } blob_init(&path, 0, 0); blob_appendf(&path, "%s", zSubpath); if( blob_read_from_file(&aContent, blob_str(&path))==-1 ){ fossil_panic("some unknown error occurred while reading \"%s\"", blob_str(&path)); } content_put(&aContent); blob_reset(&path); blob_reset(&aContent); free(zSubpath); printf("\r%d", ++nFileRead); fflush(stdout); } closedir(d); }else { fossil_panic("encountered error %d while trying to open \"%s\".", errno, g.argv[3]); } } /* |
︙ | ︙ | |||
554 555 556 557 558 559 560 | db_begin_transaction(); db_initial_setup(0, 0, 1); printf("Reading files from directory \"%s\"...\n", g.argv[3]); recon_read_dir(g.argv[3]); printf("\nBuilding the Fossil repository...\n"); | | > > > > > > > > > > > > > > > | > > > > | | 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 | db_begin_transaction(); db_initial_setup(0, 0, 1); printf("Reading files from directory \"%s\"...\n", g.argv[3]); recon_read_dir(g.argv[3]); printf("\nBuilding the Fossil repository...\n"); rebuild_db(0, 1, 1); /* Reconstruct the private table. The private table contains the rid ** of every manifest that is tagged with "private" and every file that ** is not used by a manifest that is not private. */ db_multi_exec( "CREATE TEMP TABLE private_ckin(rid INTEGER PRIMARY KEY);" "INSERT INTO private_ckin " " SELECT rid FROM tagxref WHERE tagid=%d AND tagtype>0;" "INSERT OR IGNORE INTO private" " SELECT fid FROM mlink" " EXCEPT SELECT fid FROM mlink WHERE mid NOT IN private_ckin;" "INSERT OR IGNORE INTO private SELECT rid FROM private_ckin;" "DROP TABLE private_ckin;" ); /* Skip the verify_before_commit() step on a reconstruct. Most artifacts ** will have been changed and verification therefore takes a really, really ** long time. */ verify_cancel(); db_end_transaction(0); printf("project-id: %s\n", db_get("project-code", 0)); printf("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); printf("admin-user: %s (initial password is \"%s\")\n", g.zLogin, zPassword); } /* ** COMMAND: deconstruct ** ** Usage %fossil deconstruct ?OPTIONS? DESTIONATION ** ** Options: ** -R|--repository REPOSITORY ** -L|--prefixlength N ** ** This command exports all artifacts of a given repository and ** writes all artifacts to the file system. The DESTINATION directory ** will be populated with subdirectories AA and files AA/BBBBBBBBB.., where ** AABBBBBBBBB.. is the 40 character artifact ID, AA the first 2 characters. ** If -L|--prefixlength is given, the length (default 2) of the directory ** prefix can be set to 0,1,..,9 characters. */ void deconstruct_cmd(void){ |
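So, for a made-up artifact ID beginning a1b2c3...9f, the deconstruct layout described above works out to:

    default (-L 2):   DESTINATION/a1/b2c3...9f
    -L 0:             DESTINATION/a1b2c3...9f
    -L 4:             DESTINATION/a1b2/c3...9f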
︙ | ︙ | |||
621 622 623 624 625 626 627 | #endif if( prefixLength ){ zFNameFormat = mprintf("%s/%%.%ds/%%s",zDestDir,prefixLength); }else{ zFNameFormat = mprintf("%s/%%s",zDestDir); } /* open repository and open query for all artifacts */ | | | 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 | #endif if( prefixLength ){ zFNameFormat = mprintf("%s/%%.%ds/%%s",zDestDir,prefixLength); }else{ zFNameFormat = mprintf("%s/%%s",zDestDir); } /* open repository and open query for all artifacts */ db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); bag_init(&bagDone); ttyOutput = 1; processCnt = 0; if (!g.fQuiet) { printf("0 (0%%)...\r"); fflush(stdout); } |
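The zFNameFormat string above is built with a nested format ("%%.%ds" expands to a precision equal to the chosen prefix length), so each 40-character artifact ID is split into a short directory component and a file component, as described in the deconstruct help text. A small sketch of the same splitting, assuming the layout described above (prefix directory plus the remaining characters); artifact_path is a made-up helper and the sample ID is a placeholder.

    #include <stdio.h>

    /* Build the output path used by "fossil deconstruct":
    ** DESTINATION/<prefix>/<rest> for a prefix of prefixLen characters,
    ** or DESTINATION/<full id> when prefixLen is zero.
    ** "%.*s" takes its precision from the prefixLen argument. */
    static void artifact_path(char *zOut, int nOut, const char *zDest,
                              const char *zUuid, int prefixLen){
      if( prefixLen>0 ){
        snprintf(zOut, (size_t)nOut, "%s/%.*s/%s",
                 zDest, prefixLen, zUuid, zUuid+prefixLen);
      }else{
        snprintf(zOut, (size_t)nOut, "%s/%s", zDest, zUuid);
      }
    }

    /* Example: with prefixLen==2 and the placeholder ID
    ** "0123456789abcdef0123456789abcdef01234567", the result is
    ** "dest/01/23456789abcdef0123456789abcdef01234567". */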
︙ | ︙ |
Changes to src/report.c.
︙ | ︙ | |||
14 15 16 17 18 19 20 21 22 23 24 25 26 27 | ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** Code to generate the ticket listings */ #include "config.h" #include "report.h" #include <assert.h> /* Forward references to static routines */ static void report_format_hints(void); /* | > | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 | ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** Code to generate the ticket listings */ #include "config.h" #include <time.h> #include "report.h" #include <assert.h> /* Forward references to static routines */ static void report_format_hints(void); /* |
︙ | ︙ | |||
62 63 64 65 66 67 68 | blob_appendf(&ril, " "); if( g.okWrite && zOwner && zOwner[0] ){ blob_appendf(&ril, "(by <i>%h</i></i>) ", zOwner); } if( g.okTktFmt ){ blob_appendf(&ril, "[<a href=\"rptedit?rn=%d&copy=1\" rel=\"nofollow\">copy</a>] ", rn); } | | > > | 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | blob_appendf(&ril, " "); if( g.okWrite && zOwner && zOwner[0] ){ blob_appendf(&ril, "(by <i>%h</i></i>) ", zOwner); } if( g.okTktFmt ){ blob_appendf(&ril, "[<a href=\"rptedit?rn=%d&copy=1\" rel=\"nofollow\">copy</a>] ", rn); } if( g.okAdmin || (g.okWrTkt && zOwner && fossil_strcmp(g.zLogin,zOwner)==0) ){ blob_appendf(&ril, "[<a href=\"rptedit?rn=%d\" rel=\"nofollow\">edit</a>] ", rn); } if( g.okTktFmt ){ blob_appendf(&ril, "[<a href=\"rptsql?rn=%d\" rel=\"nofollow\">sql</a>] ", rn); } blob_appendf(&ril, "</li>\n"); } |
︙ | ︙ | |||
174 175 176 177 178 179 180 | "plink", "event", "tag", "tagxref", }; int i; for(i=0; i<sizeof(azAllowed)/sizeof(azAllowed[0]); i++){ | | | 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 | "plink", "event", "tag", "tagxref", }; int i; for(i=0; i<sizeof(azAllowed)/sizeof(azAllowed[0]); i++){ if( fossil_stricmp(zArg1, azAllowed[i])==0 ) break; } if( i>=sizeof(azAllowed)/sizeof(azAllowed[0]) ){ *(char**)pError = mprintf("access to table \"%s\" is restricted",zArg1); rc = SQLITE_DENY; }else if( !g.okRdAddr && strncmp(zArg2, "private_", 8)==0 ){ rc = SQLITE_IGNORE; } |
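report_query_authorizer() above is the standard sqlite3_set_authorizer() pattern: returning SQLITE_DENY makes the statement fail to prepare, while SQLITE_IGNORE silently substitutes NULL for a restricted column. A stripped-down, self-contained illustration of those two return codes; the table and column names here are examples only.

    #include <string.h>
    #include <sqlite3.h>

    /* Allow reads only from the "ticket" table, and hide any column whose
    ** name begins with "private_" by returning NULL for it. */
    static int demo_authorizer(void *pArg, int op, const char *z1, const char *z2,
                               const char *z3, const char *z4){
      if( op!=SQLITE_READ ) return SQLITE_OK;          /* only filter column reads */
      if( z1==0 || strcmp(z1, "ticket")!=0 ) return SQLITE_DENY;
      if( z2 && strncmp(z2, "private_", 8)==0 ) return SQLITE_IGNORE;
      return SQLITE_OK;
    }

    /* Installation:  sqlite3_set_authorizer(db, demo_authorizer, 0);
    ** A query touching any other table now fails to prepare, and
    ** private_* columns come back as NULL instead of leaking data. */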
︙ | ︙ | |||
211 212 213 214 215 216 217 | sqlite3_stmt *pStmt; int rc; /* First make sure the SQL is a single query command by verifying that ** the first token is "SELECT" and that there are no unquoted semicolons. */ for(i=0; fossil_isspace(zSql[i]); i++){} | | | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 | sqlite3_stmt *pStmt; int rc; /* First make sure the SQL is a single query command by verifying that ** the first token is "SELECT" and that there are no unquoted semicolons. */ for(i=0; fossil_isspace(zSql[i]); i++){} if( fossil_strnicmp(&zSql[i],"select",6)!=0 ){ return mprintf("The SQL must be a SELECT statement"); } for(i=0; zSql[i]; i++){ if( zSql[i]==';' ){ int bad; int c = zSql[i+1]; zSql[i+1] = 0; |
︙ | ︙ | |||
237 238 239 240 241 242 243 244 245 246 247 248 249 250 | /* Compile the statement and check for illegal accesses or syntax errors. */ sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr); rc = sqlite3_prepare(g.db, zSql, -1, &pStmt, &zTail); if( rc!=SQLITE_OK ){ zErr = mprintf("Syntax error: %s", sqlite3_errmsg(g.db)); } if( pStmt ){ sqlite3_finalize(pStmt); } sqlite3_set_authorizer(g.db, 0, 0); return zErr; } | > > > | 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 | /* Compile the statement and check for illegal accesses or syntax errors. */ sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr); rc = sqlite3_prepare(g.db, zSql, -1, &pStmt, &zTail); if( rc!=SQLITE_OK ){ zErr = mprintf("Syntax error: %s", sqlite3_errmsg(g.db)); } if( !sqlite3_stmt_readonly(pStmt) ){ zErr = mprintf("SQL must not modify the database"); } if( pStmt ){ sqlite3_finalize(pStmt); } sqlite3_set_authorizer(g.db, 0, 0); return zErr; } |
︙ | ︙ | |||
352 353 354 355 356 357 358 359 360 361 362 | if( zSQL[0]==0 ){ zErr = "Please supply an SQL query statement"; }else if( (zTitle = trim_string(zTitle))[0]==0 ){ zErr = "Please supply a title"; }else{ zErr = verify_sql_statement(zSQL); } if( zErr==0 ){ login_verify_csrf_secret(); if( rn>0 ){ db_multi_exec("UPDATE reportfmt SET title=%Q, sqlcode=%Q," | > > > > > > | | | | 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 | if( zSQL[0]==0 ){ zErr = "Please supply an SQL query statement"; }else if( (zTitle = trim_string(zTitle))[0]==0 ){ zErr = "Please supply a title"; }else{ zErr = verify_sql_statement(zSQL); } if( zErr==0 && db_exists("SELECT 1 FROM reportfmt WHERE title=%Q and rn<>%d", zTitle, rn) ){ zErr = mprintf("There is already another report named \"%h\"", zTitle); } if( zErr==0 ){ login_verify_csrf_secret(); if( rn>0 ){ db_multi_exec("UPDATE reportfmt SET title=%Q, sqlcode=%Q," " owner=%Q, cols=%Q, mtime=now() WHERE rn=%d", zTitle, zSQL, zOwner, zClrKey, rn); }else{ db_multi_exec("INSERT INTO reportfmt(title,sqlcode,owner,cols,mtime) " "VALUES(%Q,%Q,%Q,%Q,now())", zTitle, zSQL, zOwner, zClrKey); rn = db_last_insert_rowid(); } cgi_redirect(mprintf("rptview?rn=%d", rn)); return; } }else if( rn==0 ){ |
︙ | ︙ | |||
418 419 420 421 422 423 424 | } @ <p>Enter an optional color key in the following box. (If blank, no @ color key is displayed.) Each line contains the text for a single @ entry in the key. The first token of each line is the background @ color for that line.<br /> @ <textarea name="k" rows="8" cols="50">%h(zClrKey)</textarea> @ </p> | | | 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 | } @ <p>Enter an optional color key in the following box. (If blank, no @ color key is displayed.) Each line contains the text for a single @ entry in the key. The first token of each line is the background @ color for that line.<br /> @ <textarea name="k" rows="8" cols="50">%h(zClrKey)</textarea> @ </p> if( !g.okAdmin && fossil_strcmp(zOwner,g.zLogin)!=0 ){ @ <p>This report format is owned by %h(zOwner). You are not allowed @ to change it.</p> @ </form> report_format_hints(); style_footer(); return; } |
︙ | ︙ | |||
813 814 815 816 817 818 819 820 821 822 823 824 825 826 | free(zToFree); if( horiz ){ @ </tr> } @ </table> } /* ** WEBPAGE: /rptview ** ** Generate a report. The rn query parameter is the report number ** corresponding to REPORTFMT.RN. If the tablist query parameter exists, ** then the output consists of lines of tab-separated fields instead of | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 | free(zToFree); if( horiz ){ @ </tr> } @ </table> } /* ** Execute a single read-only SQL statement. Invoke xCallback() on each ** row. */ int sqlite3_exec_readonly( sqlite3 *db, /* The database on which the SQL executes */ const char *zSql, /* The SQL to be executed */ sqlite3_callback xCallback, /* Invoke this callback routine */ void *pArg, /* First argument to xCallback() */ char **pzErrMsg /* Write error messages here */ ){ int rc = SQLITE_OK; /* Return code */ const char *zLeftover; /* Tail of unprocessed SQL */ sqlite3_stmt *pStmt = 0; /* The current SQL statement */ char **azCols = 0; /* Names of result columns */ int nCol; /* Number of columns of output */ char **azVals = 0; /* Text of all output columns */ int i; /* Loop counter */ pStmt = 0; rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zLeftover); assert( rc==SQLITE_OK || pStmt==0 ); if( rc!=SQLITE_OK ){ return rc; } if( !pStmt ){ /* this happens for a comment or white-space */ return SQLITE_OK; } if( !sqlite3_stmt_readonly(pStmt) ){ sqlite3_finalize(pStmt); return SQLITE_ERROR; } nCol = sqlite3_column_count(pStmt); azVals = fossil_malloc(2*nCol*sizeof(const char*) + 1); while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){ if( azCols==0 ){ azCols = &azVals[nCol]; for(i=0; i<nCol; i++){ azCols[i] = (char *)sqlite3_column_name(pStmt, i); } } for(i=0; i<nCol; i++){ azVals[i] = (char *)sqlite3_column_text(pStmt, i); } if( xCallback(pArg, nCol, azVals, azCols) ){ break; } } rc = sqlite3_finalize(pStmt); fossil_free(azVals); return rc; } /* ** WEBPAGE: /rptview ** ** Generate a report. The rn query parameter is the report number ** corresponding to REPORTFMT.RN. If the tablist query parameter exists, ** then the output consists of lines of tab-separated fields instead of |
︙ | ︙ | |||
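The sqlite3_exec_readonly() helper added above prepares a single statement, refuses it if sqlite3_stmt_readonly() reports that it could write to the database, and otherwise feeds each row to the callback, much like sqlite3_exec(). A hedged usage sketch, assuming the helper is linked in exactly as declared above; the query and callback are illustrative.

    #include <stdio.h>
    #include <sqlite3.h>

    int sqlite3_exec_readonly(sqlite3*, const char*, sqlite3_callback, void*, char**);

    /* Print each result row as tab-separated values. */
    static int print_row(void *pArg, int nCol, char **azVal, char **azCol){
      int i;
      for(i=0; i<nCol; i++){
        printf("%s%s", i ? "\t" : "", azVal[i] ? azVal[i] : "NULL");
      }
      printf("\n");
      return 0;            /* returning non-zero would stop the iteration */
    }

    static void show_reports(sqlite3 *db){
      char *zErr = 0;
      int rc = sqlite3_exec_readonly(db,
                  "SELECT rn, title FROM reportfmt ORDER BY rn",
                  print_row, 0, &zErr);
      if( rc!=SQLITE_OK ) fprintf(stderr, "report failed (%d)\n", rc);
      /* A statement such as "DELETE FROM reportfmt" would be rejected here,
      ** because the prepared statement is not read-only. */
    }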
894 895 896 897 898 899 900 901 | } style_header(zTitle); output_color_key(zClrKey, 1, "border=\"0\" cellpadding=\"3\" cellspacing=\"0\" class=\"report\""); @ <table border="1" cellpadding="2" cellspacing="0" class="report"> sState.rn = rn; sState.nCount = 0; sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr1); | > | | | 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 | } style_header(zTitle); output_color_key(zClrKey, 1, "border=\"0\" cellpadding=\"3\" cellspacing=\"0\" class=\"report\""); @ <table border="1" cellpadding="2" cellspacing="0" class="report"> sState.rn = rn; sState.nCount = 0; (void)fossil_localtime(0); /* initialize the g.fTimeFormat variable */ sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr1); sqlite3_exec_readonly(g.db, zSql, generate_html, &sState, &zErr2); sqlite3_set_authorizer(g.db, 0, 0); @ </table> if( zErr1 ){ @ <p class="reportError">Error: %h(zErr1)</p> }else if( zErr2 ){ @ <p class="reportError">Error: %h(zErr2)</p> } style_footer(); }else{ sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr1); sqlite3_exec_readonly(g.db, zSql, output_tab_separated, &count, &zErr2); sqlite3_set_authorizer(g.db, 0, 0); cgi_set_content_type("text/plain"); } } /* ** report number for full table ticket export |
︙ | ︙ | |||
1053 1054 1055 1056 1057 1058 1059 | }else{ db_prepare(&q, "SELECT title, sqlcode, owner, cols FROM reportfmt WHERE title='%s'", zRep); } if( db_step(&q)!=SQLITE_ROW ){ db_finalize(&q); rpt_list_reports(); | | | | 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 | }else{ db_prepare(&q, "SELECT title, sqlcode, owner, cols FROM reportfmt WHERE title='%s'", zRep); } if( db_step(&q)!=SQLITE_ROW ){ db_finalize(&q); rpt_list_reports(); fossil_fatal("unknown report format(%s)!",zRep); } zTitle = db_column_malloc(&q, 0); zSql = db_column_malloc(&q, 1); zOwner = db_column_malloc(&q, 2); zClrKey = db_column_malloc(&q, 3); db_finalize(&q); } if( zFilter ){ zSql = mprintf("SELECT * FROM (%s) WHERE %s",zSql,zFilter); } count = 0; tktEncode = enc; zSep = zSepIn; sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr1); sqlite3_exec_readonly(g.db, zSql, output_separated_file, &count, &zErr2); sqlite3_set_authorizer(g.db, 0, 0); if( zFilter ){ free(zSql); } } |
Changes to src/rss.c.
︙ | ︙ | |||
125 126 127 128 129 130 131 | zPrefix = "*MERGE* "; }else if( nChild>1 ){ zPrefix = "*FORK* "; } @ <item> @ <title>%s(zPrefix)%h(zCom)</title> | | | | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | zPrefix = "*MERGE* "; }else if( nChild>1 ){ zPrefix = "*FORK* "; } @ <item> @ <title>%s(zPrefix)%h(zCom)</title> @ <link>%s(g.zBaseURL)/info/%s(zId)</link> @ <description>%s(zPrefix)%h(zCom)</description> @ <pubDate>%s(zDate)</pubDate> @ <author>%h(zAuthor)</author> @ <guid>%s(g.zBaseURL)/info/%s(zId)</guid> @ </item> free(zDate); nLine++; } db_finalize(&q); @ </channel> |
︙ | ︙ |
Changes to src/schema.c.
︙ | ︙ | |||
37 38 39 40 41 42 43 | /* ** The content tables have a content version number which rarely ** changes. The aux tables have an arbitrary version number (typically ** a date) which can change frequently. When the content schema changes, ** we have to execute special procedures to update the schema. When ** the aux schema changes, all we need to do is rebuild the database. */ | | | | > | > > > > > > > > > > | | > > | > > > > | | | > | | | > | 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 | /* ** The content tables have a content version number which rarely ** changes. The aux tables have an arbitrary version number (typically ** a date) which can change frequently. When the content schema changes, ** we have to execute special procedures to update the schema. When ** the aux schema changes, all we need to do is rebuild the database. */ #define CONTENT_SCHEMA "2" #define AUX_SCHEMA "2011-04-25 19:50" #endif /* INTERFACE */ /* ** The schema for a repository database. ** ** Schema1[] contains parts of the schema that are fixed and unchanging ** across versions. Schema2[] contains parts of the schema that can ** change from one version to the next. The information in Schema2[] ** is reconstructed from the information in Schema1[] by the "rebuild" ** operation. */ const char zRepositorySchema1[] = @ -- The BLOB and DELTA tables contain all records held in the repository. @ -- @ -- The BLOB.CONTENT column is always compressed using zlib. This @ -- column might hold the full text of the record or it might hold @ -- a delta that is able to reconstruct the record from some other @ -- record. If BLOB.CONTENT holds a delta, then a DELTA table entry @ -- will exist for the record and that entry will point to another @ -- entry that holds the source of the delta. Deltas can be chained. @ -- @ -- The blob and delta tables collectively hold the "global state" of @ -- a Fossil repository. @ -- @ CREATE TABLE blob( @ rid INTEGER PRIMARY KEY, -- Record ID @ rcvid INTEGER, -- Origin of this record @ size INTEGER, -- Size of content. -1 for a phantom. @ uuid TEXT UNIQUE NOT NULL, -- SHA1 hash of the content @ content BLOB, -- Compressed content of this record @ CHECK( length(uuid)==40 AND rid>0 ) @ ); @ CREATE TABLE delta( @ rid INTEGER PRIMARY KEY, -- Record ID @ srcid INTEGER NOT NULL REFERENCES blob -- Record holding source document @ ); @ CREATE INDEX delta_i1 ON delta(srcid); @ @ ------------------------------------------------------------------------- @ -- The BLOB and DELTA tables above hold the "global state" of a Fossil @ -- project; the stuff that is normally exchanged during "sync". The @ -- "local state" of a repository is contained in the remaining tables of @ -- the zRepositorySchema1 string. @ ------------------------------------------------------------------------- @ @ -- Whenever new blobs are received into the repository, an entry @ -- in this table records the source of the blob. @ -- @ CREATE TABLE rcvfrom( @ rcvid INTEGER PRIMARY KEY, -- Received-From ID @ uid INTEGER REFERENCES user, -- User login @ mtime DATETIME, -- Time of receipt. 
Julian day. @ nonce TEXT UNIQUE, -- Nonce used for login @ ipaddr TEXT -- Remote IP address. NULL for direct. @ ); @ @ -- Information about users @ -- @ -- The user.pw field can be either cleartext of the password, or @ -- a SHA1 hash of the password. If the user.pw field is exactly 40 @ -- characters long we assume it is a SHA1 hash. Otherwise, it is @ -- cleartext. The sha1_shared_secret() routine computes the password @ -- hash based on the project-code, the user login, and the cleartext @ -- password. @ -- @ CREATE TABLE user( @ uid INTEGER PRIMARY KEY, -- User ID @ login TEXT UNIQUE, -- login name of the user @ pw TEXT, -- password @ cap TEXT, -- Capabilities of this user @ cookie TEXT, -- WWW login cookie @ ipaddr TEXT, -- IP address for which cookie is valid @ cexpire DATETIME, -- Time when cookie expires @ info TEXT, -- contact information @ mtime DATE, -- last change. seconds since 1970 @ photo BLOB -- JPEG image of this user @ ); @ @ -- The VAR table holds miscellanous information about the repository. @ -- in the form of name-value pairs. @ -- @ CREATE TABLE config( @ name TEXT PRIMARY KEY NOT NULL, -- Primary name of the entry @ value CLOB, -- Content of the named parameter @ mtime DATE, -- last modified. seconds since 1970 @ CHECK( typeof(name)='text' AND length(name)>=1 ) @ ); @ @ -- Artifacts that should not be processed are identified in the @ -- "shun" table. Artifacts that are control-file forgeries or @ -- spam or artifacts whose contents violate administrative policy @ -- can be shunned in order to prevent them from contaminating @ -- the repository. @ -- @ -- Shunned artifacts do not exist in the blob table. Hence they @ -- have not artifact ID (rid) and we thus must store their full @ -- UUID. @ -- @ CREATE TABLE shun( @ uuid UNIQUE, -- UUID of artifact to be shunned. Canonical form @ mtime DATE, -- When added. seconds since 1970 @ scom TEXT -- Optional text explaining why the shun occurred @ ); @ @ -- Artifacts that should not be pushed are stored in the "private" @ -- table. Private artifacts are omitted from the "unclustered" and @ -- "unsent" tables. @ -- @ CREATE TABLE private(rid INTEGER PRIMARY KEY); @ @ -- An entry in this table describes a database query that generates a @ -- table of tickets. @ -- @ CREATE TABLE reportfmt( @ rn INTEGER PRIMARY KEY, -- Report number @ owner TEXT, -- Owner of this report format (not used) @ title TEXT UNIQUE, -- Title of this report @ mtime DATE, -- Last modified. seconds since 1970 @ cols TEXT, -- A color-key specification @ sqlcode TEXT -- An SQL SELECT statement for this report @ ); @ INSERT INTO reportfmt(title,mtime,cols,sqlcode) @ VALUES('All Tickets',julianday('1970-01-01'),'#ffffff Key: @ #f2dcdc Active @ #e8e8e8 Review @ #cfe8bd Fixed @ #bde5d6 Tested @ #cacae5 Deferred @ #c8c8c8 Closed','SELECT @ CASE WHEN status IN (''Open'',''Verified'') THEN ''#f2dcdc'' |
︙ | ︙ | |||
174 175 176 177 178 179 180 | @ -- by storing an SHA1 hash of the content. For display, the hash is @ -- mapped back into the original text using this table. @ -- @ -- This table contains sensitive information and should not be shared @ -- with unauthorized users. @ -- @ CREATE TABLE concealed( | | > | | | | | > | | > > > > > > > > > | | | > | 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 | @ -- by storing an SHA1 hash of the content. For display, the hash is @ -- mapped back into the original text using this table. @ -- @ -- This table contains sensitive information and should not be shared @ -- with unauthorized users. @ -- @ CREATE TABLE concealed( @ hash TEXT PRIMARY KEY, -- The SHA1 hash of content @ mtime DATE, -- Time created. Seconds since 1970 @ content TEXT -- Content intended to be concealed @ ); ; const char zRepositorySchema2[] = @ -- Filenames @ -- @ CREATE TABLE filename( @ fnid INTEGER PRIMARY KEY, -- Filename ID @ name TEXT UNIQUE -- Name of file page @ ); @ @ -- Linkages between checkins, files created by each checkin, and @ -- the names of those files. @ -- @ -- pid==0 if the file is added by checkin mid. @ -- fid==0 if the file is removed by checkin mid. @ -- @ CREATE TABLE mlink( @ mid INTEGER REFERENCES blob, -- Manifest ID where change occurs @ pid INTEGER REFERENCES blob, -- File ID in parent manifest @ fid INTEGER REFERENCES blob, -- Changed file ID in this manifest @ fnid INTEGER REFERENCES filename, -- Name of the file @ pfnid INTEGER REFERENCES filename, -- Previous name. 0 if unchanged @ mperm INTEGER -- File permissions. 1==exec @ ); @ CREATE INDEX mlink_i1 ON mlink(mid); @ CREATE INDEX mlink_i2 ON mlink(fnid); @ CREATE INDEX mlink_i3 ON mlink(fid); @ CREATE INDEX mlink_i4 ON mlink(pid); @ @ -- Parent/child linkages between checkins @ -- @ CREATE TABLE plink( @ pid INTEGER REFERENCES blob, -- Parent manifest @ cid INTEGER REFERENCES blob, -- Child manifest @ isprim BOOLEAN, -- pid is the primary parent of cid @ mtime DATETIME, -- the date/time stamp on cid. Julian day. @ UNIQUE(pid, cid) @ ); @ CREATE INDEX plink_i2 ON plink(cid,pid); @ @ -- A "leaf" checkin is a checkin that has no children in the same @ -- branch. The set of all leaves is easily computed with a join, @ -- between the plink and tagxref tables, but it is a slower join for @ -- very large repositories (repositories with 100,000 or more checkins) @ -- and so it makes sense to precompute the set of leaves. There is @ -- one entry in the following table for each leaf. @ -- @ CREATE TABLE leaf(rid INTEGER PRIMARY KEY); @ @ -- Events used to generate a timeline @ -- @ CREATE TABLE event( @ type TEXT, -- Type of event: 'ci', 'w', 'e', 't' @ mtime DATETIME, -- Time of occurrence. Julian day. 
@ objid INTEGER PRIMARY KEY, -- Associated record ID @ tagid INTEGER, -- Associated ticket or wiki name tag @ uid INTEGER REFERENCES user, -- User who caused the event @ bgcolor TEXT, -- Color set by 'bgcolor' property @ euser TEXT, -- User set by 'user' property @ user TEXT, -- Name of the user @ ecomment TEXT, -- Comment set by 'comment' property @ comment TEXT, -- Comment describing the event @ brief TEXT, -- Short comment when tagid already seen @ omtime DATETIME -- Original unchanged date+time, or NULL @ ); @ CREATE INDEX event_i1 ON event(mtime); @ @ -- A record of phantoms. A phantom is a record for which we know the @ -- UUID but we do not (yet) know the file content. @ -- @ CREATE TABLE phantom( |
︙ | ︙ | |||
293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 | @ INSERT INTO tag VALUES(3, 'user'); -- TAG_USER @ INSERT INTO tag VALUES(4, 'date'); -- TAG_DATE @ INSERT INTO tag VALUES(5, 'hidden'); -- TAG_HIDDEN @ INSERT INTO tag VALUES(6, 'private'); -- TAG_PRIVATE @ INSERT INTO tag VALUES(7, 'cluster'); -- TAG_CLUSTER @ INSERT INTO tag VALUES(8, 'branch'); -- TAG_BRANCH @ INSERT INTO tag VALUES(9, 'closed'); -- TAG_CLOSED @ @ -- Assignments of tags to baselines. Note that we allow tags to @ -- have values assigned to them. So we are not really dealing with @ -- tags here. These are really properties. But we are going to @ -- keep calling them tags because in many cases the value is ignored. @ -- @ CREATE TABLE tagxref( @ tagid INTEGER REFERENCES tag, -- The tag that added or removed | > | | | | | 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 | @ INSERT INTO tag VALUES(3, 'user'); -- TAG_USER @ INSERT INTO tag VALUES(4, 'date'); -- TAG_DATE @ INSERT INTO tag VALUES(5, 'hidden'); -- TAG_HIDDEN @ INSERT INTO tag VALUES(6, 'private'); -- TAG_PRIVATE @ INSERT INTO tag VALUES(7, 'cluster'); -- TAG_CLUSTER @ INSERT INTO tag VALUES(8, 'branch'); -- TAG_BRANCH @ INSERT INTO tag VALUES(9, 'closed'); -- TAG_CLOSED @ INSERT INTO tag VALUES(10,'parent'); -- TAG_PARENT @ @ -- Assignments of tags to baselines. Note that we allow tags to @ -- have values assigned to them. So we are not really dealing with @ -- tags here. These are really properties. But we are going to @ -- keep calling them tags because in many cases the value is ignored. @ -- @ CREATE TABLE tagxref( @ tagid INTEGER REFERENCES tag, -- The tag that added or removed @ tagtype INTEGER, -- 0:-,cancel 1:+,single 2:*,propagate @ srcid INTEGER REFERENCES blob, -- Artifact of tag. 0 for propagated tags @ origid INTEGER REFERENCES blob, -- check-in holding propagated tag @ value TEXT, -- Value of the tag. Might be NULL. @ mtime TIMESTAMP, -- Time of addition or removal. Julian day @ rid INTEGER REFERENCE blob, -- Artifact tag is applied to @ UNIQUE(rid, tagid) @ ); @ CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime); @ @ -- When a hyperlink occurs from one artifact to another (for example @ -- when a check-in comment refers to a ticket) an entry is made in @ -- the following table for that hyperlink. This table is used to @ -- facilitate the display of "back links". @ -- @ CREATE TABLE backlink( @ target TEXT, -- Where the hyperlink points to @ srctype INT, -- 0: check-in 1: ticket 2: wiki @ srcid INT, -- rid for checkin or wiki. tkt_id for ticket. @ mtime TIMESTAMP, -- time that the hyperlink was added. Julian day. @ UNIQUE(target, srctype, srcid) @ ); @ CREATE INDEX backlink_src ON backlink(srcid, srctype); @ @ -- Each attachment is an entry in the following table. Only @ -- the most recent attachment (identified by the D card) is saved. @ -- @ CREATE TABLE attachment( @ attachid INTEGER PRIMARY KEY, -- Local id for this attachment @ isLatest BOOLEAN DEFAULT 0, -- True if this is the one to use @ mtime TIMESTAMP, -- Last changed. Julian day. @ src TEXT, -- UUID of the attachment. NULL to delete @ target TEXT, -- Object attached to. 
Wikiname or Tkt UUID @ filename TEXT, -- Filename for the attachment @ comment TEXT, -- Comment associated with this attachment @ user TEXT -- Name of user adding attachment @ ); @ CREATE INDEX attachment_idx1 ON attachment(target, filename, mtime); |
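The tag/tagxref pair above is how named properties attach to artifacts: the fixed tag rows supply the tag IDs and tagxref carries the per-artifact value. As a concrete read-only example, the branch names present in a repository can be read straight out of tagxref (tagid 8 is TAG_BRANCH and tagtype>0 means the tag has not been cancelled, both per the schema above). A sketch against the public SQLite API:

    #include <stdio.h>
    #include <sqlite3.h>

    /* List every branch name recorded in the repository, i.e. every distinct
    ** value carried by a live "branch" tag (tagid 8 == TAG_BRANCH). */
    static void list_branches(sqlite3 *db){
      sqlite3_stmt *pStmt;
      const char *zSql =
         "SELECT DISTINCT value FROM tagxref"
         " WHERE tagid=8 AND tagtype>0 AND value IS NOT NULL"
         " ORDER BY value";
      if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)!=SQLITE_OK ) return;
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
      }
      sqlite3_finalize(pStmt);
    }

Joining against the tag table by name instead of hard-coding tagid 8 would work just as well; the constant is used here only because it is fixed by the schema above.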
︙ | ︙ | |||
378 379 380 381 382 383 384 385 386 | # define TAG_USER 3 /* User who made a check-in */ # define TAG_DATE 4 /* The date of a check-in */ # define TAG_HIDDEN 5 /* Do not display or sync */ # define TAG_PRIVATE 6 /* Display but do not sync */ # define TAG_CLUSTER 7 /* A cluster */ # define TAG_BRANCH 8 /* Value is name of the current branch */ # define TAG_CLOSED 9 /* Do not display this check-in as a leaf */ #endif #if EXPORT_INTERFACE | > | | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 | # define TAG_USER 3 /* User who made a check-in */ # define TAG_DATE 4 /* The date of a check-in */ # define TAG_HIDDEN 5 /* Do not display or sync */ # define TAG_PRIVATE 6 /* Display but do not sync */ # define TAG_CLUSTER 7 /* A cluster */ # define TAG_BRANCH 8 /* Value is name of the current branch */ # define TAG_CLOSED 9 /* Do not display this check-in as a leaf */ # define TAG_PARENT 10 /* Change to parentage on a checkin */ #endif #if EXPORT_INTERFACE # define MAX_INT_TAG 16 /* The largest pre-assigned tag id */ #endif /* ** The schema for the local FOSSIL database file found at the root ** of every check-out. This database contains the complete state of ** the checkout. */
︙ | ︙ | |||
428 429 430 431 432 433 434 | @ id INTEGER PRIMARY KEY, -- ID of the checked out file @ vid INTEGER REFERENCES blob, -- The baseline this file is part of. @ chnged INT DEFAULT 0, -- 0:unchnged 1:edited 2:m-chng 3:m-add @ deleted BOOLEAN DEFAULT 0, -- True if deleted @ isexe BOOLEAN, -- True if file should be executable @ rid INTEGER, -- Originally from this repository record @ mrid INTEGER, -- Based on this record due to a merge | | | 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 | @ id INTEGER PRIMARY KEY, -- ID of the checked out file @ vid INTEGER REFERENCES blob, -- The baseline this file is part of. @ chnged INT DEFAULT 0, -- 0:unchnged 1:edited 2:m-chng 3:m-add @ deleted BOOLEAN DEFAULT 0, -- True if deleted @ isexe BOOLEAN, -- True if file should be executable @ rid INTEGER, -- Originally from this repository record @ mrid INTEGER, -- Based on this record due to a merge @ mtime INTEGER, -- Mtime of file on disk. sec since 1970 @ pathname TEXT, -- Full pathname relative to root @ origname TEXT, -- Original pathname. NULL if unchanged @ UNIQUE(pathname,vid) @ ); @ @ -- This table holds a record of uncommitted merges in the local @ -- file tree. If a VFILE entry with id has merged with another |
︙ | ︙ |
Changes to src/search.c.
︙ | ︙ | |||
42 43 44 45 46 47 48 | int nPattern = strlen(zPattern); Search *p; char *z; int i; p = fossil_malloc( nPattern + sizeof(*p) + 1); z = (char*)&p[1]; | | | 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | int nPattern = strlen(zPattern); Search *p; char *z; int i; p = fossil_malloc( nPattern + sizeof(*p) + 1); z = (char*)&p[1]; memcpy(z, zPattern, nPattern+1); memset(p, 0, sizeof(*p)); while( *z && p->nTerm<sizeof(p->a)/sizeof(p->a[0]) ){ while( !fossil_isalnum(*z) && *z ){ z++; } if( *z==0 ) break; p->a[p->nTerm].z = z; for(i=1; fossil_isalnum(z[i]) || z[i]=='_'; i++){} p->a[p->nTerm].n = i; |
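The allocation in the search code above is a common C idiom: one malloc covers the Search object plus a private copy of the pattern placed immediately after it, so both live and die together and a single free() releases everything. A generic sketch of the idiom, independent of Fossil's types (Holder and holder_new are invented names):

    #include <stdlib.h>
    #include <string.h>

    typedef struct Holder Holder;
    struct Holder {
      char *zCopy;      /* points just past the struct itself */
      int nByte;        /* length of the copied string */
    };

    /* Allocate a Holder and a private copy of zText in one block. */
    static Holder *holder_new(const char *zText){
      size_t n = strlen(zText);
      Holder *p = malloc( sizeof(*p) + n + 1 );
      if( p==0 ) return 0;
      p->zCopy = (char*)&p[1];        /* string storage begins after the struct */
      p->nByte = (int)n;
      memcpy(p->zCopy, zText, n+1);   /* include the terminating NUL */
      return p;
    }
    /* A single free(p) releases both the struct and the string copy. */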
︙ | ︙ |
Changes to src/setup.c.
︙ | ︙ | |||
42 43 44 45 46 47 48 | ){ @ <tr><td valign="top" align="right"> if( zLink && zLink[0] ){ @ <a href="%s(zLink)">%h(zTitle)</a> }else{ @ %h(zTitle) } | | | > > > > > | 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | ){ @ <tr><td valign="top" align="right"> if( zLink && zLink[0] ){ @ <a href="%s(zLink)">%h(zTitle)</a> }else{ @ %h(zTitle) } @ </td><td width="5"></td><td valign="top">%h(zDesc)</td></tr> } /* ** WEBPAGE: /setup */ void setup_page(void){ login_check_credentials(); if( !g.okSetup ){ login_needed(); } style_header("Server Administration"); @ <table border="0" cellspacing="7"> setup_menu_entry("Users", "setup_ulist", "Grant privileges to individual users."); setup_menu_entry("Access", "setup_access", "Control access settings."); setup_menu_entry("Configuration", "setup_config", "Configure the WWW components of the repository"); setup_menu_entry("Settings", "setup_settings", "Web interface to the \"fossil settings\" command"); setup_menu_entry("Timeline", "setup_timeline", "Timeline display preferences"); setup_menu_entry("Login-Group", "setup_login_group", "Manage single sign-on between this repository and others" " on the same server"); setup_menu_entry("Tickets", "tktsetup", "Configure the trouble-ticketing system for this repository"); setup_menu_entry("Skins", "setup_skin", "Select from a menu of prepackaged \"skins\" for the web interface"); setup_menu_entry("CSS", "setup_editcss", "Edit the Cascading Style Sheet used by all pages of this repository"); setup_menu_entry("Header", "setup_header", "Edit HTML text inserted at the top of every page"); setup_menu_entry("Footer", "setup_footer", "Edit HTML text inserted at the bottom of every page"); setup_menu_entry("Logo", "setup_logo", "Change the logo image for the server"); setup_menu_entry("Shunned", "shun", "Show artifacts that are shunned by this repository"); setup_menu_entry("Log", "rcvfromlist", "A record of received artifacts and their sources"); setup_menu_entry("User-Log", "access_log", "A record of login attempts"); setup_menu_entry("Stats", "stat", "Display repository statistics"); @ </table> style_footer(); } |
︙ | ︙ | |||
118 119 120 121 122 123 124 | @ <th class="usetupListUser" style="text-align: right;padding-right: 20px;">User ID</th> @ <th class="usetupListCap" style="text-align: center;padding-right: 15px;">Capabilities</th> @ <th class="usetupListCon" style="text-align: left;">Contact Info</th> @ </tr> db_prepare(&s, "SELECT uid, login, cap, info FROM user ORDER BY login"); while( db_step(&s)==SQLITE_ROW ){ const char *zCap = db_column_text(&s, 2); | < | 123 124 125 126 127 128 129 130 131 132 133 134 135 136 | @ <th class="usetupListUser" style="text-align: right;padding-right: 20px;">User ID</th> @ <th class="usetupListCap" style="text-align: center;padding-right: 15px;">Capabilities</th> @ <th class="usetupListCon" style="text-align: left;">Contact Info</th> @ </tr> db_prepare(&s, "SELECT uid, login, cap, info FROM user ORDER BY login"); while( db_step(&s)==SQLITE_ROW ){ const char *zCap = db_column_text(&s, 2); @ <tr> @ <td class="usetupListUser" style="text-align: right;padding-right: 20px;white-space:nowrap;"> if( g.okAdmin && (zCap[0]!='s' || g.okSetup) ){ @ <a href="setup_uedit?id=%d(db_column_int(&s,0))"> } @ %h(db_column_text(&s,1)) if( g.okAdmin ){ |
︙ | ︙ | |||
184 185 186 187 188 189 190 191 192 193 194 195 196 197 | @ <td><i>Reader:</i> Inherit privileges of @ user <tt>reader</tt></td></tr> @ <tr><td valign="top"><b>v</b></td> @ <td><i>Developer:</i> Inherit privileges of @ user <tt>developer</tt></td></tr> @ <tr><td valign="top"><b>w</b></td> @ <td><i>Write-Tkt:</i> Edit tickets</td></tr> @ <tr><td valign="top"><b>z</b></td> @ <td><i>Zip download:</i> Download a baseline via the @ <tt>/zip</tt> URL even without @ check<span class="capability">o</span>ut @ and <span class="capability">h</span>istory permissions</td></tr> @ </table> @ </li> | > > | 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | @ <td><i>Reader:</i> Inherit privileges of @ user <tt>reader</tt></td></tr> @ <tr><td valign="top"><b>v</b></td> @ <td><i>Developer:</i> Inherit privileges of @ user <tt>developer</tt></td></tr> @ <tr><td valign="top"><b>w</b></td> @ <td><i>Write-Tkt:</i> Edit tickets</td></tr> @ <tr><td valign="top"><b>x</b></td> @ <td><i>Private:</i> Push and/or pull private branches</td></tr> @ <tr><td valign="top"><b>z</b></td> @ <td><i>Zip download:</i> Download a baseline via the @ <tt>/zip</tt> URL even without @ check<span class="capability">o</span>ut @ and <span class="capability">h</span>istory permissions</td></tr> @ </table> @ </li> |
︙ | ︙ | |||
239 240 241 242 243 244 245 | /* ** WEBPAGE: /setup_uedit */ void user_edit(void){ const char *zId, *zLogin, *zInfo, *zCap, *zPw; char *oaa, *oas, *oar, *oaw, *oan, *oai, *oaj, *oao, *oap; char *oak, *oad, *oac, *oaf, *oam, *oah, *oag, *oae; | | > > | 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | /* ** WEBPAGE: /setup_uedit */ void user_edit(void){ const char *zId, *zLogin, *zInfo, *zCap, *zPw; char *oaa, *oas, *oar, *oaw, *oan, *oai, *oaj, *oao, *oap; char *oak, *oad, *oac, *oaf, *oam, *oah, *oag, *oae; char *oat, *oau, *oav, *oab, *oax, *oaz; const char *zGroup; const char *zOldLogin; const char *inherit[128]; int doWrite; int uid; int higherUser = 0; /* True if user being edited is SETUP and the */ /* user doing the editing is ADMIN. Disallow editing */ /* Must have ADMIN privileges to access this page
︙ | ︙ | |||
296 297 298 299 300 301 302 303 304 305 306 307 308 309 | int af = P("af")!=0; int am = P("am")!=0; int ah = P("ah")!=0; int ag = P("ag")!=0; int at = P("at")!=0; int au = P("au")!=0; int av = P("av")!=0; int az = P("az")!=0; if( aa ){ zCap[i++] = 'a'; } if( ab ){ zCap[i++] = 'b'; } if( ac ){ zCap[i++] = 'c'; } if( ad ){ zCap[i++] = 'd'; } if( ae ){ zCap[i++] = 'e'; } if( af ){ zCap[i++] = 'f'; } | > | 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 | int af = P("af")!=0; int am = P("am")!=0; int ah = P("ah")!=0; int ag = P("ag")!=0; int at = P("at")!=0; int au = P("au")!=0; int av = P("av")!=0; int ax = P("ax")!=0; int az = P("az")!=0; if( aa ){ zCap[i++] = 'a'; } if( ab ){ zCap[i++] = 'b'; } if( ac ){ zCap[i++] = 'c'; } if( ad ){ zCap[i++] = 'd'; } if( ae ){ zCap[i++] = 'e'; } if( af ){ zCap[i++] = 'f'; } |
︙ | ︙ | |||
318 319 320 321 322 323 324 325 326 327 328 329 330 | if( ap ){ zCap[i++] = 'p'; } if( ar ){ zCap[i++] = 'r'; } if( as ){ zCap[i++] = 's'; } if( at ){ zCap[i++] = 't'; } if( au ){ zCap[i++] = 'u'; } if( av ){ zCap[i++] = 'v'; } if( aw ){ zCap[i++] = 'w'; } if( az ){ zCap[i++] = 'z'; } zCap[i] = 0; zPw = P("pw"); zLogin = P("login"); if( isValidPwString(zPw) ){ | > | > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 | if( ap ){ zCap[i++] = 'p'; } if( ar ){ zCap[i++] = 'r'; } if( as ){ zCap[i++] = 's'; } if( at ){ zCap[i++] = 't'; } if( au ){ zCap[i++] = 'u'; } if( av ){ zCap[i++] = 'v'; } if( aw ){ zCap[i++] = 'w'; } if( ax ){ zCap[i++] = 'x'; } if( az ){ zCap[i++] = 'z'; } zCap[i] = 0; zPw = P("pw"); zLogin = P("login"); if( isValidPwString(zPw) ){ zPw = sha1_shared_secret(zPw, zLogin, 0); }else{ zPw = db_text(0, "SELECT pw FROM user WHERE uid=%d", uid); } zOldLogin = db_text(0, "SELECT login FROM user WHERE uid=%d", uid); if( uid>0 && db_exists("SELECT 1 FROM user WHERE login=%Q AND uid!=%d", zLogin, uid) ){ style_header("User Creation Error"); @ <span class="loginError">Login "%h(zLogin)" is already used by @ a different user.</span> @ @ <p><a href="setup_uedit?id=%d(uid)">[Bummer]</a></p> style_footer(); return; } login_verify_csrf_secret(); db_multi_exec( "REPLACE INTO user(uid,login,info,pw,cap,mtime) " "VALUES(nullif(%d,0),%Q,%Q,%Q,'%s',now())", uid, P("login"), P("info"), zPw, zCap ); if( atoi(PD("all","0"))>0 ){ Blob sql; char *zErr = 0; blob_zero(&sql); if( zOldLogin==0 ){ blob_appendf(&sql, "INSERT INTO user(login)" " SELECT %Q WHERE NOT EXISTS(SELECT 1 FROM user WHERE login=%Q);", zLogin, zLogin ); zOldLogin = zLogin; } blob_appendf(&sql, "UPDATE user SET login=%Q," " pw=coalesce(shared_secret(%Q,%Q," "(SELECT value FROM config WHERE name='project-code')),pw)," " info=%Q," " cap=%Q," " mtime=now()" " WHERE login=%Q;", zLogin, P("pw"), zLogin, P("info"), zCap, zOldLogin ); login_group_sql(blob_str(&sql), "<li> ", " </li>\n", &zErr); blob_reset(&sql); if( zErr ){ style_header("User Change Error"); @ <span class="loginError">%s(zErr)</span> @ @ <p><a href="setup_uedit?id=%d(uid)">[Bummer]</a></p> style_footer(); return; } } cgi_redirect("setup_ulist"); return; } /* Load the existing information about the user, if any */ zLogin = ""; zInfo = ""; zCap = ""; zPw = ""; oaa = oab = oac = oad = oae = oaf = oag = oah = oai = oaj = oak = oam = oan = oao = oap = oar = oas = oat = oau = oav = oaw = oax = oaz = ""; if( uid ){ zLogin = db_text("", "SELECT login FROM user WHERE uid=%d", uid); zInfo = db_text("", "SELECT info FROM user WHERE uid=%d", uid); zCap = db_text("", "SELECT cap FROM user WHERE uid=%d", uid); zPw = db_text("", "SELECT pw FROM user WHERE uid=%d", uid); if( strchr(zCap, 'a') ) oaa = " checked=\"checked\""; if( strchr(zCap, 'b') ) oab = " checked=\"checked\""; |
︙ | ︙ | |||
383 384 385 386 387 388 389 390 391 392 393 394 395 396 | if( strchr(zCap, 'p') ) oap = " checked=\"checked\""; if( strchr(zCap, 'r') ) oar = " checked=\"checked\""; if( strchr(zCap, 's') ) oas = " checked=\"checked\""; if( strchr(zCap, 't') ) oat = " checked=\"checked\""; if( strchr(zCap, 'u') ) oau = " checked=\"checked\""; if( strchr(zCap, 'v') ) oav = " checked=\"checked\""; if( strchr(zCap, 'w') ) oaw = " checked=\"checked\""; if( strchr(zCap, 'z') ) oaz = " checked=\"checked\""; } /* figure out inherited permissions */ memset(inherit, 0, sizeof(inherit)); if( strcmp(zLogin, "developer") ){ char *z1, *z2; | > | 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 | if( strchr(zCap, 'p') ) oap = " checked=\"checked\""; if( strchr(zCap, 'r') ) oar = " checked=\"checked\""; if( strchr(zCap, 's') ) oas = " checked=\"checked\""; if( strchr(zCap, 't') ) oat = " checked=\"checked\""; if( strchr(zCap, 'u') ) oau = " checked=\"checked\""; if( strchr(zCap, 'v') ) oav = " checked=\"checked\""; if( strchr(zCap, 'w') ) oaw = " checked=\"checked\""; if( strchr(zCap, 'x') ) oax = " checked=\"checked\""; if( strchr(zCap, 'z') ) oaz = " checked=\"checked\""; } /* figure out inherited permissions */ memset(inherit, 0, sizeof(inherit)); if( strcmp(zLogin, "developer") ){ char *z1, *z2; |
︙ | ︙ | |||
480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 | @ <input type="checkbox" name="ak"%s(oak) />%s(B('k'))Write Wiki<br /> @ <input type="checkbox" name="ab"%s(oab) />%s(B('b'))Attachments<br /> @ <input type="checkbox" name="ar"%s(oar) />%s(B('r'))Read Ticket<br /> @ <input type="checkbox" name="an"%s(oan) />%s(B('n'))New Ticket<br /> @ <input type="checkbox" name="ac"%s(oac) />%s(B('c'))Append Ticket<br /> @ <input type="checkbox" name="aw"%s(oaw) />%s(B('w'))Write Ticket<br /> @ <input type="checkbox" name="at"%s(oat) />%s(B('t'))Ticket Report<br /> @ <input type="checkbox" name="az"%s(oaz) />%s(B('z'))Download Zip @ </td> @ </tr> @ <tr> @ <td align="right">Password:</td> if( zPw[0] ){ /* Obscure the password for all users */ @ <td><input type="password" name="pw" value="**********" /></td> }else{ /* Show an empty password as an empty input field */ @ <td><input type="password" name="pw" value="" /></td> } @ </tr> if( !higherUser ){ @ <tr> @ <td> </td> @ <td><input type="submit" name="submit" value="Apply Changes" /></td> @ </tr> } @ </table> | > > > > > > > > > > > > | 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 | @ <input type="checkbox" name="ak"%s(oak) />%s(B('k'))Write Wiki<br /> @ <input type="checkbox" name="ab"%s(oab) />%s(B('b'))Attachments<br /> @ <input type="checkbox" name="ar"%s(oar) />%s(B('r'))Read Ticket<br /> @ <input type="checkbox" name="an"%s(oan) />%s(B('n'))New Ticket<br /> @ <input type="checkbox" name="ac"%s(oac) />%s(B('c'))Append Ticket<br /> @ <input type="checkbox" name="aw"%s(oaw) />%s(B('w'))Write Ticket<br /> @ <input type="checkbox" name="at"%s(oat) />%s(B('t'))Ticket Report<br /> @ <input type="checkbox" name="ax"%s(oax) />%s(B('x'))Private<br /> @ <input type="checkbox" name="az"%s(oaz) />%s(B('z'))Download Zip @ </td> @ </tr> @ <tr> @ <td align="right">Password:</td> if( zPw[0] ){ /* Obscure the password for all users */ @ <td><input type="password" name="pw" value="**********" /></td> }else{ /* Show an empty password as an empty input field */ @ <td><input type="password" name="pw" value="" /></td> } @ </tr> zGroup = login_group_name(); if( zGroup ){ @ <tr> @ <td valign="top" align="right">Scope:</td> @ <td valign="top"> @ <input type="radio" name="all" checked value="0"> @ Apply changes to this repository only.<br /> @ <input type="radio" name="all" value="1"> @ Apply changes to all repositories in the "<b>%h(zGroup)</b>" @ login group.</td></tr> } if( !higherUser ){ @ <tr> @ <td> </td> @ <td><input type="submit" name="submit" value="Apply Changes" /></td> @ </tr> } @ </table> |
︙ | ︙ | |||
606 607 608 609 610 611 612 | @ and <span class="usertype">nobody</span>. @ </p></li> @ @ <li><p> @ The <span class="capability">EMail</span> privilege allows the display of @ sensitive information such as the email address of users and contact @ information on tickets. Recommended OFF for | | | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 | @ and <span class="usertype">nobody</span>. @ </p></li> @ @ <li><p> @ The <span class="capability">EMail</span> privilege allows the display of @ sensitive information such as the email address of users and contact @ information on tickets. Recommended OFF for @ <span class="usertype">anonymous</span> and for @ <span class="usertype">nobody</span> but ON for @ <span class="usertype">developer</span>. @ </p></li> @ @ <li><p> @ The <span class="capability">Attachment</span> privilege is needed in @ order to add attachments to tickets or wiki. Write privilege on the |
︙ | ︙ | |||
754 755 756 757 758 759 760 | login_check_credentials(); if( !g.okSetup ){ login_needed(); } style_header("Access Control Settings"); db_begin_transaction(); | | | | | | | | > > | > > > > > > > > > > > > | 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 | login_check_credentials(); if( !g.okSetup ){ login_needed(); } style_header("Access Control Settings"); db_begin_transaction(); @ <form action="%s(g.zTop)/setup_access" method="post"><div> login_insert_csrf_secret(); @ <hr /> onoff_attribute("Require password for local access", "localauth", "localauth", 0); @ <p>When enabled, the password sign-in is always required for @ web access. When disabled, unrestricted web access from 127.0.0.1 @ is allowed for the <a href="%s(g.zTop)/help/ui">fossil ui</a> command or @ from the <a href="%s(g.zTop)/help/server">fossil server</a>, @ <a href="%s(g.zTop)/help/http">fossil http</a> commands when the @ "--localauth" command line options is used, or from the @ <a href="%s(g.zTop)/help/cgi">fossil cgi</a> if a line containing @ the word "localauth" appears in the CGI script. @ @ <p>A password is always required if any one or more @ of the following are true: @ <ol> @ <li> This button is checked @ <li> The inbound TCP/IP connection is not from 127.0.0.1 @ <li> The server is started using either of the @ <a href="%s(g.zTop)/help/server">fossil server</a> or @ <a href="%s(g.zTop)/help/server">fossil http</a> commands @ without the "--localauth" option. @ <li> The server is started from CGI without the "localauth" keyword @ in the CGI script. @ </ol> @ <hr /> onoff_attribute("Allow REMOTE_USER authentication", "remote_user_ok", "remote_user_ok", 0); @ <p>When enabled, if the REMOTE_USER environment variable is set to the @ login name of a valid user and no other login credentials are available, @ then the REMOTE_USER is accepted as an authenticated user. @ </p> |
︙ | ︙ | |||
788 789 790 791 792 793 794 795 796 797 798 799 800 | entry_attribute("Download packet limit", 10, "max-download", "mxdwn", "5000000"); @ <p>Fossil tries to limit out-bound sync, clone, and pull packets @ to this many bytes, uncompressed. If the client requires more data @ than this, then the client will issue multiple HTTP requests. @ Values below 1 million are not recommended. 5 million is a @ reasonable number.</p> @ <hr /> onoff_attribute("Show javascript button to fill in CAPTCHA", "auto-captcha", "autocaptcha", 0); @ <p>When enabled, a button appears on the login screen for user @ "anonymous" that will automatically fill in the CAPTCHA password. | > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 | entry_attribute("Download packet limit", 10, "max-download", "mxdwn", "5000000"); @ <p>Fossil tries to limit out-bound sync, clone, and pull packets @ to this many bytes, uncompressed. If the client requires more data @ than this, then the client will issue multiple HTTP requests. @ Values below 1 million are not recommended. 5 million is a @ reasonable number.</p> @ <hr /> onoff_attribute("Allow users to register themselves", "self-register", "selfregister", 0); @ <p>Allow users to register themselves through the HTTP UI. @ The registration form always requires filling in a CAPTCHA @ (<em>auto-captcha</em> setting is ignored). Still, bear in mind that anyone @ can register under any user name. This option is useful for public projects @ where you do not want everyone in any ticket discussion to be named @ "Anonymous".</p> @ <hr /> entry_attribute("Default privileges", 10, "default-perms", "defaultperms", "u"); @ <p>Permissions given to users that register themselves using the HTTP UI @ or are registered by the administrator using the command line interface. @ </p> @ <hr /> onoff_attribute("Show javascript button to fill in CAPTCHA", "auto-captcha", "autocaptcha", 0); @ <p>When enabled, a button appears on the login screen for user @ "anonymous" that will automatically fill in the CAPTCHA password. 
@ This is less secure than forcing the user to do it manually, but is @ probably secure enough and it is certainly more convenient for @ anonymous users.</p> @ <hr /> @ <p><input type="submit" name="submit" value="Apply Changes" /></p> @ </div></form> db_end_transaction(0); style_footer(); } /* ** WEBPAGE: setup_login_group */ void setup_login_group(void){ const char *zGroup; char *zErrMsg = 0; Blob fullName; char *zSelfRepo; const char *zRepo = PD("repo", ""); const char *zLogin = PD("login", ""); const char *zPw = PD("pw", ""); const char *zNewName = PD("newname", "New Login Group"); login_check_credentials(); if( !g.okSetup ){ login_needed(); } file_canonical_name(g.zRepositoryName, &fullName); zSelfRepo = mprintf(blob_str(&fullName)); blob_reset(&fullName); if( P("join")!=0 ){ login_group_join(zRepo, zLogin, zPw, zNewName, &zErrMsg); }else if( P("leave") ){ login_group_leave(&zErrMsg); } style_header("Login Group Configuration"); if( zErrMsg ){ @ <p class="generalError">%s(zErrMsg)</p> } zGroup = login_group_name(); if( zGroup==0 ){ @ <p>This repository (in the file named "%h(zSelfRepo)") @ is not currently part of any login-group. @ To join a login group, fill out the form below.</p> @ @ <form action="%s(g.zTop)/setup_login_group" method="post"><div> login_insert_csrf_secret(); @ <blockquote><table broder="0"> @ @ <tr><td align="right"><b>Repository filename in group to join:</b></td> @ <td width="5"></td><td> @ <input type="text" size="50" value="%h(zRepo)" name="repo"></td></tr> @ @ <td align="right"><b>Login on the above repo:</b></td> @ <td width="5"></td><td> @ <input type="text" size="20" value="%h(zLogin)" name="login"></td></tr> @ @ <td align="right"><b>Password:</b></td> @ <td width="5"></td><td> @ <input type="password" size="20" name="pw"></td></tr> @ @ <tr><td align="right"><b>Name of login-group:</b></td> @ <td width="5"></td><td> @ <input type="text" size="30" value="%h(zNewName)" name="newname"> @ (only used if creating a new login-group).</td></tr> @ @ <tr><td colspan="3" align="center"> @ <input type="submit" value="Join" name="join"></td></tr> @ </table> }else{ Stmt q; int n = 0; @ <p>This repository (in the file "%h(zSelfRepo)") @ is currently part of the "<b>%h(zGroup)</b>" login group. 
@ Other repositories in that group are:</p> @ <table border="0" cellspacing="4"> @ <tr><td colspan="2"><th align="left">Project Name<td> @ <th align="left">Repository File</tr> db_prepare(&q, "SELECT value," " (SELECT value FROM config" " WHERE name=('peer-name-' || substr(x.name,11)))" " FROM config AS x" " WHERE name GLOB 'peer-repo-*'" " ORDER BY value" ); while( db_step(&q)==SQLITE_ROW ){ const char *zRepo = db_column_text(&q, 0); const char *zTitle = db_column_text(&q, 1); n++; @ <tr><td align="right">%d(n).</td><td width="4"> @ <td>%h(zTitle)<td width="10"><td>%h(zRepo)</tr> } db_finalize(&q); @ </table> @ @ <p><form action="%s(g.zTop)/setup_login_group" method="post"><div> login_insert_csrf_secret(); @ To leave this login group press @ <input type="submit" value="Leave Login Group" name="leave"> @ </form></p> } style_footer(); } /* ** WEBPAGE: setup_timeline */ void setup_timeline(void){ login_check_credentials(); if( !g.okSetup ){ login_needed(); } style_header("Timeline Display Preferences"); db_begin_transaction(); @ <form action="%s(g.zTop)/setup_timeline" method="post"><div> login_insert_csrf_secret(); @ <hr /> onoff_attribute("Allow block-markup in timeline", "timeline-block-markup", "tbm", 0); @ <p>In timeline displays, check-in comments can be displayed with or @ without block markup (paragraphs, tables, etc.)</p> |
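As the query above shows, login-group membership is stored as ordinary rows in the config table: a peer-repo-CODE entry holds the peer's repository filename and the matching peer-name-CODE entry holds its project name. That makes membership easy to inspect outside the web UI as well; a read-only sketch using the same naming convention (list_login_group is a made-up helper):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print the members of the login group this repository belongs to, using
    ** the same peer-repo- / peer-name- config convention as the page above. */
    static void list_login_group(sqlite3 *db){
      sqlite3_stmt *pStmt;
      const char *zSql =
        "SELECT value,"
        "       (SELECT value FROM config"
        "         WHERE name=('peer-name-' || substr(x.name,11)))"
        "  FROM config AS x"
        " WHERE name GLOB 'peer-repo-*'"
        " ORDER BY value";
      if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)!=SQLITE_OK ) return;
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        const char *zRepo = (const char*)sqlite3_column_text(pStmt, 0);
        const char *zName = (const char*)sqlite3_column_text(pStmt, 1);
        printf("%-30s %s\n", zName ? zName : "(unnamed)", zRepo ? zRepo : "");
      }
      sqlite3_finalize(pStmt);
    }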
︙ | ︙ | |||
868 869 870 871 872 873 874 | } style_header("Settings"); db_begin_transaction(); @ <p>This page provides a simple interface to the "fossil setting" command. @ See the "fossil help setting" output below for further information on @ the meaning of each setting.</p><hr /> | | | 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 | } style_header("Settings"); db_begin_transaction(); @ <p>This page provides a simple interface to the "fossil setting" command. @ See the "fossil help setting" output below for further information on @ the meaning of each setting.</p><hr /> @ <form action="%s(g.zTop)/setup_settings" method="post"><div> @ <table border="0"><tr><td valign="top"> login_insert_csrf_secret(); for(pSet=ctrlSettings; pSet->name!=0; pSet++){ if( pSet->width==0 ){ onoff_attribute(pSet->name, pSet->name, pSet->var!=0 ? pSet->var : pSet->name, is_truth(pSet->def)); |
︙ | ︙ | |||
909 910 911 912 913 914 915 | login_check_credentials(); if( !g.okSetup ){ login_needed(); } style_header("WWW Configuration"); db_begin_transaction(); | | | 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 | login_check_credentials(); if( !g.okSetup ){ login_needed(); } style_header("WWW Configuration"); db_begin_transaction(); @ <form action="%s(g.zTop)/setup_config" method="post"><div> login_insert_csrf_secret(); @ <hr /> entry_attribute("Project Name", 60, "project-name", "pn", ""); @ <p>Give your project a name so visitors know what this site is about. @ The project name will also be used as the RSS feed title.</p> @ <hr /> textarea_attribute("Project Description", 5, 60, |
︙ | ︙ | |||
984 985 986 987 988 989 990 | textarea_attribute(0, 0, 0, "css", "css", zDefaultCSS); } if( P("submit")!=0 ){ db_end_transaction(0); cgi_redirect("setup_editcss"); } style_header("Edit CSS"); | | | 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 | textarea_attribute(0, 0, 0, "css", "css", zDefaultCSS); } if( P("submit")!=0 ){ db_end_transaction(0); cgi_redirect("setup_editcss"); } style_header("Edit CSS"); @ <form action="%s(g.zTop)/setup_editcss" method="post"><div> login_insert_csrf_secret(); @ Edit the CSS below:<br /> textarea_attribute("", 40, 80, "css", "css", zDefaultCSS); @ <br /> @ <input type="submit" name="submit" value="Apply Changes" /> @ <input type="submit" name="clear" value="Revert To Default" /> @ </div></form> |
︙ | ︙ | |||
1022 1023 1024 1025 1026 1027 1028 | if( P("clear")!=0 ){ db_multi_exec("DELETE FROM config WHERE name='header'"); cgi_replace_parameter("header", zDefaultHeader); }else{ textarea_attribute(0, 0, 0, "header", "header", zDefaultHeader); } style_header("Edit Page Header"); | | | 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 | if( P("clear")!=0 ){ db_multi_exec("DELETE FROM config WHERE name='header'"); cgi_replace_parameter("header", zDefaultHeader); }else{ textarea_attribute(0, 0, 0, "header", "header", zDefaultHeader); } style_header("Edit Page Header"); @ <form action="%s(g.zTop)/setup_header" method="post"><div> login_insert_csrf_secret(); @ <p>Edit HTML text with embedded TH1 (a TCL dialect) that will be used to @ generate the beginning of every page through start of the main @ menu.</p> textarea_attribute("", 40, 80, "header", "header", zDefaultHeader); @ <br /> @ <input type="submit" name="submit" value="Apply Changes" /> |
︙ | ︙ | |||
1060 1061 1062 1063 1064 1065 1066 | if( P("clear")!=0 ){ db_multi_exec("DELETE FROM config WHERE name='footer'"); cgi_replace_parameter("footer", zDefaultFooter); }else{ textarea_attribute(0, 0, 0, "footer", "footer", zDefaultFooter); } style_header("Edit Page Footer"); | | | 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 | if( P("clear")!=0 ){ db_multi_exec("DELETE FROM config WHERE name='footer'"); cgi_replace_parameter("footer", zDefaultFooter); }else{ textarea_attribute(0, 0, 0, "footer", "footer", zDefaultFooter); } style_header("Edit Page Footer"); @ <form action="%s(g.zTop)/setup_footer" method="post"><div> login_insert_csrf_secret(); @ <p>Edit HTML text with embedded TH1 (a TCL dialect) that will be used to @ generate the end of every page.</p> textarea_attribute("", 20, 80, "footer", "footer", zDefaultFooter); @ <br /> @ <input type="submit" name="submit" value="Apply Changes" /> @ <input type="submit" name="clear" value="Revert To Default" /> |
︙ | ︙ | |||
1101 1102 1103 1104 1105 1106 1107 | } db_begin_transaction(); if( P("set")!=0 && zMime && zMime[0] && szImg>0 ){ Blob img; Stmt ins; blob_init(&img, aImg, szImg); db_prepare(&ins, | | | | | 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 | } db_begin_transaction(); if( P("set")!=0 && zMime && zMime[0] && szImg>0 ){ Blob img; Stmt ins; blob_init(&img, aImg, szImg); db_prepare(&ins, "REPLACE INTO config(name,value,mtime)" " VALUES('logo-image',:bytes,now())" ); db_bind_blob(&ins, ":bytes", &img); db_step(&ins); db_finalize(&ins); db_multi_exec( "REPLACE INTO config(name,value,mtime) VALUES('logo-mimetype',%Q,now())", zMime ); db_end_transaction(0); cgi_redirect("setup_logo"); }else if( P("clr")!=0 ){ db_multi_exec( "DELETE FROM config WHERE name GLOB 'logo-*'" |
︙ | ︙ | |||
1131 1132 1133 1134 1135 1136 1137 | @ @ <p>The logo is accessible to all users at this URL: @ <a href="%s(g.zBaseURL)/logo">%s(g.zBaseURL)/logo</a>. @ The logo may or may not appear on each @ page depending on the <a href="setup_editcss">CSS</a> and @ <a href="setup_header">header setup</a>.</p> @ | | | 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 | @ @ <p>The logo is accessible to all users at this URL: @ <a href="%s(g.zBaseURL)/logo">%s(g.zBaseURL)/logo</a>. @ The logo may or may not appear on each @ page depending on the <a href="setup_editcss">CSS</a> and @ <a href="setup_header">header setup</a>.</p> @ @ <form action="%s(g.zTop)/setup_logo" method="post" @ enctype="multipart/form-data"><div> @ <p>To set a new logo image, select a file to use as the logo using @ the entry box below and then press the "Change Logo" button.</p> login_insert_csrf_secret(); @ Logo Image file: @ <input type="file" name="im" size="60" accept="image/*" /><br /> @ <input type="submit" name="set" value="Change Logo" /> |
︙ | ︙ |
Changes to src/sha1.c.
︙ | ︙ | |||
340 341 342 343 344 345 346 | ** The result of this function is the shared secret used by a client ** to authenticate to a server for the sync protocol. It is also the ** value stored in the USER.PW field of the database. By mixing in the ** login name and the project id with the hash, different shared secrets ** are obtained even if two users select the same password, or if a ** single user selects the same password for multiple projects. */ | | > > > > > | | | | | | | | | > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | ** The result of this function is the shared secret used by a client ** to authenticate to a server for the sync protocol. It is also the ** value stored in the USER.PW field of the database. By mixing in the ** login name and the project id with the hash, different shared secrets ** are obtained even if two users select the same password, or if a ** single user selects the same password for multiple projects. */ char *sha1_shared_secret( const char *zPw, /* The password to encrypt */ const char *zLogin, /* Username */ const char *zProjCode /* Project-code. Use built-in project code if NULL */ ){ static char *zProjectId = 0; SHA1Context ctx; unsigned char zResult[20]; char zDigest[41]; SHA1Init(&ctx); if( zProjCode==0 ){ if( zProjectId==0 ){ zProjectId = db_get("project-code", 0); /* On the first xfer request of a clone, the project-code is not yet ** known. Use the cleartext password, since that is all we have. */ if( zProjectId==0 ){ return mprintf("%s", zPw); } } zProjCode = zProjectId; } SHA1Update(&ctx, (unsigned char*)zProjCode, strlen(zProjCode)); SHA1Update(&ctx, (unsigned char*)"/", 1); SHA1Update(&ctx, (unsigned char*)zLogin, strlen(zLogin)); SHA1Update(&ctx, (unsigned char*)"/", 1); SHA1Update(&ctx, (unsigned const char*)zPw, strlen(zPw)); SHA1Final(&ctx, zResult); DigestToBase16(zResult, zDigest); return mprintf("%s", zDigest); } /* ** Implement the shared_secret() SQL function. shared_secret() takes two or ** three arguments; the third argument is optional. ** ** (1) The cleartext password ** (2) The login name ** (3) The project code ** ** Returns sha1($password/$login/$projcode). */ void sha1_shared_secret_sql_function( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zPw; const char *zLogin; const char *zProjid; assert( argc==2 || argc==3 ); zPw = (const char*)sqlite3_value_text(argv[0]); if( zPw==0 || zPw[0]==0 ) return; zLogin = (const char*)sqlite3_value_text(argv[1]); if( zLogin==0 ) return; if( argc==3 ){ zProjid = (const char*)sqlite3_value_text(argv[2]); if( zProjid && zProjid[0]==0 ) zProjid = 0; }else{ zProjid = 0; } sqlite3_result_text(context, sha1_shared_secret(zPw, zLogin, zProjid), -1, fossil_free); } /* ** COMMAND: sha1sum ** %fossil sha1sum FILE... ** ** Compute an SHA1 checksum of all files named on the command-line. ** If an file is named "-" then take its content from standard input. |
︙ | ︙ |
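The diff above changes sha1_shared_secret() to accept an explicit project code and adds a shared_secret() SQL function. The scheme hashes the byte string "PROJECT-CODE/login/password" and stores the hex digest in USER.PW. The following is a minimal standalone sketch of that derivation, not part of this check-in; it uses OpenSSL's SHA1 purely for illustration, and the project code, login, and password are made-up example values.

/* Sketch only: mirrors the byte stream fed to SHA1Update() in
** sha1_shared_secret() above.  Build with -lcrypto. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

static void shared_secret_hex(const char *zProj, const char *zLogin,
                              const char *zPw, char *zOut /* >=41 bytes */){
  unsigned char md[SHA_DIGEST_LENGTH];
  char zBuf[1024];
  int i;
  /* Same concatenation: project-code, "/", login, "/", password */
  snprintf(zBuf, sizeof(zBuf), "%s/%s/%s", zProj, zLogin, zPw);
  SHA1((const unsigned char*)zBuf, strlen(zBuf), md);
  for(i=0; i<SHA_DIGEST_LENGTH; i++) sprintf(&zOut[i*2], "%02x", md[i]);
  zOut[40] = 0;
}

int main(void){
  char zDigest[41];
  /* "0123456789abcdef0123", "alice" and "secret" are example values only */
  shared_secret_hex("0123456789abcdef0123", "alice", "secret", zDigest);
  printf("USER.PW would hold: %s\n", zDigest);
  return 0;
}

The sha1_shared_secret_sql_function() added above exposes the same derivation to SQL as shared_secret(password, login) with an optional third project-code argument.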
Changes to src/shell.c.
︙ | ︙ | |||
34 35 36 37 38 39 40 41 42 43 | # include <sys/types.h> #endif #ifdef __OS2__ # include <unistd.h> #endif #if defined(HAVE_READLINE) && HAVE_READLINE==1 # include <readline/readline.h> # include <readline/history.h> | > > > | > | 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 | # include <sys/types.h> #endif #ifdef __OS2__ # include <unistd.h> #endif #ifdef HAVE_EDITLINE # include <editline/editline.h> #endif #if defined(HAVE_READLINE) && HAVE_READLINE==1 # include <readline/readline.h> # include <readline/history.h> #endif #if !defined(HAVE_EDITLINE) && (!defined(HAVE_READLINE) || HAVE_READLINE!=1) # define readline(p) local_getline(p,stdin) # define add_history(X) # define read_history(X) # define write_history(X) # define stifle_history(X) #endif |
︙ | ︙ | |||
411 412 413 414 415 416 417 418 419 420 421 422 423 424 | char nullvalue[20]; /* The text to print when a NULL comes back from ** the database */ struct previous_mode_data explainPrev; /* Holds the mode information just before ** .explain ON */ char outfile[FILENAME_MAX]; /* Filename for *out */ const char *zDbFilename; /* name of the database file */ sqlite3_stmt *pStmt; /* Current statement if any. */ FILE *pLog; /* Write log output here */ }; /* ** These are the allowed modes. */ | > | 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 | char nullvalue[20]; /* The text to print when a NULL comes back from ** the database */ struct previous_mode_data explainPrev; /* Holds the mode information just before ** .explain ON */ char outfile[FILENAME_MAX]; /* Filename for *out */ const char *zDbFilename; /* name of the database file */ const char *zVfs; /* Name of VFS to use */ sqlite3_stmt *pStmt; /* Current statement if any. */ FILE *pLog; /* Write log output here */ }; /* ** These are the allowed modes. */ |
︙ | ︙ | |||
976 977 978 979 980 981 982 | if( pArg && pArg->out ){ iHiwtr = iCur = -1; sqlite3_status(SQLITE_STATUS_MEMORY_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Memory Used: %d (max %d) bytes\n", iCur, iHiwtr); iHiwtr = iCur = -1; sqlite3_status(SQLITE_STATUS_MALLOC_COUNT, &iCur, &iHiwtr, bReset); | | | 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 | if( pArg && pArg->out ){ iHiwtr = iCur = -1; sqlite3_status(SQLITE_STATUS_MEMORY_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Memory Used: %d (max %d) bytes\n", iCur, iHiwtr); iHiwtr = iCur = -1; sqlite3_status(SQLITE_STATUS_MALLOC_COUNT, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Number of Outstanding Allocations: %d (max %d)\n", iCur, iHiwtr); /* ** Not currently used by the CLI. ** iHiwtr = iCur = -1; ** sqlite3_status(SQLITE_STATUS_PAGECACHE_USED, &iCur, &iHiwtr, bReset); ** fprintf(pArg->out, "Number of Pcache Pages Used: %d (max %d) pages\n", iCur, iHiwtr); */ iHiwtr = iCur = -1; |
︙ | ︙ | |||
1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 | #endif } if( pArg && pArg->out && db ){ iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Lookaside Slots Used: %d (max %d)\n", iCur, iHiwtr); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Pager Heap Usage: %d bytes\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Schema Heap Usage: %d bytes\n", iCur); iHiwtr = iCur = -1; | > > > > > > | 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 | #endif } if( pArg && pArg->out && db ){ iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Lookaside Slots Used: %d (max %d)\n", iCur, iHiwtr); sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_HIT, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Successful lookaside attempts: %d\n", iHiwtr); sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Lookaside failures due to size: %d\n", iHiwtr); sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Lookaside failures due to OOM: %d\n", iHiwtr); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Pager Heap Usage: %d bytes\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &iCur, &iHiwtr, bReset); fprintf(pArg->out, "Schema Heap Usage: %d bytes\n", iCur); iHiwtr = iCur = -1; |
︙ | ︙ | |||
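The new sqlite3_db_status() calls above report lookaside hits and the two kinds of lookaside misses; for these opcodes the value of interest comes back through the high-water argument, which is why the shell prints iHiwtr. A minimal standalone sketch of reading the same counters from an application, not part of this check-in ("my.db" is a made-up database name):

#include <sqlite3.h>
#include <stdio.h>

int main(void){
  sqlite3 *db;
  int cur = 0, hi = 0;
  if( sqlite3_open("my.db", &db)!=SQLITE_OK ) return 1;
  /* ... run some queries here so the lookaside allocator sees traffic ... */
  sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_HIT, &cur, &hi, 0);
  printf("lookaside hits:               %d\n", hi);
  sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE, &cur, &hi, 0);
  printf("misses (request too large):   %d\n", hi);
  sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL, &cur, &hi, 0);
  printf("misses (lookaside exhausted): %d\n", hi);
  sqlite3_close(db);
  return 0;
}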
1836 1837 1838 1839 1840 1841 1842 | fprintf(stderr, "Error: %s\n", zErrMsg); sqlite3_free(zErrMsg); rc = 1; } }else #endif | | | 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 | fprintf(stderr, "Error: %s\n", zErrMsg); sqlite3_free(zErrMsg); rc = 1; } }else #endif if( c=='l' && strncmp(azArg[0], "log", n)==0 && nArg>=2 ){ const char *zFile = azArg[1]; if( p->pLog && p->pLog!=stdout && p->pLog!=stderr ){ fclose(p->pLog); p->pLog = 0; } if( strcmp(zFile,"stdout")==0 ){ p->pLog = stdout; |
︙ | ︙ | |||
2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 | printf("%s%-*s", zSp, maxlen, azResult[j] ? azResult[j] : ""); } printf("\n"); } } sqlite3_free_table(azResult); }else if( c=='t' && n>4 && strncmp(azArg[0], "timeout", n)==0 && nArg==2 ){ open_db(p); sqlite3_busy_timeout(p->db, atoi(azArg[1])); }else | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > | 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 | printf("%s%-*s", zSp, maxlen, azResult[j] ? azResult[j] : ""); } printf("\n"); } } sqlite3_free_table(azResult); }else if( c=='t' && n>=8 && strncmp(azArg[0], "testctrl", n)==0 && nArg>=2 ){ static const struct { const char *zCtrlName; /* Name of a test-control option */ int ctrlCode; /* Integer code for that option */ } aCtrl[] = { { "prng_save", SQLITE_TESTCTRL_PRNG_SAVE }, { "prng_restore", SQLITE_TESTCTRL_PRNG_RESTORE }, { "prng_reset", SQLITE_TESTCTRL_PRNG_RESET }, { "bitvec_test", SQLITE_TESTCTRL_BITVEC_TEST }, { "fault_install", SQLITE_TESTCTRL_FAULT_INSTALL }, { "benign_malloc_hooks", SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS }, { "pending_byte", SQLITE_TESTCTRL_PENDING_BYTE }, { "assert", SQLITE_TESTCTRL_ASSERT }, { "always", SQLITE_TESTCTRL_ALWAYS }, { "reserve", SQLITE_TESTCTRL_RESERVE }, { "optimizations", SQLITE_TESTCTRL_OPTIMIZATIONS }, { "iskeyword", SQLITE_TESTCTRL_ISKEYWORD }, { "pghdrsz", SQLITE_TESTCTRL_PGHDRSZ }, { "scratchmalloc", SQLITE_TESTCTRL_SCRATCHMALLOC }, }; int testctrl = -1; int rc = 0; int i, n; open_db(p); /* convert testctrl text option to value. allow any unique prefix ** of the option name, or a numerical value. 
*/ n = strlen(azArg[1]); for(i=0; i<sizeof(aCtrl)/sizeof(aCtrl[0]); i++){ if( strncmp(azArg[1], aCtrl[i].zCtrlName, n)==0 ){ if( testctrl<0 ){ testctrl = aCtrl[i].ctrlCode; }else{ fprintf(stderr, "ambiguous option name: \"%s\"\n", azArg[i]); testctrl = -1; break; } } } if( testctrl<0 ) testctrl = atoi(azArg[1]); if( (testctrl<SQLITE_TESTCTRL_FIRST) || (testctrl>SQLITE_TESTCTRL_LAST) ){ fprintf(stderr,"Error: invalid testctrl option: %s\n", azArg[1]); }else{ switch(testctrl){ /* sqlite3_test_control(int, db, int) */ case SQLITE_TESTCTRL_OPTIMIZATIONS: case SQLITE_TESTCTRL_RESERVE: if( nArg==3 ){ int opt = (int)strtol(azArg[2], 0, 0); rc = sqlite3_test_control(testctrl, p->db, opt); printf("%d (0x%08x)\n", rc, rc); } else { fprintf(stderr,"Error: testctrl %s takes a single int option\n", azArg[1]); } break; /* sqlite3_test_control(int) */ case SQLITE_TESTCTRL_PRNG_SAVE: case SQLITE_TESTCTRL_PRNG_RESTORE: case SQLITE_TESTCTRL_PRNG_RESET: case SQLITE_TESTCTRL_PGHDRSZ: if( nArg==2 ){ rc = sqlite3_test_control(testctrl); printf("%d (0x%08x)\n", rc, rc); } else { fprintf(stderr,"Error: testctrl %s takes no options\n", azArg[1]); } break; /* sqlite3_test_control(int, uint) */ case SQLITE_TESTCTRL_PENDING_BYTE: if( nArg==3 ){ unsigned int opt = (unsigned int)atoi(azArg[2]); rc = sqlite3_test_control(testctrl, opt); printf("%d (0x%08x)\n", rc, rc); } else { fprintf(stderr,"Error: testctrl %s takes a single unsigned" " int option\n", azArg[1]); } break; /* sqlite3_test_control(int, int) */ case SQLITE_TESTCTRL_ASSERT: case SQLITE_TESTCTRL_ALWAYS: if( nArg==3 ){ int opt = atoi(azArg[2]); rc = sqlite3_test_control(testctrl, opt); printf("%d (0x%08x)\n", rc, rc); } else { fprintf(stderr,"Error: testctrl %s takes a single int option\n", azArg[1]); } break; /* sqlite3_test_control(int, char *) */ #ifdef SQLITE_N_KEYWORD case SQLITE_TESTCTRL_ISKEYWORD: if( nArg==3 ){ const char *opt = azArg[2]; rc = sqlite3_test_control(testctrl, opt); printf("%d (0x%08x)\n", rc, rc); } else { fprintf(stderr,"Error: testctrl %s takes a single char * option\n", azArg[1]); } break; #endif case SQLITE_TESTCTRL_BITVEC_TEST: case SQLITE_TESTCTRL_FAULT_INSTALL: case SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS: case SQLITE_TESTCTRL_SCRATCHMALLOC: default: fprintf(stderr,"Error: CLI support for testctrl %s not implemented\n", azArg[1]); break; } } }else if( c=='t' && n>4 && strncmp(azArg[0], "timeout", n)==0 && nArg==2 ){ open_db(p); sqlite3_busy_timeout(p->db, atoi(azArg[1])); }else if( HAS_TIMER && c=='t' && n>=5 && strncmp(azArg[0], "timer", n)==0 && nArg==2 ){ enableTimer = booleanValue(azArg[1]); }else if( c=='w' && strncmp(azArg[0], "width", n)==0 && nArg>1 ){ int j; assert( nArg<=ArraySize(azArg) ); for(j=1; j<nArg && j<ArraySize(p->colWidth); j++){ |
︙ | ︙ | |||
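The new ".testctrl" dot-command is a thin wrapper around sqlite3_test_control(): it resolves a (possibly abbreviated) option name to an SQLITE_TESTCTRL_* code and forwards the numeric argument. As a rough sketch, not part of this check-in, ".testctrl optimizations 0x01" and ".testctrl reserve 16" amount to direct calls like the following ("my.db" is a made-up file name):

#include <sqlite3.h>
#include <stdio.h>

int main(void){
  sqlite3 *db;
  int rc;
  if( sqlite3_open("my.db", &db)!=SQLITE_OK ) return 1;
  /* Disable an optimization class for testing (bitmask argument) */
  rc = sqlite3_test_control(SQLITE_TESTCTRL_OPTIMIZATIONS, db, 0x01);
  printf("optimizations -> %d (0x%08x)\n", rc, rc);
  /* Change the number of reserved bytes at the end of each page */
  rc = sqlite3_test_control(SQLITE_TESTCTRL_RESERVE, db, 16);
  printf("reserve -> %d (0x%08x)\n", rc, rc);
  sqlite3_close(db);
  return 0;
}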
2349 2350 2351 2352 2353 2354 2355 | } free(zSql); zSql = 0; nSql = 0; } } if( zSql ){ | > | > | 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 | } free(zSql); zSql = 0; nSql = 0; } } if( zSql ){ if( !_all_whitespace(zSql) ){ fprintf(stderr, "Error: incomplete SQL: %s\n", zSql); } free(zSql); } free(zLine); return errCnt; } /* |
︙ | ︙ | |||
2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 | " -html set output mode to HTML\n" " -line set output mode to 'line'\n" " -list set output mode to 'list'\n" " -separator 'x' set output field separator (|)\n" " -stats print memory stats before each finalize\n" " -nullvalue 'text' set text string for NULL values\n" " -version show SQLite version\n" ; static void usage(int showDetail){ fprintf(stderr, "Usage: %s [OPTIONS] FILENAME [SQL]\n" "FILENAME is the name of an SQLite database. A new database is created\n" "if the file does not previously exist.\n", Argv0); if( showDetail ){ | > > > > | 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 | " -html set output mode to HTML\n" " -line set output mode to 'line'\n" " -list set output mode to 'list'\n" " -separator 'x' set output field separator (|)\n" " -stats print memory stats before each finalize\n" " -nullvalue 'text' set text string for NULL values\n" " -version show SQLite version\n" " -vfs NAME use NAME as the default VFS\n" #ifdef SQLITE_ENABLE_VFSTRACE " -vfstrace enable tracing of all VFS calls\n" #endif ; static void usage(int showDetail){ fprintf(stderr, "Usage: %s [OPTIONS] FILENAME [SQL]\n" "FILENAME is the name of an SQLite database. A new database is created\n" "if the file does not previously exist.\n", Argv0); if( showDetail ){ |
︙ | ︙ | |||
2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 | */ #ifdef SIGINT signal(SIGINT, interrupt_handler); #endif /* Do an initial pass through the command-line argument to locate ** the name of the database file, the name of the initialization file, ** and the first command to execute. */ for(i=1; i<argc-1; i++){ char *z; if( argv[i][0]!='-' ) break; z = argv[i]; if( z[0]=='-' && z[1]=='-' ) z++; if( strcmp(argv[i],"-separator")==0 || strcmp(argv[i],"-nullvalue")==0 ){ i++; }else if( strcmp(argv[i],"-init")==0 ){ i++; zInitFile = argv[i]; /* Need to check for batch mode here to so we can avoid printing ** informational messages (like from process_sqliterc) before ** we do the actual processing of arguments later in a second pass. */ }else if( strcmp(argv[i],"-batch")==0 ){ stdin_is_interactive = 0; } } if( i<argc ){ #if defined(SQLITE_OS_OS2) && SQLITE_OS_OS2 data.zDbFilename = (const char *)convertCpPathToUtf8( argv[i++] ); #else data.zDbFilename = argv[i++]; #endif }else{ #ifndef SQLITE_OMIT_MEMORYDB data.zDbFilename = ":memory:"; #else data.zDbFilename = 0; #endif /***** Begin Fossil Patch *****/ { | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 2689 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 2754 2755 2756 2757 2758 2759 2760 | */ #ifdef SIGINT signal(SIGINT, interrupt_handler); #endif /* Do an initial pass through the command-line argument to locate ** the name of the database file, the name of the initialization file, ** the size of the alternative malloc heap, ** and the first command to execute. */ for(i=1; i<argc-1; i++){ char *z; if( argv[i][0]!='-' ) break; z = argv[i]; if( z[0]=='-' && z[1]=='-' ) z++; if( strcmp(argv[i],"-separator")==0 || strcmp(argv[i],"-nullvalue")==0 ){ i++; }else if( strcmp(argv[i],"-init")==0 ){ i++; zInitFile = argv[i]; /* Need to check for batch mode here to so we can avoid printing ** informational messages (like from process_sqliterc) before ** we do the actual processing of arguments later in a second pass. 
*/ }else if( strcmp(argv[i],"-batch")==0 ){ stdin_is_interactive = 0; }else if( strcmp(argv[i],"-heap")==0 ){ int j, c; const char *zSize; sqlite3_int64 szHeap; zSize = argv[++i]; szHeap = atoi(zSize); for(j=0; (c = zSize[j])!=0; j++){ if( c=='M' ){ szHeap *= 1000000; break; } if( c=='K' ){ szHeap *= 1000; break; } if( c=='G' ){ szHeap *= 1000000000; break; } } if( szHeap>0x7fff0000 ) szHeap = 0x7fff0000; #if defined(SQLITE_ENABLE_MEMSYS3) || defined(SQLITE_ENABLE_MEMSYS5) sqlite3_config(SQLITE_CONFIG_HEAP, malloc((int)szHeap), (int)szHeap, 64); #endif #ifdef SQLITE_ENABLE_VFSTRACE }else if( strcmp(argv[i],"-vfstrace")==0 ){ extern int vfstrace_register( const char *zTraceName, const char *zOldVfsName, int (*xOut)(const char*,void*), void *pOutArg, int makeDefault ); vfstrace_register("trace",0,(int(*)(const char*,void*))fputs,stderr,1); #endif }else if( strcmp(argv[i],"-vfs")==0 ){ sqlite3_vfs *pVfs = sqlite3_vfs_find(argv[++i]); if( pVfs ){ sqlite3_vfs_register(pVfs, 1); }else{ fprintf(stderr, "no such VFS: \"%s\"\n", argv[i]); exit(1); } } } if( i<argc ){ #if defined(SQLITE_OS_OS2) && SQLITE_OS_OS2 data.zDbFilename = (const char *)convertCpPathToUtf8( argv[i++] ); #else data.zDbFilename = argv[i++]; #endif }else{ #ifndef SQLITE_OMIT_MEMORYDB data.zDbFilename = ":memory:"; #else data.zDbFilename = 0; #endif /***** Begin Fossil Patch *****/ { extern void fossil_open(const char **); fossil_open(&data.zDbFilename); } /***** End Fossil Patch *****/ } if( i<argc ){ zFirstCmd = argv[i++]; } if( i<argc ){ |
︙ | ︙ | |||
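The new "-heap SIZE" option (with K/M/G suffixes) hands SQLite one large allocation to manage itself through SQLITE_CONFIG_HEAP; it only takes effect when SQLite is compiled with SQLITE_ENABLE_MEMSYS5 (or MEMSYS3), otherwise sqlite3_config() reports an error and ordinary malloc() remains in use. A rough standalone equivalent of "-heap 20M", not part of this check-in:

#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>

int main(void){
  sqlite3_int64 szHeap = 20*1000000;      /* what "-heap 20M" parses to */
  void *pHeap = malloc((size_t)szHeap);
  sqlite3 *db;
  int rc;
  if( pHeap==0 ) return 1;
  /* Must run before any other SQLite interface is used; 64 is the
  ** minimum allocation size, as in the shell patch above. */
  rc = sqlite3_config(SQLITE_CONFIG_HEAP, pHeap, (int)szHeap, 64);
  printf("SQLITE_CONFIG_HEAP rc=%d\n", rc);
  if( sqlite3_open(":memory:", &db)==SQLITE_OK ) sqlite3_close(db);
  sqlite3_shutdown();
  free(pHeap);
  return 0;
}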
2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 | }else if( strcmp(z,"-version")==0 ){ printf("%s\n", sqlite3_libversion()); return 0; }else if( strcmp(z,"-interactive")==0 ){ stdin_is_interactive = 1; }else if( strcmp(z,"-batch")==0 ){ stdin_is_interactive = 0; }else if( strcmp(z,"-help")==0 || strcmp(z, "--help")==0 ){ usage(1); }else{ fprintf(stderr,"%s: Error: unknown option: %s\n", Argv0, z); fprintf(stderr,"Use -help for a list of options.\n"); return 1; } | > > > > > > | 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 | }else if( strcmp(z,"-version")==0 ){ printf("%s\n", sqlite3_libversion()); return 0; }else if( strcmp(z,"-interactive")==0 ){ stdin_is_interactive = 1; }else if( strcmp(z,"-batch")==0 ){ stdin_is_interactive = 0; }else if( strcmp(z,"-heap")==0 ){ i++; }else if( strcmp(z,"-vfs")==0 ){ i++; }else if( strcmp(z,"-vfstrace")==0 ){ i++; }else if( strcmp(z,"-help")==0 || strcmp(z, "--help")==0 ){ usage(1); }else{ fprintf(stderr,"%s: Error: unknown option: %s\n", Argv0, z); fprintf(stderr,"Use -help for a list of options.\n"); return 1; } |
︙ | ︙ | |||
2725 2726 2727 2728 2729 2730 2731 | free(zHome); }else{ rc = process_input(&data, stdin); } } set_table_name(&data, 0); if( data.db ){ | | < < < < | 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 | free(zHome); }else{ rc = process_input(&data, stdin); } } set_table_name(&data, 0); if( data.db ){ sqlite3_close(data.db); } return rc; } |
Changes to src/shun.c.
︙ | ︙ | |||
46 47 48 49 50 51 52 | char zCanonical[UUID_SIZE+1]; login_check_credentials(); if( !g.okAdmin ){ login_needed(); } if( P("rebuild") ){ | | | | > | > > | > > > > > > > > > > > | 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | char zCanonical[UUID_SIZE+1]; login_check_credentials(); if( !g.okAdmin ){ login_needed(); } if( P("rebuild") ){ db_close(1); db_open_repository(g.zRepositoryName); db_begin_transaction(); rebuild_db(0, 0, 0); db_end_transaction(0); } if( zUuid ){ nUuid = strlen(zUuid); if( nUuid!=40 || !validate16(zUuid, nUuid) ){ zUuid = 0; }else{ memcpy(zCanonical, zUuid, UUID_SIZE+1); canonical16(zCanonical, UUID_SIZE); zUuid = zCanonical; } } style_header("Shunned Artifacts"); if( zUuid && P("sub") ){ login_verify_csrf_secret(); db_multi_exec("DELETE FROM shun WHERE uuid='%s'", zUuid); if( db_exists("SELECT 1 FROM blob WHERE uuid='%s'", zUuid) ){ @ <p class="noMoreShun">Artifact @ <a href="%s(g.zTop)/artifact/%s(zUuid)">%s(zUuid)</a> is no @ longer being shunned.</p> }else{ @ <p class="noMoreShun">Artifact %s(zUuid) will no longer @ be shunned. But it does not exist in the repository. It @ may be necessary to rebuild the repository using the @ <b>fossil rebuild</b> command-line before the artifact content @ can pulled in from other respositories.</p> } } if( zUuid && P("add") ){ int rid, tagid; login_verify_csrf_secret(); db_multi_exec( "INSERT OR IGNORE INTO shun(uuid,mtime)" " VALUES('%s', now())", zUuid); @ <p class="shunned">Artifact @ <a href="%s(g.zTop)/artifact/%s(zUuid)">%s(zUuid)</a> has been @ shunned. It will no longer be pushed. @ It will be removed from the repository the next time the respository @ is rebuilt using the <b>fossil rebuild</b> command-line</p> db_multi_exec("DELETE FROM attachment WHERE src=%Q", zUuid); rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", zUuid); if( rid ){ db_multi_exec("DELETE FROM event WHERE objid=%d", rid); } tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='tkt-%q'", zUuid); if( tagid ){ db_multi_exec("DELETE FROM ticket WHERE tkt_uuid=%Q", zUuid); db_multi_exec("DELETE FROM tag WHERE tagid=%d", tagid); db_multi_exec("DELETE FROM tagxref WHERE tagid=%d", tagid); } } @ <p>A shunned artifact will not be pushed nor accepted in a pull and the @ artifact content will be purged from the repository the next time the @ repository is rebuilt. A list of shunned artifacts can be seen at the @ bottom of this page.</p> @ @ <a name="addshun"></a> |
︙ | ︙ | |||
110 111 112 113 114 115 116 | @ from the repository. Inappropriate content includes such things as @ spam added to Wiki, files that violate copyright or patent agreements, @ or artifacts that by design or accident interfere with the processing @ of the repository. Do not shun artifacts merely to remove them from @ sight - set the "hidden" tag on such artifacts instead.</p> @ @ <blockquote> | | | | | | 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 | @ from the repository. Inappropriate content includes such things as @ spam added to Wiki, files that violate copyright or patent agreements, @ or artifacts that by design or accident interfere with the processing @ of the repository. Do not shun artifacts merely to remove them from @ sight - set the "hidden" tag on such artifacts instead.</p> @ @ <blockquote> @ <form method="post" action="%s(g.zTop)/%s(g.zPath)"><div> login_insert_csrf_secret(); @ <input type="text" name="uuid" value="%h(PD("shun",""))" size="50" /> @ <input type="submit" name="add" value="Shun" /> @ </div></form> @ </blockquote> @ @ <p>Enter the UUID of a previous shunned artifact to cause it to be @ accepted again in the repository. The artifact content is not @ restored because the content is unknown. The only change is that @ the formerly shunned artifact will be accepted on subsequent sync @ operations.</p> @ @ <blockquote> @ <form method="post" action="%s(g.zTop)/%s(g.zPath)"><div> login_insert_csrf_secret(); @ <input type="text" name="uuid" size="50" /> @ <input type="submit" name="sub" value="Accept" /> @ </div></form> @ </blockquote> @ @ <p>Press the Rebuild button below to rebuild the respository. The @ content of newly shunned artifacts is not purged until the repository @ is rebuilt. On larger repositories, the rebuild may take minute or @ two, so be patient after pressing the button.</p> @ @ <blockquote> @ <form method="post" action="%s(g.zTop)/%s(g.zPath)"><div> login_insert_csrf_secret(); @ <input type="submit" name="rebuild" value="Rebuild" /> @ </div></form> @ </blockquote> @ @ <hr /><p>Shunned Artifacts:</p> @ <blockquote><p> db_prepare(&q, "SELECT uuid, EXISTS(SELECT 1 FROM blob WHERE blob.uuid=shun.uuid)" " FROM shun ORDER BY uuid"); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); int stillExists = db_column_int(&q, 1); cnt++; if( stillExists ){ @ <b><a href="%s(g.zTop)/artifact/%s(zUuid)">%s(zUuid)</a></b><br /> }else{ @ <b>%s(zUuid)</b><br /> } } if( cnt==0 ){ @ <i>no artifacts are shunned on this server</i> } |
︙ | ︙ | |||
300 301 302 303 304 305 306 | ); @ <tr><td valign="top" align="right"><b>Artifacts:</b></td> @ <td valign="top"> while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); const char *zUuid = db_column_text(&q, 1); int size = db_column_int(&q, 2); | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 | ); @ <tr><td valign="top" align="right"><b>Artifacts:</b></td> @ <td valign="top"> while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); const char *zUuid = db_column_text(&q, 1); int size = db_column_int(&q, 2); @ <a href="%s(g.zTop)/info/%s(zUuid)">%s(zUuid)</a> @ (rid: %d(rid), size: %d(size))<br /> } @ </td></tr> @ </table> db_finalize(&q); style_footer(); } |
Changes to src/skins.c.
︙ | ︙ | |||
23 24 25 26 27 28 29 | /* @-comment: // */ /* ** A black-and-white theme with the project title in a bar across the top ** and no logo image. */ static const char zBuiltinSkin1[] = | > | | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | /* @-comment: // */ /* ** A black-and-white theme with the project title in a bar across the top ** and no logo image. */ static const char zBuiltinSkin1[] = @ REPLACE INTO config(name,mtime,value) @ VALUES('css',now(),'/* General settings for the entire page */ @ body { @ margin: 0ex 1ex; @ padding: 0px; @ background-color: white; @ font-family: sans-serif; @ } @ |
︙ | ︙ | |||
150 151 152 153 154 155 156 | @ @ /* The label/value pairs on (for example) the vinfo page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ }'); | | | | | | | | < | | | | | | | | | > > | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 | @ @ /* The label/value pairs on (for example) the vinfo page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ }'); @ REPLACE INTO config(name,mtime,value) VALUES('header',now(),'<html> @ <head> @ <title>$<project_name>: $<title></title> @ <link rel="alternate" type="application/rss+xml" title="RSS Feed" @ href="$home/timeline.rss"> @ <link rel="stylesheet" href="$home/style.css?blackwhite" type="text/css" @ media="screen"> @ </head> @ <body> @ <div class="header"> @ <div class="logo"> @ <img src="$home/logo" alt="logo"> @ </div> @ <div class="title"><small>$<project_name></small><br />$<title></div> @ <div class="status"><nobr><th1> @ if {[info exists login]} { @ puts "Logged in as $login" @ } else { @ puts "Not logged in" @ } @ </th1></nobr></div> @ </div> @ <div class="mainmenu"><th1> @ html "<a href=''$home$index_page''>Home</a> " @ if {[anycap jor]} { @ html "<a href=''$home/timeline''>Timeline</a> " @ } @ if {[hascap oh]} { @ html "<a href=''$home/dir?ci=tip''>Files</a> " @ } @ if {[hascap o]} { @ html "<a href=''$home/brlist''>Branches</a> " @ html "<a href=''$home/taglist''>Tags</a> " @ } @ if {[hascap r]} { @ html "<a href=''$home/reportlist''>Tickets</a> " @ } @ if {[hascap j]} { @ html "<a href=''$home/wiki''>Wiki</a> " @ } @ if {[hascap s]} { @ html "<a href=''$home/setup''>Admin</a> " @ } elseif {[hascap a]} { @ html "<a href=''$home/setup_ulist''>Users</a> " @ } @ if {[info exists login]} { @ html "<a href=''$home/login''>Logout</a> " @ } else { @ html "<a href=''$home/login''>Login</a> " @ } @ </th1></div> @ '); @ REPLACE INTO config(name,mtime,value) @ VALUES('footer',now(),'<div class="footer"> @ Fossil version $manifest_version $manifest_date @ </div> @ </body></html> @ '); ; /* ** A tan theme with the project title above the user identification ** and no logo image. */ static const char zBuiltinSkin2[] = @ REPLACE INTO config(name,mtime,value) @ VALUES('css',now(),'/* General settings for the entire page */ @ body { @ margin: 0ex 0ex; @ padding: 0px; @ background-color: #fef3bc; @ font-family: sans-serif; @ } @ |
︙ | ︙ | |||
353 354 355 356 357 358 359 | @ /* The label/value pairs on (for example) the ci page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ } @ '); | | | | | | | < | | | | | | | | | > > | | 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 | @ /* The label/value pairs on (for example) the ci page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ } @ '); @ REPLACE INTO config(name,mtime,value) VALUES('header',now(),'<html> @ <head> @ <title>$<project_name>: $<title></title> @ <link rel="alternate" type="application/rss+xml" title="RSS Feed" @ href="$home/timeline.rss"> @ <link rel="stylesheet" href="$home/style.css?tan" type="text/css" @ media="screen"> @ </head> @ <body> @ <div class="header"> @ <div class="title">$<title></div> @ <div class="status"> @ <div class="logo"><nobr>$<project_name></nobr></div><br/> @ <nobr><th1> @ if {[info exists login]} { @ puts "Logged in as $login" @ } else { @ puts "Not logged in" @ } @ </th1></nobr></div> @ </div> @ <div class="mainmenu"><th1> @ html "<a href=''$home$index_page''>Home</a> " @ if {[anycap jor]} { @ html "<a href=''$home/timeline''>Timeline</a> " @ } @ if {[hascap oh]} { @ html "<a href=''$home/dir?ci=tip''>Files</a> " @ } @ if {[hascap o]} { @ html "<a href=''$home/brlist''>Branches</a> " @ html "<a href=''$home/taglist''>Tags</a> " @ } @ if {[hascap r]} { @ html "<a href=''$home/reportlist''>Tickets</a> " @ } @ if {[hascap j]} { @ html "<a href=''$home/wiki''>Wiki</a> " @ } @ if {[hascap s]} { @ html "<a href=''$home/setup''>Admin</a> " @ } elseif {[hascap a]} { @ html "<a href=''$home/setup_ulist''>Users</a> " @ } @ if {[info exists login]} { @ html "<a href=''$home/login''>Logout</a> " @ } else { @ html "<a href=''$home/login''>Login</a> " @ } @ </th1></div> @ '); @ REPLACE INTO config(name,mtime,value) @ VALUES('footer',now(),'<div class="footer"> @ Fossil version $manifest_version $manifest_date @ </div> @ </body></html> @ '); ; /* ** Black letters on a white or cream background with the main menu ** stuck on the left-hand side. */ static const char zBuiltinSkin3[] = @ REPLACE INTO config(name,mtime,value) @ VALUES('css',now(),'/* General settings for the entire page */ @ body { @ margin:0px 0px 0px 0px; @ padding:0px; @ font-family:verdana, arial, helvetica, "sans serif"; @ color:#333; @ background-color:white; @ } |
︙ | ︙ | |||
586 587 588 589 590 591 592 | @ @ /* The label/value pairs on (for example) the ci page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ }'); | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | < | | | | | | | | | > > | 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 | @ @ /* The label/value pairs on (for example) the ci page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ }'); @ REPLACE INTO config(name,mtime,value) VALUES('header',now(),'<html> @ <head> @ <title>$<project_name>: $<title></title> @ <link rel="alternate" type="application/rss+xml" title="RSS Feed" @ href="$home/timeline.rss"> @ <link rel="stylesheet" href="$home/style.css?black2" type="text/css" @ media="screen"> @ </head> @ <body> @ <div class="header"> @ <div class="logo"> @ <img src="$home/logo" alt="logo"> @ <br /><nobr>$<project_name></nobr> @ </div> @ <div class="title">$<title></div> @ <div class="status"><nobr><th1> @ if {[info exists login]} { @ puts "Logged in as $login" @ } else { @ puts "Not logged in" @ } @ </th1></nobr></div> @ </div> @ <div class="mainmenu"><th1> @ html "<li><a href=''$home$index_page''>Home</a></li>" @ if {[anycap jor]} { @ html "<li><a href=''$home/timeline''>Timeline</a></li>" @ } @ if {[hascap oh]} { @ html "<li><a href=''$home/dir?ci=tip''>Files</a></li>" @ } @ if {[hascap o]} { @ html "<li><a href=''$home/brlist''>Branches</a></li>" @ html "<li><a href=''$home/taglist''>Tags</a></li>" @ } @ if 
{[hascap r]} { @ html "<li><a href=''$home/reportlist''>Tickets</a></li>" @ } @ if {[hascap j]} { @ html "<li><a href=''$home/wiki''>Wiki</a></li>" @ } @ if {[hascap s]} { @ html "<li><a href=''$home/setup''>Admin</a></li>" @ } elseif {[hascap a]} { @ html "<li><a href=''$home/setup_ulist''>Users</a></li>" @ } @ if {[info exists login]} { @ html "<li><a href=''$home/login''>Logout</a></li>" @ } else { @ html "<li><a href=''$home/login''>Login</a></li>" @ } @ </th1></ul></div> @ <div id="container"> @ '); @ REPLACE INTO config(name,mtime,value) VALUES('footer',now(),'</div> @ <div class="footer"> @ Fossil version $manifest_version $manifest_date @ </div> @ </body></html> @ '); ; /* ** Gradients and rounded corners. */ static const char zBuiltinSkin4[] = @ REPLACE INTO config(name,mtime,value) @ VALUES('css',now(),'/* General settings for the entire page */ @ html { @ min-height: 100%; @ } @ body { @ margin: 0ex 1ex; @ padding: 0px; @ background-color: white; @ color: #333; @ font-family: Verdana, sans-serif; @ font-size: 0.8em; @ } @ @ /* The project logo in the upper left-hand corner of each page */ @ div.logo { @ display: table-cell; @ text-align: right; @ vertical-align: bottom; @ font-weight: normal; @ } @ @ /* Widths */ @ div.header, div.mainmenu, div.submenu, div.content, div.footer { @ max-width: 900px; @ margin: auto; @ padding: 3px 20px 3px 20px; @ clear: both; @ } @ @ /* The page title at the top of each page */ @ div.title { @ display: table-cell; @ padding-left: 10px; @ font-size: 2em; @ margin: 10px 0 10px -20px; @ vertical-align: bottom; @ text-align: left; @ width: 80%; @ font-family: Verdana, sans-serif; @ font-weight: bold; @ color: #558195; @ text-shadow: 0px 2px 2px #999999; @ } @ @ /* The login status message in the top right-hand corner */ @ div.status { @ display: table-cell; @ text-align: right; @ vertical-align: bottom; @ color: #333; @ margin-right: -20px; @ } @ @ /* The main menu bar that appears at the top of the page beneath @ ** the header */ @ div.mainmenu { @ text-align: center; @ color: white; @ -moz-border-top-right-radius: 5px; @ -moz-border-top-left-radius: 5px; @ -webkit-border-top-right-radius: 5px; @ -webkit-border-top-left-radius: 5px; @ -border-top-right-radius: 5px; @ -border-top-left-radius: 5px; @ border-top-left-radius: 5px; @ border-top-right-radius: 5px; @ vertical-align: center; @ min-height: 2em; @ background-color: #446979; @ background: -webkit-gradient(linear,left bottom,left top, color-stop(0.02, rgb(51,81,94)), color-stop(0.76, rgb(85,129,149))); @ background: -moz-linear-gradient(center bottom,rgb(51,81,94) 2%, rgb(85,129,149) 76%); @ -webkit-box-shadow: 0px 3px 4px #333333; @ -moz-box-shadow: 0px 3px 4px #333333; @ box-shadow: 0px 3px 4px #333333; @ } @ @ /* The submenu bar that *sometimes* appears below the main menu */ @ div.submenu { @ padding-top:10px; @ padding-bottom:0; @ text-align: right; @ color: #000; @ background-color: #fff; @ height: 1.5em; @ vertical-align:middle; @ -webkit-box-shadow: 0px 3px 4px #999; @ -moz-box-shadow: 0px 3px 4px #999; @ box-shadow: 0px 3px 4px #999; @ } @ div.mainmenu a, div.mainmenu a:visited { @ padding: 3px 10px 3px 10px; @ color: white; @ text-decoration: none; @ } @ div.submenu a, div.submenu a:visited { @ padding: 2px 8px; @ color: #000; @ font-family: Arial; @ text-decoration: none; @ margin:auto; @ -webkit-border-radius: 5px; @ -moz-border-radius: 5px; @ border-radius: 5px; @ background: -webkit-gradient(linear,left bottom, left top, color-stop(0, rgb(184,184,184)), color-stop(0.75, 
rgb(214,214,214))); @ background: -moz-linear-gradient(center bottom, rgb(184,184,184) 0%, rgb(214,214,214) 75%); @ background-color: #e0e0e0 ; @ text-shadow: 0px -1px 0px #eee; @ filter: dropshadow(color=#eeeeee, offx=0, offy=-1); @ border: 1px solid #000; @ } @ @ div.mainmenu a:hover { @ color: #000; @ background-color: white; @ } @ @ div.submenu a:hover { @ background: -webkit-gradient(linear,left bottom, left top, color-stop(0, rgb(214,214,214)), color-stop(0.75, rgb(184,184,184))); @ background: -moz-linear-gradient(center bottom, rgb(214,214,214) 0%, rgb(184,184,184) 75%); @ background-color: #c0c0c0 ; @ } @ @ /* All page content from the bottom of the menu or submenu down to @ ** the footer */ @ div.content { @ background-color: #fff; @ -webkit-box-shadow: 0px 3px 4px #999; @ -moz-box-shadow: 0px 3px 4px #999; @ box-shadow: 0px 3px 4px #999; @ -moz-border-bottom-right-radius: 5px; @ -moz-border-bottom-left-radius: 5px; @ -webkit-border-bottom-right-radius: 5px; @ -webkit-border-bottom-left-radius: 5px; @ border-bottom-right-radius: 5px; @ border-bottom-left-radius: 5px; @ padding-bottom: 1em; @ min-height:40%; @ } @ @ @ /* Some pages have section dividers */ @ div.section { @ margin-bottom: 0.5em; @ margin-top: 1em; @ margin-right: auto; @ @ padding: 1px 1px 1px 1px; @ font-size: 1.2em; @ font-weight: bold; @ @ text-align: center; @ color: white; @ @ -webkit-border-radius: 5px; @ -moz-border-radius: 5px; @ border-radius: 5px; @ @ background-color: #446979; @ background: -webkit-gradient(linear,left bottom,left top, color-stop(0.02, rgb(51,81,94)), color-stop(0.76, rgb(85,129,149))); @ background: -moz-linear-gradient(center bottom,rgb(51,81,94) 2%, rgb(85,129,149) 76%); @ @ -webkit-box-shadow: 0px 3px 4px #333333; @ -moz-box-shadow: 0px 3px 4px #333333; @ box-shadow: 0px 3px 4px #333333; @ } @ @ /* The "Date" that occurs on the left hand side of timelines */ @ div.divider { @ font-size: 1.2em; @ font-family: Georgia, serif; @ font-weight: bold; @ margin-top: 1em; @ white-space: nowrap; @ } @ @ /* The footer at the very bottom of the page */ @ div.footer { @ font-size: 0.9em; @ text-align: right; @ margin-bottom: 1em; @ color: #666; @ } @ @ /* Hyperlink colors in the footer */ @ div.footer a { color: white; } @ div.footer a:link { color: white; } @ div.footer a:visited { color: white; } @ div.footer a:hover { background-color: white; color: #558195; } @ @ /* <verbatim> blocks */ @ pre.verbatim, blockquote pre { @ font-family: Dejavu Sans Mono, Monaco, Lucida Console, monospace; @ background-color: #f3f3f3; @ padding: 0.5em; @ white-space: pre-wrap; @ } @ @ blockquote pre { @ border: 1px #000 dashed; @ } @ @ /* The label/value pairs on (for example) the ci page */ @ table.label-value th { @ vertical-align: top; @ text-align: right; @ padding: 0.2ex 2ex; @ } @ @ @ table.report { @ border-collapse:collapse; @ border: 1px solid #999; @ margin: 1em 0 1em 0; @ } @ @ table.report tr th { @ padding: 3px 5px; @ text-transform : capitalize; @ } @ @ table.report tr td { @ padding: 3px 5px; @ } @ @ textarea { @ font-size: 1em; @ }'); @ REPLACE INTO config(name,mtime,value) VALUES('header',now(),'<html> @ <head> @ <title>$<project_name>: $<title></title> @ <link rel="alternate" type="application/rss+xml" title="RSS Feed" @ href="$home/timeline.rss"> @ <link rel="stylesheet" href="$home/style.css?black2" type="text/css" @ media="screen"> @ </head> @ <body> @ <div class="header"> @ <div class="logo"> @ <img src="$home/logo" alt="logo"> @ <br /><nobr>$<project_name></nobr> @ </div> @ <div 
class="title">$<title></div> @ <div class="status"><nobr><th1> @ if {[info exists login]} { @ puts "Logged in as $login" @ } else { @ puts "Not logged in" @ } @ </th1></nobr></div> @ </div> @ <div class="mainmenu"><ul><th1> @ html "<a href=''$home$index_page''>Home</a>" @ if {[anycap jor]} { @ html "<a href=''$home/timeline''>Timeline</a>" @ } @ if {[hascap oh]} { @ html "<a href=''$home/dir?ci=tip''>Files</a>" @ } @ if {[hascap o]} { @ html "<a href=''$home/brlist''>Branches</a>" @ html "<a href=''$home/taglist''>Tags</a>" @ } @ if {[hascap r]} { @ html "<a href=''$home/reportlist''>Tickets</a>" @ } @ if {[hascap j]} { @ html "<a href=''$home/wiki''>Wiki</a>" @ } @ if {[hascap s]} { @ html "<a href=''$home/setup''>Admin</a>" @ } elseif {[hascap a]} { @ html "<a href=''$home/setup_ulist''>Users</a>" @ } @ if {[info exists login]} { @ html "<a href=''$home/login''>Logout</a>" @ } else { @ html "<a href=''$home/login''>Login</a>" @ } @ </th1></ul></div> @ <div id="container"> @ '); @ REPLACE INTO config(name,mtime,value) VALUES('footer',now(),'</div> @ <div class="footer"> @ Fossil version $manifest_version $manifest_date @ </div> @ </body></html> @ '); ; /* ** An array of available built-in skins. */ static struct BuiltinSkin { const char *zName; const char *zValue; } aBuiltinSkin[] = { { "Default", 0 /* Filled in at runtime */ }, { "Plain Gray, No Logo", zBuiltinSkin1 }, { "Khaki, No Logo", zBuiltinSkin2 }, { "Black & White, Menu on Left", zBuiltinSkin3 }, { "Gradient, Rounded Corners", zBuiltinSkin4 }, }; /* ** For a skin named zSkinName, compute the name of the CONFIG table ** entry where that skin is stored and return it. ** ** Return NULL if zSkinName is NULL or an empty string. |
︙ | ︙ | |||
689 690 691 692 693 694 695 | ** useDefault==0 or a string for the default skin if useDefault==1. ** ** Memory to hold the returned string is obtained from malloc. */ static char *getSkin(int useDefault){ Blob val; blob_zero(&val); | | > | > | > | 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 | ** useDefault==0 or a string for the default skin if useDefault==1. ** ** Memory to hold the returned string is obtained from malloc. */ static char *getSkin(int useDefault){ Blob val; blob_zero(&val); blob_appendf(&val, "REPLACE INTO config(name,value,mtime) VALUES('css',%Q,now());\n", useDefault ? zDefaultCSS : db_get("css", (char*)zDefaultCSS) ); blob_appendf(&val, "REPLACE INTO config(name,value,mtime) VALUES('header',%Q,now());\n", useDefault ? zDefaultHeader : db_get("header", (char*)zDefaultHeader) ); blob_appendf(&val, "REPLACE INTO config(name,value,mtime) VALUES('footer',%Q,now());\n", useDefault ? zDefaultFooter : db_get("footer", (char*)zDefaultFooter) ); return blob_str(&val); } /* ** Construct the default skin string and fill in the corresponding |
︙ | ︙ | |||
729 730 731 732 733 734 735 | login_needed(); } db_begin_transaction(); /* Process requests to delete a user-defined skin */ if( P("del1") && (zName = skinVarName(P("sn"), 1))!=0 ){ style_header("Confirm Custom Skin Delete"); | | | 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 | login_needed(); } db_begin_transaction(); /* Process requests to delete a user-defined skin */ if( P("del1") && (zName = skinVarName(P("sn"), 1))!=0 ){ style_header("Confirm Custom Skin Delete"); @ <form action="%s(g.zTop)/setup_skin" method="post"><div> @ <p>Deletion of a custom skin is a permanent action that cannot @ be undone. Please confirm that this is what you want to do:</p> @ <input type="hidden" name="sn" value="%h(P("sn"))" /> @ <input type="submit" name="del2" value="Confirm - Delete The Skin" /> @ <input type="submit" name="cancel" value="Cancel - Do Not Delete" /> login_insert_csrf_secret(); @ </div></form> |
︙ | ︙ | |||
753 754 755 756 757 758 759 | if( P("save")!=0 && (zName = skinVarName(P("save"),0))!=0 ){ if( db_exists("SELECT 1 FROM config WHERE name=%Q", zName) || strcmp(zName, "Default")==0 ){ zErr = mprintf("Skin name \"%h\" already exists. " "Choose a different name.", P("sn")); }else{ | | | | | 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 | if( P("save")!=0 && (zName = skinVarName(P("save"),0))!=0 ){ if( db_exists("SELECT 1 FROM config WHERE name=%Q", zName) || strcmp(zName, "Default")==0 ){ zErr = mprintf("Skin name \"%h\" already exists. " "Choose a different name.", P("sn")); }else{ db_multi_exec("INSERT INTO config(name,value,mtime) VALUES(%Q,%Q,now())", zName, zCurrent ); } } /* The user pressed the "Use This Skin" button. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ if( strcmp(aBuiltinSkin[i].zValue, zCurrent)==0 ){ seen = 1; break; } } if( !seen ){ seen = db_exists("SELECT 1 FROM config WHERE name GLOB 'skin:*'" " AND value=%Q", zCurrent); } if( !seen ){ db_multi_exec( "INSERT INTO config(name,value,mtime) VALUES(" " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," " %Q,now())", zCurrent ); } seen = 0; for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ if( strcmp(aBuiltinSkin[i].zName, z)==0 ){ seen = 1; zCurrent = aBuiltinSkin[i].zValue; |
︙ | ︙ | |||
810 811 812 813 814 815 816 | @ <h2>Available Skins:</h2> @ <ol> for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ z = aBuiltinSkin[i].zName; if( strcmp(aBuiltinSkin[i].zValue, zCurrent)==0 ){ @ <li><p>%h(z). <b>Currently In Use</b></p> }else{ | | | | 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 | @ <h2>Available Skins:</h2> @ <ol> for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ z = aBuiltinSkin[i].zName; if( strcmp(aBuiltinSkin[i].zValue, zCurrent)==0 ){ @ <li><p>%h(z). <b>Currently In Use</b></p> }else{ @ <li><form action="%s(g.zTop)/setup_skin" method="post"><div> @ %h(z). @ <input type="hidden" name="sn" value="%h(z)" /> @ <input type="submit" name="load" value="Use This Skin" /> @ </div></form></li> } } db_prepare(&q, "SELECT substr(name, 6), value FROM config" " WHERE name GLOB 'skin:*'" " ORDER BY name" ); while( db_step(&q)==SQLITE_ROW ){ const char *zN = db_column_text(&q, 0); const char *zV = db_column_text(&q, 1); if( strcmp(zV, zCurrent)==0 ){ @ <li><p>%h(zN). <b>Currently In Use</b></p> }else{ @ <li><form action="%s(g.zTop)/setup_skin" method="post"> @ %h(zN). @ <input type="hidden" name="sn" value="%h(zN)"> @ <input type="submit" name="load" value="Use This Skin"> @ <input type="submit" name="del1" value="Delete This Skin"> @ </form></li> } } db_finalize(&q); @ </ol> style_footer(); db_end_transaction(0); } |
Added src/sqlcmd.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | /* ** Copyright (c) 2010 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This module contains the code that initializes the "sqlite3" command-line ** shell against the repository database. The command-line shell itself ** is a copy of the "shell.c" code from SQLite. This file contains logic ** to initialize the code in shell.c. */ #include "config.h" #include "sqlcmd.h" #include <zlib.h> /* ** Implementation of the "content(X)" SQL function. Return the complete ** content of artifact identified by X as a blob. */ static void sqlcmd_content( sqlite3_context *context, int argc, sqlite3_value **argv ){ int rid; Blob cx; const char *zName; assert( argc==1 ); zName = (const char*)sqlite3_value_text(argv[0]); if( zName==0 ) return; g.db = sqlite3_context_db_handle(context); g.repositoryOpen = 1; rid = name_to_rid(zName); if( rid==0 ) return; if( content_get(rid, &cx) ){ sqlite3_result_blob(context, blob_buffer(&cx), blob_size(&cx), SQLITE_TRANSIENT); blob_reset(&cx); } } /* ** Implementation of the "compress(X)" SQL function. The input X is ** compressed using zLib and the output is returned. */ static void sqlcmd_compress( sqlite3_context *context, int argc, sqlite3_value **argv ){ const unsigned char *pIn; unsigned char *pOut; unsigned int nIn; unsigned long int nOut; pIn = sqlite3_value_blob(argv[0]); nIn = sqlite3_value_bytes(argv[0]); nOut = 13 + nIn + (nIn+999)/1000; pOut = sqlite3_malloc( nOut+4 ); pOut[0] = nIn>>24 & 0xff; pOut[1] = nIn>>16 & 0xff; pOut[2] = nIn>>8 & 0xff; pOut[3] = nIn & 0xff; compress(&pOut[4], &nOut, pIn, nIn); sqlite3_result_blob(context, pOut, nOut+4, sqlite3_free); } /* ** Implementation of the "uncontent(X)" SQL function. The argument X ** is a blob which was obtained from compress(Y). The output will be ** the value Y. 
*/ static void sqlcmd_decompress( sqlite3_context *context, int argc, sqlite3_value **argv ){ const unsigned char *pIn; unsigned char *pOut; unsigned int nIn; unsigned long int nOut; int rc; pIn = sqlite3_value_blob(argv[0]); nIn = sqlite3_value_bytes(argv[0]); nOut = (pIn[0]<<24) + (pIn[1]<<16) + (pIn[2]<<8) + pIn[3]; pOut = sqlite3_malloc( nOut+1 ); rc = uncompress(pOut, &nOut, &pIn[4], nIn-4); if( rc==Z_OK ){ sqlite3_result_blob(context, pOut, nOut, sqlite3_free); }else{ sqlite3_result_error(context, "input is not zlib compressed", -1); } } /* ** This is the "automatic extensionn" initializer that runs right after ** the connection to the repository database is opened. Set up the ** database connection to be more useful to the human operator. */ static int sqlcmd_autoinit( sqlite3 *db, const char **pzErrMsg, const void *notUsed ){ sqlite3_create_function(db, "content", 1, SQLITE_ANY, 0, sqlcmd_content, 0, 0); sqlite3_create_function(db, "compress", 1, SQLITE_ANY, 0, sqlcmd_compress, 0, 0); sqlite3_create_function(db, "decompress", 1, SQLITE_ANY, 0, sqlcmd_decompress, 0, 0); return SQLITE_OK; } /* ** COMMAND: sqlite3 ** ** Usage: %fossil sqlite3 ?DATABASE? ?OPTIONS? ** ** Run the standalone sqlite3 command-line shell on DATABASE with OPTIONS. ** If DATABASE is omitted, then the repository that serves the working ** directory is opened. ** ** WARNING: Careless use of this command can corrupt a Fossil repository ** in ways that are unrecoverable. Be sure you know what you are doing before ** running any SQL commands that modifies the repository database. */ void sqlite3_cmd(void){ extern int sqlite3_shell(int, char**); db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); db_close(1); sqlite3_shutdown(); sqlite3_shell(g.argc-1, g.argv+1); } /* ** This routine is called by the patched sqlite3 command-line shell in order ** to load the name and database connection for the open Fossil database. */ void fossil_open(const char **pzRepoName){ sqlite3_auto_extension((void(*)(void))sqlcmd_autoinit); *pzRepoName = g.zRepositoryName; } |
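The compress() SQL function in the new file stores a 4-byte big-endian uncompressed-length header followed by a zlib stream, and decompress() reverses it. The following standalone round-trip of that blob layout is a sketch for illustration only, not part of the new file; it needs only zlib (link with -lz) and uses a made-up input string.

#include <zlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void){
  const char *zIn = "hello, fossil";          /* made-up payload */
  uLong nIn, nOut, nOrig;
  unsigned char *pBlob, *pOrig;

  nIn = (uLong)strlen(zIn);
  nOut = compressBound(nIn);
  pBlob = malloc(nOut+4);
  if( pBlob==0 ) return 1;
  /* Header: uncompressed size, big-endian, same layout as sqlcmd_compress() */
  pBlob[0] = (nIn>>24)&0xff;  pBlob[1] = (nIn>>16)&0xff;
  pBlob[2] = (nIn>>8)&0xff;   pBlob[3] = nIn&0xff;
  if( compress(&pBlob[4], &nOut, (const unsigned char*)zIn, nIn)!=Z_OK ) return 1;

  /* Decode: read the header back, then uncompress the remainder,
  ** mirroring sqlcmd_decompress() */
  nOrig = ((uLong)pBlob[0]<<24) + (pBlob[1]<<16) + (pBlob[2]<<8) + pBlob[3];
  pOrig = malloc(nOrig+1);
  if( pOrig==0 ) return 1;
  if( uncompress(pOrig, &nOrig, &pBlob[4], nOut)==Z_OK ){
    pOrig[nOrig] = 0;
    printf("round trip: \"%s\" (stored blob is %lu bytes)\n",
           (char*)pOrig, (unsigned long)(nOut+4));
  }
  free(pBlob);
  free(pOrig);
  return 0;
}

With these functions in place, the new "fossil sqlite3" command drops into the stock SQLite shell with content(), compress(), and decompress() pre-registered against the repository database through sqlite3_auto_extension().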
Changes to src/sqlite3.c.
more than 10,000 changes
Changes to src/sqlite3.h.
︙ | ︙
103 104 105 106 107 108 109 | ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ | | | | | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.7.7" #define SQLITE_VERSION_NUMBER 3007007 #define SQLITE_SOURCE_ID "2011-05-18 03:02:10 186d7ff1d9804d508e472e4939608bf2be67bdc2" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version, sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
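With the header now reporting 3.7.7, the compile-time macros above should match what the linked library reports at run time. A small sanity check in the spirit of the interfaces listed above (the surrounding test program is illustrative):

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include "sqlite3.h"

int main(void){
  /* Header used at compile time and library linked at run time should agree. */
  assert( sqlite3_libversion_number()==SQLITE_VERSION_NUMBER );
  assert( strcmp(sqlite3_libversion(), SQLITE_VERSION)==0 );
  assert( strcmp(sqlite3_sourceid(), SQLITE_SOURCE_ID)==0 );
  printf("SQLite %s\n%s\n", sqlite3_libversion(), sqlite3_sourceid());
  return 0;
}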
︙ | ︙
371 372 373 374 375 376 377 | ** KEYWORDS: {result code} {result codes} ** ** Many SQLite functions return an integer result code from the set shown ** here in order to indicates success or failure. ** ** New error codes may be added in future versions of SQLite. ** | | > | | 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 | ** KEYWORDS: {result code} {result codes} ** ** Many SQLite functions return an integer result code from the set shown ** here in order to indicates success or failure. ** ** New error codes may be added in future versions of SQLite. ** ** See also: [SQLITE_IOERR_READ | extended result codes], ** [sqlite3_vtab_on_conflict()] [SQLITE_ROLLBACK | result codes]. */ #define SQLITE_OK 0 /* Successful result */ /* beginning-of-error-codes */ #define SQLITE_ERROR 1 /* SQL error or missing database */ #define SQLITE_INTERNAL 2 /* Internal logic error in SQLite */ #define SQLITE_PERM 3 /* Access permission denied */ #define SQLITE_ABORT 4 /* Callback routine requested an abort */ #define SQLITE_BUSY 5 /* The database file is locked */ #define SQLITE_LOCKED 6 /* A table in the database is locked */ #define SQLITE_NOMEM 7 /* A malloc() failed */ #define SQLITE_READONLY 8 /* Attempt to write a readonly database */ #define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite3_interrupt()*/ #define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */ #define SQLITE_CORRUPT 11 /* The database disk image is malformed */ #define SQLITE_NOTFOUND 12 /* Unknown opcode in sqlite3_file_control() */ #define SQLITE_FULL 13 /* Insertion failed because database is full */ #define SQLITE_CANTOPEN 14 /* Unable to open the database file */ #define SQLITE_PROTOCOL 15 /* Database lock protocol error */ #define SQLITE_EMPTY 16 /* Database is empty */ #define SQLITE_SCHEMA 17 /* The database schema changed */ #define SQLITE_TOOBIG 18 /* String or BLOB exceeds size limit */ #define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */ |
︙ | ︙
448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 | #define SQLITE_IOERR_CHECKRESERVEDLOCK (SQLITE_IOERR | (14<<8)) #define SQLITE_IOERR_LOCK (SQLITE_IOERR | (15<<8)) #define SQLITE_IOERR_CLOSE (SQLITE_IOERR | (16<<8)) #define SQLITE_IOERR_DIR_CLOSE (SQLITE_IOERR | (17<<8)) #define SQLITE_IOERR_SHMOPEN (SQLITE_IOERR | (18<<8)) #define SQLITE_IOERR_SHMSIZE (SQLITE_IOERR | (19<<8)) #define SQLITE_IOERR_SHMLOCK (SQLITE_IOERR | (20<<8)) #define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8)) #define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8)) #define SQLITE_CANTOPEN_NOTEMPDIR (SQLITE_CANTOPEN | (1<<8)) /* ** CAPI3REF: Flags For File Open Operations ** ** These bit values are intended for use in the ** 3rd parameter to the [sqlite3_open_v2()] interface and | > > > | < > > > | 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 | #define SQLITE_IOERR_CHECKRESERVEDLOCK (SQLITE_IOERR | (14<<8)) #define SQLITE_IOERR_LOCK (SQLITE_IOERR | (15<<8)) #define SQLITE_IOERR_CLOSE (SQLITE_IOERR | (16<<8)) #define SQLITE_IOERR_DIR_CLOSE (SQLITE_IOERR | (17<<8)) #define SQLITE_IOERR_SHMOPEN (SQLITE_IOERR | (18<<8)) #define SQLITE_IOERR_SHMSIZE (SQLITE_IOERR | (19<<8)) #define SQLITE_IOERR_SHMLOCK (SQLITE_IOERR | (20<<8)) #define SQLITE_IOERR_SHMMAP (SQLITE_IOERR | (21<<8)) #define SQLITE_IOERR_SEEK (SQLITE_IOERR | (22<<8)) #define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8)) #define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8)) #define SQLITE_CANTOPEN_NOTEMPDIR (SQLITE_CANTOPEN | (1<<8)) #define SQLITE_CORRUPT_VTAB (SQLITE_CORRUPT | (1<<8)) /* ** CAPI3REF: Flags For File Open Operations ** ** These bit values are intended for use in the ** 3rd parameter to the [sqlite3_open_v2()] interface and ** in the 4th parameter to the [sqlite3_vfs.xOpen] method. */ #define SQLITE_OPEN_READONLY 0x00000001 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_READWRITE 0x00000002 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_CREATE 0x00000004 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_DELETEONCLOSE 0x00000008 /* VFS only */ #define SQLITE_OPEN_EXCLUSIVE 0x00000010 /* VFS only */ #define SQLITE_OPEN_AUTOPROXY 0x00000020 /* VFS only */ #define SQLITE_OPEN_URI 0x00000040 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_MAIN_DB 0x00000100 /* VFS only */ #define SQLITE_OPEN_TEMP_DB 0x00000200 /* VFS only */ #define SQLITE_OPEN_TRANSIENT_DB 0x00000400 /* VFS only */ #define SQLITE_OPEN_MAIN_JOURNAL 0x00000800 /* VFS only */ #define SQLITE_OPEN_TEMP_JOURNAL 0x00001000 /* VFS only */ #define SQLITE_OPEN_SUBJOURNAL 0x00002000 /* VFS only */ #define SQLITE_OPEN_MASTER_JOURNAL 0x00004000 /* VFS only */ #define SQLITE_OPEN_NOMUTEX 0x00008000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_FULLMUTEX 0x00010000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_SHAREDCACHE 0x00020000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_PRIVATECACHE 0x00040000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_WAL 0x00080000 /* VFS only */ /* Reserved: 0x00F00000 */ /* ** CAPI3REF: Device Characteristics ** ** The xDeviceCharacteristics method of the [sqlite3_io_methods] ** object returns an integer which is a vector of the these ** bit values expressing I/O characteristics of the mass storage |
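The new SQLITE_IOERR_SHMMAP, SQLITE_IOERR_SEEK, and SQLITE_CORRUPT_VTAB values are extended result codes, so a connection only sees them after opting in with sqlite3_extended_result_codes(). A hedged sketch of the usual pattern (the database name and SQL text are placeholders):

#include <stdio.h>
#include "sqlite3.h"

/* The low byte of an extended code is always the primary code. */
static void report(sqlite3 *db, int rc){
  printf("primary=%d extended=%d: %s\n",
         rc & 0xff, sqlite3_extended_errcode(db), sqlite3_errmsg(db));
}

int main(void){
  sqlite3 *db = 0;
  int rc = sqlite3_open("demo.db", &db);
  if( rc==SQLITE_OK ){
    sqlite3_extended_result_codes(db, 1);  /* report e.g. SQLITE_IOERR_SEEK, not just SQLITE_IOERR */
    rc = sqlite3_exec(db, "CREATE TABLE t(x)", 0, 0, 0);
    if( rc!=SQLITE_OK ) report(db, rc);    /* fails if t already exists */
  }
  sqlite3_close(db);
  return 0;
}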
︙ | ︙
538 539 540 541 542 543 544 545 546 547 548 549 550 551 | ** ** When the SQLITE_SYNC_DATAONLY flag is used, it means that the ** sync operation only needs to flush data to mass storage. Inode ** information need not be flushed. If the lower four bits of the flag ** equal SQLITE_SYNC_NORMAL, that means to use normal fsync() semantics. ** If the lower four bits equal SQLITE_SYNC_FULL, that means ** to use Mac OS X style fullsync instead of fsync(). */ #define SQLITE_SYNC_NORMAL 0x00002 #define SQLITE_SYNC_FULL 0x00003 #define SQLITE_SYNC_DATAONLY 0x00010 /* ** CAPI3REF: OS Interface Open File Handle | > > > > > > > > > > > > | 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 | ** ** When the SQLITE_SYNC_DATAONLY flag is used, it means that the ** sync operation only needs to flush data to mass storage. Inode ** information need not be flushed. If the lower four bits of the flag ** equal SQLITE_SYNC_NORMAL, that means to use normal fsync() semantics. ** If the lower four bits equal SQLITE_SYNC_FULL, that means ** to use Mac OS X style fullsync instead of fsync(). ** ** Do not confuse the SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL flags ** with the [PRAGMA synchronous]=NORMAL and [PRAGMA synchronous]=FULL ** settings. The [synchronous pragma] determines when calls to the ** xSync VFS method occur and applies uniformly across all platforms. ** The SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL flags determine how ** energetic or rigorous or forceful the sync operations are and ** only make a difference on Mac OSX for the default SQLite code. ** (Third-party VFS implementations might also make the distinction ** between SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL, but among the ** operating systems natively supported by SQLite, only Mac OSX ** cares about the difference.) */ #define SQLITE_SYNC_NORMAL 0x00002 #define SQLITE_SYNC_FULL 0x00003 #define SQLITE_SYNC_DATAONLY 0x00010 /* ** CAPI3REF: OS Interface Open File Handle |
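To keep the two knobs distinct: [PRAGMA synchronous] decides when the pager invokes xSync, while SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL describe how forcefully the VFS performs each individual sync. A brief illustrative sketch (file and table names are assumptions):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  if( sqlite3_open("demo.db", &db)==SQLITE_OK ){
    /* The pragma controls when xSync happens; with synchronous=OFF the call is
    ** omitted and the VFS is told via the SQLITE_FCNTL_SYNC_OMITTED file-control. */
    sqlite3_exec(db, "PRAGMA synchronous=NORMAL", 0, 0, 0);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);  /* the commit is where xSync fires */
  }
  sqlite3_close(db);
  return 0;
}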
︙ | ︙
562 563 564 565 566 567 568 | struct sqlite3_file { const struct sqlite3_io_methods *pMethods; /* Methods for an open file */ }; /* ** CAPI3REF: OS Interface File Virtual Methods Object ** | | | | | > | | 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 | struct sqlite3_file { const struct sqlite3_io_methods *pMethods; /* Methods for an open file */ }; /* ** CAPI3REF: OS Interface File Virtual Methods Object ** ** Every file opened by the [sqlite3_vfs.xOpen] method populates an ** [sqlite3_file] object (or, more commonly, a subclass of the ** [sqlite3_file] object) with a pointer to an instance of this object. ** This object defines the methods used to perform various operations ** against the open file represented by the [sqlite3_file] object. ** ** If the [sqlite3_vfs.xOpen] method sets the sqlite3_file.pMethods element ** to a non-NULL pointer, then the sqlite3_io_methods.xClose method ** may be invoked even if the [sqlite3_vfs.xOpen] reported that it failed. The ** only way to prevent a call to xClose following a failed [sqlite3_vfs.xOpen] ** is for the [sqlite3_vfs.xOpen] to set the sqlite3_file.pMethods element ** to NULL. ** ** The flags argument to xSync may be one of [SQLITE_SYNC_NORMAL] or ** [SQLITE_SYNC_FULL]. The first choice is the normal fsync(). ** The second choice is a Mac OS X style fullsync. The [SQLITE_SYNC_DATAONLY] ** flag may be ORed in to indicate that only the data of the file ** and not its inode needs to be synced. ** |
︙ | ︙
606 607 608 609 610 611 612 | ** write return values. Potential uses for xFileControl() might be ** functions to enable blocking locks with timeouts, to change the ** locking strategy (for example to use dot-file locks), to inquire ** about the status of a lock, or to break stale locks. The SQLite ** core reserves all opcodes less than 100 for its own use. ** A [SQLITE_FCNTL_LOCKSTATE | list of opcodes] less than 100 is available. ** Applications that define a custom xFileControl method should use opcodes | | > > | 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 | ** write return values. Potential uses for xFileControl() might be ** functions to enable blocking locks with timeouts, to change the ** locking strategy (for example to use dot-file locks), to inquire ** about the status of a lock, or to break stale locks. The SQLite ** core reserves all opcodes less than 100 for its own use. ** A [SQLITE_FCNTL_LOCKSTATE | list of opcodes] less than 100 is available. ** Applications that define a custom xFileControl method should use opcodes ** greater than 100 to avoid conflicts. VFS implementations should ** return [SQLITE_NOTFOUND] for file control opcodes that they do not ** recognize. ** ** The xSectorSize() method returns the sector size of the ** device that underlies the file. The sector size is the ** minimum write that can be performed without disturbing ** other bytes in the file. The xDeviceCharacteristics() ** method returns a bit vector describing behaviors of the ** underlying device: |
︙ | ︙
699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 | ** The [SQLITE_FCNTL_CHUNK_SIZE] opcode is used to request that the VFS ** extends and truncates the database file in chunks of a size specified ** by the user. The fourth argument to [sqlite3_file_control()] should ** point to an integer (type int) containing the new chunk-size to use ** for the nominated database. Allocating database file space in large ** chunks (say 1MB at a time), may reduce file-system fragmentation and ** improve performance on some systems. */ #define SQLITE_FCNTL_LOCKSTATE 1 #define SQLITE_GET_LOCKPROXYFILE 2 #define SQLITE_SET_LOCKPROXYFILE 3 #define SQLITE_LAST_ERRNO 4 #define SQLITE_FCNTL_SIZE_HINT 5 #define SQLITE_FCNTL_CHUNK_SIZE 6 /* ** CAPI3REF: Mutex Handle ** ** The mutex module within SQLite defines [sqlite3_mutex] to be an ** abstract type for a mutex object. The SQLite core never looks ** at the internal representation of an [sqlite3_mutex]. It only ** deals with pointers to the [sqlite3_mutex] object. ** ** Mutexes are created using [sqlite3_mutex_alloc()]. */ typedef struct sqlite3_mutex sqlite3_mutex; /* ** CAPI3REF: OS Interface Object ** ** An instance of the sqlite3_vfs object defines the interface between ** the SQLite core and the underlying operating system. The "vfs" ** in the name of the object stands for "virtual file system". ** ** The value of the iVersion field is initially 1 but may be larger in ** future versions of SQLite. Additional fields may be appended to this | > > > > > > > > > > > > > > > > > > > | 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 | ** The [SQLITE_FCNTL_CHUNK_SIZE] opcode is used to request that the VFS ** extends and truncates the database file in chunks of a size specified ** by the user. The fourth argument to [sqlite3_file_control()] should ** point to an integer (type int) containing the new chunk-size to use ** for the nominated database. Allocating database file space in large ** chunks (say 1MB at a time), may reduce file-system fragmentation and ** improve performance on some systems. ** ** The [SQLITE_FCNTL_FILE_POINTER] opcode is used to obtain a pointer ** to the [sqlite3_file] object associated with a particular database ** connection. See the [sqlite3_file_control()] documentation for ** additional information. ** ** ^(The [SQLITE_FCNTL_SYNC_OMITTED] opcode is generated internally by ** SQLite and sent to all VFSes in place of a call to the xSync method ** when the database connection has [PRAGMA synchronous] set to OFF.)^ ** Some specialized VFSes need this signal in order to operate correctly ** when [PRAGMA synchronous | PRAGMA synchronous=OFF] is set, but most ** VFSes do not need this signal and should silently ignore this opcode. ** Applications should not call [sqlite3_file_control()] with this ** opcode as doing so may disrupt the operation of the specialized VFSes ** that do require it. 
*/ #define SQLITE_FCNTL_LOCKSTATE 1 #define SQLITE_GET_LOCKPROXYFILE 2 #define SQLITE_SET_LOCKPROXYFILE 3 #define SQLITE_LAST_ERRNO 4 #define SQLITE_FCNTL_SIZE_HINT 5 #define SQLITE_FCNTL_CHUNK_SIZE 6 #define SQLITE_FCNTL_FILE_POINTER 7 #define SQLITE_FCNTL_SYNC_OMITTED 8 /* ** CAPI3REF: Mutex Handle ** ** The mutex module within SQLite defines [sqlite3_mutex] to be an ** abstract type for a mutex object. The SQLite core never looks ** at the internal representation of an [sqlite3_mutex]. It only ** deals with pointers to the [sqlite3_mutex] object. ** ** Mutexes are created using [sqlite3_mutex_alloc()]. */ typedef struct sqlite3_mutex sqlite3_mutex; /* ** CAPI3REF: OS Interface Object ** KEYWORDS: VFS VFSes ** ** An instance of the sqlite3_vfs object defines the interface between ** the SQLite core and the underlying operating system. The "vfs" ** in the name of the object stands for "virtual file system". ** ** The value of the iVersion field is initially 1 but may be larger in ** future versions of SQLite. Additional fields may be appended to this |
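The opcodes defined above are driven through sqlite3_file_control(). A hedged example that asks the VFS to grow the main database file in one-megabyte chunks and then fetches the low-level sqlite3_file handle (chunk size and file name are illustrative):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  if( sqlite3_open("demo.db", &db)==SQLITE_OK ){
    int szChunk = 1024*1024;          /* extend/truncate "main" in 1 MiB steps */
    sqlite3_file *pFile = 0;
    sqlite3_file_control(db, "main", SQLITE_FCNTL_CHUNK_SIZE, &szChunk);
    sqlite3_file_control(db, "main", SQLITE_FCNTL_FILE_POINTER, &pFile);
    if( pFile && pFile->pMethods ){
      printf("io-methods version: %d\n", pFile->pMethods->iVersion);
    }
  }
  sqlite3_close(db);
  return 0;
}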
︙ | ︙
753 754 755 756 757 758 759 760 761 762 763 764 765 766 | ** or modify this field while holding a particular static mutex. ** The application should never modify anything within the sqlite3_vfs ** object once the object has been registered. ** ** The zName field holds the name of the VFS module. The name must ** be unique across all VFS modules. ** ** ^SQLite guarantees that the zFilename parameter to xOpen ** is either a NULL pointer or string obtained ** from xFullPathname() with an optional suffix added. ** ^If a suffix is added to the zFilename parameter, it will ** consist of a single "-" character followed by no more than ** 10 alphanumeric and/or "-" characters. ** ^SQLite further guarantees that | > | 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 | ** or modify this field while holding a particular static mutex. ** The application should never modify anything within the sqlite3_vfs ** object once the object has been registered. ** ** The zName field holds the name of the VFS module. The name must ** be unique across all VFS modules. ** ** [[sqlite3_vfs.xOpen]] ** ^SQLite guarantees that the zFilename parameter to xOpen ** is either a NULL pointer or string obtained ** from xFullPathname() with an optional suffix added. ** ^If a suffix is added to the zFilename parameter, it will ** consist of a single "-" character followed by no more than ** 10 alphanumeric and/or "-" characters. ** ^SQLite further guarantees that |
︙ | ︙
830 831 832 833 834 835 836 837 838 839 840 841 842 843 | ** allocate the structure; it should just fill it in. Note that ** the xOpen method must set the sqlite3_file.pMethods to either ** a valid [sqlite3_io_methods] object or to NULL. xOpen must do ** this even if the open fails. SQLite expects that the sqlite3_file.pMethods ** element will be valid after xOpen returns regardless of the success ** or failure of the xOpen call. ** ** ^The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] ** to test for the existence of a file, or [SQLITE_ACCESS_READWRITE] to ** test whether a file is readable and writable, or [SQLITE_ACCESS_READ] ** to test whether a file is at least readable. The file can be a ** directory. ** ** ^SQLite will always allocate at least mxPathname+1 bytes for the | > | 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 | ** allocate the structure; it should just fill it in. Note that ** the xOpen method must set the sqlite3_file.pMethods to either ** a valid [sqlite3_io_methods] object or to NULL. xOpen must do ** this even if the open fails. SQLite expects that the sqlite3_file.pMethods ** element will be valid after xOpen returns regardless of the success ** or failure of the xOpen call. ** ** [[sqlite3_vfs.xAccess]] ** ^The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] ** to test for the existence of a file, or [SQLITE_ACCESS_READWRITE] to ** test whether a file is readable and writable, or [SQLITE_ACCESS_READ] ** to test whether a file is at least readable. The file can be a ** directory. ** ** ^SQLite will always allocate at least mxPathname+1 bytes for the |
︙ | ︙
860 861 862 863 864 865 866 867 868 869 | ** ^The xCurrentTimeInt64() method returns, as an integer, the Julian ** Day Number multipled by 86400000 (the number of milliseconds in ** a 24-hour day). ** ^SQLite will use the xCurrentTimeInt64() method to get the current ** date and time if that method is available (if iVersion is 2 or ** greater and the function pointer is not NULL) and will fall back ** to xCurrentTime() if xCurrentTimeInt64() is unavailable. */ typedef struct sqlite3_vfs sqlite3_vfs; struct sqlite3_vfs { | > > > > > > > > > > > > > | | 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 | ** ^The xCurrentTimeInt64() method returns, as an integer, the Julian ** Day Number multipled by 86400000 (the number of milliseconds in ** a 24-hour day). ** ^SQLite will use the xCurrentTimeInt64() method to get the current ** date and time if that method is available (if iVersion is 2 or ** greater and the function pointer is not NULL) and will fall back ** to xCurrentTime() if xCurrentTimeInt64() is unavailable. ** ** ^The xSetSystemCall(), xGetSystemCall(), and xNestSystemCall() interfaces ** are not used by the SQLite core. These optional interfaces are provided ** by some VFSes to facilitate testing of the VFS code. By overriding ** system calls with functions under its control, a test program can ** simulate faults and error conditions that would otherwise be difficult ** or impossible to induce. The set of system calls that can be overridden ** varies from one VFS to another, and from one version of the same VFS to the ** next. Applications that use these interfaces must be prepared for any ** or all of these interfaces to be NULL or for their behavior to change ** from one release to the next. Applications must not attempt to access ** any of these methods if the iVersion of the VFS is less than 3. */ typedef struct sqlite3_vfs sqlite3_vfs; typedef void (*sqlite3_syscall_ptr)(void); struct sqlite3_vfs { int iVersion; /* Structure version number (currently 3) */ int szOsFile; /* Size of subclassed sqlite3_file */ int mxPathname; /* Maximum file pathname length */ sqlite3_vfs *pNext; /* Next registered VFS */ const char *zName; /* Name of this virtual file system */ void *pAppData; /* Pointer to application-specific data */ int (*xOpen)(sqlite3_vfs*, const char *zName, sqlite3_file*, int flags, int *pOutFlags); |
︙ | ︙
889 890 891 892 893 894 895 896 897 898 899 900 901 902 | /* ** The methods above are in version 1 of the sqlite_vfs object ** definition. Those that follow are added in version 2 or later */ int (*xCurrentTimeInt64)(sqlite3_vfs*, sqlite3_int64*); /* ** The methods above are in versions 1 and 2 of the sqlite_vfs object. ** New fields may be appended in figure versions. The iVersion ** value will increment whenever this happens. */ }; /* ** CAPI3REF: Flags for the xAccess VFS method | > > > > > > > | 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 | /* ** The methods above are in version 1 of the sqlite_vfs object ** definition. Those that follow are added in version 2 or later */ int (*xCurrentTimeInt64)(sqlite3_vfs*, sqlite3_int64*); /* ** The methods above are in versions 1 and 2 of the sqlite_vfs object. ** Those below are for version 3 and greater. */ int (*xSetSystemCall)(sqlite3_vfs*, const char *zName, sqlite3_syscall_ptr); sqlite3_syscall_ptr (*xGetSystemCall)(sqlite3_vfs*, const char *zName); const char *(*xNextSystemCall)(sqlite3_vfs*, const char *zName); /* ** The methods above are in versions 1 through 3 of the sqlite_vfs object. ** New fields may be appended in figure versions. The iVersion ** value will increment whenever this happens. */ }; /* ** CAPI3REF: Flags for the xAccess VFS method |
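Because the xSetSystemCall/xGetSystemCall/xNextSystemCall slots only exist when iVersion is 3 or greater, callers have to check the version before touching them. A small illustrative probe of the default VFS:

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3_vfs *pVfs = sqlite3_vfs_find(0);          /* 0 selects the default VFS */
  if( pVfs ){
    printf("default VFS: %s (iVersion=%d, mxPathname=%d)\n",
           pVfs->zName, pVfs->iVersion, pVfs->mxPathname);
    if( pVfs->iVersion>=3 && pVfs->xNextSystemCall ){
      /* First overridable system call, if this VFS supports fault injection. */
      const char *zCall = pVfs->xNextSystemCall(pVfs, 0);
      printf("first overridable system call: %s\n", zCall ? zCall : "(none)");
    }
  }
  return 0;
}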
︙ | ︙
1056 1057 1058 1059 1060 1061 1062 | ** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()]. ** ^If sqlite3_config() is called after [sqlite3_initialize()] and before ** [sqlite3_shutdown()] then it will return SQLITE_MISUSE. ** Note, however, that ^sqlite3_config() can be called as part of the ** implementation of an application-defined [sqlite3_os_init()]. ** ** The first argument to sqlite3_config() is an integer | | | | < < < | | < < | | 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 | ** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()]. ** ^If sqlite3_config() is called after [sqlite3_initialize()] and before ** [sqlite3_shutdown()] then it will return SQLITE_MISUSE. ** Note, however, that ^sqlite3_config() can be called as part of the ** implementation of an application-defined [sqlite3_os_init()]. ** ** The first argument to sqlite3_config() is an integer ** [configuration option] that determines ** what property of SQLite is to be configured. Subsequent arguments ** vary depending on the [configuration option] ** in the first argument. ** ** ^When a configuration option is set, sqlite3_config() returns [SQLITE_OK]. ** ^If the option is unknown or SQLite is unable to set the option ** then this routine returns a non-zero [error code]. */ SQLITE_API int sqlite3_config(int, ...); /* ** CAPI3REF: Configure database connections ** ** The sqlite3_db_config() interface is used to make configuration ** changes to a [database connection]. The interface is similar to ** [sqlite3_config()] except that the changes apply to a single ** [database connection] (specified in the first argument). ** ** The second argument to sqlite3_db_config(D,V,...) is the ** [SQLITE_DBCONFIG_LOOKASIDE | configuration verb] - an integer code ** that indicates what aspect of the [database connection] is being configured. ** Subsequent arguments vary depending on the configuration verb. ** ** ^Calls to sqlite3_db_config() return SQLITE_OK if and only if ** the call is considered successful. */ SQLITE_API int sqlite3_db_config(sqlite3*, int op, ...); /* |
︙ | ︙
1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 | int (*xInit)(void*); /* Initialize the memory allocator */ void (*xShutdown)(void*); /* Deinitialize the memory allocator */ void *pAppData; /* Argument to xInit() and xShutdown() */ }; /* ** CAPI3REF: Configuration Options ** ** These constants are the available integer configuration options that ** can be passed as the first argument to the [sqlite3_config()] interface. ** ** New configuration options may be added in future releases of SQLite. ** Existing configuration options might be discontinued. Applications ** should check the return code from [sqlite3_config()] to make sure that ** the call worked. The [sqlite3_config()] interface will return a ** non-zero [error code] if a discontinued or unsupported configuration option ** is invoked. ** ** <dl> | > | | | | | | | | | | | > > | | | | | | > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > | > > | 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 | int (*xInit)(void*); /* Initialize the memory allocator */ void (*xShutdown)(void*); /* Deinitialize the memory allocator */ void *pAppData; /* Argument to xInit() and xShutdown() */ }; /* ** CAPI3REF: Configuration Options ** KEYWORDS: {configuration option} ** ** These constants are the available integer configuration options that ** can be passed as the first argument to the [sqlite3_config()] interface. ** ** New configuration options may be added in future releases of SQLite. ** Existing configuration options might be discontinued. Applications ** should check the return code from [sqlite3_config()] to make sure that ** the call worked. The [sqlite3_config()] interface will return a ** non-zero [error code] if a discontinued or unsupported configuration option ** is invoked. ** ** <dl> ** [[SQLITE_CONFIG_SINGLETHREAD]] <dt>SQLITE_CONFIG_SINGLETHREAD</dt> ** <dd>There are no arguments to this option. ^This option sets the ** [threading mode] to Single-thread. 
In other words, it disables ** all mutexing and puts SQLite into a mode where it can only be used ** by a single thread. ^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** it is not possible to change the [threading mode] from its default ** value of Single-thread and so [sqlite3_config()] will return ** [SQLITE_ERROR] if called with the SQLITE_CONFIG_SINGLETHREAD ** configuration option.</dd> ** ** [[SQLITE_CONFIG_MULTITHREAD]] <dt>SQLITE_CONFIG_MULTITHREAD</dt> ** <dd>There are no arguments to this option. ^This option sets the ** [threading mode] to Multi-thread. In other words, it disables ** mutexing on [database connection] and [prepared statement] objects. ** The application is responsible for serializing access to ** [database connections] and [prepared statements]. But other mutexes ** are enabled so that SQLite will be safe to use in a multi-threaded ** environment as long as no two threads attempt to use the same ** [database connection] at the same time. ^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** it is not possible to set the Multi-thread [threading mode] and ** [sqlite3_config()] will return [SQLITE_ERROR] if called with the ** SQLITE_CONFIG_MULTITHREAD configuration option.</dd> ** ** [[SQLITE_CONFIG_SERIALIZED]] <dt>SQLITE_CONFIG_SERIALIZED</dt> ** <dd>There are no arguments to this option. ^This option sets the ** [threading mode] to Serialized. In other words, this option enables ** all mutexes including the recursive ** mutexes on [database connection] and [prepared statement] objects. ** In this mode (which is the default when SQLite is compiled with ** [SQLITE_THREADSAFE=1]) the SQLite library will itself serialize access ** to [database connections] and [prepared statements] so that the ** application is free to use the same [database connection] or the ** same [prepared statement] in different threads at the same time. ** ^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** it is not possible to set the Serialized [threading mode] and ** [sqlite3_config()] will return [SQLITE_ERROR] if called with the ** SQLITE_CONFIG_SERIALIZED configuration option.</dd> ** ** [[SQLITE_CONFIG_MALLOC]] <dt>SQLITE_CONFIG_MALLOC</dt> ** <dd> ^(This option takes a single argument which is a pointer to an ** instance of the [sqlite3_mem_methods] structure. The argument specifies ** alternative low-level memory allocation routines to be used in place of ** the memory allocation routines built into SQLite.)^ ^SQLite makes ** its own private copy of the content of the [sqlite3_mem_methods] structure ** before the [sqlite3_config()] call returns.</dd> ** ** [[SQLITE_CONFIG_GETMALLOC]] <dt>SQLITE_CONFIG_GETMALLOC</dt> ** <dd> ^(This option takes a single argument which is a pointer to an ** instance of the [sqlite3_mem_methods] structure. The [sqlite3_mem_methods] ** structure is filled with the currently defined memory allocation routines.)^ ** This option can be used to overload the default memory allocation ** routines with a wrapper that simulations memory allocation failure or ** tracks memory usage, for example. </dd> ** ** [[SQLITE_CONFIG_MEMSTATUS]] <dt>SQLITE_CONFIG_MEMSTATUS</dt> ** <dd> ^This option takes single argument of type int, interpreted as a ** boolean, which enables or disables the collection of memory allocation ** statistics. 
^(When memory allocation statistics are disabled, the ** following SQLite interfaces become non-operational: ** <ul> ** <li> [sqlite3_memory_used()] ** <li> [sqlite3_memory_highwater()] ** <li> [sqlite3_soft_heap_limit64()] ** <li> [sqlite3_status()] ** </ul>)^ ** ^Memory allocation statistics are enabled by default unless SQLite is ** compiled with [SQLITE_DEFAULT_MEMSTATUS]=0 in which case memory ** allocation statistics are disabled by default. ** </dd> ** ** [[SQLITE_CONFIG_SCRATCH]] <dt>SQLITE_CONFIG_SCRATCH</dt> ** <dd> ^This option specifies a static memory buffer that SQLite can use for ** scratch memory. There are three arguments: A pointer an 8-byte ** aligned memory buffer from which the scratch allocations will be ** drawn, the size of each scratch allocation (sz), ** and the maximum number of scratch allocations (N). The sz ** argument must be a multiple of 16. ** The first argument must be a pointer to an 8-byte aligned buffer ** of at least sz*N bytes of memory. ** ^SQLite will use no more than two scratch buffers per thread. So ** N should be set to twice the expected maximum number of threads. ** ^SQLite will never require a scratch buffer that is more than 6 ** times the database page size. ^If SQLite needs needs additional ** scratch memory beyond what is provided by this configuration option, then ** [sqlite3_malloc()] will be used to obtain the memory needed.</dd> ** ** [[SQLITE_CONFIG_PAGECACHE]] <dt>SQLITE_CONFIG_PAGECACHE</dt> ** <dd> ^This option specifies a static memory buffer that SQLite can use for ** the database page cache with the default page cache implemenation. ** This configuration should not be used if an application-define page ** cache implementation is loaded using the SQLITE_CONFIG_PCACHE option. ** There are three arguments to this option: A pointer to 8-byte aligned ** memory, the size of each page buffer (sz), and the number of pages (N). ** The sz argument should be the size of the largest database page ** (a power of two between 512 and 32768) plus a little extra for each ** page header. ^The page header size is 20 to 40 bytes depending on ** the host architecture. ^It is harmless, apart from the wasted memory, ** to make sz a little too large. The first ** argument should point to an allocation of at least sz*N bytes of memory. ** ^SQLite will use the memory provided by the first argument to satisfy its ** memory needs for the first N pages that it adds to cache. ^If additional ** page cache memory is needed beyond what is provided by this option, then ** SQLite goes to [sqlite3_malloc()] for the additional storage space. ** The pointer in the first argument must ** be aligned to an 8-byte boundary or subsequent behavior of SQLite ** will be undefined.</dd> ** ** [[SQLITE_CONFIG_HEAP]] <dt>SQLITE_CONFIG_HEAP</dt> ** <dd> ^This option specifies a static memory buffer that SQLite will use ** for all of its dynamic memory allocation needs beyond those provided ** for by [SQLITE_CONFIG_SCRATCH] and [SQLITE_CONFIG_PAGECACHE]. ** There are three arguments: An 8-byte aligned pointer to the memory, ** the number of bytes in the memory buffer, and the minimum allocation size. ** ^If the first pointer (the memory pointer) is NULL, then SQLite reverts ** to using its default memory allocator (the system malloc() implementation), ** undoing any prior invocation of [SQLITE_CONFIG_MALLOC]. 
^If the ** memory pointer is not NULL and either [SQLITE_ENABLE_MEMSYS3] or ** [SQLITE_ENABLE_MEMSYS5] are defined, then the alternative memory ** allocator is engaged to handle all of SQLites memory allocation needs. ** The first pointer (the memory pointer) must be aligned to an 8-byte ** boundary or subsequent behavior of SQLite will be undefined. ** The minimum allocation size is capped at 2^12. Reasonable values ** for the minimum allocation size are 2^5 through 2^8.</dd> ** ** [[SQLITE_CONFIG_MUTEX]] <dt>SQLITE_CONFIG_MUTEX</dt> ** <dd> ^(This option takes a single argument which is a pointer to an ** instance of the [sqlite3_mutex_methods] structure. The argument specifies ** alternative low-level mutex routines to be used in place ** the mutex routines built into SQLite.)^ ^SQLite makes a copy of the ** content of the [sqlite3_mutex_methods] structure before the call to ** [sqlite3_config()] returns. ^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** the entire mutexing subsystem is omitted from the build and hence calls to ** [sqlite3_config()] with the SQLITE_CONFIG_MUTEX configuration option will ** return [SQLITE_ERROR].</dd> ** ** [[SQLITE_CONFIG_GETMUTEX]] <dt>SQLITE_CONFIG_GETMUTEX</dt> ** <dd> ^(This option takes a single argument which is a pointer to an ** instance of the [sqlite3_mutex_methods] structure. The ** [sqlite3_mutex_methods] ** structure is filled with the currently defined mutex routines.)^ ** This option can be used to overload the default mutex allocation ** routines with a wrapper used to track mutex usage for performance ** profiling or testing, for example. ^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** the entire mutexing subsystem is omitted from the build and hence calls to ** [sqlite3_config()] with the SQLITE_CONFIG_GETMUTEX configuration option will ** return [SQLITE_ERROR].</dd> ** ** [[SQLITE_CONFIG_LOOKASIDE]] <dt>SQLITE_CONFIG_LOOKASIDE</dt> ** <dd> ^(This option takes two arguments that determine the default ** memory allocation for the lookaside memory allocator on each ** [database connection]. The first argument is the ** size of each lookaside buffer slot and the second is the number of ** slots allocated to each database connection.)^ ^(This option sets the ** <i>default</i> lookaside size. The [SQLITE_DBCONFIG_LOOKASIDE] ** verb to [sqlite3_db_config()] can be used to change the lookaside ** configuration on individual connections.)^ </dd> ** ** [[SQLITE_CONFIG_PCACHE]] <dt>SQLITE_CONFIG_PCACHE</dt> ** <dd> ^(This option takes a single argument which is a pointer to ** an [sqlite3_pcache_methods] object. This object specifies the interface ** to a custom page cache implementation.)^ ^SQLite makes a copy of the ** object and uses it for page cache memory allocations.</dd> ** ** [[SQLITE_CONFIG_GETPCACHE]] <dt>SQLITE_CONFIG_GETPCACHE</dt> ** <dd> ^(This option takes a single argument which is a pointer to an ** [sqlite3_pcache_methods] object. SQLite copies of the current ** page cache implementation into that object.)^ </dd> ** ** [[SQLITE_CONFIG_LOG]] <dt>SQLITE_CONFIG_LOG</dt> ** <dd> ^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a ** function with a call signature of void(*)(void*,int,const char*), ** and a pointer to void. ^If the function pointer is not NULL, it is ** invoked by [sqlite3_log()] to process each logging event. 
^If the ** function pointer is NULL, the [sqlite3_log()] interface becomes a no-op. ** ^The void pointer that is the second argument to SQLITE_CONFIG_LOG is ** passed through as the first parameter to the application-defined logger ** function whenever that function is invoked. ^The second parameter to ** the logger function is a copy of the first parameter to the corresponding ** [sqlite3_log()] call and is intended to be a [result code] or an ** [extended result code]. ^The third parameter passed to the logger is ** log message after formatting via [sqlite3_snprintf()]. ** The SQLite logging interface is not reentrant; the logger function ** supplied by the application must not invoke any SQLite interface. ** In a multi-threaded application, the application-defined logger ** function must be threadsafe. </dd> ** ** [[SQLITE_CONFIG_URI]] <dt>SQLITE_CONFIG_URI ** <dd> This option takes a single argument of type int. If non-zero, then ** URI handling is globally enabled. If the parameter is zero, then URI handling ** is globally disabled. If URI handling is globally enabled, all filenames ** passed to [sqlite3_open()], [sqlite3_open_v2()], [sqlite3_open16()] or ** specified as part of [ATTACH] commands are interpreted as URIs, regardless ** of whether or not the [SQLITE_OPEN_URI] flag is set when the database ** connection is opened. If it is globally disabled, filenames are ** only interpreted as URIs if the SQLITE_OPEN_URI flag is set when the ** database connection is opened. By default, URI handling is globally ** disabled. The default value may be changed by compiling with the ** [SQLITE_USE_URI] symbol defined. ** </dl> */ #define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ #define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ #define SQLITE_CONFIG_SERIALIZED 3 /* nil */ #define SQLITE_CONFIG_MALLOC 4 /* sqlite3_mem_methods* */ #define SQLITE_CONFIG_GETMALLOC 5 /* sqlite3_mem_methods* */ #define SQLITE_CONFIG_SCRATCH 6 /* void*, int sz, int N */ #define SQLITE_CONFIG_PAGECACHE 7 /* void*, int sz, int N */ #define SQLITE_CONFIG_HEAP 8 /* void*, int nByte, int min */ #define SQLITE_CONFIG_MEMSTATUS 9 /* boolean */ #define SQLITE_CONFIG_MUTEX 10 /* sqlite3_mutex_methods* */ #define SQLITE_CONFIG_GETMUTEX 11 /* sqlite3_mutex_methods* */ /* previously SQLITE_CONFIG_CHUNKALLOC 12 which is now unused. */ #define SQLITE_CONFIG_LOOKASIDE 13 /* int int */ #define SQLITE_CONFIG_PCACHE 14 /* sqlite3_pcache_methods* */ #define SQLITE_CONFIG_GETPCACHE 15 /* sqlite3_pcache_methods* */ #define SQLITE_CONFIG_LOG 16 /* xFunc, void* */ #define SQLITE_CONFIG_URI 17 /* int */ /* ** CAPI3REF: Database Connection Configuration Options ** ** These constants are the available integer configuration options that ** can be passed as the second argument to the [sqlite3_db_config()] interface. ** ** New configuration options may be added in future releases of SQLite. ** Existing configuration options might be discontinued. Applications ** should check the return code from [sqlite3_db_config()] to make sure that ** the call worked. ^The [sqlite3_db_config()] interface will return a ** non-zero [error code] if a discontinued or unsupported configuration option ** is invoked. ** ** <dl> ** <dt>SQLITE_DBCONFIG_LOOKASIDE</dt> ** <dd> ^This option takes three additional arguments that determine the ** [lookaside memory allocator] configuration for the [database connection]. ** ^The first argument (the third parameter to [sqlite3_db_config()] is a ** pointer to a memory buffer to use for lookaside memory. 
** ^The first argument after the SQLITE_DBCONFIG_LOOKASIDE verb ** may be NULL in which case SQLite will allocate the ** lookaside buffer itself using [sqlite3_malloc()]. ^The second argument is the ** size of each lookaside buffer slot. ^The third argument is the number of ** slots. The size of the buffer in the first argument must be greater than ** or equal to the product of the second and third arguments. The buffer ** must be aligned to an 8-byte boundary. ^If the second argument to ** SQLITE_DBCONFIG_LOOKASIDE is not a multiple of 8, it is internally ** rounded down to the next smaller multiple of 8. ^(The lookaside memory ** configuration for a database connection can only be changed when that ** connection is not currently using lookaside memory, or in other words ** when the "current value" returned by ** [sqlite3_db_status](D,[SQLITE_CONFIG_LOOKASIDE],...) is zero. ** Any attempt to change the lookaside memory configuration when lookaside ** memory is in use leaves the configuration unchanged and returns ** [SQLITE_BUSY].)^</dd> ** ** <dt>SQLITE_DBCONFIG_ENABLE_FKEY</dt> ** <dd> ^This option is used to enable or disable the enforcement of ** [foreign key constraints]. There should be two additional arguments. ** The first argument is an integer which is 0 to disable FK enforcement, ** positive to enable FK enforcement or negative to leave FK enforcement ** unchanged. The second parameter is a pointer to an integer into which ** is written 0 or 1 to indicate whether FK enforcement is off or on ** following this call. The second parameter may be a NULL pointer, in ** which case the FK enforcement setting is not reported back. </dd> ** ** <dt>SQLITE_DBCONFIG_ENABLE_TRIGGER</dt> ** <dd> ^This option is used to enable or disable [CREATE TRIGGER | triggers]. ** There should be two additional arguments. ** The first argument is an integer which is 0 to disable triggers, ** positive to enable triggers or negative to leave the setting unchanged. ** The second parameter is a pointer to an integer into which ** is written 0 or 1 to indicate whether triggers are disabled or enabled ** following this call. The second parameter may be a NULL pointer, in ** which case the trigger setting is not reported back. </dd> ** ** </dl> */ #define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ #define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes ** ** ^The sqlite3_extended_result_codes() routine enables or disables the ** [extended result codes] feature of SQLite. ^The extended result |
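Taken together, sqlite3_config() sets process-wide options before the library is initialized and sqlite3_db_config() adjusts one connection afterwards. A hedged sketch combining the new SQLITE_CONFIG_URI option, an SQLITE_CONFIG_LOG callback, and SQLITE_DBCONFIG_ENABLE_FKEY (the file name and log format are illustrative):

#include <stdio.h>
#include "sqlite3.h"

/* Matches the SQLITE_CONFIG_LOG shape: void(*)(void*,int,const char*). */
static void logCallback(void *pArg, int iErrCode, const char *zMsg){
  (void)pArg;
  fprintf(stderr, "sqlite3(%d): %s\n", iErrCode, zMsg);
}

int main(void){
  sqlite3 *db;
  int fkOn = -1;

  /* Global configuration is only legal before the library is in use. */
  sqlite3_config(SQLITE_CONFIG_LOG, logCallback, (void*)0);
  sqlite3_config(SQLITE_CONFIG_URI, 1);        /* treat "file:" names as URIs everywhere */
  sqlite3_initialize();

  if( sqlite3_open("demo.db", &db)==SQLITE_OK ){
    /* Per-connection: turn on foreign-key enforcement and read the state back. */
    sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_FKEY, 1, &fkOn);
    printf("foreign keys enforced: %d\n", fkOn);
  }
  sqlite3_close(db);
  return 0;
}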
︙ | ︙
1455 1456 1457 1458 1459 1460 1461 | ** as an undeclared column named ROWID, OID, or _ROWID_ as long as those ** names are not also used by explicitly declared columns. ^If ** the table has a column of type [INTEGER PRIMARY KEY] then that column ** is another alias for the rowid. ** ** ^This routine returns the [rowid] of the most recent ** successful [INSERT] into the database from the [database connection] | | > > | > | | | > | 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 | ** as an undeclared column named ROWID, OID, or _ROWID_ as long as those ** names are not also used by explicitly declared columns. ^If ** the table has a column of type [INTEGER PRIMARY KEY] then that column ** is another alias for the rowid. ** ** ^This routine returns the [rowid] of the most recent ** successful [INSERT] into the database from the [database connection] ** in the first argument. ^As of SQLite version 3.7.7, this routines ** records the last insert rowid of both ordinary tables and [virtual tables]. ** ^If no successful [INSERT]s ** have ever occurred on that database connection, zero is returned. ** ** ^(If an [INSERT] occurs within a trigger or within a [virtual table] ** method, then this routine will return the [rowid] of the inserted ** row as long as the trigger or virtual table method is running. ** But once the trigger or virtual table method ends, the value returned ** by this routine reverts to what it was before the trigger or virtual ** table method began.)^ ** ** ^An [INSERT] that fails due to a constraint violation is not a ** successful [INSERT] and does not change the value returned by this ** routine. ^Thus INSERT OR FAIL, INSERT OR IGNORE, INSERT OR ROLLBACK, ** and INSERT OR ABORT make no changes to the return value of this ** routine when their insertion fails. ^(When INSERT OR REPLACE ** encounters a constraint violation, it does not fail. The |
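A short illustrative use of sqlite3_last_insert_rowid(), where an INTEGER PRIMARY KEY column aliases the rowid (table and column names are placeholders):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db, "CREATE TABLE t(id INTEGER PRIMARY KEY, x)", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t(x) VALUES('hello')", 0, 0, 0);
  /* id aliases the rowid, so this prints the key just assigned to 'hello'. */
  printf("last insert rowid = %lld\n", (long long)sqlite3_last_insert_rowid(db));
  sqlite3_close(db);
  return 0;
}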
︙ | ︙
1824 1825 1826 1827 1828 1829 1830 | ** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their ** results into memory obtained from [sqlite3_malloc()]. ** The strings returned by these two routines should be ** released by [sqlite3_free()]. ^Both routines return a ** NULL pointer if [sqlite3_malloc()] is unable to allocate enough ** memory to hold the resulting string. ** | | > > | 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 | ** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their ** results into memory obtained from [sqlite3_malloc()]. ** The strings returned by these two routines should be ** released by [sqlite3_free()]. ^Both routines return a ** NULL pointer if [sqlite3_malloc()] is unable to allocate enough ** memory to hold the resulting string. ** ** ^(The sqlite3_snprintf() routine is similar to "snprintf()" from ** the standard C library. The result is written into the ** buffer supplied as the second parameter whose size is given by ** the first parameter. Note that the order of the ** first two parameters is reversed from snprintf().)^ This is an ** historical accident that cannot be fixed without breaking ** backwards compatibility. ^(Note also that sqlite3_snprintf() ** returns a pointer to its buffer instead of the number of ** characters actually written into the buffer.)^ We admit that ** the number of characters written would be a more useful return ** value but we cannot change the implementation of sqlite3_snprintf() ** now without breaking compatibility. ** ** ^As long as the buffer size is greater than zero, sqlite3_snprintf() ** guarantees that the buffer is always zero-terminated. ^The first ** parameter "n" is the total size of the buffer, including space for ** the zero terminator. So the longest string that can be completely ** written will be n-1 characters. ** ** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf(). ** ** These routines all implement some additional formatting ** options that are useful for constructing SQL statements. ** All of the usual printf() formatting options apply. In addition, there ** is are "%q", "%Q", and "%z" options. ** ** ^(The %q option works like %s in that it substitutes a null-terminated |
︙ | ︙
1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 | ** ^(The "%z" formatting option works like "%s" but with the ** addition that after the string has been read and copied into ** the result, [sqlite3_free()] is called on the input string.)^ */ SQLITE_API char *sqlite3_mprintf(const char*,...); SQLITE_API char *sqlite3_vmprintf(const char*, va_list); SQLITE_API char *sqlite3_snprintf(int,char*,const char*, ...); /* ** CAPI3REF: Memory Allocation Subsystem ** ** The SQLite core uses these three routines for all of its own ** internal memory allocation needs. "Core" in the previous sentence ** does not include operating-system specific VFS implementation. The | > | 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 | ** ^(The "%z" formatting option works like "%s" but with the ** addition that after the string has been read and copied into ** the result, [sqlite3_free()] is called on the input string.)^ */ SQLITE_API char *sqlite3_mprintf(const char*,...); SQLITE_API char *sqlite3_vmprintf(const char*, va_list); SQLITE_API char *sqlite3_snprintf(int,char*,const char*, ...); SQLITE_API char *sqlite3_vsnprintf(int,char*,const char*, va_list); /* ** CAPI3REF: Memory Allocation Subsystem ** ** The SQLite core uses these three routines for all of its own ** internal memory allocation needs. "Core" in the previous sentence ** does not include operating-system specific VFS implementation. The |
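The points most worth remembering from the paragraphs above are the reversed (size, buffer) argument order of sqlite3_snprintf() and the %q/%Q quoting extensions. A small illustrative snippet (SQL text and table name are placeholders):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  char zBuf[100];
  char *zSql;
  const char *zName = "O'Reilly";                     /* embedded single quote */

  /* Note: size first, then the buffer, the reverse of snprintf(). */
  sqlite3_snprintf((int)sizeof(zBuf), zBuf, "INSERT INTO t VALUES('%q')", zName);
  printf("%s\n", zBuf);      /* => INSERT INTO t VALUES('O''Reilly') */

  /* %Q adds the outer quotes itself and renders a NULL pointer as the keyword NULL. */
  zSql = sqlite3_mprintf("INSERT INTO t VALUES(%Q)", (const char*)0);
  printf("%s\n", zSql);      /* => INSERT INTO t VALUES(NULL) */
  sqlite3_free(zSql);
  return 0;
}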
︙ | ︙
2030 2031 2032 2033 2034 2035 2036 | ** method. */ SQLITE_API void sqlite3_randomness(int N, void *P); /* ** CAPI3REF: Compile-Time Authorization Callbacks ** | | | 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 | ** method. */ SQLITE_API void sqlite3_randomness(int N, void *P); /* ** CAPI3REF: Compile-Time Authorization Callbacks ** ** ^This routine registers an authorizer callback with a particular ** [database connection], supplied in the first argument. ** ^The authorizer callback is invoked as SQL statements are being compiled ** by [sqlite3_prepare()] or its variants [sqlite3_prepare_v2()], ** [sqlite3_prepare16()] and [sqlite3_prepare16_v2()]. ^At various ** points during the compilation process, as logic is being created ** to perform various actions, the authorizer callback is invoked to ** see if those actions are allowed. ^The authorizer callback should |
︙ | ︙
2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 | ** CAPI3REF: Authorizer Return Codes ** ** The [sqlite3_set_authorizer | authorizer callback function] must ** return either [SQLITE_OK] or one of these two constants in order ** to signal SQLite whether or not the action is permitted. See the ** [sqlite3_set_authorizer | authorizer documentation] for additional ** information. */ #define SQLITE_DENY 1 /* Abort the SQL statement with an error */ #define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */ /* ** CAPI3REF: Authorizer Action Codes ** | > > > | 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 | ** CAPI3REF: Authorizer Return Codes ** ** The [sqlite3_set_authorizer | authorizer callback function] must ** return either [SQLITE_OK] or one of these two constants in order ** to signal SQLite whether or not the action is permitted. See the ** [sqlite3_set_authorizer | authorizer documentation] for additional ** information. ** ** Note that SQLITE_IGNORE is also used as a [SQLITE_ROLLBACK | return code] ** from the [sqlite3_vtab_on_conflict()] interface. */ #define SQLITE_DENY 1 /* Abort the SQL statement with an error */ #define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */ /* ** CAPI3REF: Authorizer Action Codes ** |
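An authorizer that answers SQLITE_DENY makes the statement being compiled fail with SQLITE_AUTH. A hedged sketch that refuses DELETE statements (the callback and table names are illustrative):

#include <stdio.h>
#include "sqlite3.h"

static int noDelete(void *pArg, int code, const char *z1, const char *z2,
                    const char *zDb, const char *zTrigger){
  (void)pArg; (void)z1; (void)z2; (void)zDb; (void)zTrigger;
  return code==SQLITE_DELETE ? SQLITE_DENY : SQLITE_OK;
}

int main(void){
  sqlite3 *db;
  int rc;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(1);", 0, 0, 0);
  sqlite3_set_authorizer(db, noDelete, 0);
  rc = sqlite3_exec(db, "DELETE FROM t", 0, 0, 0);
  printf("DELETE rc=%d: %s\n", rc, sqlite3_errmsg(db));   /* rc==SQLITE_AUTH */
  sqlite3_close(db);
  return 0;
}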
︙ | ︙
2243 2244 2245 2246 2247 2248 2249 | ** */ SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); /* ** CAPI3REF: Opening A New Database Connection ** | | | 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 | ** */ SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); /* ** CAPI3REF: Opening A New Database Connection ** ** ^These routines open an SQLite database file as specified by the ** filename argument. ^The filename argument is interpreted as UTF-8 for ** sqlite3_open() and sqlite3_open_v2() and as UTF-16 in the native byte ** order for sqlite3_open16(). ^(A [database connection] handle is usually ** returned in *ppDb, even if an error occurs. The only exception is that ** if SQLite is unable to allocate memory to hold the [sqlite3] object, ** a NULL will be written into *ppDb instead of a pointer to the [sqlite3] ** object.)^ ^(If the database is opened (and/or created) successfully, then |
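A brief illustrative use of sqlite3_progress_handler(): the callback runs roughly every N virtual-machine opcodes and can abort a long statement by returning non-zero (the interval and the workload below are arbitrary):

#include <stdio.h>
#include "sqlite3.h"

/* Returning non-zero here would make the running statement fail with SQLITE_INTERRUPT. */
static int onProgress(void *pCount){
  ++*(int*)pCount;
  return 0;
}

int main(void){
  sqlite3 *db;
  int nCalls = 0;
  int i;
  sqlite3_open(":memory:", &db);
  sqlite3_progress_handler(db, 1000, onProgress, &nCalls);    /* every ~1000 opcodes */
  sqlite3_exec(db, "CREATE TABLE t(x)", 0, 0, 0);
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  for(i=0; i<200; i++) sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
  sqlite3_exec(db, "SELECT count(*) FROM t a, t b", 0, 0, 0);  /* long enough to fire */
  printf("progress callback fired %d times\n", nCalls);
  sqlite3_close(db);
  return 0;
}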
︙ | ︙
2270 2271 2272 2273 2274 2275 2276 | ** ** The sqlite3_open_v2() interface works like sqlite3_open() ** except that it accepts two additional parameters for additional control ** over the new database connection. ^(The flags parameter to ** sqlite3_open_v2() can take one of ** the following three values, optionally combined with the ** [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], [SQLITE_OPEN_SHAREDCACHE], | | | | < | > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > | > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 | ** ** The sqlite3_open_v2() interface works like sqlite3_open() ** except that it accepts two additional parameters for additional control ** over the new database connection. ^(The flags parameter to ** sqlite3_open_v2() can take one of ** the following three values, optionally combined with the ** [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], [SQLITE_OPEN_SHAREDCACHE], ** [SQLITE_OPEN_PRIVATECACHE], and/or [SQLITE_OPEN_URI] flags:)^ ** ** <dl> ** ^(<dt>[SQLITE_OPEN_READONLY]</dt> ** <dd>The database is opened in read-only mode. If the database does not ** already exist, an error is returned.</dd>)^ ** ** ^(<dt>[SQLITE_OPEN_READWRITE]</dt> ** <dd>The database is opened for reading and writing if possible, or reading ** only if the file is write protected by the operating system. In either ** case the database must already exist, otherwise an error is returned.</dd>)^ ** ** ^(<dt>[SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]</dt> ** <dd>The database is opened for reading and writing, and is created if ** it does not already exist. This is the behavior that is always used for ** sqlite3_open() and sqlite3_open16().</dd>)^ ** </dl> ** ** If the 3rd parameter to sqlite3_open_v2() is not one of the ** combinations shown above optionally combined with other ** [SQLITE_OPEN_READONLY | SQLITE_OPEN_* bits] ** then the behavior is undefined. ** ** ^If the [SQLITE_OPEN_NOMUTEX] flag is set, then the database connection ** opens in the multi-thread [threading mode] as long as the single-thread ** mode has not been set at compile-time or start-time. ^If the ** [SQLITE_OPEN_FULLMUTEX] flag is set then the database connection opens ** in the serialized [threading mode] unless single-thread was ** previously selected at compile-time or start-time. ** ^The [SQLITE_OPEN_SHAREDCACHE] flag causes the database connection to be ** eligible to use [shared cache mode], regardless of whether or not shared ** cache is enabled using [sqlite3_enable_shared_cache()]. 
^The ** [SQLITE_OPEN_PRIVATECACHE] flag causes the database connection to not ** participate in [shared cache mode] even if it is enabled. ** ** ^The fourth parameter to sqlite3_open_v2() is the name of the ** [sqlite3_vfs] object that defines the operating system interface that ** the new database connection should use. ^If the fourth parameter is ** a NULL pointer then the default [sqlite3_vfs] object is used. ** ** ^If the filename is ":memory:", then a private, temporary in-memory database ** is created for the connection. ^This in-memory database will vanish when ** the database connection is closed. Future versions of SQLite might ** make use of additional special filenames that begin with the ":" character. ** It is recommended that when a database filename actually does begin with ** a ":" character you should prefix the filename with a pathname such as ** "./" to avoid ambiguity. ** ** ^If the filename is an empty string, then a private, temporary ** on-disk database will be created. ^This private database will be ** automatically deleted as soon as the database connection is closed. ** ** [[URI filenames in sqlite3_open()]] <h3>URI Filenames</h3> ** ** ^If [URI filename] interpretation is enabled, and the filename argument ** begins with "file:", then the filename is interpreted as a URI. ^URI ** filename interpretation is enabled if the [SQLITE_OPEN_URI] flag is ** is set in the fourth argument to sqlite3_open_v2(), or if it has ** been enabled globally using the [SQLITE_CONFIG_URI] option with the ** [sqlite3_config()] method or by the [SQLITE_USE_URI] compile-time option. ** As of SQLite version 3.7.7, URI filename interpretation is turned off ** by default, but future releases of SQLite might enable URI filename ** intepretation by default. See "[URI filenames]" for additional ** information. ** ** URI filenames are parsed according to RFC 3986. ^If the URI contains an ** authority, then it must be either an empty string or the string ** "localhost". ^If the authority is not an empty string or "localhost", an ** error is returned to the caller. ^The fragment component of a URI, if ** present, is ignored. ** ** ^SQLite uses the path component of the URI as the name of the disk file ** which contains the database. ^If the path begins with a '/' character, ** then it is interpreted as an absolute path. ^If the path does not begin ** with a '/' (meaning that the authority section is omitted from the URI) ** then the path is interpreted as a relative path. ** ^On windows, the first component of an absolute path ** is a drive specification (e.g. "C:"). ** ** [[core URI query parameters]] ** The query component of a URI may contain parameters that are interpreted ** either by SQLite itself, or by a [VFS | custom VFS implementation]. ** SQLite interprets the following three query parameters: ** ** <ul> ** <li> <b>vfs</b>: ^The "vfs" parameter may be used to specify the name of ** a VFS object that provides the operating system interface that should ** be used to access the database file on disk. ^If this option is set to ** an empty string the default VFS object is used. ^Specifying an unknown ** VFS is an error. ^If sqlite3_open_v2() is used and the vfs option is ** present, then the VFS specified by the option takes precedence over ** the value passed as the fourth parameter to sqlite3_open_v2(). ** ** <li> <b>mode</b>: ^(The mode parameter may be set to either "ro", "rw" or ** "rwc". Attempting to set it to any other value is an error)^. 
** ^If "ro" is specified, then the database is opened for read-only ** access, just as if the [SQLITE_OPEN_READONLY] flag had been set in the ** third argument to sqlite3_prepare_v2(). ^If the mode option is set to ** "rw", then the database is opened for read-write (but not create) ** access, as if SQLITE_OPEN_READWRITE (but not SQLITE_OPEN_CREATE) had ** been set. ^Value "rwc" is equivalent to setting both ** SQLITE_OPEN_READWRITE and SQLITE_OPEN_CREATE. ^If sqlite3_open_v2() is ** used, it is an error to specify a value for the mode parameter that is ** less restrictive than that specified by the flags passed as the third ** parameter. ** ** <li> <b>cache</b>: ^The cache parameter may be set to either "shared" or ** "private". ^Setting it to "shared" is equivalent to setting the ** SQLITE_OPEN_SHAREDCACHE bit in the flags argument passed to ** sqlite3_open_v2(). ^Setting the cache parameter to "private" is ** equivalent to setting the SQLITE_OPEN_PRIVATECACHE bit. ** ^If sqlite3_open_v2() is used and the "cache" parameter is present in ** a URI filename, its value overrides any behaviour requested by setting ** SQLITE_OPEN_PRIVATECACHE or SQLITE_OPEN_SHAREDCACHE flag. ** </ul> ** ** ^Specifying an unknown parameter in the query component of a URI is not an ** error. Future versions of SQLite might understand additional query ** parameters. See "[query parameters with special meaning to SQLite]" for ** additional information. ** ** [[URI filename examples]] <h3>URI filename examples</h3> ** ** <table border="1" align=center cellpadding=5> ** <tr><th> URI filenames <th> Results ** <tr><td> file:data.db <td> ** Open the file "data.db" in the current directory. ** <tr><td> file:/home/fred/data.db<br> ** file:///home/fred/data.db <br> ** file://localhost/home/fred/data.db <br> <td> ** Open the database file "/home/fred/data.db". ** <tr><td> file://darkstar/home/fred/data.db <td> ** An error. "darkstar" is not a recognized authority. ** <tr><td style="white-space:nowrap"> ** file:///C:/Documents%20and%20Settings/fred/Desktop/data.db ** <td> Windows only: Open the file "data.db" on fred's desktop on drive ** C:. Note that the %20 escaping in this example is not strictly ** necessary - space characters can be used literally ** in URI filenames. ** <tr><td> file:data.db?mode=ro&cache=private <td> ** Open file "data.db" in the current directory for read-only access. ** Regardless of whether or not shared-cache mode is enabled by ** default, use a private cache. ** <tr><td> file:/home/fred/data.db?vfs=unix-nolock <td> ** Open file "/home/fred/data.db". Use the special VFS "unix-nolock". ** <tr><td> file:data.db?mode=readonly <td> ** An error. "readonly" is not a valid option for the "mode" parameter. ** </table> ** ** ^URI hexadecimal escape sequences (%HH) are supported within the path and ** query components of a URI. A hexadecimal escape sequence consists of a ** percent sign - "%" - followed by exactly two hexadecimal digits ** specifying an octet value. ^Before the path or query components of a ** URI filename are interpreted, they are encoded using UTF-8 and all ** hexadecimal escape sequences replaced by a single byte containing the ** corresponding octet. If this process generates an invalid UTF-8 encoding, ** the results are undefined. ** ** <b>Note to Windows users:</b> The encoding used for the filename argument ** of sqlite3_open() and sqlite3_open_v2() must be UTF-8, not whatever ** codepage is currently defined. 
Filenames containing international ** characters must be converted to UTF-8 prior to passing them into ** sqlite3_open() or sqlite3_open_v2(). */ |
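As a minimal sketch of how these flags and URI filenames combine in practice (the filename "data.db", the query parameter, and the error handling are illustrative, not part of the header):

#include <sqlite3.h>
#include <stdio.h>

/* Open "data.db" for reading and writing, creating it if necessary,
** with URI filename interpretation enabled so that query parameters
** such as "cache=private" are honored. */
static int open_example(sqlite3 **ppDb){
  int rc = sqlite3_open_v2(
    "file:data.db?cache=private",                          /* URI filename */
    ppDb,
    SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI,
    0                                                      /* default VFS */
  );
  if( rc!=SQLITE_OK ){
    /* A handle is normally returned even on failure; report and close it. */
    fprintf(stderr, "cannot open database: %s\n", sqlite3_errmsg(*ppDb));
    sqlite3_close(*ppDb);
    *ppDb = 0;
  }
  return rc;
}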
︙ | ︙ | |||
2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 | ); SQLITE_API int sqlite3_open_v2( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb, /* OUT: SQLite db handle */ int flags, /* Flags */ const char *zVfs /* Name of VFS module to use */ ); /* ** CAPI3REF: Error Codes And Messages ** ** ^The sqlite3_errcode() interface returns the numeric [result code] or ** [extended result code] for the most recent failed sqlite3_* API call ** associated with a [database connection]. If a prior API call failed | > > > > > > > > > > > > > > > > > > > > | 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 | ); SQLITE_API int sqlite3_open_v2( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb, /* OUT: SQLite db handle */ int flags, /* Flags */ const char *zVfs /* Name of VFS module to use */ ); /* ** CAPI3REF: Obtain Values For URI Parameters ** ** This is a utility routine, useful to VFS implementations, that checks ** to see if a database file was a URI that contained a specific query ** parameter, and if so obtains the value of the query parameter. ** ** The zFilename argument is the filename pointer passed into the xOpen() ** method of a VFS implementation. The zParam argument is the name of the ** query parameter we seek. This routine returns the value of the zParam ** parameter if it exists. If the parameter does not exist, this routine ** returns a NULL pointer. ** ** If the zFilename argument to this function is not a pointer that SQLite ** passed into the xOpen VFS method, then the behavior of this routine ** is undefined and probably undesirable. */ SQLITE_API const char *sqlite3_uri_parameter(const char *zFilename, const char *zParam); /* ** CAPI3REF: Error Codes And Messages ** ** ^The sqlite3_errcode() interface returns the numeric [result code] or ** [extended result code] for the most recent failed sqlite3_* API call ** associated with a [database connection]. If a prior API call failed |
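A sketch of how a VFS implementation might consult sqlite3_uri_parameter() from inside its xOpen() method; the "readahead" query parameter and the demoOpen function are hypothetical, invented only for illustration:

#include <sqlite3.h>
#include <stdlib.h>

/* Hypothetical xOpen() method of a custom VFS that looks for a
** "readahead=N" query parameter on the database filename. */
static int demoOpen(
  sqlite3_vfs *pVfs,
  const char *zName,            /* Filename pointer passed in by SQLite */
  sqlite3_file *pFile,
  int flags,
  int *pOutFlags
){
  const char *zVal = sqlite3_uri_parameter(zName, "readahead");
  int nReadahead = zVal ? atoi(zVal) : 0;   /* NULL means "not specified" */
  /* ... open pFile here and apply nReadahead as desired ... */
  (void)pVfs; (void)pFile; (void)flags; (void)pOutFlags; (void)nReadahead;
  return SQLITE_OK;
}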
︙ | ︙ | |||
2459 2460 2461 2462 2463 2464 2465 | ** ** These constants define various performance limits ** that can be lowered at run-time using [sqlite3_limit()]. ** The synopsis of the meanings of the various limits is shown below. ** Additional information is available at [limits | Limits in SQLite]. ** ** <dl> | | | | | | | | | > > | | 2689 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 | ** ** These constants define various performance limits ** that can be lowered at run-time using [sqlite3_limit()]. ** The synopsis of the meanings of the various limits is shown below. ** Additional information is available at [limits | Limits in SQLite]. ** ** <dl> ** [[SQLITE_LIMIT_LENGTH]] ^(<dt>SQLITE_LIMIT_LENGTH</dt> ** <dd>The maximum size of any string or BLOB or table row, in bytes.<dd>)^ ** ** [[SQLITE_LIMIT_SQL_LENGTH]] ^(<dt>SQLITE_LIMIT_SQL_LENGTH</dt> ** <dd>The maximum length of an SQL statement, in bytes.</dd>)^ ** ** [[SQLITE_LIMIT_COLUMN]] ^(<dt>SQLITE_LIMIT_COLUMN</dt> ** <dd>The maximum number of columns in a table definition or in the ** result set of a [SELECT] or the maximum number of columns in an index ** or in an ORDER BY or GROUP BY clause.</dd>)^ ** ** [[SQLITE_LIMIT_EXPR_DEPTH]] ^(<dt>SQLITE_LIMIT_EXPR_DEPTH</dt> ** <dd>The maximum depth of the parse tree on any expression.</dd>)^ ** ** [[SQLITE_LIMIT_COMPOUND_SELECT]] ^(<dt>SQLITE_LIMIT_COMPOUND_SELECT</dt> ** <dd>The maximum number of terms in a compound SELECT statement.</dd>)^ ** ** [[SQLITE_LIMIT_VDBE_OP]] ^(<dt>SQLITE_LIMIT_VDBE_OP</dt> ** <dd>The maximum number of instructions in a virtual machine program ** used to implement an SQL statement. This limit is not currently ** enforced, though that might be added in some future release of ** SQLite.</dd>)^ ** ** [[SQLITE_LIMIT_FUNCTION_ARG]] ^(<dt>SQLITE_LIMIT_FUNCTION_ARG</dt> ** <dd>The maximum number of arguments on a function.</dd>)^ ** ** [[SQLITE_LIMIT_ATTACHED]] ^(<dt>SQLITE_LIMIT_ATTACHED</dt> ** <dd>The maximum number of [ATTACH | attached databases].)^</dd> ** ** [[SQLITE_LIMIT_LIKE_PATTERN_LENGTH]] ** ^(<dt>SQLITE_LIMIT_LIKE_PATTERN_LENGTH</dt> ** <dd>The maximum length of the pattern argument to the [LIKE] or ** [GLOB] operators.</dd>)^ ** ** [[SQLITE_LIMIT_VARIABLE_NUMBER]] ** ^(<dt>SQLITE_LIMIT_VARIABLE_NUMBER</dt> ** <dd>The maximum index number of any [parameter] in an SQL statement.)^ ** ** [[SQLITE_LIMIT_TRIGGER_DEPTH]] ^(<dt>SQLITE_LIMIT_TRIGGER_DEPTH</dt> ** <dd>The maximum depth of recursion for triggers.</dd>)^ ** </dl> */ #define SQLITE_LIMIT_LENGTH 0 #define SQLITE_LIMIT_SQL_LENGTH 1 #define SQLITE_LIMIT_COLUMN 2 #define SQLITE_LIMIT_EXPR_DEPTH 3 |
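For example, an application that evaluates SQL from untrusted sources might lower some of these limits at run time with sqlite3_limit(); a brief sketch in which the specific values are arbitrary:

#include <sqlite3.h>

/* Tighten per-connection limits before evaluating untrusted SQL. */
void harden_connection(sqlite3 *db){
  sqlite3_limit(db, SQLITE_LIMIT_LENGTH,          1000000); /* 1 MB strings/BLOBs */
  sqlite3_limit(db, SQLITE_LIMIT_SQL_LENGTH,      100000);  /* 100 KB statements */
  sqlite3_limit(db, SQLITE_LIMIT_VARIABLE_NUMBER, 100);     /* at most 100 parameters */
}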
︙ | ︙ | |||
2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 | ** ** ^This interface can be used to retrieve a saved copy of the original ** SQL text used to create a [prepared statement] if that statement was ** compiled using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. */ SQLITE_API const char *sqlite3_sql(sqlite3_stmt *pStmt); /* ** CAPI3REF: Dynamically Typed Value Object ** KEYWORDS: {protected sqlite3_value} {unprotected sqlite3_value} ** ** SQLite uses the sqlite3_value object to represent all values ** that can be stored in a database table. SQLite uses dynamic typing ** for the values it stores. ^Values stored in sqlite3_value objects ** can be integers, floating point values, strings, BLOBs, or NULL. ** ** An sqlite3_value object may be either "protected" or "unprotected". ** Some interfaces require a protected sqlite3_value. Other interfaces ** will accept either a protected or an unprotected sqlite3_value. ** Every interface that accepts sqlite3_value arguments specifies ** whether or not it requires a protected sqlite3_value. ** ** The terms "protected" and "unprotected" refer to whether or not | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 | ** ** ^This interface can be used to retrieve a saved copy of the original ** SQL text used to create a [prepared statement] if that statement was ** compiled using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. */ SQLITE_API const char *sqlite3_sql(sqlite3_stmt *pStmt); /* ** CAPI3REF: Determine If An SQL Statement Writes The Database ** ** ^The sqlite3_stmt_readonly(X) interface returns true (non-zero) if ** and only if the [prepared statement] X makes no direct changes to ** the content of the database file. ** ** Note that [application-defined SQL functions] or ** [virtual tables] might change the database indirectly as a side effect. ** ^(For example, if an application defines a function "eval()" that ** calls [sqlite3_exec()], then the following SQL statement would ** change the database file through side-effects: ** ** <blockquote><pre> ** SELECT eval('DELETE FROM t1') FROM t2; ** </pre></blockquote> ** ** But because the [SELECT] statement does not change the database file ** directly, sqlite3_stmt_readonly() would still return true.)^ ** ** ^Transaction control statements such as [BEGIN], [COMMIT], [ROLLBACK], ** [SAVEPOINT], and [RELEASE] cause sqlite3_stmt_readonly() to return true, ** since the statements themselves do not actually modify the database but ** rather they control the timing of when other statements modify the ** database. ^The [ATTACH] and [DETACH] statements also cause ** sqlite3_stmt_readonly() to return true since, while those statements ** change the configuration of a database connection, they do not make ** changes to the content of the database files on disk. */ SQLITE_API int sqlite3_stmt_readonly(sqlite3_stmt *pStmt); /* ** CAPI3REF: Dynamically Typed Value Object ** KEYWORDS: {protected sqlite3_value} {unprotected sqlite3_value} ** ** SQLite uses the sqlite3_value object to represent all values ** that can be stored in a database table. SQLite uses dynamic typing ** for the values it stores. 
^Values stored in sqlite3_value objects ** can be integers, floating point values, strings, BLOBs, or NULL. ** ** An sqlite3_value object may be either "protected" or "unprotected". ** Some interfaces require a protected sqlite3_value. Other interfaces ** will accept either a protected or an unprotected sqlite3_value. ** Every interface that accepts sqlite3_value arguments specifies ** whether or not it requires a protected sqlite3_value. ** ** The terms "protected" and "unprotected" refer to whether or not ** a mutex is held. An internal mutex is held for a protected ** sqlite3_value object but no mutex is held for an unprotected ** sqlite3_value object. If SQLite is compiled to be single-threaded ** (with [SQLITE_THREADSAFE=0] and with [sqlite3_threadsafe()] returning 0) ** or if SQLite is run in one of reduced mutex modes ** [SQLITE_CONFIG_SINGLETHREAD] or [SQLITE_CONFIG_MULTITHREAD] ** then there is no distinction between protected and unprotected ** sqlite3_value objects and they can be used interchangeably. However, |
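A small sketch of sqlite3_stmt_readonly() used to gate statements on a connection that is only supposed to read; the rejection policy and the SQLITE_PERM result code are illustrative choices:

#include <sqlite3.h>

/* Prepare zSql, but refuse it unless it makes no direct changes
** to the database file. */
int prepare_readonly(sqlite3 *db, const char *zSql, sqlite3_stmt **ppStmt){
  int rc = sqlite3_prepare_v2(db, zSql, -1, ppStmt, 0);
  if( rc==SQLITE_OK && *ppStmt && !sqlite3_stmt_readonly(*ppStmt) ){
    sqlite3_finalize(*ppStmt);
    *ppStmt = 0;
    rc = SQLITE_PERM;          /* arbitrary "not allowed" result code */
  }
  return rc;
}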
︙ | ︙ | |||
2728 2729 2730 2731 2732 2733 2734 | ** number of bytes in the parameter. To be clear: the value is the ** number of <u>bytes</u> in the value, not the number of characters.)^ ** ^If the fourth parameter is negative, the length of the string is ** the number of bytes up to the first zero terminator. ** ** ^The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and ** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or | | > > > | 2991 2992 2993 2994 2995 2996 2997 2998 2999 3000 3001 3002 3003 3004 3005 3006 3007 3008 | ** number of bytes in the parameter. To be clear: the value is the ** number of <u>bytes</u> in the value, not the number of characters.)^ ** ^If the fourth parameter is negative, the length of the string is ** the number of bytes up to the first zero terminator. ** ** ^The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and ** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or ** string after SQLite has finished with it. ^The destructor is called ** to dispose of the BLOB or string even if the call to sqlite3_bind_blob(), ** sqlite3_bind_text(), or sqlite3_bind_text16() fails. ** ^If the fifth argument is ** the special value [SQLITE_STATIC], then SQLite assumes that the ** information is in static, unmanaged space and does not need to be freed. ** ^If the fifth argument has the value [SQLITE_TRANSIENT], then ** SQLite makes its own private copy of the data immediately, before ** the sqlite3_bind_*() routine returns. ** ** ^The sqlite3_bind_zeroblob() routine binds a BLOB of length N that |
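A short sketch of the destructor argument in practice; SQLITE_TRANSIENT is the safe choice when the caller's buffer may change or be freed before the statement runs:

#include <sqlite3.h>

/* Bind a UTF-8 string to parameter 1.  The negative length means
** "up to the first zero terminator", and SQLITE_TRANSIENT makes
** SQLite take a private copy before this call returns. */
int bind_name(sqlite3_stmt *pStmt, const char *zName){
  return sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_TRANSIENT);
}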
︙ | ︙ | |||
2866 2867 2868 2869 2870 2871 2872 | ** interface returns a pointer to a zero-terminated UTF-8 string ** and sqlite3_column_name16() returns a pointer to a zero-terminated ** UTF-16 string. ^The first parameter is the [prepared statement] ** that implements the [SELECT] statement. ^The second parameter is the ** column number. ^The leftmost column is number 0. ** ** ^The returned string pointer is valid until either the [prepared statement] | | > > | 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 | ** interface returns a pointer to a zero-terminated UTF-8 string ** and sqlite3_column_name16() returns a pointer to a zero-terminated ** UTF-16 string. ^The first parameter is the [prepared statement] ** that implements the [SELECT] statement. ^The second parameter is the ** column number. ^The leftmost column is number 0. ** ** ^The returned string pointer is valid until either the [prepared statement] ** is destroyed by [sqlite3_finalize()] or until the statement is automatically ** reprepared by the first call to [sqlite3_step()] for a particular run ** or until the next call to ** sqlite3_column_name() or sqlite3_column_name16() on the same column. ** ** ^If sqlite3_malloc() fails during the processing of either routine ** (for example during a conversion from UTF-8 to UTF-16) then a ** NULL pointer is returned. ** ** ^The name of a result column is the value of the "AS" clause for |
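A brief sketch of reading result column names, keeping in mind that the returned pointers only stay valid under the conditions described above:

#include <sqlite3.h>
#include <stdio.h>

/* Print the name of every column in a result set. */
void print_column_names(sqlite3_stmt *pStmt){
  int i, n = sqlite3_column_count(pStmt);
  for(i=0; i<n; i++){
    printf("column %d: %s\n", i, sqlite3_column_name(pStmt, i));
  }
}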
︙ | ︙ | |||
2892 2893 2894 2895 2896 2897 2898 | ** table column that is the origin of a particular result column in ** [SELECT] statement. ** ^The name of the database or table or column can be returned as ** either a UTF-8 or UTF-16 string. ^The _database_ routines return ** the database name, the _table_ routines return the table name, and ** the origin_ routines return the column name. ** ^The returned string is valid until the [prepared statement] is destroyed | | > > | 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 | ** table column that is the origin of a particular result column in ** [SELECT] statement. ** ^The name of the database or table or column can be returned as ** either a UTF-8 or UTF-16 string. ^The _database_ routines return ** the database name, the _table_ routines return the table name, and ** the origin_ routines return the column name. ** ^The returned string is valid until the [prepared statement] is destroyed ** using [sqlite3_finalize()] or until the statement is automatically ** reprepared by the first call to [sqlite3_step()] for a particular run ** or until the same information is requested ** again in a different encoding. ** ** ^The names returned are the original un-aliased names of the ** database, table, and column. ** ** ^The first argument to these interfaces is a [prepared statement]. ** ^These functions return information about the Nth result column returned by |
︙ | ︙ | |||
3016 3017 3018 3019 3020 3021 3022 | ** [SQLITE_MISUSE] means that the this routine was called inappropriately. ** Perhaps it was called on a [prepared statement] that has ** already been [sqlite3_finalize | finalized] or on one that had ** previously returned [SQLITE_ERROR] or [SQLITE_DONE]. Or it could ** be the case that the same database connection is being used by two or ** more threads at the same moment in time. ** | | | | | | | | > > > > | 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 3309 3310 | ** [SQLITE_MISUSE] means that the this routine was called inappropriately. ** Perhaps it was called on a [prepared statement] that has ** already been [sqlite3_finalize | finalized] or on one that had ** previously returned [SQLITE_ERROR] or [SQLITE_DONE]. Or it could ** be the case that the same database connection is being used by two or ** more threads at the same moment in time. ** ** For all versions of SQLite up to and including 3.6.23.1, a call to ** [sqlite3_reset()] was required after sqlite3_step() returned anything ** other than [SQLITE_ROW] before any subsequent invocation of ** sqlite3_step(). Failure to reset the prepared statement using ** [sqlite3_reset()] would result in an [SQLITE_MISUSE] return from ** sqlite3_step(). But after version 3.6.23.1, sqlite3_step() began ** calling [sqlite3_reset()] automatically in this circumstance rather ** than returning [SQLITE_MISUSE]. This is not considered a compatibility ** break because any application that ever receives an SQLITE_MISUSE error ** is broken by definition. The [SQLITE_OMIT_AUTORESET] compile-time option ** can be used to restore the legacy behavior. ** ** <b>Goofy Interface Alert:</b> In the legacy interface, the sqlite3_step() ** API always returns a generic error code, [SQLITE_ERROR], following any ** error other than [SQLITE_BUSY] and [SQLITE_MISUSE]. You must call ** [sqlite3_reset()] or [sqlite3_finalize()] in order to find one of the ** specific [error codes] that better describes the error. ** We admit that this is a goofy design. The problem has been fixed |
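A sketch of a conventional stepping loop; note that with the legacy interface the specific error code only becomes visible through sqlite3_reset() or sqlite3_finalize(), as described above:

#include <sqlite3.h>

/* Run a prepared statement to completion, discarding any rows,
** and return the specific error code on failure. */
int run_to_completion(sqlite3_stmt *pStmt){
  int rc;
  while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){
    /* process the current row here if desired */
  }
  /* sqlite3_finalize() reports the precise error, not just SQLITE_ERROR */
  return sqlite3_finalize(pStmt);
}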
︙ | ︙ | |||
3320 3321 3322 3323 3324 3325 3326 | ** KEYWORDS: {application-defined SQL function} ** KEYWORDS: {application-defined SQL functions} ** ** ^These functions (collectively known as "function creation routines") ** are used to add SQL functions or aggregates or to redefine the behavior ** of existing SQL functions or aggregates. The only differences between ** these routines are the text encoding expected for | | | 3594 3595 3596 3597 3598 3599 3600 3601 3602 3603 3604 3605 3606 3607 3608 | ** KEYWORDS: {application-defined SQL function} ** KEYWORDS: {application-defined SQL functions} ** ** ^These functions (collectively known as "function creation routines") ** are used to add SQL functions or aggregates or to redefine the behavior ** of existing SQL functions or aggregates. The only differences between ** these routines are the text encoding expected for ** the second parameter (the name of the function being created) ** and the presence or absence of a destructor callback for ** the application data pointer. ** ** ^The first parameter is the [database connection] to which the SQL ** function is to be added. ^If an application uses more than one database ** connection then application-defined SQL functions must be added ** to each database connection separately. |
︙ | ︙ | |||
3359 3360 3361 3362 3363 3364 3365 | ** will pick the one that involves the least amount of data conversion. ** If there is only a single implementation which does not care what text ** encoding is used, then the fourth argument should be [SQLITE_ANY]. ** ** ^(The fifth parameter is an arbitrary pointer. The implementation of the ** function can gain access to this pointer using [sqlite3_user_data()].)^ ** | | | | > | | > > | | | | 3633 3634 3635 3636 3637 3638 3639 3640 3641 3642 3643 3644 3645 3646 3647 3648 3649 3650 3651 3652 3653 3654 3655 3656 3657 3658 3659 3660 3661 3662 3663 3664 | ** will pick the one that involves the least amount of data conversion. ** If there is only a single implementation which does not care what text ** encoding is used, then the fourth argument should be [SQLITE_ANY]. ** ** ^(The fifth parameter is an arbitrary pointer. The implementation of the ** function can gain access to this pointer using [sqlite3_user_data()].)^ ** ** ^The sixth, seventh and eighth parameters, xFunc, xStep and xFinal, are ** pointers to C-language functions that implement the SQL function or ** aggregate. ^A scalar SQL function requires an implementation of the xFunc ** callback only; NULL pointers must be passed as the xStep and xFinal ** parameters. ^An aggregate SQL function requires an implementation of xStep ** and xFinal and NULL pointer must be passed for xFunc. ^To delete an existing ** SQL function or aggregate, pass NULL pointers for all three function ** callbacks. ** ** ^(If the ninth parameter to sqlite3_create_function_v2() is not NULL, ** then it is destructor for the application data pointer. ** The destructor is invoked when the function is deleted, either by being ** overloaded or when the database connection closes.)^ ** ^The destructor is also invoked if the call to ** sqlite3_create_function_v2() fails. ** ^When the destructor callback of the tenth parameter is invoked, it ** is passed a single argument which is a copy of the application data ** pointer which was the fifth parameter to sqlite3_create_function_v2(). ** ** ^It is permitted to register multiple implementations of the same ** functions with the same name but with either differing numbers of ** arguments or differing preferred text encodings. ^SQLite will use ** the implementation that most closely matches the way in which the ** SQL function is used. ^A function implementation with a non-negative ** nArg parameter is a better match than a function implementation with |
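A sketch of registering a scalar function through sqlite3_create_function_v2(); the function name "half" and its application data are invented for illustration:

#include <sqlite3.h>

/* half(X) returns X divided by two. */
static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc;
  sqlite3_result_double(ctx, 0.5 * sqlite3_value_double(argv[0]));
}

/* Destructor for the application data pointer; invoked when the
** function is deleted or if the registration call fails. */
static void halfDataDestroy(void *p){ sqlite3_free(p); }

int register_half(sqlite3 *db){
  void *pAppData = sqlite3_malloc(64);   /* arbitrary application data */
  return sqlite3_create_function_v2(db, "half", 1, SQLITE_UTF8, pAppData,
                                    halfFunc, 0, 0, halfDataDestroy);
}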
︙ | ︙ | |||
3469 3470 3471 3472 3473 3474 3475 | ** The C-language implementation of SQL functions and aggregates uses ** this set of interface routines to access the parameter values on ** the function or aggregate. ** ** The xFunc (for scalar functions) or xStep (for aggregates) parameters ** to [sqlite3_create_function()] and [sqlite3_create_function16()] ** define callbacks that implement the SQL functions and aggregates. | | | 3746 3747 3748 3749 3750 3751 3752 3753 3754 3755 3756 3757 3758 3759 3760 | ** The C-language implementation of SQL functions and aggregates uses ** this set of interface routines to access the parameter values on ** the function or aggregate. ** ** The xFunc (for scalar functions) or xStep (for aggregates) parameters ** to [sqlite3_create_function()] and [sqlite3_create_function16()] ** define callbacks that implement the SQL functions and aggregates. ** The 3rd parameter to these callbacks is an array of pointers to ** [protected sqlite3_value] objects. There is one [sqlite3_value] object for ** each parameter to the SQL function. These routines are used to ** extract values from the [sqlite3_value] objects. ** ** These routines work only with [protected sqlite3_value] objects. ** Any attempt to use these routines on an [unprotected sqlite3_value] ** object results in undefined behavior. |
︙ | ︙ | |||
3796 3797 3798 3799 3800 3801 3802 | ** ^The eTextRep argument determines the encoding of strings passed ** to the collating function callback, xCallback. ** ^The [SQLITE_UTF16] and [SQLITE_UTF16_ALIGNED] values for eTextRep ** force strings to be UTF16 with native byte order. ** ^The [SQLITE_UTF16_ALIGNED] value for eTextRep forces strings to begin ** on an even byte address. ** | | | | 4073 4074 4075 4076 4077 4078 4079 4080 4081 4082 4083 4084 4085 4086 4087 4088 4089 4090 4091 4092 4093 4094 4095 4096 4097 4098 4099 4100 4101 4102 4103 | ** ^The eTextRep argument determines the encoding of strings passed ** to the collating function callback, xCallback. ** ^The [SQLITE_UTF16] and [SQLITE_UTF16_ALIGNED] values for eTextRep ** force strings to be UTF16 with native byte order. ** ^The [SQLITE_UTF16_ALIGNED] value for eTextRep forces strings to begin ** on an even byte address. ** ** ^The fourth argument, pArg, is an application data pointer that is passed ** through as the first argument to the collating function callback. ** ** ^The fifth argument, xCallback, is a pointer to the collating function. ** ^Multiple collating functions can be registered using the same name but ** with different eTextRep parameters and SQLite will use whichever ** function requires the least amount of data transformation. ** ^If the xCallback argument is NULL then the collating function is ** deleted. ^When all collating functions having the same name are deleted, ** that collation is no longer usable. ** ** ^The collating function callback is invoked with a copy of the pArg ** application data pointer and with two strings in the encoding specified ** by the eTextRep argument. The collating function must return an ** integer that is negative, zero, or positive ** if the first string is less than, equal to, or greater than the second, ** respectively. A collating function must always return the same answer ** given the same inputs. If two or more collating functions are registered ** to the same collation name (using different eTextRep values) then all ** must give an equivalent answer when invoked with equivalent strings. ** The collating function must obey the following properties for all ** strings A, B, and C: ** ** <ol> |
︙ | ︙ | |||
3836 3837 3838 3839 3840 3841 3842 3843 3844 3845 3846 3847 3848 3849 | ** ** ^The sqlite3_create_collation_v2() works like sqlite3_create_collation() ** with the addition that the xDestroy callback is invoked on pArg when ** the collating function is deleted. ** ^Collating functions are deleted when they are overridden by later ** calls to the collation creation functions or when the ** [database connection] is closed using [sqlite3_close()]. ** ** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. */ SQLITE_API int sqlite3_create_collation( sqlite3*, const char *zName, int eTextRep, | > > > > > > > > > | 4113 4114 4115 4116 4117 4118 4119 4120 4121 4122 4123 4124 4125 4126 4127 4128 4129 4130 4131 4132 4133 4134 4135 | ** ** ^The sqlite3_create_collation_v2() works like sqlite3_create_collation() ** with the addition that the xDestroy callback is invoked on pArg when ** the collating function is deleted. ** ^Collating functions are deleted when they are overridden by later ** calls to the collation creation functions or when the ** [database connection] is closed using [sqlite3_close()]. ** ** ^The xDestroy callback is <u>not</u> called if the ** sqlite3_create_collation_v2() function fails. Applications that invoke ** sqlite3_create_collation_v2() with a non-NULL xDestroy argument should ** check the return code and dispose of the application data pointer ** themselves rather than expecting SQLite to deal with it for them. ** This is different from every other SQLite interface. The inconsistency ** is unfortunate but cannot be changed without breaking backwards ** compatibility. ** ** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. */ SQLITE_API int sqlite3_create_collation( sqlite3*, const char *zName, int eTextRep, |
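A sketch of a collating function registered through sqlite3_create_collation_v2(); the name "NOCASE2" and the ASCII-only comparison are illustrative, and no xDestroy callback is supplied so the caveat above about cleanup on failure does not arise:

#include <sqlite3.h>

/* Case-insensitive comparison for ASCII text. */
static int nocaseCmp(void *pArg, int n1, const void *z1,
                                 int n2, const void *z2){
  int n = n1<n2 ? n1 : n2;
  int rc = sqlite3_strnicmp((const char*)z1, (const char*)z2, n);
  (void)pArg;
  return rc!=0 ? rc : n1-n2;   /* shorter string sorts first on a tie */
}

int register_nocase2(sqlite3 *db){
  return sqlite3_create_collation_v2(db, "NOCASE2", SQLITE_UTF8,
                                     0, nocaseCmp, 0);
}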
︙ | ︙ | |||
4215 4216 4217 4218 4219 4220 4221 | ** if one or more of following conditions are true: ** ** <ul> ** <li> The soft heap limit is set to zero. ** <li> Memory accounting is disabled using a combination of the ** [sqlite3_config]([SQLITE_CONFIG_MEMSTATUS],...) start-time option and ** the [SQLITE_DEFAULT_MEMSTATUS] compile-time option. | | | 4501 4502 4503 4504 4505 4506 4507 4508 4509 4510 4511 4512 4513 4514 4515 | ** if one or more of following conditions are true: ** ** <ul> ** <li> The soft heap limit is set to zero. ** <li> Memory accounting is disabled using a combination of the ** [sqlite3_config]([SQLITE_CONFIG_MEMSTATUS],...) start-time option and ** the [SQLITE_DEFAULT_MEMSTATUS] compile-time option. ** <li> An alternative page cache implementation is specified using ** [sqlite3_config]([SQLITE_CONFIG_PCACHE],...). ** <li> The page cache allocates from its own memory pool supplied ** by [sqlite3_config]([SQLITE_CONFIG_PAGECACHE],...) rather than ** from the heap. ** </ul>)^ ** ** Beginning with SQLite version 3.7.3, the soft heap limit is enforced |
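A sketch of setting the limit at run time with sqlite3_soft_heap_limit64(); the 8 MB figure is arbitrary, and passing zero disables the limit entirely, matching the first condition in the list above:

#include <sqlite3.h>

/* Ask SQLite to try to keep heap usage below roughly 8 MB.
** The return value is the previous limit. */
sqlite3_int64 set_heap_budget(void){
  return sqlite3_soft_heap_limit64(8*1024*1024);
}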
︙ | ︙ | |||
4436 4437 4438 4439 4440 4441 4442 | typedef struct sqlite3_vtab_cursor sqlite3_vtab_cursor; typedef struct sqlite3_module sqlite3_module; /* ** CAPI3REF: Virtual Table Object ** KEYWORDS: sqlite3_module {virtual table module} ** | | | 4722 4723 4724 4725 4726 4727 4728 4729 4730 4731 4732 4733 4734 4735 4736 | typedef struct sqlite3_vtab_cursor sqlite3_vtab_cursor; typedef struct sqlite3_module sqlite3_module; /* ** CAPI3REF: Virtual Table Object ** KEYWORDS: sqlite3_module {virtual table module} ** ** This structure, sometimes called a "virtual table module", ** defines the implementation of a [virtual tables]. ** This structure consists mostly of methods for the module. ** ** ^A virtual table module is created by filling in a persistent ** instance of this structure and passing a pointer to that instance ** to [sqlite3_create_module()] or [sqlite3_create_module_v2()]. ** ^The registration remains valid until it is replaced by a different |
︙ | ︙ | |||
4476 4477 4478 4479 4480 4481 4482 4483 4484 4485 4486 4487 4488 4489 | int (*xSync)(sqlite3_vtab *pVTab); int (*xCommit)(sqlite3_vtab *pVTab); int (*xRollback)(sqlite3_vtab *pVTab); int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName, void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), void **ppArg); int (*xRename)(sqlite3_vtab *pVtab, const char *zNew); }; /* ** CAPI3REF: Virtual Table Indexing Information ** KEYWORDS: sqlite3_index_info ** ** The sqlite3_index_info structure and its substructures is used as part | > > > > > | 4762 4763 4764 4765 4766 4767 4768 4769 4770 4771 4772 4773 4774 4775 4776 4777 4778 4779 4780 | int (*xSync)(sqlite3_vtab *pVTab); int (*xCommit)(sqlite3_vtab *pVTab); int (*xRollback)(sqlite3_vtab *pVTab); int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName, void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), void **ppArg); int (*xRename)(sqlite3_vtab *pVtab, const char *zNew); /* The methods above are in version 1 of the sqlite_module object. Those ** below are for version 2 and greater. */ int (*xSavepoint)(sqlite3_vtab *pVTab, int); int (*xRelease)(sqlite3_vtab *pVTab, int); int (*xRollbackTo)(sqlite3_vtab *pVTab, int); }; /* ** CAPI3REF: Virtual Table Indexing Information ** KEYWORDS: sqlite3_index_info ** ** The sqlite3_index_info structure and its substructures is used as part |
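A sketch of how the new version-2 fields might be filled in for a virtual table that needs no real transaction support; demoSavepoint and friends are hypothetical, and a real module must of course also supply xCreate, xBestIndex, and the other version-1 methods:

#include <sqlite3.h>

/* No-op savepoint handling for a stateless virtual table. */
static int demoSavepoint(sqlite3_vtab *pVTab, int iSvpt){
  (void)pVTab; (void)iSvpt; return SQLITE_OK;
}
static int demoRelease(sqlite3_vtab *pVTab, int iSvpt){
  (void)pVTab; (void)iSvpt; return SQLITE_OK;
}
static int demoRollbackTo(sqlite3_vtab *pVTab, int iSvpt){
  (void)pVTab; (void)iSvpt; return SQLITE_OK;
}

static sqlite3_module demoModule = {
  .iVersion    = 2,              /* version 2 makes the fields below meaningful */
  .xSavepoint  = demoSavepoint,
  .xRelease    = demoRelease,
  .xRollbackTo = demoRollbackTo
  /* a real module must also fill in xCreate/xConnect, xBestIndex,
  ** xOpen/xClose, xFilter, xNext, xEof, xColumn, xRowid, and so on */
};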
︙ | ︙ | |||
4591 4592 4593 4594 4595 4596 4597 | ** parameter is an arbitrary client data pointer that is passed through ** into the [xCreate] and [xConnect] methods of the virtual table module ** when a new virtual table is be being created or reinitialized. ** ** ^The sqlite3_create_module_v2() interface has a fifth parameter which ** is a pointer to a destructor for the pClientData. ^SQLite will ** invoke the destructor function (if it is not NULL) when SQLite | | > > | 4882 4883 4884 4885 4886 4887 4888 4889 4890 4891 4892 4893 4894 4895 4896 4897 4898 | ** parameter is an arbitrary client data pointer that is passed through ** into the [xCreate] and [xConnect] methods of the virtual table module ** when a new virtual table is be being created or reinitialized. ** ** ^The sqlite3_create_module_v2() interface has a fifth parameter which ** is a pointer to a destructor for the pClientData. ^SQLite will ** invoke the destructor function (if it is not NULL) when SQLite ** no longer needs the pClientData pointer. ^The destructor will also ** be invoked if the call to sqlite3_create_module_v2() fails. ** ^The sqlite3_create_module() ** interface is equivalent to sqlite3_create_module_v2() with a NULL ** destructor. */ SQLITE_API int sqlite3_create_module( sqlite3 *db, /* SQLite connection to register module with */ const char *zName, /* Name of the module */ const sqlite3_module *p, /* Methods for the module */ |
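And a sketch of registering such a module; demoModule stands for a fully populated sqlite3_module defined elsewhere (the partial sketch above shows only the version-2 fields), with sqlite3_free() serving as the pClientData destructor:

#include <sqlite3.h>

extern sqlite3_module demoModule;     /* defined elsewhere */

/* Register "demo".  pClientData is handed to xCreate/xConnect and is
** freed automatically when SQLite no longer needs it, or if this
** call fails. */
int register_demo(sqlite3 *db, void *pClientData){
  return sqlite3_create_module_v2(db, "demo", &demoModule,
                                  pClientData, sqlite3_free);
}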
︙ | ︙ | |||
4746 4747 4748 4749 4750 4751 4752 | ** ** ^(If the row that a BLOB handle points to is modified by an ** [UPDATE], [DELETE], or by [ON CONFLICT] side-effects ** then the BLOB handle is marked as "expired". ** This is true if any column of the row is changed, even a column ** other than the one the BLOB handle is open on.)^ ** ^Calls to [sqlite3_blob_read()] and [sqlite3_blob_write()] for | | | 5039 5040 5041 5042 5043 5044 5045 5046 5047 5048 5049 5050 5051 5052 5053 | ** ** ^(If the row that a BLOB handle points to is modified by an ** [UPDATE], [DELETE], or by [ON CONFLICT] side-effects ** then the BLOB handle is marked as "expired". ** This is true if any column of the row is changed, even a column ** other than the one the BLOB handle is open on.)^ ** ^Calls to [sqlite3_blob_read()] and [sqlite3_blob_write()] for ** an expired BLOB handle fail with a return code of [SQLITE_ABORT]. ** ^(Changes written into a BLOB prior to the BLOB expiring are not ** rolled back by the expiration of the BLOB. Such changes will eventually ** commit if the transaction continues to completion.)^ ** ** ^Use the [sqlite3_blob_bytes()] interface to determine the size of ** the opened blob. ^The size of a blob may not be changed by this ** interface. Use the [UPDATE] SQL command to change the size of a |
︙ | ︙ | |||
4774 4775 4776 4777 4778 4779 4780 4781 4782 4783 4784 4785 4786 4787 | const char *zTable, const char *zColumn, sqlite3_int64 iRow, int flags, sqlite3_blob **ppBlob ); /* ** CAPI3REF: Close A BLOB Handle ** ** ^Closes an open [BLOB handle]. ** ** ^Closing a BLOB shall cause the current transaction to commit ** if there are no other BLOBs, no pending prepared statements, and the | > > > > > > > > > > > > > > > > > > > > > > > > | 5067 5068 5069 5070 5071 5072 5073 5074 5075 5076 5077 5078 5079 5080 5081 5082 5083 5084 5085 5086 5087 5088 5089 5090 5091 5092 5093 5094 5095 5096 5097 5098 5099 5100 5101 5102 5103 5104 | const char *zTable, const char *zColumn, sqlite3_int64 iRow, int flags, sqlite3_blob **ppBlob ); /* ** CAPI3REF: Move a BLOB Handle to a New Row ** ** ^This function is used to move an existing blob handle so that it points ** to a different row of the same database table. ^The new row is identified ** by the rowid value passed as the second argument. Only the row can be ** changed. ^The database, table and column on which the blob handle is open ** remain the same. Moving an existing blob handle to a new row can be ** faster than closing the existing handle and opening a new one. ** ** ^(The new row must meet the same criteria as for [sqlite3_blob_open()] - ** it must exist and there must be either a blob or text value stored in ** the nominated column.)^ ^If the new row is not present in the table, or if ** it does not contain a blob or text value, or if another error occurs, an ** SQLite error code is returned and the blob handle is considered aborted. ** ^All subsequent calls to [sqlite3_blob_read()], [sqlite3_blob_write()] or ** [sqlite3_blob_reopen()] on an aborted blob handle immediately return ** SQLITE_ABORT. ^Calling [sqlite3_blob_bytes()] on an aborted blob handle ** always returns zero. ** ** ^This function sets the database handle error code and message. */ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64); /* ** CAPI3REF: Close A BLOB Handle ** ** ^Closes an open [BLOB handle]. ** ** ^Closing a BLOB shall cause the current transaction to commit ** if there are no other BLOBs, no pending prepared statements, and the |
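A sketch of reusing one BLOB handle across several rows with sqlite3_blob_reopen(); the table "t1", the column "data", and the assumptions that nRow is at least one and every blob holds at least nBuf bytes are all illustrative:

#include <sqlite3.h>

/* Read the first nBuf bytes from column "data" of table "t1" for each
** rowid in aRowid[0..nRow-1], reusing a single read-only BLOB handle. */
int read_blobs(sqlite3 *db, const sqlite3_int64 *aRowid, int nRow,
               char *zBuf, int nBuf){
  sqlite3_blob *pBlob = 0;
  int i;
  int rc = sqlite3_blob_open(db, "main", "t1", "data", aRowid[0], 0, &pBlob);
  if( rc!=SQLITE_OK ) return rc;
  for(i=0; rc==SQLITE_OK && i<nRow; i++){
    if( i>0 ) rc = sqlite3_blob_reopen(pBlob, aRowid[i]);  /* move to next row */
    if( rc==SQLITE_OK ) rc = sqlite3_blob_read(pBlob, zBuf, nBuf, 0);
  }
  sqlite3_blob_close(pBlob);
  return rc;
}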
︙ | ︙ | |||
5162 5163 5164 5165 5166 5167 5168 | #define SQLITE_MUTEX_RECURSIVE 1 #define SQLITE_MUTEX_STATIC_MASTER 2 #define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */ #define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */ #define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */ #define SQLITE_MUTEX_STATIC_PRNG 5 /* sqlite3_random() */ #define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ | | > | > > > > > > | 5479 5480 5481 5482 5483 5484 5485 5486 5487 5488 5489 5490 5491 5492 5493 5494 5495 5496 5497 5498 5499 5500 5501 5502 5503 5504 5505 5506 5507 5508 5509 5510 5511 5512 5513 5514 5515 5516 5517 5518 5519 5520 5521 5522 5523 5524 5525 5526 5527 | #define SQLITE_MUTEX_RECURSIVE 1 #define SQLITE_MUTEX_STATIC_MASTER 2 #define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */ #define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */ #define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */ #define SQLITE_MUTEX_STATIC_PRNG 5 /* sqlite3_random() */ #define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ #define SQLITE_MUTEX_STATIC_LRU2 7 /* NOT USED */ #define SQLITE_MUTEX_STATIC_PMEM 7 /* sqlite3PageMalloc() */ /* ** CAPI3REF: Retrieve the mutex for a database connection ** ** ^This interface returns a pointer the [sqlite3_mutex] object that ** serializes access to the [database connection] given in the argument ** when the [threading mode] is Serialized. ** ^If the [threading mode] is Single-thread or Multi-thread then this ** routine returns a NULL pointer. */ SQLITE_API sqlite3_mutex *sqlite3_db_mutex(sqlite3*); /* ** CAPI3REF: Low-Level Control Of Database Files ** ** ^The [sqlite3_file_control()] interface makes a direct call to the ** xFileControl method for the [sqlite3_io_methods] object associated ** with a particular database identified by the second argument. ^The ** name of the database is "main" for the main database or "temp" for the ** TEMP database, or the name that appears after the AS keyword for ** databases that are added using the [ATTACH] SQL command. ** ^A NULL pointer can be used in place of "main" to refer to the ** main database file. ** ^The third and fourth parameters to this routine ** are passed directly through to the second and third parameters of ** the xFileControl method. ^The return value of the xFileControl ** method becomes the return value of this routine. ** ** ^The SQLITE_FCNTL_FILE_POINTER value for the op parameter causes ** a pointer to the underlying [sqlite3_file] object to be written into ** the space pointed to by the 4th parameter. ^The SQLITE_FCNTL_FILE_POINTER ** case is a short-circuit path which does not actually invoke the ** underlying sqlite3_io_methods.xFileControl method. ** ** ^If the second parameter (zDbName) does not match the name of any ** open database file, then SQLITE_ERROR is returned. ^This error ** code is not remembered and will not be recalled by [sqlite3_errcode()] ** or [sqlite3_errmsg()]. The underlying xFileControl method might ** also return SQLITE_ERROR. There is no way to distinguish between ** an incorrect zDbName and an SQLITE_ERROR return from the underlying |
︙ | ︙ | |||
5257 5258 5259 5260 5261 5262 5263 | /* ** CAPI3REF: SQLite Runtime Status ** ** ^This interface is used to retrieve runtime status information ** about the performance of SQLite, and optionally to reset various ** highwater marks. ^The first argument is an integer code for ** the specific parameter to measure. ^(Recognized integer codes | | | 5581 5582 5583 5584 5585 5586 5587 5588 5589 5590 5591 5592 5593 5594 5595 | /* ** CAPI3REF: SQLite Runtime Status ** ** ^This interface is used to retrieve runtime status information ** about the performance of SQLite, and optionally to reset various ** highwater marks. ^The first argument is an integer code for ** the specific parameter to measure. ^(Recognized integer codes ** are of the form [status parameters | SQLITE_STATUS_...].)^ ** ^The current value of the parameter is returned into *pCurrent. ** ^The highest recorded value is returned in *pHighwater. ^If the ** resetFlag is true, then the highest record value is reset after ** *pHighwater is written. ^(Some parameters do not record the highest ** value. For those parameters ** nothing is written into *pHighwater and the resetFlag is ignored.)^ ** ^(Other parameters record only the highwater mark and not the current |
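A sketch of the SQLITE_FCNTL_FILE_POINTER short-circuit described above, used to obtain the sqlite3_file object for the main database:

#include <sqlite3.h>

/* Return the low-level sqlite3_file for the "main" database, or NULL
** if the name does not match an open database file. */
sqlite3_file *main_database_file(sqlite3 *db){
  sqlite3_file *pFile = 0;
  if( sqlite3_file_control(db, "main", SQLITE_FCNTL_FILE_POINTER, &pFile)
        !=SQLITE_OK ){
    return 0;
  }
  return pFile;
}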
︙ | ︙ | |||
5284 5285 5286 5287 5288 5289 5290 5291 5292 5293 5294 5295 | ** See also: [sqlite3_db_status()] */ SQLITE_API int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); /* ** CAPI3REF: Status Parameters ** ** These integer constants designate various run-time status parameters ** that can be returned by [sqlite3_status()]. ** ** <dl> | > | | | | > | > | | | | | | 5608 5609 5610 5611 5612 5613 5614 5615 5616 5617 5618 5619 5620 5621 5622 5623 5624 5625 5626 5627 5628 5629 5630 5631 5632 5633 5634 5635 5636 5637 5638 5639 5640 5641 5642 5643 5644 5645 5646 5647 5648 5649 5650 5651 5652 5653 5654 5655 5656 5657 5658 5659 5660 5661 5662 5663 5664 5665 5666 5667 5668 5669 5670 5671 5672 5673 5674 5675 5676 5677 5678 5679 5680 5681 5682 5683 5684 5685 5686 5687 5688 5689 5690 5691 5692 5693 5694 5695 | ** See also: [sqlite3_db_status()] */ SQLITE_API int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); /* ** CAPI3REF: Status Parameters ** KEYWORDS: {status parameters} ** ** These integer constants designate various run-time status parameters ** that can be returned by [sqlite3_status()]. ** ** <dl> ** [[SQLITE_STATUS_MEMORY_USED]] ^(<dt>SQLITE_STATUS_MEMORY_USED</dt> ** <dd>This parameter is the current amount of memory checked out ** using [sqlite3_malloc()], either directly or indirectly. The ** figure includes calls made to [sqlite3_malloc()] by the application ** and internal memory usage by the SQLite library. Scratch memory ** controlled by [SQLITE_CONFIG_SCRATCH] and auxiliary page-cache ** memory controlled by [SQLITE_CONFIG_PAGECACHE] is not included in ** this parameter. The amount returned is the sum of the allocation ** sizes as reported by the xSize method in [sqlite3_mem_methods].</dd>)^ ** ** [[SQLITE_STATUS_MALLOC_SIZE]] ^(<dt>SQLITE_STATUS_MALLOC_SIZE</dt> ** <dd>This parameter records the largest memory allocation request ** handed to [sqlite3_malloc()] or [sqlite3_realloc()] (or their ** internal equivalents). Only the value returned in the ** *pHighwater parameter to [sqlite3_status()] is of interest. ** The value written into the *pCurrent parameter is undefined.</dd>)^ ** ** [[SQLITE_STATUS_MALLOC_COUNT]] ^(<dt>SQLITE_STATUS_MALLOC_COUNT</dt> ** <dd>This parameter records the number of separate memory allocations ** currently checked out.</dd>)^ ** ** [[SQLITE_STATUS_PAGECACHE_USED]] ^(<dt>SQLITE_STATUS_PAGECACHE_USED</dt> ** <dd>This parameter returns the number of pages used out of the ** [pagecache memory allocator] that was configured using ** [SQLITE_CONFIG_PAGECACHE]. The ** value returned is in pages, not in bytes.</dd>)^ ** ** [[SQLITE_STATUS_PAGECACHE_OVERFLOW]] ** ^(<dt>SQLITE_STATUS_PAGECACHE_OVERFLOW</dt> ** <dd>This parameter returns the number of bytes of page cache ** allocation which could not be satisfied by the [SQLITE_CONFIG_PAGECACHE] ** buffer and where forced to overflow to [sqlite3_malloc()]. The ** returned value includes allocations that overflowed because they ** where too large (they were larger than the "sz" parameter to ** [SQLITE_CONFIG_PAGECACHE]) and allocations that overflowed because ** no space was left in the page cache.</dd>)^ ** ** [[SQLITE_STATUS_PAGECACHE_SIZE]] ^(<dt>SQLITE_STATUS_PAGECACHE_SIZE</dt> ** <dd>This parameter records the largest memory allocation request ** handed to [pagecache memory allocator]. Only the value returned in the ** *pHighwater parameter to [sqlite3_status()] is of interest. 
** The value written into the *pCurrent parameter is undefined.</dd>)^ ** ** [[SQLITE_STATUS_SCRATCH_USED]] ^(<dt>SQLITE_STATUS_SCRATCH_USED</dt> ** <dd>This parameter returns the number of allocations used out of the ** [scratch memory allocator] configured using ** [SQLITE_CONFIG_SCRATCH]. The value returned is in allocations, not ** in bytes. Since a single thread may only have one scratch allocation ** outstanding at a time, this parameter also reports the number of threads ** using scratch memory at the same time.</dd>)^ ** ** [[SQLITE_STATUS_SCRATCH_OVERFLOW]] ^(<dt>SQLITE_STATUS_SCRATCH_OVERFLOW</dt> ** <dd>This parameter returns the number of bytes of scratch memory ** allocation which could not be satisfied by the [SQLITE_CONFIG_SCRATCH] ** buffer and were forced to overflow to [sqlite3_malloc()]. The values ** returned include overflows because the requested allocation was too ** large (that is, because the requested allocation was larger than the ** "sz" parameter to [SQLITE_CONFIG_SCRATCH]) and because no scratch buffer ** slots were available. ** </dd>)^ ** ** [[SQLITE_STATUS_SCRATCH_SIZE]] ^(<dt>SQLITE_STATUS_SCRATCH_SIZE</dt> ** <dd>This parameter records the largest memory allocation request ** handed to [scratch memory allocator]. Only the value returned in the ** *pHighwater parameter to [sqlite3_status()] is of interest. ** The value written into the *pCurrent parameter is undefined.</dd>)^ ** ** [[SQLITE_STATUS_PARSER_STACK]] ^(<dt>SQLITE_STATUS_PARSER_STACK</dt> ** <dd>This parameter records the deepest parser stack. It is only ** meaningful if SQLite is compiled with [YYTRACKMAXSTACKDEPTH].</dd>)^ ** </dl> ** ** New status parameters may be added from time to time. */ #define SQLITE_STATUS_MEMORY_USED 0
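A sketch of polling one of these parameters; SQLITE_STATUS_MEMORY_USED is taken from the list above, and the resetFlag is left at zero so the high-water mark is preserved:

#include <sqlite3.h>
#include <stdio.h>

/* Report current and peak heap usage by the SQLite library. */
void report_memory_used(void){
  int iCur = 0, iHiwtr = 0;
  if( sqlite3_status(SQLITE_STATUS_MEMORY_USED, &iCur, &iHiwtr, 0)==SQLITE_OK ){
    printf("SQLite heap: %d bytes in use, %d bytes at peak\n", iCur, iHiwtr);
  }
}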
︙ | ︙ | |||
5379 5380 5381 5382 5383 5384 5385 | /* ** CAPI3REF: Database Connection Status ** ** ^This interface is used to retrieve runtime status information ** about a single [database connection]. ^The first argument is the ** database connection object to be interrogated. ^The second argument ** is an integer constant, taken from the set of | | | > | > > > > > > | > > > > > > > > > > > > > > > | | | | | | > > > | | | > | | | | 5706 5707 5708 5709 5710 5711 5712 5713 5714 5715 5716 5717 5718 5719 5720 5721 5722 5723 5724 5725 5726 5727 5728 5729 5730 5731 5732 5733 5734 5735 5736 5737 5738 5739 5740 5741 5742 5743 5744 5745 5746 5747 5748 5749 5750 5751 5752 5753 5754 5755 5756 5757 5758 5759 5760 5761 5762 5763 5764 5765 5766 5767 5768 5769 5770 5771 5772 5773 5774 5775 5776 5777 5778 5779 5780 5781 5782 5783 5784 5785 5786 5787 5788 5789 5790 5791 5792 5793 5794 5795 5796 5797 5798 5799 5800 5801 5802 5803 5804 5805 5806 5807 5808 5809 5810 5811 5812 5813 5814 5815 5816 5817 5818 5819 5820 5821 5822 5823 5824 5825 5826 5827 5828 5829 5830 5831 5832 5833 5834 5835 5836 5837 5838 5839 5840 5841 5842 5843 5844 5845 5846 5847 5848 5849 5850 5851 5852 5853 | /* ** CAPI3REF: Database Connection Status ** ** ^This interface is used to retrieve runtime status information ** about a single [database connection]. ^The first argument is the ** database connection object to be interrogated. ^The second argument ** is an integer constant, taken from the set of ** [SQLITE_DBSTATUS options], that ** determines the parameter to interrogate. The set of ** [SQLITE_DBSTATUS options] is likely ** to grow in future releases of SQLite. ** ** ^The current value of the requested parameter is written into *pCur ** and the highest instantaneous value is written into *pHiwtr. ^If ** the resetFlg is true, then the highest instantaneous value is ** reset back down to the current value. ** ** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a ** non-zero [error code] on failure. ** ** See also: [sqlite3_status()] and [sqlite3_stmt_status()]. */ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg); /* ** CAPI3REF: Status Parameters for database connections ** KEYWORDS: {SQLITE_DBSTATUS options} ** ** These constants are the available integer "verbs" that can be passed as ** the second argument to the [sqlite3_db_status()] interface. ** ** New verbs may be added in future releases of SQLite. Existing verbs ** might be discontinued. Applications should check the return code from ** [sqlite3_db_status()] to make sure that the call worked. ** The [sqlite3_db_status()] interface will return a non-zero error code ** if a discontinued or unsupported verb is invoked. ** ** <dl> ** [[SQLITE_DBSTATUS_LOOKASIDE_USED]] ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_USED</dt> ** <dd>This parameter returns the number of lookaside memory slots currently ** checked out.</dd>)^ ** ** [[SQLITE_DBSTATUS_LOOKASIDE_HIT]] ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_HIT</dt> ** <dd>This parameter returns the number malloc attempts that were ** satisfied using lookaside memory. Only the high-water value is meaningful; ** the current value is always zero.)^ ** ** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE]] ** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE</dt> ** <dd>This parameter returns the number malloc attempts that might have ** been satisfied using lookaside memory but failed due to the amount of ** memory requested being larger than the lookaside slot size. 
** Only the high-water value is meaningful; ** the current value is always zero.)^ ** ** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL]] ** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL</dt> ** <dd>This parameter returns the number of malloc attempts that might have ** been satisfied using lookaside memory but failed due to all lookaside ** memory already being in use. ** Only the high-water value is meaningful; ** the current value is always zero.)^ ** ** [[SQLITE_DBSTATUS_CACHE_USED]] ^(<dt>SQLITE_DBSTATUS_CACHE_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** memory used by all pager caches associated with the database connection.)^ ** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0. ** ** [[SQLITE_DBSTATUS_SCHEMA_USED]] ^(<dt>SQLITE_DBSTATUS_SCHEMA_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** memory used to store the schema for all databases associated ** with the connection - main, temp, and any [ATTACH]-ed databases.)^ ** ^The full amount of memory used by the schemas is reported, even if the ** schema memory is shared with other database connections due to ** [shared cache mode] being enabled. ** ^The highwater mark associated with SQLITE_DBSTATUS_SCHEMA_USED is always 0. ** ** [[SQLITE_DBSTATUS_STMT_USED]] ^(<dt>SQLITE_DBSTATUS_STMT_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** and lookaside memory used by all prepared statements associated with ** the database connection.)^ ** ^The highwater mark associated with SQLITE_DBSTATUS_STMT_USED is always 0. ** </dd> ** </dl> */ #define SQLITE_DBSTATUS_LOOKASIDE_USED 0 #define SQLITE_DBSTATUS_CACHE_USED 1 #define SQLITE_DBSTATUS_SCHEMA_USED 2 #define SQLITE_DBSTATUS_STMT_USED 3 #define SQLITE_DBSTATUS_LOOKASIDE_HIT 4 #define SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE 5 #define SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL 6 #define SQLITE_DBSTATUS_MAX 6 /* Largest defined DBSTATUS */ /* ** CAPI3REF: Prepared Statement Status ** ** ^(Each prepared statement maintains various ** [SQLITE_STMTSTATUS counters] that measure the number ** of times it has performed specific operations.)^ These counters can ** be used to monitor the performance characteristics of the prepared ** statements. For example, if the number of table steps greatly exceeds ** the number of table searches or result rows, that would tend to indicate ** that the prepared statement is using a full table scan rather than ** an index. ** ** ^(This interface is used to retrieve and reset counter values from ** a [prepared statement]. The first argument is the prepared statement ** object to be interrogated. The second argument ** is an integer code for a specific [SQLITE_STMTSTATUS counter] ** to be interrogated.)^ ** ^The current value of the requested counter is returned. ** ^If the resetFlg is true, then the counter is reset to zero after this ** interface call returns. ** ** See also: [sqlite3_status()] and [sqlite3_db_status()]. */ SQLITE_API int sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg); /* ** CAPI3REF: Status Parameters for prepared statements ** KEYWORDS: {SQLITE_STMTSTATUS counter} {SQLITE_STMTSTATUS counters} ** ** These preprocessor macros define integer codes that name counter ** values associated with the [sqlite3_stmt_status()] interface.
** The meanings of the various counters are as follows: ** ** <dl> ** [[SQLITE_STMTSTATUS_FULLSCAN_STEP]] <dt>SQLITE_STMTSTATUS_FULLSCAN_STEP</dt> ** <dd>^This is the number of times that SQLite has stepped forward in ** a table as part of a full table scan. Large numbers for this counter ** may indicate opportunities for performance improvement through ** careful use of indices.</dd> ** ** [[SQLITE_STMTSTATUS_SORT]] <dt>SQLITE_STMTSTATUS_SORT</dt> ** <dd>^This is the number of sort operations that have occurred. ** A non-zero value in this counter may indicate an opportunity to ** improve performance through careful use of indices.</dd> ** ** [[SQLITE_STMTSTATUS_AUTOINDEX]] <dt>SQLITE_STMTSTATUS_AUTOINDEX</dt> ** <dd>^This is the number of rows inserted into transient indices that ** were created automatically in order to help joins run faster. ** A non-zero value in this counter may indicate an opportunity to ** improve performance by adding permanent indices that do not ** need to be reinitialized each time the statement is run.</dd> ** ** </dl> |
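The two status interfaces above follow the same pattern: pass a verb constant, read back the current and high-water values. A minimal sketch of how a caller might poll them; the helper name and the printf reporting are illustrative only, while the sqlite3_db_status()/sqlite3_stmt_status() calls and the constants are the ones documented in this hunk.

#include <stdio.h>
#include "sqlite3.h"

/* Hypothetical reporting helper; only the sqlite3_db_status() and
** sqlite3_stmt_status() calls and the status constants come from the
** interfaces documented above. */
static void report_status(sqlite3 *db, sqlite3_stmt *pStmt){
  int cur = 0, hiwtr = 0;

  /* Approximate bytes of heap memory held by this connection's pager
  ** caches.  resetFlg is 0, so the high-water mark is left untouched. */
  if( sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &cur, &hiwtr, 0)==SQLITE_OK ){
    printf("pager cache memory: %d bytes\n", cur);
  }

  /* Lookaside slots currently checked out, plus the peak value. */
  if( sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &cur, &hiwtr, 0)==SQLITE_OK ){
    printf("lookaside: %d slots in use, %d peak\n", cur, hiwtr);
  }

  /* Full-table-scan steps taken by one prepared statement; a large value
  ** hints at a missing index.  Passing 1 resets the counter afterwards. */
  printf("full-scan steps: %d\n",
         sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP, 1));
}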
︙ | ︙ | |||
5537 5538 5539 5540 5541 5542 5543 5544 5545 5546 5547 5548 5549 5550 5551 5552 5553 5554 5555 5556 5557 5558 5559 5560 5561 5562 5563 5564 5565 5566 5567 5568 5569 5570 5571 5572 5573 | ** The built-in page cache is recommended for most uses. ** ** ^(The contents of the sqlite3_pcache_methods structure are copied to an ** internal buffer by SQLite within the call to [sqlite3_config]. Hence ** the application may discard the parameter after the call to ** [sqlite3_config()] returns.)^ ** ** ^(The xInit() method is called once for each effective ** call to [sqlite3_initialize()])^ ** (usually only once during the lifetime of the process). ^(The xInit() ** method is passed a copy of the sqlite3_pcache_methods.pArg value.)^ ** The intent of the xInit() method is to set up global data structures ** required by the custom page cache implementation. ** ^(If the xInit() method is NULL, then the ** built-in default page cache is used instead of the application defined ** page cache.)^ ** ** ^The xShutdown() method is called by [sqlite3_shutdown()]. ** It can be used to clean up ** any outstanding resources before process shutdown, if required. ** ^The xShutdown() method may be NULL. ** ** ^SQLite automatically serializes calls to the xInit method, ** so the xInit method need not be threadsafe. ^The ** xShutdown method is only called from [sqlite3_shutdown()] so it does ** not need to be threadsafe either. All other methods must be threadsafe ** in multithreaded applications. ** ** ^SQLite will never invoke xInit() more than once without an intervening ** call to xShutdown(). ** ** ^SQLite invokes the xCreate() method to construct a new cache instance. ** SQLite will typically create one cache instance for each open database file, ** though this is not guaranteed. ^The ** first parameter, szPage, is the size in bytes of the pages that must ** be allocated by the cache. ^szPage will not be a power of two. ^szPage ** will the page size of the database file that is to be cached plus an | > > > | | > > > > > | > > > | 5890 5891 5892 5893 5894 5895 5896 5897 5898 5899 5900 5901 5902 5903 5904 5905 5906 5907 5908 5909 5910 5911 5912 5913 5914 5915 5916 5917 5918 5919 5920 5921 5922 5923 5924 5925 5926 5927 5928 5929 5930 5931 5932 5933 5934 5935 5936 5937 5938 5939 5940 5941 5942 5943 5944 5945 5946 5947 5948 5949 5950 5951 5952 5953 5954 5955 5956 5957 5958 5959 5960 5961 5962 5963 5964 5965 5966 5967 5968 5969 5970 5971 5972 5973 5974 5975 5976 5977 5978 5979 5980 5981 5982 5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 5994 5995 5996 5997 5998 5999 6000 6001 6002 6003 6004 6005 6006 6007 6008 6009 6010 6011 6012 6013 6014 6015 6016 6017 6018 6019 6020 6021 6022 | ** The built-in page cache is recommended for most uses. ** ** ^(The contents of the sqlite3_pcache_methods structure are copied to an ** internal buffer by SQLite within the call to [sqlite3_config]. Hence ** the application may discard the parameter after the call to ** [sqlite3_config()] returns.)^ ** ** [[the xInit() page cache method]] ** ^(The xInit() method is called once for each effective ** call to [sqlite3_initialize()])^ ** (usually only once during the lifetime of the process). ^(The xInit() ** method is passed a copy of the sqlite3_pcache_methods.pArg value.)^ ** The intent of the xInit() method is to set up global data structures ** required by the custom page cache implementation. 
** ^(If the xInit() method is NULL, then the ** built-in default page cache is used instead of the application defined ** page cache.)^ ** ** [[the xShutdown() page cache method]] ** ^The xShutdown() method is called by [sqlite3_shutdown()]. ** It can be used to clean up ** any outstanding resources before process shutdown, if required. ** ^The xShutdown() method may be NULL. ** ** ^SQLite automatically serializes calls to the xInit method, ** so the xInit method need not be threadsafe. ^The ** xShutdown method is only called from [sqlite3_shutdown()] so it does ** not need to be threadsafe either. All other methods must be threadsafe ** in multithreaded applications. ** ** ^SQLite will never invoke xInit() more than once without an intervening ** call to xShutdown(). ** ** [[the xCreate() page cache methods]] ** ^SQLite invokes the xCreate() method to construct a new cache instance. ** SQLite will typically create one cache instance for each open database file, ** though this is not guaranteed. ^The ** first parameter, szPage, is the size in bytes of the pages that must ** be allocated by the cache. ^szPage will not be a power of two. ^szPage ** will the page size of the database file that is to be cached plus an ** increment (here called "R") of less than 250. SQLite will use the ** extra R bytes on each page to store metadata about the underlying ** database page on disk. The value of R depends ** on the SQLite version, the target platform, and how SQLite was compiled. ** ^(R is constant for a particular build of SQLite. Except, there are two ** distinct values of R when SQLite is compiled with the proprietary ** ZIPVFS extension.)^ ^The second argument to ** xCreate(), bPurgeable, is true if the cache being created will ** be used to cache database pages of a file stored on disk, or ** false if it is used for an in-memory database. The cache implementation ** does not have to do anything special based with the value of bPurgeable; ** it is purely advisory. ^On a cache where bPurgeable is false, SQLite will ** never invoke xUnpin() except to deliberately delete a page. ** ^In other words, calls to xUnpin() on a cache with bPurgeable set to ** false will always have the "discard" flag set to true. ** ^Hence, a cache created with bPurgeable false will ** never contain any unpinned pages. ** ** [[the xCachesize() page cache method]] ** ^(The xCachesize() method may be called at any time by SQLite to set the ** suggested maximum cache-size (number of pages stored by) the cache ** instance passed as the first argument. This is the value configured using ** the SQLite "[PRAGMA cache_size]" command.)^ As with the bPurgeable ** parameter, the implementation is not required to do anything with this ** value; it is advisory only. ** ** [[the xPagecount() page cache methods]] ** The xPagecount() method must return the number of pages currently ** stored in the cache, both pinned and unpinned. ** ** [[the xFetch() page cache methods]] ** The xFetch() method locates a page in the cache and returns a pointer to ** the page, or a NULL pointer. ** A "page", in this context, means a buffer of szPage bytes aligned at an ** 8-byte boundary. The page to be fetched is determined by the key. ^The ** mimimum key value is 1. After it has been retrieved using xFetch, the page ** is considered to be "pinned". ** ** If the requested page is already in the page cache, then the page cache ** implementation must return a pointer to the page buffer with its content ** intact. 
If the requested page is not already in the cache, then the ** cache implementation should use the value of the createFlag ** parameter to help it determine what action to take: ** ** <table border=1 width=85% align=center> ** <tr><th> createFlag <th> Behaviour when page is not already in cache ** <tr><td> 0 <td> Do not allocate a new page. Return NULL. ** <tr><td> 1 <td> Allocate a new page if it is easy and convenient to do so. ** Otherwise return NULL. ** <tr><td> 2 <td> Make every effort to allocate a new page. Only return ** NULL if allocating a new page is effectively impossible. ** </table> ** ** ^(SQLite will normally invoke xFetch() with a createFlag of 0 or 1. SQLite ** will only use a createFlag of 2 after a prior call with a createFlag of 1 ** failed.)^ In between the two xFetch() calls, SQLite may ** attempt to unpin one or more cache pages by spilling the content of ** pinned pages to disk and synching the operating system disk cache. ** ** [[the xUnpin() page cache method]] ** ^xUnpin() is called by SQLite with a pointer to a currently pinned page ** as its second argument. If the third parameter, discard, is non-zero, ** then the page must be evicted from the cache. ** ^If the discard parameter is ** zero, then the page may be discarded or retained at the discretion of ** the page cache implementation. ** ^The page cache implementation ** may choose to evict unpinned pages at any time. ** ** The cache must not perform any reference counting. A single ** call to xUnpin() unpins the page regardless of the number of prior calls ** to xFetch(). ** ** [[the xRekey() page cache methods]] ** The xRekey() method is used to change the key value associated with the ** page passed as the second argument. If the cache ** previously contains an entry associated with newKey, it must be ** discarded. ^Any prior cache entry associated with newKey is guaranteed not ** to be pinned. ** ** When SQLite calls the xTruncate() method, the cache must discard all ** existing cache entries with page numbers (keys) greater than or equal ** to the value of the iLimit parameter passed to xTruncate(). If any ** of these pages are pinned, they are implicitly unpinned, meaning that ** they can be safely discarded. ** ** [[the xDestroy() page cache method]] ** ^The xDestroy() method is used to delete a cache allocated by xCreate(). ** All resources associated with the specified cache should be freed. ^After ** calling the xDestroy() method, SQLite considers the [sqlite3_pcache*] ** handle invalid, and will not use it with any other sqlite3_pcache_methods ** functions. */ typedef struct sqlite3_pcache_methods sqlite3_pcache_methods; |
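The createFlag table above is the part implementers most often get wrong: 0 means never allocate, 1 means allocate only when cheap, 2 means try as hard as possible. A toy sketch of an xFetch() implementation, assuming a hypothetical direct-mapped page store; it is not a usable cache, it only illustrates the createFlag contract.

#include <string.h>
#include "sqlite3.h"

/* Hypothetical, deliberately tiny page store: one slot per bucket, no
** pinning or purging.  Because the documented minimum key value is 1,
** a stored key of 0 can mark an empty slot. */
#define TOY_NSLOT 32
typedef struct ToyCache {
  int szPage;                    /* page size handed to xCreate()      */
  unsigned aKey[TOY_NSLOT];      /* 0 means "slot empty"               */
  char aPage[TOY_NSLOT][4096];   /* assumes szPage <= 4096             */
} ToyCache;

static void *toyFetch(sqlite3_pcache *pCache, unsigned key, int createFlag){
  ToyCache *p = (ToyCache*)pCache;
  int i = key % TOY_NSLOT;
  if( p->aKey[i]==key ) return p->aPage[i];  /* already cached: return it  */
  if( createFlag==0 ) return 0;              /* 0: never allocate          */
  if( createFlag==1 && p->aKey[i]!=0 ){
    return 0;                                /* 1: allocate only if easy   */
  }
  p->aKey[i] = key;                          /* 2 (or empty slot): allocate */
  memset(p->aPage[i], 0, sizeof(p->aPage[i]));
  return p->aPage[i];
}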
︙ | ︙ | |||
5687 5688 5689 5690 5691 5692 5693 | ** ** The backup API copies the content of one database into another. ** It is useful either for creating backups of databases or ** for copying in-memory databases to or from persistent files. ** ** See Also: [Using the SQLite Online Backup API] ** | | | | | | > | | | | | | | 6051 6052 6053 6054 6055 6056 6057 6058 6059 6060 6061 6062 6063 6064 6065 6066 6067 6068 6069 6070 6071 6072 6073 6074 6075 6076 6077 6078 6079 6080 6081 6082 6083 6084 6085 6086 6087 6088 6089 6090 6091 6092 6093 6094 6095 6096 6097 6098 6099 6100 6101 6102 6103 6104 6105 6106 6107 6108 6109 6110 6111 6112 6113 6114 6115 6116 6117 6118 6119 6120 6121 6122 6123 6124 6125 6126 6127 6128 6129 6130 6131 6132 | ** ** The backup API copies the content of one database into another. ** It is useful either for creating backups of databases or ** for copying in-memory databases to or from persistent files. ** ** See Also: [Using the SQLite Online Backup API] ** ** ^SQLite holds a write transaction open on the destination database file ** for the duration of the backup operation. ** ^The source database is read-locked only while it is being read; ** it is not locked continuously for the entire backup operation. ** ^Thus, the backup may be performed on a live source database without ** preventing other database connections from ** reading or writing to the source database while the backup is underway. ** ** ^(To perform a backup operation: ** <ol> ** <li><b>sqlite3_backup_init()</b> is called once to initialize the ** backup, ** <li><b>sqlite3_backup_step()</b> is called one or more times to transfer ** the data between the two databases, and finally ** <li><b>sqlite3_backup_finish()</b> is called to release all resources ** associated with the backup operation. ** </ol>)^ ** There should be exactly one call to sqlite3_backup_finish() for each ** successful call to sqlite3_backup_init(). ** ** [[sqlite3_backup_init()]] <b>sqlite3_backup_init()</b> ** ** ^The D and N arguments to sqlite3_backup_init(D,N,S,M) are the ** [database connection] associated with the destination database ** and the database name, respectively. ** ^The database name is "main" for the main database, "temp" for the ** temporary database, or the name specified after the AS keyword in ** an [ATTACH] statement for an attached database. ** ^The S and M arguments passed to ** sqlite3_backup_init(D,N,S,M) identify the [database connection] ** and database name of the source database, respectively. ** ^The source and destination [database connections] (parameters S and D) ** must be different or else sqlite3_backup_init(D,N,S,M) will fail with ** an error. ** ** ^If an error occurs within sqlite3_backup_init(D,N,S,M), then NULL is ** returned and an error code and error message are stored in the ** destination [database connection] D. ** ^The error code and message for the failed call to sqlite3_backup_init() ** can be retrieved using the [sqlite3_errcode()], [sqlite3_errmsg()], and/or ** [sqlite3_errmsg16()] functions. ** ^A successful call to sqlite3_backup_init() returns a pointer to an ** [sqlite3_backup] object. ** ^The [sqlite3_backup] object may be used with the sqlite3_backup_step() and ** sqlite3_backup_finish() functions to perform the specified backup ** operation. ** ** [[sqlite3_backup_step()]] <b>sqlite3_backup_step()</b> ** ** ^Function sqlite3_backup_step(B,N) will copy up to N pages between ** the source and destination databases specified by [sqlite3_backup] object B. 
** ^If N is negative, all remaining source pages are copied. ** ^If sqlite3_backup_step(B,N) successfully copies N pages and there ** are still more pages to be copied, then the function returns [SQLITE_OK]. ** ^If sqlite3_backup_step(B,N) successfully finishes copying all pages ** from source to destination, then it returns [SQLITE_DONE]. ** ^If an error occurs while running sqlite3_backup_step(B,N), ** then an [error code] is returned. ^As well as [SQLITE_OK] and ** [SQLITE_DONE], a call to sqlite3_backup_step() may return [SQLITE_READONLY], ** [SQLITE_NOMEM], [SQLITE_BUSY], [SQLITE_LOCKED], or an ** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX] extended error code. ** ** ^(The sqlite3_backup_step() might return [SQLITE_READONLY] if ** <ol> ** <li> the destination database was opened read-only, or ** <li> the destination database is using write-ahead-log journaling ** and the destination and source page sizes differ, or ** <li> the destination database is an in-memory database and the ** destination and source page sizes differ. ** </ol>)^ ** ** ^If sqlite3_backup_step() cannot obtain a required file-system lock, then ** the [sqlite3_busy_handler | busy-handler function] ** is invoked (if one is specified). ^If the ** busy-handler returns non-zero before the lock is available, then |
︙ | ︙ | |||
5790 5791 5792 5793 5794 5795 5796 | ** external process or via a database connection other than the one being ** used by the backup operation, then the backup will be automatically ** restarted by the next call to sqlite3_backup_step(). ^If the source ** database is modified by the using the same database connection as is used ** by the backup operation, then the backup database is automatically ** updated at the same time. ** | | | 6155 6156 6157 6158 6159 6160 6161 6162 6163 6164 6165 6166 6167 6168 6169 | ** external process or via a database connection other than the one being ** used by the backup operation, then the backup will be automatically ** restarted by the next call to sqlite3_backup_step(). ^If the source ** database is modified by the using the same database connection as is used ** by the backup operation, then the backup database is automatically ** updated at the same time. ** ** [[sqlite3_backup_finish()]] <b>sqlite3_backup_finish()</b> ** ** When sqlite3_backup_step() has returned [SQLITE_DONE], or when the ** application wishes to abandon the backup operation, the application ** should destroy the [sqlite3_backup] by passing it to sqlite3_backup_finish(). ** ^The sqlite3_backup_finish() interfaces releases all ** resources associated with the [sqlite3_backup] object. ** ^If sqlite3_backup_step() has not yet returned [SQLITE_DONE], then any |
︙ | ︙ | |||
5813 5814 5815 5816 5817 5818 5819 | ** sqlite3_backup_step() call on the same [sqlite3_backup] object, then ** sqlite3_backup_finish() returns the corresponding [error code]. ** ** ^A return of [SQLITE_BUSY] or [SQLITE_LOCKED] from sqlite3_backup_step() ** is not a permanent error and does not affect the return value of ** sqlite3_backup_finish(). ** | > | | 6178 6179 6180 6181 6182 6183 6184 6185 6186 6187 6188 6189 6190 6191 6192 6193 | ** sqlite3_backup_step() call on the same [sqlite3_backup] object, then ** sqlite3_backup_finish() returns the corresponding [error code]. ** ** ^A return of [SQLITE_BUSY] or [SQLITE_LOCKED] from sqlite3_backup_step() ** is not a permanent error and does not affect the return value of ** sqlite3_backup_finish(). ** ** [[sqlite3_backup_remaining()]] [[sqlite3_backup_pagecount()]] ** <b>sqlite3_backup_remaining() and sqlite3_backup_pagecount()</b> ** ** ^Each call to sqlite3_backup_step() sets two values inside ** the [sqlite3_backup] object: the number of pages still to be backed ** up and the total number of pages in the source database file. ** The sqlite3_backup_remaining() and sqlite3_backup_pagecount() interfaces ** retrieve these two values, respectively. ** |
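Taken together, the init/step/finish interfaces described in this and the preceding hunks form one simple loop. A sketch of the usual pattern; the wrapper name, the 64-pages-per-step batch size, and the 250 ms back-off are arbitrary choices, while the API calls are the documented ones.

#include "sqlite3.h"

/* Hypothetical wrapper: copy the "main" database of pFrom into pTo using
** the backup API documented above. */
static int copy_database(sqlite3 *pTo, sqlite3 *pFrom){
  int rc;
  sqlite3_backup *pBackup = sqlite3_backup_init(pTo, "main", pFrom, "main");
  if( pBackup==0 ){
    return sqlite3_errcode(pTo);   /* init failure leaves its error in pTo */
  }
  do{
    rc = sqlite3_backup_step(pBackup, 64);
    /* progress, if wanted: sqlite3_backup_remaining(pBackup) of
    ** sqlite3_backup_pagecount(pBackup) pages still to copy */
    if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(250);
  }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
  return sqlite3_backup_finish(pBackup);   /* SQLITE_OK on a clean finish */
}

Because the source database is only read-locked while pages are actually being copied, a loop like this can run against a live database; writes to the source simply cause the backup to restart on the next step, as described above.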
︙ | ︙ | |||
6084 6085 6086 6087 6088 6089 6090 | ** using [sqlite3_wal_hook()] disables the automatic checkpoint mechanism ** configured by this function. ** ** ^The [wal_autocheckpoint pragma] can be used to invoke this interface ** from SQL. ** ** ^Every new [database connection] defaults to having the auto-checkpoint | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 6450 6451 6452 6453 6454 6455 6456 6457 6458 6459 6460 6461 6462 6463 6464 6465 6466 6467 6468 6469 6470 6471 6472 6473 6474 6475 6476 6477 6478 6479 6480 6481 6482 6483 6484 6485 6486 6487 6488 6489 6490 6491 6492 6493 6494 6495 6496 6497 6498 6499 6500 6501 6502 6503 6504 6505 6506 6507 6508 6509 6510 6511 6512 6513 6514 6515 6516 6517 6518 6519 6520 6521 6522 6523 6524 6525 6526 6527 6528 6529 6530 6531 6532 6533 6534 6535 6536 6537 6538 6539 6540 6541 6542 6543 6544 6545 6546 6547 6548 6549 6550 6551 6552 6553 6554 6555 6556 6557 6558 6559 6560 6561 6562 6563 6564 6565 6566 6567 6568 6569 6570 6571 6572 6573 6574 6575 6576 6577 6578 6579 6580 6581 6582 6583 6584 6585 6586 6587 6588 6589 6590 6591 6592 6593 6594 6595 6596 6597 6598 6599 6600 6601 6602 6603 6604 6605 6606 6607 6608 6609 6610 6611 6612 6613 6614 6615 6616 6617 6618 6619 6620 6621 6622 6623 6624 6625 6626 6627 6628 6629 6630 6631 6632 6633 6634 6635 6636 6637 6638 6639 6640 6641 6642 6643 6644 6645 6646 6647 6648 6649 6650 6651 6652 6653 6654 6655 6656 6657 6658 6659 6660 6661 6662 6663 6664 6665 | ** using [sqlite3_wal_hook()] disables the automatic checkpoint mechanism ** configured by this function. ** ** ^The [wal_autocheckpoint pragma] can be used to invoke this interface ** from SQL. ** ** ^Every new [database connection] defaults to having the auto-checkpoint ** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT] ** pages. The use of this interface ** is only necessary if the default setting is found to be suboptimal ** for a particular application. */ SQLITE_API int sqlite3_wal_autocheckpoint(sqlite3 *db, int N); /* ** CAPI3REF: Checkpoint a database ** ** ^The [sqlite3_wal_checkpoint(D,X)] interface causes database named X ** on [database connection] D to be [checkpointed]. ^If X is NULL or an ** empty string, then a checkpoint is run on all databases of ** connection D. ^If the database connection D is not in ** [WAL | write-ahead log mode] then this interface is a harmless no-op. ** ** ^The [wal_checkpoint pragma] can be used to invoke this interface ** from SQL. ^The [sqlite3_wal_autocheckpoint()] interface and the ** [wal_autocheckpoint pragma] can be used to cause this interface to be ** run whenever the WAL reaches a certain size threshold. ** ** See also: [sqlite3_wal_checkpoint_v2()] */ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb); /* ** CAPI3REF: Checkpoint a database ** ** Run a checkpoint operation on WAL database zDb attached to database ** handle db. The specific operation is determined by the value of the ** eMode parameter: ** ** <dl> ** <dt>SQLITE_CHECKPOINT_PASSIVE<dd> ** Checkpoint as many frames as possible without waiting for any database ** readers or writers to finish. Sync the db file if all frames in the log ** are checkpointed. 
This mode is the same as calling ** sqlite3_wal_checkpoint(). The busy-handler callback is never invoked. ** ** <dt>SQLITE_CHECKPOINT_FULL<dd> ** This mode blocks (calls the busy-handler callback) until there is no ** database writer and all readers are reading from the most recent database ** snapshot. It then checkpoints all frames in the log file and syncs the ** database file. This call blocks database writers while it is running, ** but not database readers. ** ** <dt>SQLITE_CHECKPOINT_RESTART<dd> ** This mode works the same way as SQLITE_CHECKPOINT_FULL, except after ** checkpointing the log file it blocks (calls the busy-handler callback) ** until all readers are reading from the database file only. This ensures ** that the next client to write to the database file restarts the log file ** from the beginning. This call blocks database writers while it is running, ** but not database readers. ** </dl> ** ** If pnLog is not NULL, then *pnLog is set to the total number of frames in ** the log file before returning. If pnCkpt is not NULL, then *pnCkpt is set to ** the total number of checkpointed frames (including any that were already ** checkpointed when this function is called). *pnLog and *pnCkpt may be ** populated even if sqlite3_wal_checkpoint_v2() returns other than SQLITE_OK. ** If no values are available because of an error, they are both set to -1 ** before returning to communicate this to the caller. ** ** All calls obtain an exclusive "checkpoint" lock on the database file. If ** any other process is running a checkpoint operation at the same time, the ** lock cannot be obtained and SQLITE_BUSY is returned. Even if there is a ** busy-handler configured, it will not be invoked in this case. ** ** The SQLITE_CHECKPOINT_FULL and RESTART modes also obtain the exclusive ** "writer" lock on the database file. If the writer lock cannot be obtained ** immediately, and a busy-handler is configured, it is invoked and the writer ** lock retried until either the busy-handler returns 0 or the lock is ** successfully obtained. The busy-handler is also invoked while waiting for ** database readers as described above. If the busy-handler returns 0 before ** the writer lock is obtained or while waiting for database readers, the ** checkpoint operation proceeds from that point in the same way as ** SQLITE_CHECKPOINT_PASSIVE - checkpointing as many frames as possible ** without blocking any further. SQLITE_BUSY is returned in this case. ** ** If parameter zDb is NULL or points to a zero length string, then the ** specified operation is attempted on all WAL databases. In this case the ** values written to output parameters *pnLog and *pnCkpt are undefined. If ** an SQLITE_BUSY error is encountered when processing one or more of the ** attached WAL databases, the operation is still attempted on any remaining ** attached databases and SQLITE_BUSY is returned to the caller. If any other ** error occurs while processing an attached database, processing is abandoned ** and the error code returned to the caller immediately. If no error ** (SQLITE_BUSY or otherwise) is encountered while processing the attached ** databases, SQLITE_OK is returned. ** ** If database zDb is the name of an attached database that is not in WAL ** mode, SQLITE_OK is returned and both *pnLog and *pnCkpt set to -1. If ** zDb is not NULL (or a zero length string) and is not the name of any ** attached database, SQLITE_ERROR is returned to the caller. 
*/ SQLITE_API int sqlite3_wal_checkpoint_v2( sqlite3 *db, /* Database handle */ const char *zDb, /* Name of attached database (or NULL) */ int eMode, /* SQLITE_CHECKPOINT_* value */ int *pnLog, /* OUT: Size of WAL log in frames */ int *pnCkpt /* OUT: Total number of frames checkpointed */ ); /* ** CAPI3REF: Checkpoint operation parameters ** ** These constants can be used as the 3rd parameter to ** [sqlite3_wal_checkpoint_v2()]. See the [sqlite3_wal_checkpoint_v2()] ** documentation for additional information about the meaning and use of ** each of these values. */ #define SQLITE_CHECKPOINT_PASSIVE 0 #define SQLITE_CHECKPOINT_FULL 1 #define SQLITE_CHECKPOINT_RESTART 2 /* ** CAPI3REF: Virtual Table Interface Configuration ** ** This function may be called by either the [xConnect] or [xCreate] method ** of a [virtual table] implementation to configure ** various facets of the virtual table interface. ** ** If this interface is invoked outside the context of an xConnect or ** xCreate virtual table method then the behavior is undefined. ** ** At present, there is only one option that may be configured using ** this function. (See [SQLITE_VTAB_CONSTRAINT_SUPPORT].) Further options ** may be added in the future. */ SQLITE_API int sqlite3_vtab_config(sqlite3*, int op, ...); /* ** CAPI3REF: Virtual Table Configuration Options ** ** These macros define the various options to the ** [sqlite3_vtab_config()] interface that [virtual table] implementations ** can use to customize and optimize their behavior. ** ** <dl> ** <dt>SQLITE_VTAB_CONSTRAINT_SUPPORT ** <dd>Calls of the form ** [sqlite3_vtab_config](db,SQLITE_VTAB_CONSTRAINT_SUPPORT,X) are supported, ** where X is an integer. If X is zero, then the [virtual table] whose ** [xCreate] or [xConnect] method invoked [sqlite3_vtab_config()] does not ** support constraints. In this configuration (which is the default) if ** a call to the [xUpdate] method returns [SQLITE_CONSTRAINT], then the entire ** statement is rolled back as if [ON CONFLICT | OR ABORT] had been ** specified as part of the users SQL statement, regardless of the actual ** ON CONFLICT mode specified. ** ** If X is non-zero, then the virtual table implementation guarantees ** that if [xUpdate] returns [SQLITE_CONSTRAINT], it will do so before ** any modifications to internal or persistent data structures have been made. ** If the [ON CONFLICT] mode is ABORT, FAIL, IGNORE or ROLLBACK, SQLite ** is able to roll back a statement or database transaction, and abandon ** or continue processing the current SQL statement as appropriate. ** If the ON CONFLICT mode is REPLACE and the [xUpdate] method returns ** [SQLITE_CONSTRAINT], SQLite handles this as if the ON CONFLICT mode ** had been ABORT. ** ** Virtual table implementations that are required to handle OR REPLACE ** must do so within the [xUpdate] method. If a call to the ** [sqlite3_vtab_on_conflict()] function indicates that the current ON ** CONFLICT policy is REPLACE, the virtual table implementation should ** silently replace the appropriate rows within the xUpdate callback and ** return SQLITE_OK. Or, if this is not possible, it may return ** SQLITE_CONSTRAINT, in which case SQLite falls back to OR ABORT ** constraint handling. ** </dl> */ #define SQLITE_VTAB_CONSTRAINT_SUPPORT 1 /* ** CAPI3REF: Determine The Virtual Table Conflict Policy ** ** This function may only be called from within a call to the [xUpdate] method ** of a [virtual table] implementation for an INSERT or UPDATE operation. 
^The ** value returned is one of [SQLITE_ROLLBACK], [SQLITE_IGNORE], [SQLITE_FAIL], ** [SQLITE_ABORT], or [SQLITE_REPLACE], according to the [ON CONFLICT] mode ** of the SQL statement that triggered the call to the [xUpdate] method of the ** [virtual table]. */ SQLITE_API int sqlite3_vtab_on_conflict(sqlite3 *); /* ** CAPI3REF: Conflict resolution modes ** ** These constants are returned by [sqlite3_vtab_on_conflict()] to ** inform a [virtual table] implementation what the [ON CONFLICT] mode ** is for the SQL statement being evaluated. ** ** Note that the [SQLITE_IGNORE] constant is also used as a potential ** return value from the [sqlite3_set_authorizer()] callback and that ** [SQLITE_ABORT] is also a [result code]. */ #define SQLITE_ROLLBACK 1 /* #define SQLITE_IGNORE 2 // Also used by sqlite3_authorizer() callback */ #define SQLITE_FAIL 3 /* #define SQLITE_ABORT 4 // Also an error code */ #define SQLITE_REPLACE 5 /* ** Undo the hack that converts floating point types to integer for ** builds on processors without floating point support. */ #ifdef SQLITE_OMIT_FLOATING_POINT # undef double |
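For the sqlite3_wal_checkpoint_v2() interface declared earlier in this hunk, a short example; the wrapper name is invented, while the call signature, the SQLITE_CHECKPOINT_FULL mode, and the meaning of the two output counts are as documented above.

#include <stdio.h>
#include "sqlite3.h"

/* Hypothetical wrapper: force a full checkpoint of the "main" database and
** report how much of the WAL was transferred.  Per the documentation above,
** SQLITE_BUSY means the busy-handler gave up or another checkpointer was
** active; the counts may still be populated (or set to -1). */
static int checkpoint_main(sqlite3 *db){
  int nLog = 0, nCkpt = 0;
  int rc = sqlite3_wal_checkpoint_v2(db, "main", SQLITE_CHECKPOINT_FULL,
                                     &nLog, &nCkpt);
  if( rc==SQLITE_OK || rc==SQLITE_BUSY ){
    printf("wal frames: %d, checkpointed: %d\n", nLog, nCkpt);
  }
  return rc;
}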
︙ | ︙ |
Added src/stash.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 | /* ** Copyright (c) 2010 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@sqlite.org ** ******************************************************************************* ** ** This file contains code used to implement the "stash" command. 
*/ #include "config.h" #include "stash.h" #include <assert.h> /* ** SQL code to implement the tables needed by the stash. */ static const char zStashInit[] = @ CREATE TABLE IF NOT EXISTS %s.stash( @ stashid INTEGER PRIMARY KEY, -- Unique stash identifier @ vid INTEGER, -- The baseline check-out for this stash @ comment TEXT, -- Comment for this stash. Or NULL @ ctime TIMESTAMP -- When the stash was created @ ); @ CREATE TABLE IF NOT EXISTS %s.stashfile( @ stashid INTEGER REFERENCES stash, -- Stash that contains this file @ rid INTEGER, -- Baseline content in BLOB table or 0. @ isAdded BOOLEAN, -- True if this is an added file @ isRemoved BOOLEAN, -- True if this file is deleted @ isExec BOOLEAN, -- True if file is executable @ origname TEXT, -- Original filename @ newname TEXT, -- New name for file at next check-in @ delta BLOB, -- Delta from baseline. Content if rid=0 @ PRIMARY KEY(origname, stashid) @ ); @ INSERT OR IGNORE INTO vvar(name, value) VALUES('stash-next', 1); ; /* ** Add zFName to the stash given by stashid. zFName might be the name of a ** file or a directory. If a directory, add all changed files contained ** within that directory. */ static void stash_add_file_or_dir(int stashid, int vid, const char *zFName){ char *zFile; /* Normalized filename */ char *zTreename; /* Name of the file in the tree */ Blob fname; /* Filename relative to root */ Blob sql; /* Query statement text */ Stmt q; /* Query against the vfile table */ Stmt ins; /* Insert statement */ zFile = mprintf("%/", zFName); file_tree_name(zFile, &fname, 1); zTreename = blob_str(&fname); blob_zero(&sql); blob_appendf(&sql, "SELECT deleted, isexe, mrid, pathname, coalesce(origname,pathname)" " FROM vfile" " WHERE vid=%d AND (chnged OR deleted OR origname NOT NULL OR mrid==0)", vid ); if( fossil_strcmp(zTreename,".")!=0 ){ blob_appendf(&sql, " AND (pathname GLOB '%q/*' OR origname GLOB '%q/*'" " OR pathname=%Q OR origname=%Q)", zTreename, zTreename, zTreename, zTreename ); } db_prepare(&q, blob_str(&sql)); blob_reset(&sql); db_prepare(&ins, "INSERT INTO stashfile(stashid, rid, isAdded, isRemoved, isExec," "origname, newname, delta)" "VALUES(%d,:rid,:isadd,:isrm,:isexe,:orig,:new,:content)", stashid ); while( db_step(&q)==SQLITE_ROW ){ int deleted = db_column_int(&q, 0); int rid = db_column_int(&q, 2); const char *zName = db_column_text(&q, 3); const char *zOrig = db_column_text(&q, 4); char *zPath = mprintf("%s%s", g.zLocalRoot, zName); Blob content; db_bind_int(&ins, ":rid", rid); db_bind_int(&ins, ":isadd", rid==0); db_bind_int(&ins, ":isrm", deleted); db_bind_int(&ins, ":isexe", db_column_int(&q, 1)); db_bind_text(&ins, ":orig", zOrig); db_bind_text(&ins, ":new", zName); if( rid==0 ){ /* A new file */ blob_read_from_file(&content, zPath); db_bind_blob(&ins, ":content", &content); }else if( deleted ){ db_bind_null(&ins, ":content"); }else{ /* A modified file */ Blob orig; Blob disk; blob_read_from_file(&disk, zPath); content_get(rid, &orig); blob_delta_create(&orig, &disk, &content); blob_reset(&orig); blob_reset(&disk); db_bind_blob(&ins, ":content", &content); } db_step(&ins); db_reset(&ins); fossil_free(zPath); blob_reset(&content); } db_finalize(&ins); db_finalize(&q); fossil_free(zFile); blob_reset(&fname); } /* ** Create a new stash based on the uncommitted changes currently in ** the working directory. ** ** If the "-m" or "--comment" command-line option is present, gather ** its argument as the stash comment. ** ** If files are named on the command-line, then only stash the named ** files. 
*/ static int stash_create(void){ const char *zComment; /* Comment to add to the stash */ int stashid; /* ID of the new stash */ int vid; /* Current checkout */ zComment = find_option("comment", "m", 1); verify_all_options(); stashid = db_lget_int("stash-next", 1); db_lset_int("stash-next", stashid+1); vid = db_lget_int("checkout", 0); vfile_check_signature(vid, 0, 0); db_multi_exec( "INSERT INTO stash(stashid,vid,comment,ctime)" "VALUES(%d,%d,%Q,julianday('now'))", stashid, vid, zComment ); if( g.argc>3 ){ int i; for(i=3; i<g.argc; i++){ stash_add_file_or_dir(stashid, vid, g.argv[i]); } }else{ stash_add_file_or_dir(stashid, vid, "."); } return stashid; } /* ** Apply a stash to the current check-out. */ static void stash_apply(int stashid, int nConflict){ Stmt q; db_prepare(&q, "SELECT rid, isRemoved, isExec, origname, newname, delta" " FROM stashfile WHERE stashid=%d", stashid ); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); int isRemoved = db_column_int(&q, 1); int isExec = db_column_int(&q, 2); const char *zOrig = db_column_text(&q, 3); const char *zNew = db_column_text(&q, 4); char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); char *zNPath = mprintf("%s%s", g.zLocalRoot, zNew); Blob delta; undo_save(zNew); blob_zero(&delta); if( rid==0 ){ db_ephemeral_blob(&q, 5, &delta); blob_write_to_file(&delta, zNPath); file_setexe(zNPath, isExec); printf("ADD %s\n", zNew); }else if( isRemoved ){ printf("DELETE %s\n", zOrig); unlink(zOPath); }else{ Blob a, b, out, disk; db_ephemeral_blob(&q, 5, &delta); blob_read_from_file(&disk, zOPath); content_get(rid, &a); blob_delta_apply(&a, &delta, &b); if( blob_compare(&disk, &a)==0 ){ blob_write_to_file(&b, zNPath); file_setexe(zNPath, isExec); printf("UPDATE %s\n", zNew); }else{ int rc = merge_3way(&a, zOPath, &b, &out); blob_write_to_file(&out, zNPath); file_setexe(zNPath, isExec); if( rc ){ printf("CONFLICT %s\n", zNew); nConflict++; }else{ printf("MERGE %s\n", zNew); } blob_reset(&out); } blob_reset(&a); blob_reset(&b); blob_reset(&disk); } blob_reset(&delta); if( fossil_strcmp(zOrig,zNew)!=0 ){ undo_save(zOrig); unlink(zOPath); } } db_finalize(&q); if( nConflict ){ printf("WARNING: merge conflicts - see messages above for details.\n"); } } /* ** Show the diffs associate with a single stash. 
*/ static void stash_diff(int stashid, const char *zDiffCmd){ Stmt q; Blob empty; blob_zero(&empty); db_prepare(&q, "SELECT rid, isRemoved, isExec, origname, newname, delta" " FROM stashfile WHERE stashid=%d", stashid ); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); int isRemoved = db_column_int(&q, 1); const char *zOrig = db_column_text(&q, 3); const char *zNew = db_column_text(&q, 4); char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); Blob delta; if( rid==0 ){ db_ephemeral_blob(&q, 5, &delta); printf("ADDED %s\n", zNew); diff_print_index(zNew); diff_file_mem(&empty, &delta, zNew, zDiffCmd, 0); }else if( isRemoved ){ printf("DELETE %s\n", zOrig); blob_read_from_file(&delta, zOPath); diff_print_index(zNew); diff_file_mem(&delta, &empty, zOrig, zDiffCmd, 0); }else{ Blob a, b, disk; db_ephemeral_blob(&q, 5, &delta); blob_read_from_file(&disk, zOPath); content_get(rid, &a); blob_delta_apply(&a, &delta, &b); printf("CHANGED %s\n", zNew); diff_file_mem(&disk, &b, zNew, zDiffCmd, 0); blob_reset(&a); blob_reset(&b); blob_reset(&disk); } blob_reset(&delta); } db_finalize(&q); } /* ** Drop the indicates stash */ static void stash_drop(int stashid){ db_multi_exec( "DELETE FROM stash WHERE stashid=%d;" "DELETE FROM stashfile WHERE stashid=%d;", stashid, stashid ); } /* ** If zStashId is non-NULL then interpret is as a stash number and ** return that number. Or throw a fatal error if it is not a valid ** stash number. If it is NULL, return the most recent stash or ** throw an error if the stash is empty. */ static int stash_get_id(const char *zStashId){ int stashid = 0; if( zStashId==0 ){ stashid = db_int(0, "SELECT max(stashid) FROM stash"); if( stashid==0 ) fossil_fatal("empty stash"); }else{ stashid = atoi(zStashId); if( !db_exists("SELECT 1 FROM stash WHERE stashid=%d", stashid) ){ fossil_fatal("no such stash: %d\n", stashid); } } return stashid; } /* ** COMMAND: stash ** ** Usage: %fossil stash SUBCOMMAND ARGS... ** ** fossil stash ** fossil stash save ?-m COMMENT? ?FILES...? ** ** Save the current changes in the working tree as a new stash. ** Then revert the changes back to the last check-in. If FILES ** are listed, then only stash and revert the named files. The ** "save" verb can be omitted if and only if there are no other ** arguments. ** ** fossil stash list ** fossil stash ls ** ** List all changes sets currently stashed. ** ** fossil stash pop ** ** Apply the most recently create stash to the current working ** check-out. Then delete that stash. This is equivalent to ** doing an "apply" and a "drop" against the most recent stash. ** This command is undoable. ** ** fossil stash apply ?STASHID? ** ** Apply the identified stash to the current working check-out. ** If no STASHID is specified, use the most recent stash. Unlike ** the "pop" command, the stash is retained so that it can be used ** again. This command is undoable. ** ** fossil stash goto ?STASHID? ** ** Update to the baseline checkout for STASHID then apply the ** changes of STASHID. Keep STASHID so that it can be reused ** This command is undoable. ** ** fossil stash drop ?STASHID? ?--all? ** ** Forget everything about STASHID. Forget the whole stash if the ** --all flag is used. Individual drops are undoable but --all is not. ** ** fossil stash snapshot ?-m COMMENT? ?FILES...? ** ** Save the current changes in the working tress as a new stash ** but, unlike "save", do not revert those changes. ** ** fossil stash diff ?STASHID? ** fossil stash gdiff ?STASHID? 
** ** Show diffs of the current working directory and what that ** directory would be if STASHID were applied. */ void stash_cmd(void){ const char *zDb; const char *zCmd; int nCmd; int stashid; undo_capture_command_line(); db_must_be_within_tree(); db_begin_transaction(); zDb = db_name("localdb"); db_multi_exec(zStashInit, zDb, zDb); if( g.argc<=2 ){ zCmd = "save"; }else{ zCmd = g.argv[2]; } nCmd = strlen(zCmd); if( memcmp(zCmd, "save", nCmd)==0 ){ stashid = stash_create(); undo_disable(); if( g.argc>=2 ){ int nFile = db_int(0, "SELECT count(*) FROM stashfile WHERE stashid=%d", stashid); char **newArgv = fossil_malloc( sizeof(char*)*(nFile+2) ); int i = 2; Stmt q; db_prepare(&q,"SELECT origname FROM stashfile WHERE stashid=%d", stashid); while( db_step(&q)==SQLITE_ROW ){ newArgv[i++] = mprintf("%s%s", g.zLocalRoot, db_column_text(&q, 0)); } db_finalize(&q); newArgv[0] = g.argv[0]; g.argv = newArgv; g.argc = nFile+2; if( nFile==0 ) return; } g.argv[1] = "revert"; revert_cmd(); }else if( memcmp(zCmd, "snapshot", nCmd)==0 ){ stash_create(); }else if( memcmp(zCmd, "list", nCmd)==0 || memcmp(zCmd, "ls", nCmd)==0 ){ Stmt q; int n = 0; verify_all_options(); db_prepare(&q, "SELECT stashid, (SELECT uuid FROM blob WHERE rid=vid)," " comment, datetime(ctime) FROM stash" " ORDER BY ctime DESC" ); while( db_step(&q)==SQLITE_ROW ){ const char *zCom; n++; printf("%5d: [%.14s] on %s\n", db_column_int(&q, 0), db_column_text(&q, 1), db_column_text(&q, 3) ); zCom = db_column_text(&q, 2); if( zCom && zCom[0] ){ printf(" "); comment_print(zCom, 7, 79); } } db_finalize(&q); if( n==0 ) printf("empty stash\n"); }else if( memcmp(zCmd, "drop", nCmd)==0 ){ int allFlag = find_option("all", 0, 0)!=0; if( g.argc>4 ) usage("stash apply STASHID"); if( allFlag ){ db_multi_exec("DELETE FROM stash; DELETE FROM stashfile;"); }else{ stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); undo_begin(); undo_save_stash(stashid); stash_drop(stashid); undo_finish(); } }else if( memcmp(zCmd, "pop", nCmd)==0 ){ if( g.argc>3 ) usage("stash pop"); stashid = stash_get_id(0); undo_begin(); stash_apply(stashid, 0); undo_save_stash(stashid); undo_finish(); stash_drop(stashid); }else if( memcmp(zCmd, "apply", nCmd)==0 ){ if( g.argc>4 ) usage("stash apply STASHID"); stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); undo_begin(); stash_apply(stashid, 0); undo_finish(); }else if( memcmp(zCmd, "goto", nCmd)==0 ){ int nConflict; int vid; if( g.argc>4 ) usage("stash apply STASHID"); stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); undo_begin(); vid = db_int(0, "SELECT vid FROM stash WHERE stashid=%d", stashid); nConflict = update_to(vid); stash_apply(stashid, nConflict); db_multi_exec("UPDATE vfile SET mtime=0 WHERE pathname IN " "(SELECT origname FROM stashfile WHERE stashid=%d)", stashid); undo_finish(); }else if( memcmp(zCmd, "diff", nCmd)==0 ){ const char *zDiffCmd = db_get("diff-command", 0); if( g.argc>4 ) usage("stash diff STASHID"); stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); stash_diff(stashid, zDiffCmd); }else if( memcmp(zCmd, "gdiff", nCmd)==0 ){ const char *zDiffCmd = db_get("gdiff-command", 0); if( g.argc>4 ) usage("stash diff STASHID"); stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); stash_diff(stashid, zDiffCmd); }else { usage("SUBCOMMAND ARGS..."); } db_end_transaction(0); } |
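stash.c stores each changed file as a delta against its baseline blob and rebuilds the file when the stash is applied. A condensed sketch of that round trip, using the same Blob helpers that stash_add_file_or_dir() and stash_apply() call above; the two wrapper function names are hypothetical and the sketch assumes Fossil's internal headers.

/* Illustrative sketch (not part of this check-in): the delta round trip
** behind stash save/apply. */
static void saveStashDelta(int rid, const char *zPath, Blob *pDelta){
  Blob orig, disk;
  content_get(rid, &orig);             /* baseline content from the BLOB table */
  blob_read_from_file(&disk, zPath);   /* current content on disk */
  blob_delta_create(&orig, &disk, pDelta);  /* store only the difference */
  blob_reset(&orig);
  blob_reset(&disk);
}

static void restoreStashFile(int rid, Blob *pDelta, const char *zPath){
  Blob orig, out;
  content_get(rid, &orig);                 /* same baseline as at save time */
  blob_delta_apply(&orig, pDelta, &out);   /* baseline + delta = stashed file */
  blob_write_to_file(&out, zPath);
  blob_reset(&orig);
  blob_reset(&out);
}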
Changes to src/stat.c.
︙ | ︙ | |||
27 28 29 30 31 32 33 34 35 36 37 38 39 40 | ** ** Show statistics and global information about the repository. */ void stat_page(void){ i64 t, fsize; int n, m; int szMax, szAvg; char zBuf[100]; login_check_credentials(); if( !g.okRead ){ login_needed(); return; } style_header("Repository Statistics"); @ <table class="label-value"> @ <tr><th>Repository Size:</th><td> | > | 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 | ** ** Show statistics and global information about the repository. */ void stat_page(void){ i64 t, fsize; int n, m; int szMax, szAvg; const char *zDb; char zBuf[100]; login_check_credentials(); if( !g.okRead ){ login_needed(); return; } style_header("Repository Statistics"); @ <table class="label-value"> @ <tr><th>Repository Size:</th><td> |
︙ | ︙ | |||
106 107 108 109 110 111 112 113 114 115 | @ <tr><th>Fossil Version:</th><td> @ %h(MANIFEST_DATE) %h(MANIFEST_VERSION) (%h(COMPILER_NAME)) @ </td></tr> @ <tr><th>SQLite Version:</th><td> sqlite3_snprintf(sizeof(zBuf), zBuf, "%.19s [%.10s] (%s)", SQLITE_SOURCE_ID, &SQLITE_SOURCE_ID[20], SQLITE_VERSION); @ %s(zBuf) @ </td></tr> @ <tr><th>Database Stats:</th><td> | > | | | | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 | @ <tr><th>Fossil Version:</th><td> @ %h(MANIFEST_DATE) %h(MANIFEST_VERSION) (%h(COMPILER_NAME)) @ </td></tr> @ <tr><th>SQLite Version:</th><td> sqlite3_snprintf(sizeof(zBuf), zBuf, "%.19s [%.10s] (%s)", SQLITE_SOURCE_ID, &SQLITE_SOURCE_ID[20], SQLITE_VERSION); zDb = db_name("repository"); @ %s(zBuf) @ </td></tr> @ <tr><th>Database Stats:</th><td> @ %d(db_int(0, "PRAGMA %s.page_count", zDb)) pages, @ %d(db_int(0, "PRAGMA %s.page_size", zDb)) bytes/page, @ %d(db_int(0, "PRAGMA %s.freelist_count", zDb)) free pages, @ %s(db_text(0, "PRAGMA %s.encoding", zDb)), @ %s(db_text(0, "PRAGMA %s.journal_mode", zDb)) mode @ </td></tr> @ </table> style_footer(); } |
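The db_int(0, "PRAGMA %s.page_count", zDb) calls added here format the attached-database name into a one-value query. For readers unfamiliar with what that amounts to, a sketch in terms of the raw SQLite API; read_pragma_int() is a hypothetical helper, not a Fossil function, and it assumes the database name comes from the program itself rather than user input.

#include "sqlite3.h"

/* Hypothetical helper: evaluate a single-value PRAGMA such as
** "PRAGMA repository.page_count" and return its first column. */
static int read_pragma_int(sqlite3 *db, const char *zDbName, const char *zPragma){
  char *zSql = sqlite3_mprintf("PRAGMA %s.%s", zDbName, zPragma);
  sqlite3_stmt *pStmt = 0;
  int value = 0;
  if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK
   && sqlite3_step(pStmt)==SQLITE_ROW ){
    value = sqlite3_column_int(pStmt, 0);
  }
  sqlite3_finalize(pStmt);
  sqlite3_free(zSql);
  return value;
}

A call such as read_pragma_int(db, "repository", "freelist_count") would mirror what the db_int() lines above do through Fossil's database layer.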
Changes to src/style.c.
︙ | ︙ | |||
68 69 70 71 72 73 74 | /* ** Compare two submenu items for sorting purposes */ static int submenuCompare(const void *a, const void *b){ const struct Submenu *A = (const struct Submenu*)a; const struct Submenu *B = (const struct Submenu*)b; | | | < < > | 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 | /* ** Compare two submenu items for sorting purposes */ static int submenuCompare(const void *a, const void *b){ const struct Submenu *A = (const struct Submenu*)a; const struct Submenu *B = (const struct Submenu*)b; return fossil_strcmp(A->zLabel, B->zLabel); } /* ** Draw the header. */ void style_header(const char *zTitleFormat, ...){ va_list ap; char *zTitle; const char *zHeader = db_get("header", (char*)zDefaultHeader); login_check_credentials(); va_start(ap, zTitleFormat); zTitle = vmprintf(zTitleFormat, ap); va_end(ap); cgi_destination(CGI_HEADER); cgi_printf("%s","<!DOCTYPE html>"); if( g.thTrace ) Th_Trace("BEGIN_HEADER<br />\n", -1); /* Generate the header up through the main menu */ Th_Store("project_name", db_get("project-name","Unnamed Fossil Project")); Th_Store("title", zTitle); Th_Store("baseurl", g.zBaseURL); Th_Store("home", g.zTop); Th_Store("index_page", db_get("index-page","/home")); Th_Store("current_page", g.zPath); Th_Store("manifest_version", MANIFEST_VERSION); Th_Store("manifest_date", MANIFEST_DATE); Th_Store("compiler_name", COMPILER_NAME); if( g.zLogin ){ Th_Store("login", g.zLogin); |
︙ | ︙ | |||
190 191 192 193 194 195 196 | ** The default page header. */ const char zDefaultHeader[] = @ <html> @ <head> @ <title>$<project_name>: $<title></title> @ <link rel="alternate" type="application/rss+xml" title="RSS Feed" | | | | | | | < | | | | | | | | | 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 | ** The default page header. */ const char zDefaultHeader[] = @ <html> @ <head> @ <title>$<project_name>: $<title></title> @ <link rel="alternate" type="application/rss+xml" title="RSS Feed" @ href="$home/timeline.rss" /> @ <link rel="stylesheet" href="$home/style.css?default" type="text/css" @ media="screen" /> @ </head> @ <body> @ <div class="header"> @ <div class="logo"> @ <img src="$home/logo" alt="logo" /> @ </div> @ <div class="title"><small>$<project_name></small><br />$<title></div> @ <div class="status"><th1> @ if {[info exists login]} { @ puts "Logged in as $login" @ } else { @ puts "Not logged in" @ } @ </th1></div> @ </div> @ <div class="mainmenu"><th1> @ html "<a href='$home$index_page'>Home</a> " @ if {[anycap jor]} { @ html "<a href='$home/timeline'>Timeline</a> " @ } @ if {[hascap oh]} { @ html "<a href='$home/dir?ci=tip'>Files</a> " @ } @ if {[hascap o]} { @ html "<a href='$home/brlist'>Branches</a> " @ html "<a href='$home/taglist'>Tags</a> " @ } @ if {[hascap r]} { @ html "<a href='$home/reportlist'>Tickets</a> " @ } @ if {[hascap j]} { @ html "<a href='$home/wiki'>Wiki</a> " @ } @ if {[hascap s]} { @ html "<a href='$home/setup'>Admin</a> " @ } elseif {[hascap a]} { @ html "<a href='$home/setup_ulist'>Users</a> " @ } @ if {[info exists login]} { @ html "<a href='$home/login'>Logout</a> " @ } else { @ html "<a href='$home/login'>Login</a> " @ } @ </th1></div> ; /* ** The default page footer */ |
︙ | ︙ | |||
624 625 626 627 628 629 630 | @ color: black; }, { "span.ueditInheritAnonymous", "color for capabilities, inherited by anonymous", @ color: blue; }, { "span.capability", | | > > > > > > > | 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 | @ color: black; }, { "span.ueditInheritAnonymous", "color for capabilities, inherited by anonymous", @ color: blue; }, { "span.capability", "format for capabilities, mentioned on the user edit page", @ font-weight: bold; }, { "span.usertype", "format for different user types, mentioned on the user edit page", @ font-weight: bold; }, { "span.usertype:before", "leading text for user types, mentioned on the user edit page", @ content:"'"; }, { "span.usertype:after", "trailing text for user types, mentioned on the user edit page", @ content:"'"; }, { "div.selectedText", "selected lines of text within a linenumbered artifact display", @ font-weight: bold; @ color: blue; @ background-color: #d5d5ff; @ border: 1px blue solid; }, { "p.missingPriv", "format for missing privileges note on user setup page", @ color: blue; }, { "span.wikiruleHead", "format for leading text in wikirules definitions", @ font-weight: bold;
︙ | ︙ | |||
728 729 730 731 732 733 734 735 736 737 738 739 740 741 | "format for artifact lines, no longer shunned", @ color: blue; }, { "p.shunned", "format for artifact lines being shunned", @ color: blue; }, { 0, 0, 0 } }; /* | > > > > > > > > > | 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 | "format for artifact lines, no longer shunned", @ color: blue; }, { "p.shunned", "format for artifact lines being shunned", @ color: blue; }, { "span.brokenlink", "a broken hyperlink", @ color: red; }, { "ul.filelist", "List of files in a timeline", @ margin-top: 3px; @ line-height: 100%; }, { 0, 0, 0 } }; /*
︙ | ︙ | |||
784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 | g.isConst = 1; } /* ** WEBPAGE: test_env */ void page_test_env(void){ style_header("Environment Test"); #if !defined(_WIN32) @ uid=%d(getuid()), gid=%d(getgid())<br /> #endif @ g.zBaseURL = %h(g.zBaseURL)<br /> @ g.zTop = %h(g.zTop)<br /> cgi_print_all(); style_footer(); } | > > > > > > > > > > > > > > > > | 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 | g.isConst = 1; } /* ** WEBPAGE: test_env */ void page_test_env(void){ char c; int i; char zCap[30]; login_check_credentials(); style_header("Environment Test"); #if !defined(_WIN32) @ uid=%d(getuid()), gid=%d(getgid())<br /> #endif @ g.zBaseURL = %h(g.zBaseURL)<br /> @ g.zTop = %h(g.zTop)<br /> for(i=0, c='a'; c<='z'; c++){ if( login_has_capability(&c, 1) ) zCap[i++] = c; } zCap[i] = 0; @ g.userUid = %d(g.userUid)<br /> @ g.zLogin = %h(g.zLogin)<br /> @ capabilities = %s(zCap)<br /> @ <hr> cgi_print_all(); if( g.okSetup ){ const char *zRedir = P("redirect"); if( zRedir ) cgi_redirect(zRedir); } style_footer(); } |
Changes to src/sync.c.
︙ | ︙ | |||
30 31 32 33 34 35 36 37 | #endif /* INTERFACE */ /* ** If the respository is configured for autosyncing, then do an ** autosync. This will be a pull if the argument is true or a push ** if the argument is false. */ | > > | > | | | | > > | > > | > | | < < < < > > > > | > > > | > > | > > > > > | | | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | #endif /* INTERFACE */ /* ** If the respository is configured for autosyncing, then do an ** autosync. This will be a pull if the argument is true or a push ** if the argument is false. ** ** Return the number of errors. */ int autosync(int flags){ const char *zUrl; const char *zAutosync; const char *zPw; int rc; int configSync = 0; /* configuration changes transferred */ if( g.fNoSync ){ return 0; } zAutosync = db_get("autosync", 0); if( zAutosync ){ if( (flags & AUTOSYNC_PUSH)!=0 && memcmp(zAutosync,"pull",4)==0 ){ return 0; /* Do not auto-push when autosync=pullonly */ } if( is_false(zAutosync) ){ return 0; /* Autosync is completely off */ } }else{ /* Autosync defaults on. To make it default off, "return" here. */ } zUrl = db_get("last-sync-url", 0); if( zUrl==0 ){ return 0; /* No default server */ } zPw = unobscure(db_get("last-sync-pw", 0)); url_parse(zUrl); if( g.urlUser!=0 && g.urlPasswd==0 ){ g.urlPasswd = mprintf("%s", zPw); } #if 0 /* Disabled for now */ if( (flags & AUTOSYNC_PULL)!=0 && db_get_boolean("auto-shun",1) ){ /* When doing an automatic pull, also automatically pull shuns from ** the server if pull_shuns is enabled. ** ** TODO: What happens if the shun list gets really big? ** Maybe the shunning list should only be pulled on every 10th ** autosync, or something? */ configSync = CONFIGSET_SHUN; } #endif printf("Autosync: %s\n", g.urlCanonical); url_enable_proxy("via proxy: "); rc = client_sync((flags & AUTOSYNC_PUSH)!=0, 1, 0, 0, configSync, 0); if( rc ) fossil_warning("Autosync failed"); return rc; } /* ** This routine processes the command-line argument for push, pull, ** and sync. If a command-line argument is given, that is the URL ** of a server to sync against. If no argument is given, use the ** most recently synced URL. Remember the current URL for next time. 
*/ static void process_sync_args(int *pConfigSync, int *pPrivate){ const char *zUrl = 0; const char *zPw = 0; int configSync = 0; int urlOptional = find_option("autourl",0,0)!=0; g.dontKeepUrl = find_option("once",0,0)!=0; *pPrivate = find_option("private",0,0)!=0; url_proxy_options(); db_find_and_open_repository(0, 0); db_open_config(0); if( g.argc==2 ){ zUrl = db_get("last-sync-url", 0); zPw = unobscure(db_get("last-sync-pw", 0)); if( db_get_boolean("auto-shun",1) ) configSync = CONFIGSET_SHUN; }else if( g.argc==3 ){ zUrl = g.argv[2]; } if( zUrl==0 ){ if( urlOptional ) fossil_exit(0); usage("URL"); } url_parse(zUrl); if( g.urlUser!=0 && g.urlPasswd==0 ){ if( zPw==0 ){ url_prompt_for_password(); }else{ g.urlPasswd = mprintf("%s", zPw); } } if( !g.dontKeepUrl ){ db_set("last-sync-url", g.urlCanonical, 0); if( g.urlPasswd ) db_set("last-sync-pw", obscure(g.urlPasswd), 0); } user_select(); if( g.argc==2 ){ printf("Server: %s\n", g.urlCanonical); } url_enable_proxy("via proxy: "); *pConfigSync = configSync; } /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? ** ** Pull changes from a remote repository into the local repository. ** Use the "-R REPO" or "--repository REPO" command-line options ** to specify an alternative repository file. ** ** If the URL is not specified, then the URL from the most recent ** clone, push, pull, remote-url, or sync command is used. ** ** The URL specified normally becomes the new "remote-url" used for ** subsequent push, pull, and sync operations. However, the "--once" ** command-line option makes the URL a one-time-use URL that is not ** saved. ** ** Use the --private option to pull private branches from the ** remote repository. ** ** See also: clone, push, sync, remote-url */ void pull_cmd(void){ int syncFlags; int bPrivate; process_sync_args(&syncFlags, &bPrivate); client_sync(0,1,0,bPrivate,syncFlags,0); } /* ** COMMAND: push ** ** Usage: %fossil push ?URL? ?options? ** ** Push changes in the local repository over into a remote repository. ** Use the "-R REPO" or "--repository REPO" command-line options ** to specify an alternative repository file. ** ** If the URL is not specified, then the URL from the most recent ** clone, push, pull, remote-url, or sync command is used. ** ** The URL specified normally becomes the new "remote-url" used for ** subsequent push, pull, and sync operations. However, the "--once" ** command-line option makes the URL a one-time-use URL that is not ** saved. ** ** Use the --private option to push private branches to the ** remote repository. ** ** See also: clone, pull, sync, remote-url */ void push_cmd(void){ int syncFlags; int bPrivate; process_sync_args(&syncFlags, &bPrivate); client_sync(1,0,0,bPrivate,0,0); } /* ** COMMAND: sync ** ** Usage: %fossil sync ?URL? ?options? |
︙ | ︙ | |||
190 191 192 193 194 195 196 197 198 199 200 | ** If the URL is not specified, then the URL from the most recent successful ** clone, push, pull, remote-url, or sync command is used. ** ** The URL specified normally becomes the new "remote-url" used for ** subsequent push, pull, and sync operations. However, the "--once" ** command-line option makes the URL a one-time-use URL that is not ** saved. ** ** See also: clone, push, pull, remote-url */ void sync_cmd(void){ | > > > | > > | | | | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 | ** If the URL is not specified, then the URL from the most recent successful ** clone, push, pull, remote-url, or sync command is used. ** ** The URL specified normally becomes the new "remote-url" used for ** subsequent push, pull, and sync operations. However, the "--once" ** command-line option makes the URL a one-time-use URL that is not ** saved. ** ** Use the --private option to sync private branches with the ** remote repository. ** ** See also: clone, push, pull, remote-url */ void sync_cmd(void){ int syncFlags; int bPrivate; process_sync_args(&syncFlags, &bPrivate); client_sync(1,1,0,bPrivate,syncFlags,0); } /* ** COMMAND: remote-url ** ** Usage: %fossil remote-url ?URL|off? ** ** Query and/or change the default server URL used by the "pull", "push", ** and "sync" commands. ** ** The remote-url is set automatically by a "clone" command or by any ** "sync", "push", or "pull" command that specifies an explicit URL. ** The default remote-url is used by auto-syncing and by "sync", "push", ** "pull" that omit the server URL. ** ** See also: clone, push, pull, sync */ void remote_url_cmd(void){ char *zUrl; db_find_and_open_repository(0, 0); if( g.argc!=2 && g.argc!=3 ){ usage("remote-url ?URL|off?"); } if( g.argc==3 ){ if( fossil_strcmp(g.argv[2],"off")==0 ){ db_unset("last-sync-url", 0); db_unset("last-sync-pw", 0); }else{ url_parse(g.argv[2]); if( g.urlUser && g.urlPasswd==0 ){ url_prompt_for_password(); } |
︙ | ︙ |
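The autosync() change at the top of this file keys off the "autosync" setting: a push is skipped when the setting is "pullonly", everything is skipped when the setting is off, and autosync defaults on when the setting is absent. A standalone sketch of just that decision; the AUTOSYNC_* values and the simplified is_false() here are assumptions, not Fossil's own definitions:

#include <stdio.h>
#include <string.h>

/* Assumed flag values; Fossil defines its own AUTOSYNC_* constants. */
#define AUTOSYNC_PUSH 1
#define AUTOSYNC_PULL 2

/* Simplified stand-in for Fossil's is_false() setting test. */
static int is_false(const char *z){
  return z[0]=='0' || strcmp(z,"off")==0 || strcmp(z,"no")==0;
}

/* Return 1 if an autosync should run for the given flags. */
static int should_autosync(const char *zSetting, int flags){
  if( zSetting ){
    if( (flags & AUTOSYNC_PUSH)!=0 && strncmp(zSetting,"pull",4)==0 ){
      return 0;               /* never push when autosync is "pullonly" */
    }
    if( is_false(zSetting) ) return 0;    /* autosync switched off */
  }
  return 1;                   /* defaults on when the setting is absent */
}

int main(void){
  printf("%d\n", should_autosync("pullonly", AUTOSYNC_PUSH));  /* 0 */
  printf("%d\n", should_autosync("pullonly", AUTOSYNC_PULL));  /* 1 */
  printf("%d\n", should_autosync("off",      AUTOSYNC_PULL));  /* 0 */
  printf("%d\n", should_autosync(0,          AUTOSYNC_PUSH));  /* 1 */
  return 0;
}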
Changes to src/tag.c.
︙ | ︙ | |||
24 25 26 27 28 29 30 | /* ** Propagate the tag given by tagid to the children of pid. ** ** This routine assumes that tagid is a tag that should be ** propagated and that the tag is already present in pid. ** ** If tagtype is 2 then the tag is being propagated from an | | | | | > > | | > > > > > | > > > | | > > > | | < > | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 | /* ** Propagate the tag given by tagid to the children of pid. ** ** This routine assumes that tagid is a tag that should be ** propagated and that the tag is already present in pid. ** ** If tagtype is 2 then the tag is being propagated from an ** ancestor node. If tagtype is 0 it means a propagating tag is ** being blocked. */ static void tag_propagate( int pid, /* Propagate the tag to children of this node */ int tagid, /* Tag to propagate */ int tagType, /* 2 for a propagating tag. 0 for an antitag */ int origId, /* Artifact of tag, when tagType==2 */ const char *zValue, /* Value of the tag. Might be NULL */ double mtime /* Timestamp on the tag */ ){ PQueue queue; /* Queue of check-ins to be tagged */ Stmt s; /* Query the children of :pid to which to propagate */ Stmt ins; /* INSERT INTO tagxref */ Stmt eventupdate; /* UPDATE event */ assert( tagType==0 || tagType==2 ); pqueue_init(&queue); pqueue_insert(&queue, pid, 0.0, 0); /* Query for children of :pid to which to propagate the tag. ** Three returns: (1) rid of the child. (2) timestamp of child. ** (3) True to propagate or false to block. */ db_prepare(&s, "SELECT cid, plink.mtime," " coalesce(srcid=0 AND tagxref.mtime<:mtime, %d) AS doit" " FROM plink LEFT JOIN tagxref ON cid=rid AND tagid=%d" " WHERE pid=:pid AND isprim", tagType==2, tagid ); db_bind_double(&s, ":mtime", mtime); if( tagType==2 ){ /* Set the propagated tag marker on checkin :rid */ db_prepare(&ins, "REPLACE INTO tagxref(tagid, tagtype, srcid, origid, value, mtime, rid)" "VALUES(%d,2,0,%d,%Q,:mtime,:rid)", tagid, origId, zValue ); db_bind_double(&ins, ":mtime", mtime); }else{ /* Remove all references to the tag from checkin :rid */ zValue = 0; db_prepare(&ins, "DELETE FROM tagxref WHERE tagid=%d AND rid=:rid", tagid ); } if( tagid==TAG_BGCOLOR ){ db_prepare(&eventupdate, "UPDATE event SET bgcolor=%Q WHERE objid=:rid", zValue ); } while( (pid = pqueue_extract(&queue, 0))!=0 ){ db_bind_int(&s, ":pid", pid); while( db_step(&s)==SQLITE_ROW ){ int doit = db_column_int(&s, 2); if( doit ){ int cid = db_column_int(&s, 0); double mtime = db_column_double(&s, 1); pqueue_insert(&queue, cid, mtime, 0); db_bind_int(&ins, ":rid", cid); db_step(&ins); db_reset(&ins); if( tagid==TAG_BGCOLOR ){ db_bind_int(&eventupdate, ":rid", cid); db_step(&eventupdate); db_reset(&eventupdate); } if( tagid==TAG_BRANCH ){ leaf_eventually_check(cid); } } } db_reset(&s); } pqueue_clear(&queue); db_finalize(&ins); db_finalize(&s); if( tagid==TAG_BGCOLOR ){ db_finalize(&eventupdate); } } /* ** Propagate all propagatable tags in pid to the children of pid. 
*/ void tag_propagate_all(int pid){ Stmt q; db_prepare(&q, "SELECT tagid, tagtype, mtime, value, origid FROM tagxref" " WHERE rid=%d", pid ); while( db_step(&q)==SQLITE_ROW ){ int tagid = db_column_int(&q, 0); int tagtype = db_column_int(&q, 1); double mtime = db_column_double(&q, 2); const char *zValue = db_column_text(&q, 3); int origid = db_column_int(&q, 4); if( tagtype==1 ) tagtype = 0; tag_propagate(pid, tagid, tagtype, origid, zValue, mtime); } db_finalize(&q); } /* ** Get a tagid for the given TAG. Create a new tag if necessary |
︙ | ︙ | |||
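tag_propagate() above pushes a propagating tag down the primary-child links of the check-in DAG, stopping wherever a check-in already carries its own overriding tag. A much-simplified standalone sketch of that walk over a hard-coded graph; the real code drives the same traversal with SQL and a priority queue ordered by check-in time:

#include <stdio.h>

#define N 6

/* child[i] lists the primary children of check-in i, -1 terminated. */
static const int child[N][3] = {
  {1,-1,-1}, {2,3,-1}, {4,-1,-1}, {-1,-1,-1}, {5,-1,-1}, {-1,-1,-1}
};
/* Check-in 3 carries its own tag, which blocks propagation. */
static const int hasOwnTag[N] = {0, 0, 0, 1, 0, 0};
static const char *tagValue[N];

static void propagate(int pid, const char *zValue){
  int queue[N], head = 0, tail = 0;
  queue[tail++] = pid;
  tagValue[pid] = zValue;
  while( head<tail ){
    int cur = queue[head++];
    int i;
    for(i=0; child[cur][i]>=0; i++){
      int cid = child[cur][i];
      if( hasOwnTag[cid] ) continue;   /* an explicit tag overrides */
      if( tagValue[cid] ) continue;    /* already visited */
      tagValue[cid] = zValue;
      queue[tail++] = cid;
    }
  }
}

int main(void){
  int i;
  propagate(0, "bgcolor=#c0ffc0");
  for(i=0; i<N; i++){
    printf("check-in %d: %s\n", i, tagValue[i] ? tagValue[i] : "(none)");
  }
  return 0;
}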
172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 | "REPLACE INTO tagxref(tagid,tagtype,srcId,origid,value,mtime,rid)" " VALUES(%d,%d,%d,%d,%Q,:mtime,%d)", tagid, tagtype, srcId, rid, zValue, rid ); db_bind_double(&s, ":mtime", mtime); db_step(&s); db_finalize(&s); if( tagtype==0 ){ zValue = 0; } zCol = 0; switch( tagid ){ case TAG_BGCOLOR: { zCol = "bgcolor"; break; } case TAG_COMMENT: { zCol = "ecomment"; break; } case TAG_USER: { zCol = "euser"; break; } } if( zCol ){ db_multi_exec("UPDATE event SET %s=%Q WHERE objid=%d", zCol, zValue, rid); if( tagid==TAG_COMMENT ){ char *zCopy = mprintf("%s", zValue); wiki_extract_links(zCopy, rid, 0, mtime, 1, WIKI_INLINE); free(zCopy); } } if( tagid==TAG_DATE ){ | > > > > > > > | > > > | | < | 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 | "REPLACE INTO tagxref(tagid,tagtype,srcId,origid,value,mtime,rid)" " VALUES(%d,%d,%d,%d,%Q,:mtime,%d)", tagid, tagtype, srcId, rid, zValue, rid ); db_bind_double(&s, ":mtime", mtime); db_step(&s); db_finalize(&s); if( tagid==TAG_BRANCH ) leaf_eventually_check(rid); if( tagtype==0 ){ zValue = 0; } zCol = 0; switch( tagid ){ case TAG_BGCOLOR: { zCol = "bgcolor"; break; } case TAG_COMMENT: { zCol = "ecomment"; break; } case TAG_USER: { zCol = "euser"; break; } case TAG_PRIVATE: { db_multi_exec( "INSERT OR IGNORE INTO private(rid) VALUES(%d);", rid ); } } if( zCol ){ db_multi_exec("UPDATE event SET %s=%Q WHERE objid=%d", zCol, zValue, rid); if( tagid==TAG_COMMENT ){ char *zCopy = mprintf("%s", zValue); wiki_extract_links(zCopy, rid, 0, mtime, 1, WIKI_INLINE); free(zCopy); } } if( tagid==TAG_DATE ){ db_multi_exec("UPDATE event " " SET mtime=julianday(%Q)," " omtime=coalesce(omtime,mtime)" " WHERE objid=%d", zValue, rid); } if( tagtype==1 ) tagtype = 0; tag_propagate(rid, tagid, tagtype, rid, zValue, mtime); return tagid; } /* ** COMMAND: test-tag ** %fossil test-tag (+|*|-)TAGNAME ARTIFACT-ID ?VALUE? |
︙ | ︙ | |||
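The TAG_DATE handling above rewrites event.mtime while setting omtime=coalesce(omtime,mtime), so the original check-in time is captured once and then preserved across later date overrides. A standalone sketch of that UPDATE against a toy event table (requires SQLite; the table here only mimics the relevant columns):

#include <stdio.h>
#include <sqlite3.h>

int main(void){
  sqlite3 *db;
  sqlite3_stmt *q;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
    "CREATE TABLE event(objid INTEGER PRIMARY KEY, mtime REAL, omtime REAL);"
    "INSERT INTO event VALUES(1, julianday('2011-05-19 00:03:57'), NULL);"
    /* first date override: omtime captures the original mtime */
    "UPDATE event SET mtime=julianday('2011-05-01 12:00:00'),"
    "                 omtime=coalesce(omtime,mtime) WHERE objid=1;"
    /* second override: omtime is already set and is left alone */
    "UPDATE event SET mtime=julianday('2011-04-01 12:00:00'),"
    "                 omtime=coalesce(omtime,mtime) WHERE objid=1;",
    0, 0, 0);
  sqlite3_prepare_v2(db,
    "SELECT datetime(mtime), datetime(omtime) FROM event", -1, &q, 0);
  while( sqlite3_step(q)==SQLITE_ROW ){
    printf("mtime=%s  omtime=%s\n",
           (const char*)sqlite3_column_text(q, 0),
           (const char*)sqlite3_column_text(q, 1));
  }
  sqlite3_finalize(q);
  sqlite3_close(db);
  return 0;
}

The output keeps omtime at the original 2011-05-19 time even after the second override, because an UPDATE evaluates its right-hand sides against the pre-update row values.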
290 291 292 293 294 295 296 | "invalid tag name \"%s\" - might be confused with" " a hexadecimal artifact ID", zTagname ); } #endif zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); | < | > | 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 | "invalid tag name \"%s\" - might be confused with" " a hexadecimal artifact ID", zTagname ); } #endif zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); blob_appendf(&ctrl, "D %s\n", zDate); blob_appendf(&ctrl, "T %c%s%F %b", zTagtype[tagtype], zPrefix, zTagname, &uuid); if( tagtype>0 && zValue && zValue[0] ){ blob_appendf(&ctrl, " %F\n", zValue); }else{ blob_appendf(&ctrl, "\n"); } blob_appendf(&ctrl, "U %F\n", zUserOvrd ? zUserOvrd : g.zLogin); md5sum_blob(&ctrl, &cksum); blob_appendf(&ctrl, "Z %b\n", &cksum); nrid = content_put(&ctrl); manifest_crosslink(nrid, &ctrl); assert( blob_is_reset(&ctrl) ); } /* ** COMMAND: tag ** Usage: %fossil tag SUBCOMMAND ... ** ** Run various subcommands to control tags and properties |
︙ | ︙ | |||
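For reference, the D, T, U, and Z cards appended above assemble a small control artifact shaped roughly like the following. All values below are made up: the long hex string stands for the UUID of the tagged check-in, "+" marks a one-time sym- tag, and the Z card would really be the MD5 checksum of the preceding lines.

D 2011-05-19T00:03:57.734
T +sym-release-1.0 da39a3ee5e6b4b0d3255bfef95601890afd80709
U venkat
Z 7a0b8de1c0e3f2a45b6c7d8e9f001122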
363 364 365 366 367 368 369 | */ void tag_cmd(void){ int n; int fRaw = find_option("raw","",0)!=0; int fPropagate = find_option("propagate","",0)!=0; const char *zPrefix = fRaw ? "" : "sym-"; | | | 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 | */ void tag_cmd(void){ int n; int fRaw = find_option("raw","",0)!=0; int fPropagate = find_option("propagate","",0)!=0; const char *zPrefix = fRaw ? "" : "sym-"; db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto tag_cmd_usage; } n = strlen(g.argv[2]); if( n==0 ){ goto tag_cmd_usage; } |
︙ | ︙ | |||
521 522 523 524 525 526 527 | " AND tagname GLOB 'sym-*'" " ORDER BY tagname" ); @ <ul> while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); if( g.okHistory ){ | | < < < < < < < < < < < < < < < < < < < < < < < < < < < | 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 | " AND tagname GLOB 'sym-*'" " ORDER BY tagname" ); @ <ul> while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); if( g.okHistory ){ @ <li><a class="tagLink" href="%s(g.zTop)/timeline?t=%T(zName)"> @ %h(zName)</a></li> }else{ @ <li><span class="tagDsp">%h(zName)</span></li> } } @ </ul> db_finalize(&q); style_footer(); } /* ** WEBPAGE: /tagtimeline */ void tagtimeline_page(void){ Stmt q; login_check_credentials(); |
︙ | ︙ | |||
580 581 582 583 584 585 586 | "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype=1 AND srcid>0" " AND tagid IN (SELECT tagid FROM tag " " WHERE tagname GLOB 'sym-*'))" " ORDER BY event.mtime DESC", timeline_query_for_www() ); | | | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 | "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype=1 AND srcid>0" " AND tagid IN (SELECT tagid FROM tag " " WHERE tagname GLOB 'sym-*'))" " ORDER BY event.mtime DESC", timeline_query_for_www() ); www_print_timeline(&q, 0, 0, 0, 0); db_finalize(&q); @ <br /> @ <script type="text/JavaScript"> @ function xin(id){ @ } @ function xout(id){ @ } @ </script> style_footer(); } |
Added src/tar.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 | /* ** Copyright (c) 2011 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to generate tarballs. */ #include <assert.h> #include <zlib.h> #include "config.h" #include "tar.h" /* ** State information for the tarball builder. */ static struct tarball_t { unsigned char *aHdr; /* Space for building headers */ char *zSpaces; /* Spaces for padding */ char *zPrevDir; /* Name of directory for previous entry */ } tball; /* ** Begin the process of generating a tarball. ** ** Initialize the GZIP compressor and the table of directory names. 
*/ static void tar_begin(void){ assert( tball.aHdr==0 ); tball.aHdr = fossil_malloc(512+512+256); memset(tball.aHdr, 0, 512+512+256); tball.zSpaces = (char*)&tball.aHdr[512]; tball.zPrevDir = (char*)&tball.zSpaces[512]; memcpy(&tball.aHdr[108], "0000000", 8); /* Owner ID */ memcpy(&tball.aHdr[116], "0000000", 8); /* Group ID */ memcpy(&tball.aHdr[257], "ustar ", 7); /* Format */ gzip_begin(); db_multi_exec( "CREATE TEMP TABLE dir(name UNIQUE);" ); } /* ** Build a header for a file or directory and write that header ** into the growing tarball. */ static void tar_add_header( const char *zName, /* Name of the object */ int nName, /* Number of characters in zName */ int iMode, /* Mode. 0644 or 0755 */ unsigned int mTime, /* File modification time */ int iSize, /* Size of the object in bytes */ int iType /* Type of object. 0==file. 5==directory */ ){ unsigned int cksum = 0; int i; if( nName>100 ){ memcpy(&tball.aHdr[345], zName, nName-100); memcpy(tball.aHdr, &zName[nName-100], 100); memset(&tball.aHdr[245+nName], 0, 267-nName); }else{ memcpy(tball.aHdr, zName, nName); memset(&tball.aHdr[nName], 0, 100-nName); memset(&tball.aHdr[345], 0, 167); } sqlite3_snprintf(8, (char*)&tball.aHdr[100], "%07o", iMode); sqlite3_snprintf(12, (char*)&tball.aHdr[124], "%011o", iSize); sqlite3_snprintf(12, (char*)&tball.aHdr[136], "%011o", mTime); memset(&tball.aHdr[148], ' ', 8); tball.aHdr[156] = iType + '0'; for(i=0; i<512; i++) cksum += tball.aHdr[i]; sqlite3_snprintf(7, (char*)&tball.aHdr[148], "%06o", cksum); tball.aHdr[154] = 0; gzip_step((char*)tball.aHdr, 512); } /* ** Recursively add an directory entry for the given file if those ** directories have not previously been seen. */ static void tar_add_directory_of( const char *zName, /* Name of directory including final "/" */ int nName, /* Characters in zName */ unsigned int mTime /* Modification time */ ){ int i; for(i=nName-1; i>0 && zName[i]!='/'; i--){} if( i<=0 ) return; if( tball.zPrevDir[i]==0 && memcmp(tball.zPrevDir, zName, i)==0 ) return; db_multi_exec("INSERT OR IGNORE INTO dir VALUES('%.*q')", i, zName); if( sqlite3_changes(g.db)==0 ) return; tar_add_directory_of(zName, i-1, mTime); tar_add_header(zName, i, 0755, mTime, 0, 5); memcpy(tball.zPrevDir, zName, i); tball.zPrevDir[i] = 0; } /* ** Add a single file to the growing tarball. */ static void tar_add_file( const char *zName, /* Name of the file. nul-terminated */ Blob *pContent, /* Content of the file */ int isExe, /* True for executable files */ unsigned int mTime /* Last modification time of the file */ ){ int nName = strlen(zName); int n = blob_size(pContent); int lastPage; if( nName>=250 ){ fossil_fatal("name too long for ustar format: \"%s\"", zName); } tar_add_directory_of(zName, nName, mTime); tar_add_header(zName, nName, isExe ? 0755 : 0644, mTime, n, 0); if( n ){ gzip_step(blob_buffer(pContent), n); lastPage = n % 512; if( lastPage!=0 ){ gzip_step(tball.zSpaces, 512 - lastPage); } } } /* ** Finish constructing the tarball. Put the content of the tarball ** in Blob pOut. */ static void tar_finish(Blob *pOut){ db_multi_exec("DROP TABLE dir"); gzip_step(tball.zSpaces, 512); gzip_step(tball.zSpaces, 512); gzip_finish(pOut); fossil_free(tball.aHdr); tball.aHdr = 0; } /* ** COMMAND: test-tarball ** ** Generate a GZIP-compresssed tarball in the file given by the first argument ** that contains files given in the second and subsequent arguments. 
*/ void test_tarball_cmd(void){ int i; Blob zip; Blob file; if( g.argc<3 ){ usage("ARCHIVE FILE...."); } sqlite3_open(":memory:", &g.db); tar_begin(); for(i=3; i<g.argc; i++){ blob_zero(&file); blob_read_from_file(&file, g.argv[i]); tar_add_file(g.argv[i], &file, file_isexe(g.argv[i]), file_mtime(g.argv[i])); blob_reset(&file); } tar_finish(&zip); blob_write_to_file(&zip, g.argv[2]); } /* ** Given the RID for a checkin, construct a tarball containing ** all files in that checkin ** ** If RID is for an object that is not a real manifest, then the ** resulting tarball contains a single file which is the RID ** object. ** ** If the RID object does not exist in the repository, then ** pTar is zeroed. ** ** zDir is a "synthetic" subdirectory which all files get ** added to as part of the tarball. It may be 0 or an empty string, in ** which case it is ignored. The intention is to create a tarball which ** politely expands into a subdir instead of filling your current dir ** with source files. For example, pass a UUID or "ProjectName". ** */ void tarball_of_checkin(int rid, Blob *pTar, const char *zDir){ Blob mfile, hash, file; Manifest *pManifest; ManifestFile *pFile; Blob filename; int nPrefix; char *zName; unsigned int mTime; content_get(rid, &mfile); if( blob_size(&mfile)==0 ){ blob_zero(pTar); return; } blob_zero(&hash); blob_zero(&filename); tar_begin(); if( zDir && zDir[0] ){ blob_appendf(&filename, "%s/", zDir); } nPrefix = blob_size(&filename); pManifest = manifest_get(rid, CFTYPE_MANIFEST); if( pManifest ){ mTime = (pManifest->rDate - 2440587.5)*86400.0; if( db_get_boolean("manifest", 0) ){ blob_append(&filename, "manifest", -1); zName = blob_str(&filename); tar_add_file(zName, &mfile, 0, mTime); sha1sum_blob(&mfile, &hash); blob_reset(&mfile); blob_append(&hash, "\n", 1); blob_resize(&filename, nPrefix); blob_append(&filename, "manifest.uuid", -1); zName = blob_str(&filename); tar_add_file(zName, &hash, 0, mTime); blob_reset(&hash); } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ int fid = uuid_to_rid(pFile->zUuid, 0); if( fid ){ content_get(fid, &file); blob_resize(&filename, nPrefix); blob_append(&filename, pFile->zName, -1); zName = blob_str(&filename); tar_add_file(zName, &file, manifest_file_mperm(pFile), mTime); blob_reset(&file); } } }else{ sha1sum_blob(&mfile, &hash); blob_append(&filename, blob_str(&hash), 16); zName = blob_str(&filename); mTime = db_int64(0, "SELECT (julianday('now') - 2440587.5)*86400.0;"); tar_add_file(zName, &mfile, 0, mTime); } manifest_destroy(pManifest); blob_reset(&mfile); blob_reset(&filename); tar_finish(pTar); } /* ** COMMAND: tarball ** ** Usage: %fossil tarball VERSION OUTPUTFILE [--name DIRECTORYNAME] ** ** Generate a compressed tarball for a specified version. If the --name ** option is used, its argument becomes the name of the top-level directory ** in the resulting tarball. If --name is omitted, the top-level directory ** named is derived from the project name, the check-in date and time, and ** the artifact ID of the check-in. 
*/ void tarball_cmd(void){ int rid; Blob tarball; const char *zName; zName = find_option("name", 0, 1); db_find_and_open_repository(0, 0); if( g.argc!=4 ){ usage("VERSION OUTPUTFILE"); } rid = name_to_rid(g.argv[2]); if( zName==0 ){ zName = db_text("default-name", "SELECT replace(%Q,' ','_') " " || strftime('_%%Y-%%m-%%d_%%H%%M%%S_', event.mtime) " " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } tarball_of_checkin(rid, &tarball, zName); blob_write_to_file(&tarball, g.argv[3]); blob_reset(&tarball); } /* ** WEBPAGE: tarball ** URL: /tarball/RID.tar.gz ** ** Generate a compressed tarball for a checkin. ** Return that tarball as the HTTP reply content. */ void tarball_page(void){ int rid; char *zName, *zRid; int nName, nRid; Blob tarball; login_check_credentials(); if( !g.okZip ){ login_needed(); return; } zName = mprintf("%s", PD("name","")); nName = strlen(zName); zRid = mprintf("%s", PD("uuid","")); nRid = strlen(zRid); if( nName>7 && strcmp(&zName[nName-7], ".tar.gz")==0 ){ /* Special case: Remove the ".tar.gz" suffix. */ nName -= 7; zName[nName] = 0; }else{ /* If the file suffix is not ".tar.gz" then just remove the ** suffix up to and including the last "." */ for(nName=strlen(zName)-1; nName>5; nName--){ if( zName[nName]=='.' ){ zName[nName] = 0; break; } } } rid = name_to_rid(nRid?zRid:zName); if( rid==0 ){ @ Not found return; } if( nRid==0 && nName>10 ) zName[10] = 0; tarball_of_checkin(rid, &tarball, zName); free( zName ); free( zRid ); cgi_set_content(&tarball); cgi_set_content_type("application/x-compressed"); } |
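tar_add_header() above fills the classic 512-byte ustar header: numeric fields are written as zero-padded octal text, the checksum is computed with the checksum field temporarily set to spaces, and file content is later padded out to a 512-byte boundary. A standalone sketch of that header arithmetic, with no Fossil or zlib dependencies:

#include <stdio.h>
#include <string.h>

int main(void){
  unsigned char aHdr[512];
  unsigned int cksum = 0;
  unsigned int nContent = 13;                  /* bytes of file content */
  int i;

  memset(aHdr, 0, sizeof(aHdr));
  memcpy(aHdr, "hello.txt", 9);                         /* name */
  snprintf((char*)&aHdr[100], 8, "%07o", 0644u);        /* mode */
  snprintf((char*)&aHdr[124], 12, "%011o", nContent);   /* size */
  snprintf((char*)&aHdr[136], 12, "%011o", 1305763437u);/* mtime */
  memcpy(&aHdr[257], "ustar  ", 7);                     /* format marker */
  aHdr[156] = '0';                                      /* regular file */

  memset(&aHdr[148], ' ', 8);          /* checksum field counts as spaces */
  for(i=0; i<512; i++) cksum += aHdr[i];
  snprintf((char*)&aHdr[148], 7, "%06o", cksum);
  aHdr[154] = 0;

  printf("checksum field: %.6s\n", (char*)&aHdr[148]);
  printf("content padded to %u bytes\n", ((nContent+511)/512)*512);
  return 0;
}

The modification times fed into these headers come from the manifest as Julian day numbers, which tarball_of_checkin() converts to Unix seconds with (rDate - 2440587.5)*86400.0.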
Changes to src/th_main.c.
︙ | ︙ | |||
166 167 168 169 170 171 172 | free(zOut); return TH_OK; } /* ** TH command: date ** | | > > > > > > | > | 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 | free(zOut); return TH_OK; } /* ** TH command: date ** ** Return a string which is the current time and date. If the ** -local option is used, the date appears using localtime instead ** of UTC. */ static int dateCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ char *zOut; if( argc>=2 && argl[1]==6 && memcmp(argv[1],"-local",6)==0 ){ zOut = db_text("??", "SELECT datetime('now','localtime')"); }else{ zOut = db_text("??", "SELECT datetime('now')"); } Th_SetResult(interp, zOut, -1); free(zOut); return TH_OK; } /* ** TH command: hascap STRING |
︙ | ︙ |
Changes to src/timeline.c.
︙ | ︙ | |||
43 44 45 46 47 48 49 | } /* ** Generate a hyperlink to a version. */ void hyperlink_to_uuid(const char *zUuid){ | | | | < | < < < < < < < < < < < < < < < < < < < < < | | | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 | } /* ** Generate a hyperlink to a version. */ void hyperlink_to_uuid(const char *zUuid){ char z[UUID_SIZE+1]; shorten_uuid(z, zUuid); if( g.okHistory ){ @ <a class="timelineHistLink" href="%s(g.zTop)/info/%s(z)">[%s(z)]</a> }else{ @ <span class="timelineHistDsp">[%s(z)]</span> } } /* ** Generate a hyperlink to a diff between two versions. */ void hyperlink_to_diff(const char *zV1, const char *zV2){ if( g.okHistory ){ if( zV2==0 ){ @ <a href="%s(g.zTop)/diff?v2=%s(zV1)">[diff]</a> }else{ @ <a href="%s(g.zTop)/diff?v1=%s(zV1)&v2=%s(zV2)">[diff]</a> } } } /* ** Generate a hyperlink to a date & time. */ |
︙ | ︙ | |||
117 118 119 120 121 122 123 | @ <a href="%s(g.zTop)/timeline?u=%T(zU)">%h(zU)</a>%s(zSuf) } }else{ @ %s(zU) } } | < < < < < < < < < < < < < < < < < < < < < < > | > > > > > | | 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | @ <a href="%s(g.zTop)/timeline?u=%T(zU)">%h(zU)</a>%s(zSuf) } }else{ @ %s(zU) } } /* ** Allowed flags for the tmFlags argument to www_print_timeline */ #if INTERFACE #define TIMELINE_ARTID 0x0001 /* Show artifact IDs on non-check-in lines */ #define TIMELINE_LEAFONLY 0x0002 /* Show "Leaf", but not "Merge", "Fork" etc */ #define TIMELINE_BRIEF 0x0004 /* Combine adjacent elements of same object */ #define TIMELINE_GRAPH 0x0008 /* Compute a graph */ #define TIMELINE_DISJOINT 0x0010 /* Elements are not contiguous */ #define TIMELINE_FCHANGES 0x0020 /* Detail file changes */ #endif /* ** Output a timeline in the web format given a query. The query ** should return these columns: ** ** 0. rid ** 1. UUID ** 2. Date/Time ** 3. Comment string ** 4. User ** 5. True if is a leaf ** 6. background color ** 7. type ("ci", "w", "t", "e", "div") ** 8. list of symbolic tags. ** 9. tagid for ticket or wiki or event ** 10. Short comment to user for repeated tickets and wiki */ void www_print_timeline( Stmt *pQuery, /* Query to implement the timeline */ int tmFlags, /* Flags controlling display behavior */ const char *zThisUser, /* Suppress links to this user */ const char *zThisTag, /* Suppress links to this tag */ void (*xExtra)(int) /* Routine to call on each line of display */ ){ int wikiFlags; int mxWikiLen; Blob comment; int prevTagid = 0; int suppressCnt = 0; char zPrevDate[20]; GraphContext *pGraph = 0; int prevWasDivider = 0; /* True if previous output row was <hr> */ int fchngQueryInit = 0; /* True if fchngQuery is initialized */ Stmt fchngQuery; /* Query for file changes on check-ins */ zPrevDate[0] = 0; mxWikiLen = db_get_int("timeline-max-comment", 0); if( db_get_boolean("timeline-block-markup", 0) ){ wikiFlags = WIKI_INLINE; }else{ wikiFlags = WIKI_INLINE | WIKI_NOBLOCK; } if( tmFlags & TIMELINE_GRAPH ){ pGraph = graph_init(); /* style is not moved to css, because this is ** a technical div for the timeline graph */ @ <div id="canvas" style="position:relative;width:1px;height:1px;"></div> } @ <table id="timelineTable" class="timelineTable"> blob_zero(&comment); while( db_step(pQuery)==SQLITE_ROW ){ int rid = db_column_int(pQuery, 0); const char *zUuid = db_column_text(pQuery, 1); int isLeaf = db_column_int(pQuery, 5); const char *zBgClr = db_column_text(pQuery, 6); const char *zDate = db_column_text(pQuery, 2); |
︙ | ︙ | |||
225 226 227 228 229 230 231 | prevTagid = tagid; if( suppressCnt ){ @ <tr><td /><td /><td> @ <span class="timelineDisabled">... %d(suppressCnt) similar @ event%s(suppressCnt>1?"s":"") omitted.</span></td></tr> suppressCnt = 0; } | | > | > > > | | > > | | 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 | prevTagid = tagid; if( suppressCnt ){ @ <tr><td /><td /><td> @ <span class="timelineDisabled">... %d(suppressCnt) similar @ event%s(suppressCnt>1?"s":"") omitted.</span></td></tr> suppressCnt = 0; } if( fossil_strcmp(zType,"div")==0 ){ if( !prevWasDivider ){ @ <tr><td colspan="3"><hr /></td></tr> } prevWasDivider = 1; continue; } prevWasDivider = 0; if( memcmp(zDate, zPrevDate, 10) ){ sqlite3_snprintf(sizeof(zPrevDate), zPrevDate, "%.10s", zDate); @ <tr><td> @ <div class="divider">%s(zPrevDate)</div> @ </td></tr> } memcpy(zTime, &zDate[11], 5); zTime[5] = 0; @ <tr> @ <td class="timelineTime">%s(zTime)</td> @ <td class="timelineGraph"> if( pGraph && zType[0]=='c' ){ int nParent = 0; int aParent[32]; const char *zBr; int gidx; static Stmt qparent; static Stmt qbranch; db_static_prepare(&qparent, "SELECT pid FROM plink" " WHERE cid=:rid AND pid NOT IN phantom" " ORDER BY isprim DESC /*sort*/" ); db_static_prepare(&qbranch, "SELECT value FROM tagxref WHERE tagid=%d AND tagtype>0 AND rid=:rid", TAG_BRANCH ); db_bind_int(&qparent, ":rid", rid); while( db_step(&qparent)==SQLITE_ROW && nParent<32 ){ aParent[nParent++] = db_column_int(&qparent, 0); } db_reset(&qparent); db_bind_int(&qbranch, ":rid", rid); if( db_step(&qbranch)==SQLITE_ROW ){ zBr = db_column_text(&qbranch, 0); }else{ zBr = "trunk"; } gidx = graph_add_row(pGraph, rid, nParent, aParent, zBr, zBgClr, isLeaf); db_reset(&qbranch); @ <div id="m%d(gidx)"></div> } @</td> if( zBgClr && zBgClr[0] ){ @ <td class="timelineTableCell" style="background-color: %h(zBgClr);"> }else{ |
︙ | ︙ | |||
303 304 305 306 307 308 309 | blob_append(&truncated, "...", 3); wiki_convert(&truncated, 0, wikiFlags); blob_reset(&truncated); }else{ wiki_convert(&comment, 0, wikiFlags); } blob_reset(&comment); | > > > > | > > > | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 | blob_append(&truncated, "...", 3); wiki_convert(&truncated, 0, wikiFlags); blob_reset(&truncated); }else{ wiki_convert(&comment, 0, wikiFlags); } blob_reset(&comment); /* Generate the "user: USERNAME" at the end of the comment, together ** with a hyperlink to another timeline for that user. */ if( zTagList && zTagList[0]==0 ) zTagList = 0; if( g.okHistory && fossil_strcmp(zUser, zThisUser)!=0 ){ char *zLink = mprintf("%s/timeline?u=%h&c=%t&nd", g.zTop, zUser, zDate); @ (user: <a href="%s(zLink)">%h(zUser)</a>%s(zTagList?",":"\051") fossil_free(zLink); }else{ @ (user: %h(zUser)%s(zTagList?",":"\051") } /* Generate the "tags: TAGLIST" at the end of the comment, together ** with hyperlinks to the tag list. */ if( zTagList ){ if( g.okHistory ){ int i; const char *z = zTagList; Blob links; blob_zero(&links); while( z && z[0] ){ for(i=0; z[i] && (z[i]!=',' || z[i+1]!=' '); i++){} if( zThisTag==0 || memcmp(z, zThisTag, i)!=0 || zThisTag[i]!=0 ){ blob_appendf(&links, "<a href=\"%s/timeline?r=%.*t&nd&c=%s\">%.*h</a>%.2s", g.zTop, i, z, zDate, i, z, &z[i] ); }else{ blob_appendf(&links, "%.*h", i+2, z); } if( z[i]==0 ) break; z += i+2; } @ tags: %s(blob_str(&links))) blob_reset(&links); }else{ @ tags: %h(zTagList)) } } /* Generate extra hyperlinks at the end of the comment */ if( xExtra ){ xExtra(rid); } /* Generate the file-change list if requested */ if( (tmFlags & TIMELINE_FCHANGES)!=0 && zType[0]=='c' && g.okHistory ){ int inUl = 0; if( !fchngQueryInit ){ db_prepare(&fchngQuery, "SELECT (pid==0) AS isnew," " (fid==0) AS isdel," " (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name," " (SELECT uuid FROM blob WHERE rid=fid)," " (SELECT uuid FROM blob WHERE rid=pid)" " FROM mlink" " WHERE mid=:mid AND pid!=fid" " ORDER BY 3" ); fchngQueryInit = 1; } db_bind_int(&fchngQuery, ":mid", rid); while( db_step(&fchngQuery)==SQLITE_ROW ){ const char *zFilename = db_column_text(&fchngQuery, 2); int isNew = db_column_int(&fchngQuery, 0); int isDel = db_column_int(&fchngQuery, 1); const char *zOld = db_column_text(&fchngQuery, 4); const char *zNew = db_column_text(&fchngQuery, 3); if( !inUl ){ @ <ul class="filelist"> inUl = 1; } if( isNew ){ @ <li> %h(zFilename) (new file) @ <a href="%s(g.zTop)/artifact/%S(zNew)" target="diffwindow">[view] @ </a></li> }else if( isDel ){ @ <li> %h(zFilename) (deleted)</li> }else{ @ <li> %h(zFilename) @ <a href="%s(g.zTop)/fdiff?v1=%S(zOld)&v2=%S(zNew)" @ target="diffwindow">[diff]</a></li> } } db_reset(&fchngQuery); if( inUl ){ @ </ul> } } @ </td></tr> } if( suppressCnt ){ @ <tr><td /><td /><td> @ <span class="timelineDisabled">... %d(suppressCnt) similar @ event%s(suppressCnt>1?"s":"") omitted.</span></td></tr> suppressCnt = 0; |
︙ | ︙ | |||
334 335 336 337 338 339 340 | */ @ <tr><td /><td> @ <div id="grbtm" style="width:%d(pGraph->mxRail*20+30)px;"></div> @ </td></tr> } } @ </table> | > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | < > | | > > | | > | > > | | 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 | */ @ <tr><td /><td> @ <div id="grbtm" style="width:%d(pGraph->mxRail*20+30)px;"></div> @ </td></tr> } } @ </table> if( fchngQueryInit ) db_finalize(&fchngQuery); timeline_output_graph_javascript(pGraph, (tmFlags & TIMELINE_DISJOINT)!=0); } /* ** Generate all of the necessary javascript to generate a timeline ** graph. */ void timeline_output_graph_javascript(GraphContext *pGraph, int omitDescenders){ if( pGraph && pGraph->nErr==0 ){ GraphRow *pRow; int i; char cSep; @ <script type="text/JavaScript"> @ /* <![CDATA[ */ /* the rowinfo[] array contains all the information needed to generate ** the graph. Each entry contains information for a single row: ** ** id: The id of the <div> element for the row. This is an integer. ** to get an actual id, prepend "m" to the integer. The top node ** is 1 and numbers increase moving down the timeline. ** bg: The background color for this row ** r: The "rail" that the node for this row sits on. The left-most ** rail is 0 and the number increases to the right. ** d: True if there is a "descender" - an arrow coming from the bottom ** of the page straight up to this node. ** mo: "merge-out". If non-zero, this is one more than the x-coordinate ** for the upward portion of a merge arrow. The merge arrow goes up ** to the row identified by mu:. If this value is zero then ** node has no merge children and no merge-out line is drawn. ** mu: The id of the row which is the top of the merge-out arrow. ** u: Draw a thick child-line out of the top of this node and up to ** the node with an id equal to this value. 0 if there is no ** thick-line riser. ** f: 0x01: a leaf node. ** au: An array of integers that define thick-line risers for branches. ** The integers are in pairs. For each pair, the first integer is ** is the rail on which the riser should run and the second integer ** is the id of the node upto which the riser should run. ** mi: "merge-in". An array of integer x-coordinates from which ** merge arrows should be drawn into this node. If the value is ** negative, then the x-coordinate is the absolute value of mi[] ** and a thin merge-arrow descender is drawn to the bottom of ** the screen. */ cgi_printf("var rowinfo = [\n"); for(pRow=pGraph->pFirst; pRow; pRow=pRow->pNext){ int mo = pRow->mergeOut; if( mo<0 ){ mo = 0; }else{ mo = (mo/4)*20 - 3 + 4*(mo&3); } cgi_printf("{id:%d,bg:\"%s\",r:%d,d:%d,mo:%d,mu:%d,u:%d,f:%d,au:", pRow->idx, /* id */ pRow->zBgClr, /* bg */ pRow->iRail, /* r */ pRow->bDescender, /* d */ mo, /* mo */ pRow->mergeUpto, /* mu */ pRow->aiRiser[pRow->iRail], /* u */ pRow->isLeaf ? 
1 : 0 /* f */ ); /* u */ cSep = '['; for(i=0; i<GR_MAX_RAIL; i++){ if( i==pRow->iRail ) continue; if( pRow->aiRiser[i]>0 ){ cgi_printf("%c%d,%d", cSep, i, pRow->aiRiser[i]); cSep = ','; } } if( cSep=='[' ) cgi_printf("["); cgi_printf("],mi:"); /* mi */ cSep = '['; for(i=0; i<GR_MAX_RAIL; i++){ if( pRow->mergeIn[i] ){ int mi = i*20 - 8 + 4*pRow->mergeIn[i]; if( pRow->mergeDown & (1<<i) ) mi = -mi; cgi_printf("%c%d", cSep, mi); cSep = ','; } } if( cSep=='[' ) cgi_printf("["); cgi_printf("]}%s", pRow->pNext ? ",\n" : "];\n"); } cgi_printf("var nrail = %d\n", pGraph->mxRail+1); |
︙ | ︙ | |||
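Concretely, each timeline row comes out of the cgi_printf() calls above as one rowinfo entry along these lines (values invented for illustration): a check-in on rail 1 that is a leaf, with its thick riser running up to row 2 and no merge arrows, prints as

{id:4,bg:"",r:1,d:0,mo:0,mu:0,u:2,f:1,au:[],mi:[]},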
447 448 449 450 451 452 453 | @ } @ function drawThinLine(x0,y0,x1,y1){ @ drawBox("black",x0,y0,x1,y1); @ } @ function drawNode(p, left, btm){ @ drawBox("black",p.x-5,p.y-5,p.x+6,p.y+6); @ drawBox(p.bg,p.x-4,p.y-4,p.x+5,p.y+5); | | | | < | | < > | | > > > | > < > > | > > > > > > > | > > > > > > | | | | | 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | @ } @ function drawThinLine(x0,y0,x1,y1){ @ drawBox("black",x0,y0,x1,y1); @ } @ function drawNode(p, left, btm){ @ drawBox("black",p.x-5,p.y-5,p.x+6,p.y+6); @ drawBox(p.bg,p.x-4,p.y-4,p.x+5,p.y+5); @ if( p.u>0 ) drawUpArrow(p.x, rowinfo[p.u-1].y+6, p.y-5); if( !omitDescenders ){ @ if( p.u==0 ) drawUpArrow(p.x, 0, p.y-5); @ if( p.f&1 ) drawBox("black",p.x-1,p.y-1,p.x+2,p.y+2); @ if( p.d ) drawUpArrow(p.x, p.y+6, btm); } @ if( p.mo>0 ){ @ var x1 = p.mo + left - 1; @ var y1 = p.y-3; @ var x0 = x1>p.x ? p.x+7 : p.x-6; @ var u = rowinfo[p.mu-1]; @ var y0 = u.y+5; @ if( x1>=p.x-5 && x1<=p.x+5 ){ @ y1 = p.y-5; @ }else{ @ drawThinLine(x0,y1,x1,y1); @ } @ drawThinLine(x1,y0,x1,y1); @ } @ var n = p.au.length; @ for(var i=0; i<n; i+=2){ @ var x1 = p.au[i]*20 + left; @ var x0 = x1>p.x ? p.x+7 : p.x-6; @ var u = rowinfo[p.au[i+1]-1]; @ if(u.id<p.id){ @ drawBox("black",x0,p.y,x1,p.y+1); @ drawUpArrow(x1, u.y+6, p.y); @ }else{ @ drawBox("#600000",x0,p.y,x1,p.y+1); @ drawBox("#600000",x1-1,p.y,x1,u.y+1); @ drawBox("#600000",x1,u.y,u.x-6,u.y+1); @ drawBox("#600000",u.x-9,u.y-1,u.x-8,u.y+2); @ drawBox("#600000",u.x-11,u.y-2,u.x-10,u.y+3); @ } @ } @ for(var j in p.mi){ @ var y0 = p.y+5; @ var mx = p.mi[j]; @ if( mx<0 ){ @ mx = left-mx; @ drawThinLine(mx,y0,mx,btm); @ }else{ @ mx += left; @ } @ if( mx>p.x ){ @ drawThinArrow(y0,mx,p.x+6); @ }else{ @ drawThinArrow(y0,mx,p.x-5); @ } @ } @ } @ function renderGraph(){ @ var canvasDiv = document.getElementById("canvas"); @ while( canvasDiv.hasChildNodes() ){ @ canvasDiv.removeChild(canvasDiv.firstChild); @ } @ var canvasY = absoluteY("timelineTable"); @ var left = absoluteX("m"+rowinfo[0].id) - absoluteX("canvas") + 15; @ var width = nrail*20; @ for(var i in rowinfo){ @ rowinfo[i].y = absoluteY("m"+rowinfo[i].id) + 10 - canvasY; @ rowinfo[i].x = left + rowinfo[i].r*20; @ } @ var btm = absoluteY("grbtm") + 10 - canvasY; @ if( btm<32768 ){ @ canvasDiv.innerHTML = '<canvas id="timeline-canvas" '+ @ 'style="position:absolute;left:'+(left-5)+'px;"' + @ ' width="'+width+'" height="'+btm+'"><'+'/canvas>'; |
︙ | ︙ | |||
518 519 520 521 522 523 524 | @ context.fillRect(x0-left+5,y0,x1-x0+1,y1-y0+1); @ }; @ } @ for(var i in rowinfo){ @ drawNode(rowinfo[i], left, btm); @ } @ } | | | 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 | @ context.fillRect(x0-left+5,y0,x1-x0+1,y1-y0+1); @ }; @ } @ for(var i in rowinfo){ @ drawNode(rowinfo[i], left, btm); @ } @ } @ var lastId = "m"+rowinfo[rowinfo.length-1].id; @ var lastY = 0; @ function checkHeight(){ @ var h = absoluteY(lastId); @ if( h!=lastY ){ @ renderGraph(); @ lastY = h; @ } |
︙ | ︙ | |||
550 551 552 553 554 555 556 | @ comment TEXT, @ user TEXT, @ isleaf BOOLEAN, @ bgcolor TEXT, @ etype TEXT, @ taglist TEXT, @ tagid INTEGER, | | > < | < < < < | > | 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 | @ comment TEXT, @ user TEXT, @ isleaf BOOLEAN, @ bgcolor TEXT, @ etype TEXT, @ taglist TEXT, @ tagid INTEGER, @ short TEXT, @ sortby REAL @ ) ; db_multi_exec(zSql); } /* ** Return a pointer to a constant string that forms the basis ** for a timeline query for the WWW interface. */ const char *timeline_query_for_www(void){ static char *zBase = 0; static const char zBaseSql[] = @ SELECT @ blob.rid, @ uuid, @ datetime(event.mtime,'localtime') AS timestamp, @ coalesce(ecomment, comment), @ coalesce(euser, user), @ blob.rid IN leaf, @ bgcolor, @ event.type, @ (SELECT group_concat(substr(tagname,5), ', ') FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0), @ tagid, @ brief, @ event.mtime @ FROM event JOIN blob @ WHERE blob.rid=event.objid ; if( zBase==0 ){ zBase = mprintf(zBaseSql, TAG_BRANCH, TAG_BRANCH); } return zBase; |
︙ | ︙ | |||
609 610 611 612 613 614 615 | url_render(pUrl, zParam, zValue, zRemove, 0)); } /* ** zDate is a localtime date. Insert records into the ** "timeline" table to cause <hr> to be inserted before and after | | > | > > > > > > > | | | | > > > > > > > | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 | url_render(pUrl, zParam, zValue, zRemove, 0)); } /* ** zDate is a localtime date. Insert records into the ** "timeline" table to cause <hr> to be inserted before and after ** entries of that date. If zDate==NULL then put dividers around ** the event identified by rid. */ static void timeline_add_dividers(const char *zDate, int rid){ char *zToDel = 0; if( zDate==0 ){ zToDel = db_text(0,"SELECT julianday(mtime,'localtime') FROM event" " WHERE objid=%d", rid); zDate = zToDel; if( zDate==0 ) zDate = "1"; } db_multi_exec( "INSERT INTO timeline(rid,sortby,etype)" "VALUES(-1,julianday(%Q,'utc')-1.0e-5,'div')", zDate ); db_multi_exec( "INSERT INTO timeline(rid,sortby,etype)" "VALUES(-2,julianday(%Q,'utc')+1.0e-5,'div')", zDate ); fossil_free(zToDel); } /* ** WEBPAGE: timeline ** ** Query parameters: ** ** a=TIMESTAMP after this date ** b=TIMESTAMP before this date. ** c=TIMESTAMP "circa" this date. ** n=COUNT number of events in output ** p=RID artifact RID and up to COUNT parents and ancestors ** d=RID artifact RID and up to COUNT descendants ** t=TAGID show only check-ins with the given tagid ** r=TAGID show check-ins related to tagid ** u=USER only if belonging to this user ** y=TYPE 'ci', 'w', 't', 'e' ** s=TEXT string search (comment and brief) ** ng Suppress the graph if present ** nd Suppress "divider" lines ** fc Show details of files changed ** f=RID Show family (immediate parents and children) of RID ** from=RID Path from... ** to=RID ... to this ** nomerge ... avoid merge links on the path ** ** p= and d= can appear individually or together. If either p= or d= ** appear, then u=, y=, a=, and b= are ignored. ** ** If a= and b= appear, only a= is used. If neither appear, the most ** recent events are choosen. ** ** If n= is missing, the default count is 20. */ void page_timeline(void){ Stmt q; /* Query used to generate the timeline */ Blob sql; /* text of SQL used to generate timeline */ Blob desc; /* Description of the timeline */ int nEntry = atoi(PD("n","20")); /* Max number of entries on timeline */ int p_rid = name_to_rid(P("p")); /* artifact p and its parents */ int d_rid = name_to_rid(P("d")); /* artifact d and its descendants */ int f_rid = name_to_rid(P("f")); /* artifact f and immediate family */ const char *zUser = P("u"); /* All entries by this user if not NULL */ const char *zType = PD("y","all"); /* Type of events. 
All if NULL */ const char *zAfter = P("a"); /* Events after this time */ const char *zBefore = P("b"); /* Events before this time */ const char *zCirca = P("c"); /* Events near this time */ const char *zTagName = P("t"); /* Show events with this tag */ const char *zBrName = P("r"); /* Show events related to this tag */ const char *zSearch = P("s"); /* Search string */ int useDividers = P("nd")==0; /* Show dividers if "nd" is missing */ int tagid; /* Tag ID */ int tmFlags; /* Timeline flags */ const char *zThisTag = 0; /* Suppress links to this tag */ const char *zThisUser = 0; /* Suppress links to this user */ HQuery url; /* URL for various branch links */ int from_rid = name_to_rid(P("from")); /* from= for path timelines */ int to_rid = name_to_rid(P("to")); /* to= for path timelines */ int noMerge = P("nomerge")!=0; /* Do not follow merge links */ int me_rid = name_to_rid(P("me")); /* me= for common ancestory path */ int you_rid = name_to_rid(P("you"));/* you= for common ancst path */ /* To view the timeline, must have permission to read project data. */ login_check_credentials(); if( !g.okRead && !g.okRdTkt && !g.okRdWiki ){ login_needed(); return; } if( zTagName && g.okRead ){ tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTagName); zThisTag = zTagName; }else if( zBrName && g.okRead ){ tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'",zBrName); zThisTag = zBrName; }else{ tagid = 0; } if( zType[0]=='a' ){ tmFlags = TIMELINE_BRIEF | TIMELINE_GRAPH; }else{ tmFlags = TIMELINE_GRAPH; } if( P("ng")!=0 || zSearch!=0 ){ tmFlags &= ~TIMELINE_GRAPH; } style_header("Timeline"); login_anonymous_available(); timeline_temp_table(); blob_zero(&sql); blob_zero(&desc); blob_append(&sql, "INSERT OR IGNORE INTO timeline ", -1); blob_append(&sql, timeline_query_for_www(), -1); url_initialize(&url, "timeline"); if( P("fc")!=0 || P("detail")!=0 ){ tmFlags |= TIMELINE_FCHANGES; url_add_parameter(&url, "fc", 0); } if( !useDividers ) url_add_parameter(&url, "nd", 0); if( ((from_rid && to_rid) || (me_rid && you_rid)) && g.okRead ){ /* If from= and to= are present, display all nodes on a path connecting ** the two */ PathNode *p = 0; const char *zFrom = 0; const char *zTo = 0; if( from_rid && to_rid ){ p = path_shortest(from_rid, to_rid, noMerge); zFrom = P("from"); zTo = P("to"); }else{ if( path_common_ancestor(me_rid, you_rid) ){ p = path_first(); } zFrom = P("me"); zTo = P("you"); } blob_append(&sql, " AND event.objid IN (0", -1); while( p ){ blob_appendf(&sql, ",%d", p->rid); p = p->u.pTo; } blob_append(&sql, ")", -1); path_reset(); blob_append(&desc, "All nodes on the path from ", -1); if( g.okHistory ){ blob_appendf(&desc, "<a href='%s/info/%h'>[%h]</a>", g.zTop,zFrom,zFrom); }else{ blob_appendf(&desc, "[%h]", zFrom); } blob_append(&desc, " and ", -1); if( g.okHistory ){ blob_appendf(&desc, "<a href='%s/info/%h'>[%h]</a>.", g.zTop, zTo, zTo); }else{ blob_appendf(&desc, "[%h].", zTo); } tmFlags |= TIMELINE_DISJOINT; db_multi_exec("%s", blob_str(&sql)); }else if( (p_rid || d_rid) && g.okRead ){ /* If p= or d= is present, ignore all other parameters other than n= */ char *zUuid; int np, nd; if( p_rid && d_rid ){ if( p_rid!=d_rid ) p_rid = d_rid; if( P("n")==0 ) nEntry = 10; |
︙ | ︙ | |||
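timeline_add_dividers() above brackets the target event with two synthetic "div" rows whose sortby values are offset by plus and minus 1.0e-5 days. A quick standalone check of what that offset amounts to:

#include <stdio.h>

int main(void){
  double offsetDays = 1.0e-5;            /* offset used for divider rows */
  printf("divider offset = %.3f seconds\n", offsetDays*86400.0);  /* 0.864 */
  return 0;
}

At well under a second on either side, the dividers sort immediately around the chosen event without disturbing the rest of the timeline ordering.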
720 721 722 723 724 725 726 | if( d_rid ){ compute_descendants(d_rid, nEntry+1); nd = db_int(0, "SELECT count(*)-1 FROM ok"); if( nd>=0 ){ db_multi_exec("%s", blob_str(&sql)); blob_appendf(&desc, "%d descendant%s", nd,(1==nd)?"":"s"); } | | < < < < | < < < < | > > > > > > > > > > > > > > > > > > | > > > > < < | > < | > > > | | | > > > > > | 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 | if( d_rid ){ compute_descendants(d_rid, nEntry+1); nd = db_int(0, "SELECT count(*)-1 FROM ok"); if( nd>=0 ){ db_multi_exec("%s", blob_str(&sql)); blob_appendf(&desc, "%d descendant%s", nd,(1==nd)?"":"s"); } if( useDividers ) timeline_add_dividers(0, d_rid); db_multi_exec("DELETE FROM ok"); } if( p_rid ){ compute_ancestors(p_rid, nEntry+1); np = db_int(0, "SELECT count(*)-1 FROM ok"); if( np>0 ){ if( nd>0 ) blob_appendf(&desc, " and "); blob_appendf(&desc, "%d ancestors", np); db_multi_exec("%s", blob_str(&sql)); } if( d_rid==0 && useDividers ) timeline_add_dividers(0, p_rid); } if( g.okHistory ){ blob_appendf(&desc, " of <a href='%s/info/%s'>[%.10s]</a>", g.zTop, zUuid, zUuid); }else{ blob_appendf(&desc, " of check-in [%.10s]", zUuid); } }else if( f_rid && g.okRead ){ /* If f= is present, ignore all other parameters other than n= */ char *zUuid; db_multi_exec( "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY);" "INSERT INTO ok VALUES(%d);" "INSERT OR IGNORE INTO ok SELECT pid FROM plink WHERE cid=%d;" "INSERT OR IGNORE INTO ok SELECT cid FROM plink WHERE pid=%d;", f_rid, f_rid, f_rid ); blob_appendf(&sql, " AND event.objid IN ok"); db_multi_exec("%s", blob_str(&sql)); if( useDividers ) timeline_add_dividers(0, f_rid); blob_appendf(&desc, "Parents and children of check-in "); zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", f_rid); if( g.okHistory ){ blob_appendf(&desc, "<a href='%s/info/%s'>[%.10s]</a>", g.zTop, zUuid, zUuid); }else{ blob_appendf(&desc, "[%.10s]", zUuid); } }else{ /* Otherwise, a timeline based on a span of time */ int n; const char *zEType = "timeline item"; char *zDate; char *zNEntry = mprintf("%d", nEntry); url_add_parameter(&url, "n", zNEntry); if( tagid>0 ){ blob_appendf(&sql, "AND (EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", tagid); if( zBrName ){ url_add_parameter(&url, "r", zBrName); /* The next two blob_appendf() calls add SQL that causes checkins that ** are not part of the branch which are parents or childen of the branch ** to be included in the report. This related check-ins are useful ** in helping to visualize what has happened on a quiescent branch ** that is infrequently merged with a much more activate branch. */ blob_appendf(&sql, " OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" " WHERE tagid=%d AND tagtype>0 AND pid=blob.rid)", tagid ); if( P("mionly")==0 ){ blob_appendf(&sql, " OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid" " WHERE tagid=%d AND tagtype>0 AND cid=blob.rid)", tagid ); }else{ url_add_parameter(&url, "mionly", "1"); } }else{ url_add_parameter(&url, "t", zTagName); } blob_appendf(&sql, ")"); } if( (zType[0]=='w' && !g.okRdWiki) || (zType[0]=='t' && !g.okRdTkt) |
︙ | ︙ | |||
820 821 822 823 824 825 826 827 828 829 830 831 832 833 | }else if( zType[0]=='e' ){ zEType = "event"; } } if( zUser ){ blob_appendf(&sql, " AND event.user=%Q", zUser); url_add_parameter(&url, "u", zUser); } if ( zSearch ){ blob_appendf(&sql, " AND (event.comment LIKE '%%%q%%' OR event.brief LIKE '%%%q%%')", zSearch, zSearch); url_add_parameter(&url, "s", zSearch); } | > | 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 | }else if( zType[0]=='e' ){ zEType = "event"; } } if( zUser ){ blob_appendf(&sql, " AND event.user=%Q", zUser); url_add_parameter(&url, "u", zUser); zThisUser = zUser; } if ( zSearch ){ blob_appendf(&sql, " AND (event.comment LIKE '%%%q%%' OR event.brief LIKE '%%%q%%')", zSearch, zSearch); url_add_parameter(&url, "s", zSearch); } |
︙ | ︙ | |||
865 866 867 868 869 870 871 | db_multi_exec("%s", blob_str(&sql2)); blob_reset(&sql2); blob_appendf(&sql, " AND event.mtime>=%f ORDER BY event.mtime ASC", rCirca ); nEntry -= (nEntry+1)/2; | | | 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 | db_multi_exec("%s", blob_str(&sql2)); blob_reset(&sql2); blob_appendf(&sql, " AND event.mtime>=%f ORDER BY event.mtime ASC", rCirca ); nEntry -= (nEntry+1)/2; if( useDividers ) timeline_add_dividers(zCirca, 0); url_add_parameter(&url, "c", zCirca); }else{ zCirca = 0; } }else{ blob_appendf(&sql, " ORDER BY event.mtime DESC"); } |
︙ | ︙ | |||
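When c=TIMESTAMP is given, the code above allots (nEntry+1)/2 rows to one side of the target date and leaves the remainder for the other side, so the displayed timeline is centered on that date. The integer split, standalone:

#include <stdio.h>

int main(void){
  int nEntry = 20;              /* the n= query parameter */
  int nHalf = (nEntry+1)/2;     /* rows taken from one side of the date */
  nEntry -= nHalf;              /* rows left for the other side */
  printf("split: %d + %d\n", nHalf, nEntry);   /* 10 + 10 */
  return 0;
}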
939 940 941 942 943 944 945 946 947 948 949 950 951 | } if( nEntry>20 ){ timeline_submenu(&url, "20 Entries", "n", "20", 0); } if( nEntry<200 ){ timeline_submenu(&url, "200 Entries", "n", "200", 0); } } } if( P("showsql") ){ @ <blockquote>%h(blob_str(&sql))</blockquote> } blob_zero(&sql); | > > > > > > > | | | 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 | } if( nEntry>20 ){ timeline_submenu(&url, "20 Entries", "n", "20", 0); } if( nEntry<200 ){ timeline_submenu(&url, "200 Entries", "n", "200", 0); } if( zType[0]=='a' || zType[0]=='c' ){ if( tmFlags & TIMELINE_FCHANGES ){ timeline_submenu(&url, "Hide Files", "fc", 0, 0); }else{ timeline_submenu(&url, "Show Files", "fc", "", 0); } } } } if( P("showsql") ){ @ <blockquote>%h(blob_str(&sql))</blockquote> } blob_zero(&sql); db_prepare(&q, "SELECT * FROM timeline ORDER BY sortby DESC /*scan*/"); @ <h2>%b(&desc)</h2> blob_reset(&desc); www_print_timeline(&q, tmFlags, zThisUser, zThisTag, 0); db_finalize(&q); style_footer(); } /* ** The input query q selects various records. Print a human-readable ** summary of those records. |
︙ | ︙ | |||
991 992 993 994 995 996 997 | int nChild = db_column_int(q, 4); int nParent = db_column_int(q, 5); char *zFree = 0; int n = 0; char zPrefix[80]; char zUuid[UUID_SIZE+1]; | | | 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 | int nChild = db_column_int(q, 4); int nParent = db_column_int(q, 5); char *zFree = 0; int n = 0; char zPrefix[80]; char zUuid[UUID_SIZE+1]; sqlite3_snprintf(sizeof(zUuid), zUuid, "%.10s", zId); if( memcmp(zDate, zPrevDate, 10) ){ printf("=== %.10s ===\n", zDate); memcpy(zPrevDate, zDate, 10); nLine++; } if( zCom==0 ) zCom = ""; printf("%.8s ", &zDate[11]); |
︙ | ︙ | |||
1014 1015 1016 1017 1018 1019 1020 | zBrType = "*FORK* "; }else{ zBrType = "*BRANCH* "; } sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], zBrType); n = strlen(zPrefix); } | | | 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 | zBrType = "*FORK* "; }else{ zBrType = "*BRANCH* "; } sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], zBrType); n = strlen(zPrefix); } if( fossil_strcmp(zCurrentUuid,zId)==0 ){ sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*CURRENT* "); n += strlen(zPrefix); } zFree = sqlite3_mprintf("[%.10s] %s%s", zUuid, zPrefix, zCom); nLine += comment_print(zFree, 9, 79); sqlite3_free(zFree); } |
︙ | ︙ | |||
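A small aside on the string handling above: the prefix is built with sqlite3_snprintf(), whose argument order (buffer size first) differs from the C library's snprintf() and which always zero-terminates the buffer. The fragment below is a minimal self-contained demo of the same append pattern; the marker strings are taken from the excerpt, everything else is only for illustration.

#include <stdio.h>
#include <string.h>
#include <sqlite3.h>

int main(void){
  char zPrefix[80];
  int n = 0;

  /* Append two markers in the same style as print_timeline() above.
  ** sqlite3_snprintf() takes the buffer size first, returns the buffer,
  ** and always zero-terminates, so chained appends stay simple. */
  sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*BRANCH* ");
  n = (int)strlen(zPrefix);
  sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*CURRENT* ");

  printf("[%s]\n", zPrefix);   /* prints: [*BRANCH* *CURRENT* ] */
  return 0;
}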
1043 1044 1045 1046 1047 1048 1049 | @ || (SELECT case when length(x)>0 then ' tags: ' || x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0)) @ || ')', @ (SELECT count(*) FROM plink WHERE pid=blob.rid AND isprim), | | > | 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 | @ || (SELECT case when length(x)>0 then ' tags: ' || x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0)) @ || ')', @ (SELECT count(*) FROM plink WHERE pid=blob.rid AND isprim), @ (SELECT count(*) FROM plink WHERE cid=blob.rid), @ event.mtime @ FROM event, blob @ WHERE blob.rid=event.objid ; return zBaseSql; } /* |
︙ | ︙ | |||
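The base query above counts, for every check-in, its primary children and its parents with correlated subqueries over plink; the *FORK* and *BRANCH* markers earlier in this file only care whether those counts exceed one. If the goal were simply to list fork points, a single grouped query does the job. A sketch, assuming only the plink(pid, cid, isprim) columns referenced above:

/* List the fork points of the DAG: check-ins with more than one primary
** child.  Assumes only the plink(pid, cid, isprim) columns used above. */
static const char zForkQuery[] =
  "SELECT pid, count(*) AS nChild"
  "  FROM plink"
  " WHERE isprim"
  " GROUP BY pid"
  " HAVING count(*)>1";

Run against a repository file with the sqlite3 shell, this returns one row per fork point together with its child count.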
1101 1102 1103 1104 1105 1106 1107 | const char *zType; char *zOrigin; char *zDate; Blob sql; int objid = 0; Blob uuid; int mode = 0 ; /* 0:none 1: before 2:after 3:children 4:parents */ | | | 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 | const char *zType; char *zOrigin; char *zDate; Blob sql; int objid = 0; Blob uuid; int mode = 0 ; /* 0:none 1: before 2:after 3:children 4:parents */ db_find_and_open_repository(0, 0); zCount = find_option("count","n",1); zType = find_option("type","t",1); if( zCount ){ n = atoi(zCount); }else{ n = 20; } |
︙ | ︙ | |||
1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 | if( g.fTimeFormat==0 ){ if( db_get_int("timeline-utc", 1) ){ g.fTimeFormat = 1; }else{ g.fTimeFormat = 2; } } if( g.fTimeFormat==1 ){ return gmtime(clock); }else{ return localtime(clock); } } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 | if( g.fTimeFormat==0 ){ if( db_get_int("timeline-utc", 1) ){ g.fTimeFormat = 1; }else{ g.fTimeFormat = 2; } } if( clock==0 ) return 0; if( g.fTimeFormat==1 ){ return gmtime(clock); }else{ return localtime(clock); } } /* ** COMMAND: test-timewarp-list ** ** Usage: %fossil test-timewarp-list ?--detail? ** ** Display all instances of child checkins that appear earlier in time ** than their parent. If the --detail option is provided, both the ** parent and child checking and their times are shown. */ void test_timewarp_cmd(void){ Stmt q; int showDetail; db_find_and_open_repository(0, 0); showDetail = find_option("detail", 0, 0)!=0; db_prepare(&q, "SELECT (SELECT uuid FROM blob WHERE rid=p.cid)," " (SELECT uuid FROM blob WHERE rid=c.cid)," " datetime(p.mtime), datetime(c.mtime)" " FROM plink p, plink c" " WHERE p.cid=c.pid AND p.mtime>c.mtime" ); while( db_step(&q)==SQLITE_ROW ){ if( !showDetail ){ printf("%s\n", db_column_text(&q, 1)); }else{ printf("%.14s -> %.14s %s -> %s\n", db_column_text(&q, 0), db_column_text(&q, 1), db_column_text(&q, 2), db_column_text(&q, 3)); } } db_finalize(&q); } /* ** WEBPAGE: test_timewarps */ void test_timewarp_page(void){ Stmt q; login_check_credentials(); if( !g.okRead || !g.okHistory ){ login_needed(); return; } style_header("Instances of timewarp"); @ <ul> db_prepare(&q, "SELECT blob.uuid " " FROM plink p, plink c, blob" " WHERE p.cid=c.pid AND p.mtime>c.mtime" " AND blob.rid=c.cid" ); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); @ <li> @ <a href="%s(g.zTop)/timeline?p=%S(zUuid)&d=%S(zUuid)">%S(zUuid)</a> } db_finalize(&q); style_footer(); } |
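Both the test-timewarp-list command and the test_timewarps page above reduce to one self-join of plink: a timewarp is any parent/child pair in which the parent's recorded time is later than the child's. Since the repository is a plain SQLite file, the same report can be produced outside of Fossil. The following standalone program is illustrative only and assumes just the plink and blob columns used in the queries above.

/* timewarp.c - report child check-ins whose parent has a later mtime.
** Build: cc timewarp.c -lsqlite3 -o timewarp
** Usage: ./timewarp REPOSITORY-FILE
*/
#include <stdio.h>
#include <sqlite3.h>

int main(int argc, char **argv){
  sqlite3 *db;
  sqlite3_stmt *st;
  if( argc!=2 ){
    fprintf(stderr, "usage: %s REPOSITORY\n", argv[0]);
    return 1;
  }
  if( sqlite3_open(argv[1], &db)!=SQLITE_OK ) return 1;
  /* Same join as the command above: parent rows (p) matched to the rows
  ** that list their children as parents in turn (c). */
  sqlite3_prepare_v2(db,
    "SELECT (SELECT uuid FROM blob WHERE rid=p.cid),"
    "       (SELECT uuid FROM blob WHERE rid=c.cid),"
    "       datetime(p.mtime), datetime(c.mtime)"
    "  FROM plink p, plink c"
    " WHERE p.cid=c.pid AND p.mtime>c.mtime",
    -1, &st, 0);
  while( sqlite3_step(st)==SQLITE_ROW ){
    printf("%.14s -> %.14s  %s -> %s\n",
      (const char*)sqlite3_column_text(st, 0),
      (const char*)sqlite3_column_text(st, 1),
      (const char*)sqlite3_column_text(st, 2),
      (const char*)sqlite3_column_text(st, 3));
  }
  sqlite3_finalize(st);
  sqlite3_close(db);
  return 0;
}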
Changes to src/tkt.c.
︙ | ︙ | |||
427 428 429 430 431 432 433 | int i; int rid; Blob tktchng, cksum; login_verify_csrf_secret(); zUuid = (const char *)pUuid; blob_zero(&tktchng); | | < < < > > | > > > | | | | | | | | | < | 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 | int i; int rid; Blob tktchng, cksum; login_verify_csrf_secret(); zUuid = (const char *)pUuid; blob_zero(&tktchng); zDate = date_in_standard_format("now"); blob_appendf(&tktchng, "D %s\n", zDate); free(zDate); for(i=0; i<nField; i++){ if( azAppend[i] ){ blob_appendf(&tktchng, "J +%s %z\n", azField[i], fossilize(azAppend[i], -1)); } } for(i=0; i<nField; i++){ const char *zValue; int nValue; if( azAppend[i] ) continue; zValue = Th_Fetch(azField[i], &nValue); if( zValue ){ while( nValue>0 && fossil_isspace(zValue[nValue-1]) ){ nValue--; } if( strncmp(zValue, azValue[i], nValue) || strlen(azValue[i])!=nValue ){ if( strncmp(azField[i], "private_", 8)==0 ){ zValue = db_conceal(zValue, nValue); blob_appendf(&tktchng, "J %s %s\n", azField[i], zValue); }else{ blob_appendf(&tktchng, "J %s %#F\n", azField[i], nValue, zValue); } } } } if( *(char**)pUuid ){ zUuid = db_text(0, "SELECT tkt_uuid FROM ticket WHERE tkt_uuid GLOB '%s*'", P("name") |
︙ | ︙ | |||
476 477 478 479 480 481 482 | @ <hr /></font> return TH_OK; }else if( g.thTrace ){ Th_Trace("submit_ticket {\n<blockquote><pre>\n%h\n</pre></blockquote>\n" "}<br />\n", blob_str(&tktchng)); }else{ | | > | 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 | @ <hr /></font> return TH_OK; }else if( g.thTrace ){ Th_Trace("submit_ticket {\n<blockquote><pre>\n%h\n</pre></blockquote>\n" "}<br />\n", blob_str(&tktchng)); }else{ rid = content_put(&tktchng); if( rid==0 ){ fossil_panic("trouble committing ticket: %s", g.zErrMsg); } manifest_crosslink_begin(); manifest_crosslink(rid, &tktchng); assert( blob_is_reset(&tktchng) ); manifest_crosslink_end(); } return TH_RETURN; } /* |
︙ | ︙ | |||
515 516 517 518 519 520 521 | } style_header("New Ticket"); if( g.thTrace ) Th_Trace("BEGIN_TKTNEW<br />\n", -1); ticket_init(); getAllTicketFields(); initializeVariablesFromDb(); initializeVariablesFromCGI(); | | | | 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 | } style_header("New Ticket"); if( g.thTrace ) Th_Trace("BEGIN_TKTNEW<br />\n", -1); ticket_init(); getAllTicketFields(); initializeVariablesFromDb(); initializeVariablesFromCGI(); @ <form method="post" action="%s(g.zTop)/%s(g.zPath)"><p> login_insert_csrf_secret(); @ </p> zScript = ticket_newpage_code(); Th_Store("login", g.zLogin); Th_Store("date", db_text(0, "SELECT datetime('now')")); Th_CreateCommand(g.interp, "submit_ticket", submitTicketCmd, (void*)&zNewUuid, 0); if( g.thTrace ) Th_Trace("BEGIN_TKTNEW_SCRIPT<br />\n", -1); if( Th_Render(zScript)==TH_RETURN && !g.thTrace && zNewUuid ){ cgi_redirect(mprintf("%s/tktview/%s", g.zTop, zNewUuid)); return; } @ </form> if( g.thTrace ) Th_Trace("END_TKTVIEW<br />\n", -1); style_footer(); } |
︙ | ︙ | |||
581 582 583 584 585 586 587 | return; } if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT<br />\n", -1); ticket_init(); getAllTicketFields(); initializeVariablesFromCGI(); initializeVariablesFromDb(); | | | | 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 | return; } if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT<br />\n", -1); ticket_init(); getAllTicketFields(); initializeVariablesFromCGI(); initializeVariablesFromDb(); @ <form method="post" action="%s(g.zTop)/%s(g.zPath)"><p> @ <input type="hidden" name="name" value="%s(zName)" /> login_insert_csrf_secret(); @ </p> zScript = ticket_editpage_code(); Th_Store("login", g.zLogin); Th_Store("date", db_text(0, "SELECT datetime('now')")); Th_CreateCommand(g.interp, "append_field", appendRemarkCmd, 0, 0); Th_CreateCommand(g.interp, "submit_ticket", submitTicketCmd, (void*)&zName,0); if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT_SCRIPT<br />\n", -1); if( Th_Render(zScript)==TH_RETURN && !g.thTrace && zName ){ cgi_redirect(mprintf("%s/tktview/%s", g.zTop, zName)); return; } @ </form> if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT<br />\n", -1); style_footer(); } |
︙ | ︙ | |||
701 702 703 704 705 706 707 | " WHERE target=%Q) " "ORDER BY mtime DESC", timeline_query_for_www(), tagid, zFullUuid, zFullUuid, zFullUuid ); } db_prepare(&q, zSQL); free(zSQL); | | > | 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 | " WHERE target=%Q) " "ORDER BY mtime DESC", timeline_query_for_www(), tagid, zFullUuid, zFullUuid, zFullUuid ); } db_prepare(&q, zSQL); free(zSQL); www_print_timeline(&q, TIMELINE_ARTID|TIMELINE_DISJOINT|TIMELINE_GRAPH, 0, 0, 0); db_finalize(&q); style_footer(); } /* ** WEBPAGE: tkthistory ** URL: /tkthistory?name=TICKETUUID |
︙ | ︙ | |||
847 848 849 850 851 852 853 | ** options can be: ** ?-l|--limit LIMITCHAR? ** ?-q|--quote? ** ?-R|--repository FILE? ** ** Run the ticket report, identified by the report format title ** used in the gui. The data is written as flat file on stdout, | | | 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 | ** options can be: ** ?-l|--limit LIMITCHAR? ** ?-q|--quote? ** ?-R|--repository FILE? ** ** Run the ticket report, identified by the report format title ** used in the gui. The data is written as flat file on stdout, ** using "," as separator. The separator "," can be changed using ** the -l or --limit option. ** If TICKETFILTER is given on the commandline, the query is ** limited with a new WHERE-condition. ** example: Report lists a column # with the uuid ** TICKETFILTER may be [#]='uuuuuuuuu' ** example: Report only lists rows with status not open ** TICKETFILTER: status != 'open' |
︙ | ︙ | |||
899 900 901 902 903 904 905 | ** The values in set|add are not validated against the definitions ** given in "Ticket Common Script". */ void ticket_cmd(void){ int n; /* do some ints, we want to be inside a checkout */ | | | 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 | ** The values in set|add are not validated against the definitions ** given in "Ticket Common Script". */ void ticket_cmd(void){ int n; /* do some ints, we want to be inside a checkout */ db_find_and_open_repository(0, 0); user_select(); /* ** Check that the user exists. */ if( !db_exists("SELECT 1 FROM user WHERE login=%Q", g.zLogin) ){ fossil_fatal("no such user: %s", g.zLogin); } |
︙ | ︙ | |||
1025 1026 1027 1028 1029 1030 1031 | } /* now add the needed artifacts to the repository */ blob_zero(&tktchng); { /* add the time to the ticket manifest */ char *zDate; | | < | 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 | } /* now add the needed artifacts to the repository */ blob_zero(&tktchng); { /* add the time to the ticket manifest */ char *zDate; zDate = date_in_standard_format("now"); blob_appendf(&tktchng, "D %s\n", zDate); free(zDate); } /* append defined elements */ for(i=0; i<nField; i++){ char *zValue; |
︙ | ︙ | |||
1052 1053 1054 1055 1056 1057 1058 | } } } blob_appendf(&tktchng, "K %s\n", zTktUuid); blob_appendf(&tktchng, "U %F\n", g.zLogin); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); | | > | 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 | } } } blob_appendf(&tktchng, "K %s\n", zTktUuid); blob_appendf(&tktchng, "U %F\n", g.zLogin); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); rid = content_put(&tktchng); if( rid==0 ){ fossil_panic("trouble committing ticket: %s", g.zErrMsg); } manifest_crosslink_begin(); manifest_crosslink(rid, &tktchng); manifest_crosslink_end(); assert( blob_is_reset(&tktchng) ); printf("ticket %s succeeded for UID %s\n", (eCmd==set?"set":"add"),zTktUuid); } } } } |
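Both ticket submission paths in this file build the ticket-change artifact the same way: a D card with the timestamp, one J card per changed field, a K card naming the ticket, a U card for the user, and finally a Z card holding an MD5 checksum of everything before it. The sketch below only restates that assembly with plain stdio string building; fossil_escape() and md5_hex() are hypothetical placeholders standing in for Fossil's fossilize() and md5sum_blob(), given trivial bodies here solely so the sketch compiles.

#include <stdio.h>

/* Hypothetical stand-ins for Fossil's own helpers:
**   fossil_escape() - the real fossilize() encodes the value for a card
**   md5_hex()       - the real code MD5-hashes the text built so far
*/
static const char *fossil_escape(const char *zValue){ return zValue; }
static const char *md5_hex(const char *zText){ (void)zText; return "<md5-of-preceding-cards>"; }

/* Assemble the cards of a ticket-change artifact in the order used above:
** D (date), J (one per field), K (ticket uuid), U (user), Z (checksum).
** For brevity the sketch assumes zOut is always large enough. */
static void build_ticket_change(char *zOut, size_t nOut,
                                const char *zDate, const char *zTktUuid,
                                const char *zUser,
                                const char **azField, const char **azValue,
                                int nField){
  size_t n = 0;
  int i;
  n += snprintf(zOut+n, nOut-n, "D %s\n", zDate);
  for(i=0; i<nField; i++){
    n += snprintf(zOut+n, nOut-n, "J %s %s\n",
                  azField[i], fossil_escape(azValue[i]));
  }
  n += snprintf(zOut+n, nOut-n, "K %s\n", zTktUuid);
  n += snprintf(zOut+n, nOut-n, "U %s\n", zUser);
  /* The Z card is a checksum over every byte appended so far. */
  snprintf(zOut+n, nOut-n, "Z %s\n", md5_hex(zOut));
}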
Changes to src/tktsetup.c.
︙ | ︙ | |||
128 129 130 131 132 133 134 | @ <p class="tktsetupError">ERROR: %h(zErr)</p> }else{ db_set(zDbField, z, 0); if( xRebuild ) xRebuild(); cgi_redirect("tktsetup"); } } | | | 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | @ <p class="tktsetupError">ERROR: %h(zErr)</p> }else{ db_set(zDbField, z, 0); if( xRebuild ) xRebuild(); cgi_redirect("tktsetup"); } } @ <form action="%s(g.zTop)/%s(g.zPath)" method="post"><div> login_insert_csrf_secret(); @ <p>%s(zDesc)</p> @ <textarea name="x" rows="%d(height)" cols="80">%h(z)</textarea> @ <blockquote><p> @ <input type="submit" name="submit" value="Apply Changes" /> @ <input type="submit" name="clear" value="Revert To Default" /> @ <input type="submit" name="setup" value="Cancel" /> |
︙ | ︙ | |||
432 433 434 435 436 437 438 | @ if {[info exists submit]} { @ if {[info exists cmappnd]} { @ if {[string length $cmappnd]>0} { @ set ctxt "\n\n<hr /><i>[htmlize $login]" @ if {$username ne $login} { @ set ctxt "$ctxt claiming to be [htmlize $username]" @ } | | | 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 | @ if {[info exists submit]} { @ if {[info exists cmappnd]} { @ if {[string length $cmappnd]>0} { @ set ctxt "\n\n<hr /><i>[htmlize $login]" @ if {$username ne $login} { @ set ctxt "$ctxt claiming to be [htmlize $username]" @ } @ set ctxt "$ctxt added on [date] UTC:</i><br />\n$cmappnd" @ append_field comment $ctxt @ } @ } @ submit_ticket @ } @ </th1> @ <table cellpadding="5"> |
︙ | ︙ | |||
697 698 699 700 701 702 703 | } if( P("setup") ){ cgi_redirect("tktsetup"); } style_header("Ticket Display On Timelines"); db_begin_transaction(); | | | 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 | } if( P("setup") ){ cgi_redirect("tktsetup"); } style_header("Ticket Display On Timelines"); db_begin_transaction(); @ <form action="%s(g.zTop)/tktsetup_timeline" method="post"><div> login_insert_csrf_secret(); @ <hr /> entry_attribute("Ticket Title", 40, "ticket-title-expr", "t", "title"); @ <p>An SQL expression in a query against the TICKET table that will @ return the title of the ticket for display purposes.</p> |
︙ | ︙ |
Changes to src/translate.c.
︙ | ︙ | |||
11 12 13 14 15 16 17 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** | | > | | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** Input lines that begin with the "@" character are translated into ** either cgi_printf() statements or string literals and the ** translated code is written on standard output. ** ** The problem this program attempts to solve is as follows: When ** writing CGI programs in C, we typically want to output a lot of HTML ** text to standard output. In pure C code, this involves doing a ** printf() with a big string containing all that text. But we have ** to insert special codes (ex: \n and \") for many common characters, ** which interferes with the readability of the HTML.
︙ | ︙ |
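Put differently, translate is a tiny preprocessor: a source line that starts with "@" becomes a cgi_printf() call (or a string literal where one is wanted), so page text can be written without C escape clutter. The demo below is a rough before/after illustration using the %s(expr) substitution style visible throughout this check-in; the exact code the real translator emits may differ in detail, and cgi_printf is replaced by printf here only to keep the sketch self-contained.

#include <stdio.h>

/* Stand-in for Fossil's cgi_printf() so the sketch compiles on its own; the
** real function also understands extra conversions such as %h and %T. */
#define cgi_printf printf

int main(void){
  const char *zTop = "/fossil";   /* plays the role of g.zTop */

  /* What the page author writes inside a Fossil source file:
  **
  **   @ <form action="%s(zTop)/tktnew" method="post">
  **
  ** and roughly what the translator turns that line into: */
  cgi_printf("<form action=\"%s/tktnew\" method=\"post\">\n", zTop);
  return 0;
}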
Changes to src/undo.c.
︙ | ︙ | |||
28 29 30 31 32 33 34 | ** true the redo a change. If there is nothing to undo (or redo) then ** this routine is a noop. */ static void undo_one(const char *zPathname, int redoFlag){ Stmt q; char *zFullname; db_prepare(&q, | | > > > > > > > | > | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 | ** true the redo a change. If there is nothing to undo (or redo) then ** this routine is a noop. */ static void undo_one(const char *zPathname, int redoFlag){ Stmt q; char *zFullname; db_prepare(&q, "SELECT content, existsflag, isExe FROM undo" " WHERE pathname=%Q AND redoflag=%d", zPathname, redoFlag ); if( db_step(&q)==SQLITE_ROW ){ int old_exists; int new_exists; int old_exe; int new_exe; Blob current; Blob new; zFullname = mprintf("%s/%s", g.zLocalRoot, zPathname); new_exists = file_size(zFullname)>=0; if( new_exists ){ blob_read_from_file(¤t, zFullname); new_exe = file_isexe(zFullname); }else{ blob_zero(¤t); new_exe = 0; } blob_zero(&new); old_exists = db_column_int(&q, 1); old_exe = db_column_int(&q, 2); if( old_exists ){ db_ephemeral_blob(&q, 0, &new); } if( old_exists ){ if( new_exists ){ printf("%s %s\n", redoFlag ? "REDO" : "UNDO", zPathname); }else{ printf("NEW %s\n", zPathname); } blob_write_to_file(&new, zFullname); file_setexe(zFullname, old_exe); }else{ printf("DELETE %s\n", zPathname); unlink(zFullname); } blob_reset(&new); free(zFullname); db_finalize(&q); db_prepare(&q, "UPDATE undo SET content=:c, existsflag=%d, isExe=%d," " redoflag=NOT redoflag" " WHERE pathname=%Q", new_exists, new_exe, zPathname ); if( new_exists ){ db_bind_blob(&q, ":c", ¤t); } db_step(&q); blob_reset(¤t); } |
︙ | ︙ | |||
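undo_one() above keeps the previous bytes of each touched file in the undo table, writes the saved copy back to disk, and then stores the bytes it just replaced so the same row can serve a later redo. The standalone sketch below shows the storage half of that idea with SQLite's public API: save a file's bytes as a BLOB keyed by pathname, then restore them on request. Permissions, the redoflag bookkeeping, and Fossil's Blob type are all omitted; none of this is Fossil code.

/* undo_sketch.c - save and restore one file's bytes through an SQLite table.
** Build: cc undo_sketch.c -lsqlite3 -o undo_sketch
** Usage: ./undo_sketch UNDO-DB FILE
*/
#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

/* Remember the current content of zPath in the undo table. */
static int save_file(sqlite3 *db, const char *zPath){
  FILE *f = fopen(zPath, "rb");
  long sz;
  char *zBuf;
  sqlite3_stmt *st;
  if( f==0 ) return -1;
  fseek(f, 0, SEEK_END);
  sz = ftell(f);
  rewind(f);
  zBuf = malloc(sz>0 ? (size_t)sz : 1);
  if( zBuf==0 ){ fclose(f); return -1; }
  fread(zBuf, 1, (size_t)sz, f);
  fclose(f);
  sqlite3_prepare_v2(db,
    "INSERT OR REPLACE INTO undo(pathname, content) VALUES(?1, ?2)",
    -1, &st, 0);
  sqlite3_bind_text(st, 1, zPath, -1, SQLITE_TRANSIENT);
  sqlite3_bind_blob(st, 2, zBuf, (int)sz, SQLITE_TRANSIENT);
  sqlite3_step(st);
  sqlite3_finalize(st);
  free(zBuf);
  return 0;
}

/* Write the saved content of zPath back to disk. */
static int restore_file(sqlite3 *db, const char *zPath){
  sqlite3_stmt *st;
  int rc = -1;
  sqlite3_prepare_v2(db,
    "SELECT content FROM undo WHERE pathname=?1", -1, &st, 0);
  sqlite3_bind_text(st, 1, zPath, -1, SQLITE_TRANSIENT);
  if( sqlite3_step(st)==SQLITE_ROW ){
    const void *p = sqlite3_column_blob(st, 0);
    int n = sqlite3_column_bytes(st, 0);
    FILE *f = fopen(zPath, "wb");
    if( f ){
      if( n>0 ) fwrite(p, 1, (size_t)n, f);
      fclose(f);
      rc = 0;
    }
  }
  sqlite3_finalize(st);
  return rc;
}

int main(int argc, char **argv){
  sqlite3 *db;
  if( argc!=3 ){
    fprintf(stderr, "usage: %s UNDO-DB FILE\n", argv[0]);
    return 1;
  }
  sqlite3_open(argv[1], &db);
  sqlite3_exec(db,
    "CREATE TABLE IF NOT EXISTS undo(pathname TEXT PRIMARY KEY, content BLOB)",
    0, 0, 0);
  save_file(db, argv[2]);     /* remember the bytes before they are replaced */
  /* ... an update or merge could rewrite the file here ... */
  restore_file(db, argv[2]);  /* and put the old bytes back on request */
  sqlite3_close(db);
  return 0;
}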
102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 | /* ** Undo or redo all undoable or redoable changes. */ static void undo_all(int redoFlag){ int ucid; int ncid; undo_all_filesystem(redoFlag); db_multi_exec( "CREATE TEMP TABLE undo_vfile_2 AS SELECT * FROM vfile;" "DELETE FROM vfile;" "INSERT INTO vfile SELECT * FROM undo_vfile;" "DELETE FROM undo_vfile;" "INSERT INTO undo_vfile SELECT * FROM undo_vfile_2;" "DROP TABLE undo_vfile_2;" "CREATE TEMP TABLE undo_vmerge_2 AS SELECT * FROM vmerge;" "DELETE FROM vmerge;" "INSERT INTO vmerge SELECT * FROM undo_vmerge;" "DELETE FROM undo_vmerge;" "INSERT INTO undo_vmerge SELECT * FROM undo_vmerge_2;" "DROP TABLE undo_vmerge_2;" ); ncid = db_lget_int("undo_checkout", 0); ucid = db_lget_int("checkout", 0); db_lset_int("undo_checkout", ucid); db_lset_int("checkout", ncid); } /* ** Reset the undo memory. */ void undo_reset(void){ static const char zSql[] = @ DROP TABLE IF EXISTS undo; @ DROP TABLE IF EXISTS undo_vfile; @ DROP TABLE IF EXISTS undo_vmerge; | > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | | < > | > > > > > > > > | 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 | /* ** Undo or redo all undoable or redoable changes. */ static void undo_all(int redoFlag){ int ucid; int ncid; const char *zDb = db_name("localdb"); undo_all_filesystem(redoFlag); db_multi_exec( "CREATE TEMP TABLE undo_vfile_2 AS SELECT * FROM vfile;" "DELETE FROM vfile;" "INSERT INTO vfile SELECT * FROM undo_vfile;" "DELETE FROM undo_vfile;" "INSERT INTO undo_vfile SELECT * FROM undo_vfile_2;" "DROP TABLE undo_vfile_2;" "CREATE TEMP TABLE undo_vmerge_2 AS SELECT * FROM vmerge;" "DELETE FROM vmerge;" "INSERT INTO vmerge SELECT * FROM undo_vmerge;" "DELETE FROM undo_vmerge;" "INSERT INTO undo_vmerge SELECT * FROM undo_vmerge_2;" "DROP TABLE undo_vmerge_2;" ); if(db_exists("SELECT 1 FROM %s.sqlite_master WHERE name='undo_stash'", zDb) ){ if( redoFlag ){ db_multi_exec( "DELETE FROM stash WHERE stashid IN (SELECT stashid FROM undo_stash);" "DELETE FROM stashfile" " WHERE stashid NOT IN (SELECT stashid FROM stash);" ); }else{ db_multi_exec( "INSERT OR IGNORE INTO stash SELECT * FROM undo_stash;" "INSERT OR IGNORE INTO stashfile SELECT * FROM undo_stashfile;" ); } } ncid = db_lget_int("undo_checkout", 0); ucid = db_lget_int("checkout", 0); db_lset_int("undo_checkout", ucid); db_lset_int("checkout", ncid); } /* ** Reset the undo memory. */ void undo_reset(void){ static const char zSql[] = @ DROP TABLE IF EXISTS undo; @ DROP TABLE IF EXISTS undo_vfile; @ DROP TABLE IF EXISTS undo_vmerge; @ DROP TABLE IF EXISTS undo_stash; @ DROP TABLE IF EXISTS undo_stashfile; ; db_multi_exec(zSql); db_lset_int("undo_available", 0); db_lset_int("undo_checkout", 0); } /* ** The following variable stores the original command-line of the ** command that is a candidate to be undone. 
*/ static char *undoCmd = 0; /* ** This flag is true if we are in the process of collecting file changes ** for undo. When this flag is false, undo_save() is a no-op. ** ** The undoDisable flag, if set, prevents undo from being activated. */ static int undoActive = 0; static int undoDisable = 0; /* ** Capture the current command-line and store it as part of the undo ** state. This routine is called before options are extracted from the ** command-line so that we can record the complete command-line. */ void undo_capture_command_line(void){ Blob cmdline; int i; if( undoCmd!=0 || undoDisable ) return; blob_zero(&cmdline); for(i=1; i<g.argc; i++){ if( i>1 ) blob_append(&cmdline, " ", 1); blob_append(&cmdline, g.argv[i], -1); } undoCmd = blob_str(&cmdline); } /* ** Begin capturing a snapshot that can be undone. */ void undo_begin(void){ int cid; const char *zDb = db_name("localdb"); static const char zSql[] = @ CREATE TABLE %s.undo( @ pathname TEXT UNIQUE, -- Name of the file @ redoflag BOOLEAN, -- 0 for undoable. 1 for redoable @ existsflag BOOLEAN, -- True if the file exists @ isExe BOOLEAN, -- True if the file is executable @ content BLOB -- Saved content @ ); @ CREATE TABLE %s.undo_vfile AS SELECT * FROM vfile; @ CREATE TABLE %s.undo_vmerge AS SELECT * FROM vmerge; ; if( undoDisable ) return; undo_reset(); db_multi_exec(zSql, zDb, zDb, zDb); cid = db_lget_int("checkout", 0); db_lset_int("undo_checkout", cid); db_lset_int("undo_available", 1); db_lset("undo_cmdline", undoCmd); undoActive = 1; } /* ** Permanently disable undo */ void undo_disable(void){ undoDisable = 1; } /* ** This flag is true if one or more files have changed and have been ** recorded in the undo log but the undo log has not yet been committed. ** ** If a fatal error occurs and this flag is set, that means we should ** rollback all the filesystem changes. |
︙ | ︙ | |||
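undo_all() above restores the pre-operation vfile and vmerge tables by exchanging each table's rows with its saved undo_* copy through a temporary table, which is why the same statement sequence serves both undo and redo. The helper below is a generic sketch of that swap using SQLite's public API; it assumes the two tables have identical columns, as the paired tables above do, and a real caller would run it inside a transaction just as undo_all() does.

#include <sqlite3.h>

/* Exchange the full contents of two identically-shaped tables, mirroring
** the vfile/undo_vfile swap performed by undo_all(). */
static int swap_tables(sqlite3 *db, const char *zA, const char *zB){
  char *zSql = sqlite3_mprintf(
    "CREATE TEMP TABLE swap_tmp AS SELECT * FROM \"%w\";"  /* save A          */
    "DELETE FROM \"%w\";"
    "INSERT INTO \"%w\" SELECT * FROM \"%w\";"             /* B -> A          */
    "DELETE FROM \"%w\";"
    "INSERT INTO \"%w\" SELECT * FROM swap_tmp;"           /* old A -> B      */
    "DROP TABLE swap_tmp;",
    zA, zA, zA, zB, zB, zB);
  int rc = sqlite3_exec(db, zSql, 0, 0, 0);
  sqlite3_free(zSql);
  return rc;
}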
189 190 191 192 193 194 195 | void undo_save(const char *zPathname){ char *zFullname; Blob content; int existsFlag; Stmt q; if( !undoActive ) return; | | | | | > > > > > > > > > > > > > > > > > > > > > > > | 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 | void undo_save(const char *zPathname){ char *zFullname; Blob content; int existsFlag; Stmt q; if( !undoActive ) return; zFullname = mprintf("%s%s", g.zLocalRoot, zPathname); existsFlag = file_size(zFullname)>=0; db_prepare(&q, "INSERT OR IGNORE INTO undo(pathname,redoflag,existsflag,isExe,content)" " VALUES(%Q,0,%d,%d,:c)", zPathname, existsFlag, file_isexe(zFullname) ); if( existsFlag ){ blob_read_from_file(&content, zFullname); db_bind_blob(&q, ":c", &content); } free(zFullname); db_step(&q); db_finalize(&q); if( existsFlag ){ blob_reset(&content); } undoNeedRollback = 1; } /* ** Make the current state of stashid undoable. */ void undo_save_stash(int stashid){ const char *zDb = db_name("localdb"); db_multi_exec( "DROP TABLE IF EXISTS undo_stash;" "CREATE TABLE %s.undo_stash AS" " SELECT * FROM stash WHERE stashid=%d;", zDb, stashid ); db_multi_exec( "DROP TABLE IF EXISTS undo_stashfile;" "CREATE TABLE %s.undo_stashfile AS" " SELECT * FROM stashfile WHERE stashid=%d;", zDb, stashid ); } /* ** Complete the undo process is one is currently in process. */ void undo_finish(void){ if( undoActive ){ if( undoNeedRollback ){ printf("\"fossil undo\" is available to undo changes" " to the working checkout.\n"); } undoActive = 0; undoNeedRollback = 0; } } /* ** This routine is called when the process aborts due to an error. |
︙ | ︙ | |||
239 240 241 242 243 244 245 246 | undoActive = 0; printf("Rolling back prior filesystem changes...\n"); undo_all_filesystem(0); } /* ** COMMAND: undo ** | > | > | > > > > > > > | | > > > > > > > > > | | < < < | | > | | | < < | | | < | < < | > > > | < < | < < < < < < < < < < < < | > > | > | > > > > > | > | | | | | | | | | | | | | | | | | > > > > > > | 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | undoActive = 0; printf("Rolling back prior filesystem changes...\n"); undo_all_filesystem(0); } /* ** COMMAND: undo ** COMMAND: redo ** ** Usage: %fossil undo ?--explain? ?FILENAME...? ** or: %fossil redo ?--explain? ?FILENAME...? ** ** Undo the changes to the working checkout caused by the most recent ** of the following operations: ** ** (1) fossil update (5) fossil stash apply ** (2) fossil merge (6) fossil stash drop ** (3) fossil revert (7) fossil stash goto ** (4) fossil stash pop ** ** If FILENAME is specified then restore the content of the named ** file(s) but otherwise leave the update or merge or revert in effect. ** The redo command undoes the effect of the most recent undo. ** ** If the --explain option is present, no changes are made and instead ** the undo or redo command explains what actions the undo or redo would ** have done had the --explain been omitted. ** ** A single level of undo/redo is supported. The undo/redo stack ** is cleared by the commit and checkout commands. */ void undo_cmd(void){ int isRedo = g.argv[1][0]=='r'; int undo_available; int explainFlag = find_option("explain", 0, 0)!=0; const char *zCmd = isRedo ? "redo" : "undo"; db_must_be_within_tree(); verify_all_options(); db_begin_transaction(); undo_available = db_lget_int("undo_available", 0); if( explainFlag ){ if( undo_available==0 ){ printf("No undo or redo is available\n"); }else{ Stmt q; int nChng = 0; zCmd = undo_available==1 ? "undo" : "redo"; printf("A %s is available for the following command:\n\n %s %s\n\n", zCmd, g.argv[0], db_lget("undo_cmdline", "???")); db_prepare(&q, "SELECT existsflag, pathname FROM undo ORDER BY pathname" ); while( db_step(&q)==SQLITE_ROW ){ if( nChng==0 ){ printf("The following file changes would occur if the " "command above is %sne:\n\n", zCmd); } nChng++; printf("%s %s\n", db_column_int(&q,0) ? "UPDATE" : "DELETE", db_column_text(&q, 1) ); } db_finalize(&q); if( nChng==0 ){ printf("No file changes would occur with this undo/redo.\n"); } } }else{ int vid1 = db_lget_int("checkout", 0); int vid2; if( g.argc==2 ){ if( undo_available!=(1+isRedo) ){ fossil_fatal("nothing to %s", zCmd); } undo_all(isRedo); db_lset_int("undo_available", 2-isRedo); }else if( g.argc>=3 ){ int i; if( undo_available==0 ){ fossil_fatal("nothing to %s", zCmd); } for(i=2; i<g.argc; i++){ const char *zFile = g.argv[i]; Blob path; file_tree_name(zFile, &path, 1); undo_one(blob_str(&path), isRedo); blob_reset(&path); } } vid2 = db_lget_int("checkout", 0); if( vid1!=vid2 ){ printf("--------------------\n"); show_common_info(vid2, "updated-to:", 1, 0); } } db_end_transaction(0); } |
Changes to src/update.c.
︙ | ︙ | |||
24 25 26 27 28 29 30 31 32 33 34 35 36 37 | /* ** Return true if artifact rid is a version */ int is_a_version(int rid){ return db_exists("SELECT 1 FROM event WHERE objid=%d AND type='ci'", rid); } /* ** COMMAND: update ** ** Usage: %fossil update ?VERSION? ?FILES...? ** ** Change the version of the current checkout to VERSION. Any uncommitted | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 | /* ** Return true if artifact rid is a version */ int is_a_version(int rid){ return db_exists("SELECT 1 FROM event WHERE objid=%d AND type='ci'", rid); } /* This variable is set if we are doing an internal update. It is clear ** when running the "update" command. */ static int internalUpdate = 0; static int internalConflictCnt = 0; /* ** Do an update to version vid. ** ** Start an undo session but do not terminate it. Do not autosync. */ int update_to(int vid){ int savedArgc; char **savedArgv; char *newArgv[3]; newArgv[0] = g.argv[0]; newArgv[1] = "update"; newArgv[2] = 0; savedArgv = g.argv; savedArgc = g.argc; g.argc = 2; g.argv = newArgv; internalUpdate = vid; internalConflictCnt = 0; update_cmd(); g.argc = savedArgc; g.argv = savedArgv; return internalConflictCnt; } /* ** COMMAND: update ** ** Usage: %fossil update ?VERSION? ?FILES...? ** ** Change the version of the current checkout to VERSION. Any uncommitted |
︙ | ︙ | |||
59 60 61 62 63 64 65 66 | void update_cmd(void){ int vid; /* Current version */ int tid=0; /* Target version - version we are changing to */ Stmt q; int latestFlag; /* --latest. Pick the latest version if true */ int nochangeFlag; /* -n or --nochange. Do a dry run */ int verboseFlag; /* -v or --verbose. Output extra information */ | > > > > > > > > | > > | > > | > > > > > > > | > > > > > > > > > > > > > > | | | | | | | | | > | | | > > < < | > > > > | | > | < > | < > > > > | | > > | | | > > | > > > | > | | > > > > > > > > > > > > > > > > > | > | | | | | < < < > > | | > | 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 | void update_cmd(void){ int vid; /* Current version */ int tid=0; /* Target version - version we are changing to */ Stmt q; int latestFlag; /* --latest. Pick the latest version if true */ int nochangeFlag; /* -n or --nochange. Do a dry run */ int verboseFlag; /* -v or --verbose. Output extra information */ int debugFlag; /* --debug option */ int nChng; /* Number of file renames */ int *aChng; /* Array of file renames */ int i; /* Loop counter */ int nConflict = 0; /* Number of merge conflicts */ Stmt mtimeXfer; /* Statment to transfer mtimes */ if( !internalUpdate ){ undo_capture_command_line(); url_proxy_options(); } latestFlag = find_option("latest",0, 0)!=0; nochangeFlag = find_option("nochange","n",0)!=0; verboseFlag = find_option("verbose","v",0)!=0; debugFlag = find_option("debug",0,0)!=0; db_must_be_within_tree(); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_fatal("cannot find current version"); } if( !nochangeFlag && db_exists("SELECT 1 FROM vmerge") ){ fossil_fatal("cannot update an uncommitted merge"); } if( !nochangeFlag && !internalUpdate ) autosync(AUTOSYNC_PULL); if( internalUpdate ){ tid = internalUpdate; }else if( g.argc>=3 ){ if( strcmp(g.argv[2], "current")==0 ){ /* If VERSION is "current", then use the same algorithm to find the ** target as if VERSION were omitted. */ }else if( strcmp(g.argv[2], "latest")==0 ){ /* If VERSION is "latest", then use the same algorithm to find the ** target as if VERSION were omitted and the --latest flag is present. */ latestFlag = 1; }else{ tid = name_to_rid(g.argv[2]); if( tid==0 ){ fossil_fatal("no such version: %s", g.argv[2]); }else if( !is_a_version(tid) ){ fossil_fatal("no such version: %s", g.argv[2]); } } } /* If no VERSION is specified on the command-line, then look for a ** descendent of the current version. If there are multiple descendents, ** look for one from the same branch as the current version. If there ** are still multiple descendents, show them all and refuse to update ** until the user selects one. 
*/ if( tid==0 ){ int closeCode = 1; compute_leaves(vid, closeCode); if( !db_exists("SELECT 1 FROM leaves") ){ closeCode = 0; compute_leaves(vid, closeCode); } if( !latestFlag && db_int(0, "SELECT count(*) FROM leaves")>1 ){ db_multi_exec( "DELETE FROM leaves WHERE rid NOT IN" " (SELECT leaves.rid FROM leaves, tagxref" " WHERE leaves.rid=tagxref.rid AND tagxref.tagid=%d" " AND tagxref.value==(SELECT value FROM tagxref" " WHERE tagid=%d AND rid=%d))", TAG_BRANCH, TAG_BRANCH, vid ); if( db_int(0, "SELECT count(*) FROM leaves")>1 ){ compute_leaves(vid, closeCode); db_prepare(&q, "%s " " AND event.objid IN leaves" " ORDER BY event.mtime DESC", timeline_query_for_tty() ); print_timeline(&q, 100); db_finalize(&q); fossil_fatal("Multiple descendants"); } } tid = db_int(0, "SELECT rid FROM leaves, event" " WHERE event.objid=leaves.rid" " ORDER BY event.mtime DESC"); } if( !verboseFlag && (tid==vid)) return; /* Nothing to update */ db_begin_transaction(); vfile_check_signature(vid, 1, 0); if( !nochangeFlag && !internalUpdate ) undo_begin(); load_vfile_from_rid(tid); /* ** The record.fn field is used to match files against each other. The ** FV table contains one row for each each unique filename in ** in the current checkout, the pivot, and the version being merged. */ db_multi_exec( "DROP TABLE IF EXISTS fv;" "CREATE TEMP TABLE fv(" " fn TEXT PRIMARY KEY," /* The filename relative to root */ " idv INTEGER," /* VFILE entry for current version */ " idt INTEGER," /* VFILE entry for target version */ " chnged BOOLEAN," /* True if current version has been edited */ " ridv INTEGER," /* Record ID for current version */ " ridt INTEGER," /* Record ID for target */ " isexe BOOLEAN," /* Does target have execute permission? */ " fnt TEXT" /* Filename of same file on target version */ ");" ); /* Add files found in the current version */ db_multi_exec( "INSERT OR IGNORE INTO fv(fn,fnt,idv,idt,ridv,ridt,isexe,chnged)" " SELECT pathname, pathname, id, 0, rid, 0, isexe, chnged" " FROM vfile WHERE vid=%d", vid ); /* Compute file name changes on V->T. Record name changes in files that ** have changed locally. */ find_filename_changes(vid, tid, &nChng, &aChng); if( nChng ){ for(i=0; i<nChng; i++){ db_multi_exec( "UPDATE fv" " SET fnt=(SELECT name FROM filename WHERE fnid=%d)" " WHERE fn=(SELECT name FROM filename WHERE fnid=%d) AND chnged", aChng[i*2+1], aChng[i*2] ); } fossil_free(aChng); } /* Add files found in the target version T but missing from the current ** version V. */ db_multi_exec( "INSERT OR IGNORE INTO fv(fn,fnt,idv,idt,ridv,ridt,isexe,chnged)" " SELECT pathname, pathname, 0, 0, 0, 0, isexe, 0 FROM vfile" " WHERE vid=%d" " AND pathname NOT IN (SELECT fnt FROM fv)", tid ); /* ** Compute the file version ids for T */ db_multi_exec( "UPDATE fv SET" " idt=coalesce((SELECT id FROM vfile WHERE vid=%d AND pathname=fnt),0)," " ridt=coalesce((SELECT rid FROM vfile WHERE vid=%d AND pathname=fnt),0)", tid, tid ); if( debugFlag ){ db_prepare(&q, "SELECT rowid, fn, fnt, chnged, ridv, ridt, isexe FROM fv" ); while( db_step(&q)==SQLITE_ROW ){ printf("%3d: ridv=%-4d ridt=%-4d chnged=%d isexe=%d\n", db_column_int(&q, 0), db_column_int(&q, 4), db_column_int(&q, 5), db_column_int(&q, 3), db_column_int(&q, 6)); printf(" fnv = [%s]\n", db_column_text(&q, 1)); printf(" fnt = [%s]\n", db_column_text(&q, 2)); } db_finalize(&q); } /* If FILES appear on the command-line, remove from the "fv" table ** every entry that is not named on the command-line or which is not ** in a directory named on the command-line. 
*/ if( g.argc>=4 ){ Blob sql; /* SQL statement to purge unwanted entries */ |
︙ | ︙ | |||
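The loop that follows walks the fv table built above and decides, for each file, whether to ADD, UPDATE, MERGE, REMOVE, keep, or report a CONFLICT, purely from the idv/idt/ridv/ridt/chnged columns. The decision ladder is easier to see with the I/O stripped away; the sketch below is only a condensed restatement of those branches (the file_size() fallback and rename handling are omitted), not a replacement for the real code.

/* Possible dispositions for one file during "fossil update". */
enum update_action {
  UA_CONFLICT,   /* added locally but also present in the target        */
  UA_ADD,        /* only in the target: extract it                      */
  UA_UPDATE,     /* unedited locally, changed in the target: overwrite  */
  UA_KEEP_ADD,   /* added locally, unknown to both versions: keep it    */
  UA_REMOVE,     /* unedited locally, gone from the target: delete      */
  UA_MERGE,      /* edited locally and changed in the target: 3-way merge */
  UA_UNCHANGED   /* nothing to do                                       */
};

/* Condensed form of the branch structure in the update loop that follows. */
static enum update_action classify(int idv, int ridv, int idt, int ridt,
                                   int chnged){
  if( idv>0 && ridv==0 && idt>0 && ridt>0 ) return UA_CONFLICT;
  if( idt>0 && idv==0 )                     return UA_ADD;
  if( idt>0 && idv>0 && ridt!=ridv && !chnged ) return UA_UPDATE;
  if( idt==0 && idv>0 ){
    if( ridv==0 ) return UA_KEEP_ADD;
    if( chnged )  return UA_CONFLICT;   /* edited locally, deleted in target */
    return UA_REMOVE;
  }
  if( idt>0 && idv>0 && ridt!=ridv && chnged ) return UA_MERGE;
  return UA_UNCHANGED;
}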
197 198 199 200 201 202 203 204 | zSep = "AND "; blob_reset(&treename); } db_multi_exec(blob_str(&sql)); blob_reset(&sql); } db_prepare(&q, | > > > > | > > > > > > > > > > > < > | > | > < < < | > > > | > < | < | > > > | > > > > > | > > < | | > > > > | > > > > > > > > > > > > > > > | > < | | | > | | < > > > | 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 | zSep = "AND "; blob_reset(&treename); } db_multi_exec(blob_str(&sql)); blob_reset(&sql); } /* ** Alter the content of the checkout so that it conforms with the ** target */ db_prepare(&q, "SELECT fn, idv, ridv, idt, ridt, chnged, fnt, isexe FROM fv ORDER BY 1" ); db_prepare(&mtimeXfer, "UPDATE vfile SET mtime=(SELECT mtime FROM vfile WHERE id=:idv)" " WHERE id=:idt" ); assert( g.zLocalRoot!=0 ); assert( strlen(g.zLocalRoot)>1 ); assert( g.zLocalRoot[strlen(g.zLocalRoot)-1]=='/' ); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); /* The filename from root */ int idv = db_column_int(&q, 1); /* VFILE entry for current */ int ridv = db_column_int(&q, 2); /* RecordID for current */ int idt = db_column_int(&q, 3); /* VFILE entry for target */ int ridt = db_column_int(&q, 4); /* RecordID for target */ int chnged = db_column_int(&q, 5); /* Current is edited */ const char *zNewName = db_column_text(&q,6);/* New filename */ int isexe = db_column_int(&q, 6); /* EXE perm for new file */ char *zFullPath; /* Full pathname of the file */ char *zFullNewPath; /* Full pathname of dest */ char nameChng; /* True if the name changed */ zFullPath = mprintf("%s%s", g.zLocalRoot, zName); zFullNewPath = mprintf("%s%s", g.zLocalRoot, zNewName); nameChng = fossil_strcmp(zName, zNewName); if( idv>0 && ridv==0 && idt>0 && ridt>0 ){ /* Conflict. This file has been added to the current checkout ** but also exists in the target checkout. Use the current version. */ printf("CONFLICT %s\n", zName); nConflict++; }else if( idt>0 && idv==0 ){ /* File added in the target. */ printf("ADD %s\n", zName); undo_save(zName); if( !nochangeFlag ) vfile_to_disk(0, idt, 0, 0); }else if( idt>0 && idv>0 && ridt!=ridv && chnged==0 ){ /* The file is unedited. Change it to the target version */ undo_save(zName); printf("UPDATE %s\n", zName); if( !nochangeFlag ) vfile_to_disk(0, idt, 0, 0); }else if( idt>0 && idv>0 && file_size(zFullPath)<0 ){ /* The file missing from the local check-out. Restore it to the ** version that appears in the target. */ printf("UPDATE %s\n", zName); undo_save(zName); if( !nochangeFlag ) vfile_to_disk(0, idt, 0, 0); }else if( idt==0 && idv>0 ){ if( ridv==0 ){ /* Added in current checkout. 
Continue to hold the file as ** as an addition */ db_multi_exec("UPDATE vfile SET vid=%d WHERE id=%d", tid, idv); }else if( chnged ){ /* Edited locally but deleted from the target. Do not track the ** file but keep the edited version around. */ printf("CONFLICT %s - edited locally but deleted by update\n", zName); nConflict++; }else{ printf("REMOVE %s\n", zName); undo_save(zName); if( !nochangeFlag ) unlink(zFullPath); } }else if( idt>0 && idv>0 && ridt!=ridv && chnged ){ /* Merge the changes in the current tree into the target version */ Blob r, t, v; int rc; if( nameChng ){ printf("MERGE %s -> %s\n", zName, zNewName); }else{ printf("MERGE %s\n", zName); } undo_save(zName); content_get(ridt, &t); content_get(ridv, &v); rc = merge_3way(&v, zFullPath, &t, &r); if( rc>=0 ){ if( !nochangeFlag ){ blob_write_to_file(&r, zFullNewPath); file_setexe(zFullNewPath, isexe); } if( rc>0 ){ printf("***** %d merge conflicts in %s\n", rc, zNewName); nConflict++; } }else{ if( !nochangeFlag ){ blob_write_to_file(&t, zFullNewPath); file_setexe(zFullNewPath, isexe); } printf("***** Cannot merge binary file %s\n", zNewName); nConflict++; } if( nameChng && !nochangeFlag ) unlink(zFullPath); blob_reset(&v); blob_reset(&t); blob_reset(&r); }else{ if( chnged ){ if( verboseFlag ) printf("EDITED %s\n", zName); }else{ db_bind_int(&mtimeXfer, ":idv", idv); db_bind_int(&mtimeXfer, ":idt", idt); db_step(&mtimeXfer); db_reset(&mtimeXfer); if( verboseFlag ) printf("UNCHANGED %s\n", zName); } } free(zFullPath); free(zFullNewPath); } db_finalize(&q); db_finalize(&mtimeXfer); printf("--------------\n"); show_common_info(tid, "updated-to:", 1, 0); /* Report on conflicts */ if( nConflict && !nochangeFlag ){ if( internalUpdate ){ internalConflictCnt = nConflict; }else{ printf("WARNING: %d merge conflicts - see messages above for details.\n", nConflict); } } /* ** Clean up the mid and pid VFILE entries. Then commit the changes. */ if( nochangeFlag ){ db_end_transaction(1); /* With --nochange, rollback changes */ }else{ if( g.argc<=3 ){ /* All files updated. Shift the current checkout to the target. */ db_multi_exec("DELETE FROM vfile WHERE vid!=%d", tid); checkout_set_all_exe(vid); manifest_to_disk(tid); db_lset_int("checkout", tid); }else{ /* A subset of files have been checked out. Keep the current ** checkout unchanged. */ db_multi_exec("DELETE FROM vfile WHERE vid!=%d", vid); } if( !internalUpdate ) undo_finish(); db_end_transaction(0); } } /* ** Get the contents of a file within the checking "revision". If ** revision==NULL then get the file content for the current checkout. */ int historical_version_of_file( const char *revision, /* The checkin containing the file */ const char *file, /* Full treename of the file */ Blob *content, /* Put the content here */ int *pIsExe, /* Set to true if file is executable */ int errCode /* Error code if file not found. Panic if 0. 
*/ ){ Manifest *pManifest; ManifestFile *pFile; int rid=0; if( revision ){ rid = name_to_rid(revision); }else{ rid = db_lget_int("checkout", 0); } if( !is_a_version(rid) ){ if( errCode>0 ) return errCode; fossil_fatal("no such checkin: %s", revision); } pManifest = manifest_get(rid, CFTYPE_MANIFEST); if( pManifest ){ pFile = manifest_file_seek(pManifest, file); if( pFile ){ rid = uuid_to_rid(pFile->zUuid, 0); if( pIsExe ) *pIsExe = manifest_file_mperm(pFile); manifest_destroy(pManifest); return content_get(rid, content); } manifest_destroy(pManifest); if( errCode<=0 ){ fossil_fatal("file %s does not exist in checkin: %s", file, revision); } }else if( errCode<=0 ){ if( revision==0 ){ revision = db_text("current", "SELECT uuid FROM blob WHERE rid=%d", rid); } fossil_panic("could not parse manifest for checkin: %s", revision); } return errCode; } /* |
︙ | ︙ | |||
372 373 374 375 376 377 378 | */ void revert_cmd(void){ const char *zFile; const char *zRevision; Blob record; int i; int errCode; | < | > | 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 | */ void revert_cmd(void){ const char *zFile; const char *zRevision; Blob record; int i; int errCode; Stmt q; undo_capture_command_line(); zRevision = find_option("revision", "r", 1); verify_all_options(); if( g.argc<2 ){ usage("?OPTIONS? [FILE] ..."); } if( zRevision && g.argc<3 ){ |
︙ | ︙ | |||
400 401 402 403 404 405 406 | file_tree_name(zFile, &fname, 1); db_multi_exec("REPLACE INTO torevert VALUES(%B)", &fname); blob_reset(&fname); } }else{ int vid; vid = db_lget_int("checkout", 0); | | > > > > > > < > | < > | < < > < < > > > < | < < < > > | > > > > > > > > < | 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 | file_tree_name(zFile, &fname, 1); db_multi_exec("REPLACE INTO torevert VALUES(%B)", &fname); blob_reset(&fname); } }else{ int vid; vid = db_lget_int("checkout", 0); vfile_check_signature(vid, 0, 0); db_multi_exec( "DELETE FROM vmerge;" "INSERT INTO torevert " "SELECT pathname" " FROM vfile " " WHERE chnged OR deleted OR rid=0 OR pathname!=origname;" ); } blob_zero(&record); db_prepare(&q, "SELECT name FROM torevert"); if( zRevision==0 ){ int vid = db_lget_int("checkout", 0); zRevision = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); } while( db_step(&q)==SQLITE_ROW ){ int isExe = 0; char *zFull; zFile = db_column_text(&q, 0); zFull = mprintf("%/%/", g.zLocalRoot, zFile); errCode = historical_version_of_file(zRevision, zFile, &record, &isExe,2); if( errCode==2 ){ if( db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q", zFile)==0 ){ printf("UNMANAGE: %s\n", zFile); }else{ undo_save(zFile); unlink(zFull); printf("DELETE: %s\n", zFile); } db_multi_exec("DELETE FROM vfile WHERE pathname=%Q", zFile); }else{ sqlite3_int64 mtime; undo_save(zFile); blob_write_to_file(&record, zFull); file_setexe(zFull, isExe); printf("REVERTED: %s\n", zFile); mtime = file_mtime(zFull); db_multi_exec( "UPDATE vfile" " SET mtime=%lld, chnged=0, deleted=0, isexe=%d, mrid=rid," " pathname=coalesce(origname,pathname), origname=NULL" " WHERE pathname=%Q", mtime, isExe, zFile ); } blob_reset(&record); free(zFull); } db_finalize(&q); undo_finish(); db_end_transaction(0); } |
Changes to src/url.c.
︙ | ︙ | |||
139 140 141 142 143 144 145 | zValue = &g.urlPath[i]; while( g.urlPath[i] && g.urlPath[i]!='&' ){ i++; } } if( g.urlPath[i] ){ g.urlPath[i] = 0; i++; } | | | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | zValue = &g.urlPath[i]; while( g.urlPath[i] && g.urlPath[i]!='&' ){ i++; } } if( g.urlPath[i] ){ g.urlPath[i] = 0; i++; } if( fossil_strcmp(zName,"fossil")==0 ){ g.urlFossil = zValue; dehttpize(g.urlFossil); zExe = mprintf("?fossil=%T", g.urlFossil); } } dehttpize(g.urlPath); |
︙ | ︙ | |||
296 297 298 299 300 301 302 | /* ** An instance of this object is used to build a URL with query parameters. */ struct HQuery { Blob url; /* The URL */ const char *zBase; /* The base URL */ int nParam; /* Number of parameters. Max 10 */ | | | | 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 | /* ** An instance of this object is used to build a URL with query parameters. */ struct HQuery { Blob url; /* The URL */ const char *zBase; /* The base URL */ int nParam; /* Number of parameters. Max 10 */ const char *azName[15]; /* Parameter names */ const char *azValue[15]; /* Parameter values */ }; #endif /* ** Initialize the URL object. */ void url_initialize(HQuery *p, const char *zBase){ |
︙ | ︙ | |||
335 336 337 338 339 340 341 | const char *zName2, /* Second override */ const char *zValue2 /* Second override value */ ){ const char *zSep = "?"; int i; blob_reset(&p->url); | | | > | > | > | > > > > > > > > > > > > | 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | const char *zName2, /* Second override */ const char *zValue2 /* Second override value */ ){ const char *zSep = "?"; int i; blob_reset(&p->url); blob_appendf(&p->url, "%s/%s", g.zTop, p->zBase); for(i=0; i<p->nParam; i++){ const char *z = p->azValue[i]; if( zName1 && strcmp(zName1,p->azName[i])==0 ){ zName1 = 0; z = zValue1; if( z==0 ) continue; } if( zName2 && strcmp(zName2,p->azName[i])==0 ){ zName2 = 0; z = zValue2; if( z==0 ) continue; } blob_appendf(&p->url, "%s%s", zSep, p->azName[i]); if( z && z[0] ) blob_appendf(&p->url, "=%T", z); zSep = "&"; } if( zName1 && zValue1 ){ blob_appendf(&p->url, "%s%s", zSep, zName1); if( zValue1[0] ) blob_appendf(&p->url, "=%T", zValue1); } if( zName2 && zValue2 ){ blob_appendf(&p->url, "%s%s", zSep, zName2); if( zValue2[0] ) blob_appendf(&p->url, "=%T", zValue2); } return blob_str(&p->url); } /* ** Prompt the user for the password for g.urlUser. Store the result ** in g.urlPasswd. */ void url_prompt_for_password(void){ if( isatty(fileno(stdin)) ){ char *zPrompt = mprintf("\rpassword for %s: ", g.urlUser); Blob x; prompt_for_password(zPrompt, &x, 0); free(zPrompt); g.urlPasswd = mprintf("%b", &x); blob_reset(&x); }else{ fossil_fatal("missing or incorrect password for user \"%s\"", g.urlUser); } } /* Preemptively prompt for a password if a username is given in the ** URL but no password. */ void url_get_password_if_needed(void){ if( (g.urlUser && g.urlUser[0]) && (g.urlPasswd==0 || g.urlPasswd[0]==0) && isatty(fileno(stdin)) ){ url_prompt_for_password(); } } |
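The HQuery helper above collects up to 15 name/value pairs on top of a base page and can re-render the URL with one or two parameters overridden, which is what drives the timeline submenu links earlier in this check-in. The standalone sketch below is a stripped-down analogue: one optional override, a fixed buffer, and none of the %T-style encoding of special characters that the real code performs.

#include <stdio.h>
#include <string.h>

#define MAX_PARAM 15   /* same limit as the azName[15]/azValue[15] arrays above */

struct Query {
  const char *zBase;               /* e.g. "/timeline"        */
  int nParam;                      /* parameters added so far */
  const char *azName[MAX_PARAM];
  const char *azValue[MAX_PARAM];
};

static void query_add(struct Query *p, const char *zName, const char *zValue){
  if( p->nParam<MAX_PARAM ){
    p->azName[p->nParam] = zName;
    p->azValue[p->nParam] = zValue;
    p->nParam++;
  }
}

/* Render the URL, optionally overriding the value of one parameter.
** Assumes zOut is large enough; no URL encoding is attempted. */
static void query_render(const struct Query *p, const char *zOvrName,
                         const char *zOvrValue, char *zOut, size_t nOut){
  const char *zSep = "?";
  size_t n;
  int i;
  n = (size_t)snprintf(zOut, nOut, "%s", p->zBase);
  for(i=0; i<p->nParam; i++){
    const char *zVal = p->azValue[i];
    if( zOvrName && strcmp(zOvrName, p->azName[i])==0 ) zVal = zOvrValue;
    n += (size_t)snprintf(zOut+n, nOut-n, "%s%s=%s", zSep, p->azName[i], zVal);
    zSep = "&";
  }
}

int main(void){
  struct Query q = { "/timeline", 0, {0}, {0} };
  char zUrl[256];
  query_add(&q, "n", "20");
  query_add(&q, "y", "ci");
  query_render(&q, "n", "200", zUrl, sizeof(zUrl));  /* override n, as a submenu link would */
  printf("%s\n", zUrl);                              /* /timeline?n=200&y=ci */
  return 0;
}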
Changes to src/user.c.
︙ | ︙ | |||
113 114 115 116 117 118 119 | Blob secondTry; blob_zero(pPassphrase); blob_zero(&secondTry); while(1){ prompt_for_passphrase(zPrompt, pPassphrase); if( verify==0 ) break; if( verify==1 && blob_size(pPassphrase)==0 ) break; | | | 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 | Blob secondTry; blob_zero(pPassphrase); blob_zero(&secondTry); while(1){ prompt_for_passphrase(zPrompt, pPassphrase); if( verify==0 ) break; if( verify==1 && blob_size(pPassphrase)==0 ) break; prompt_for_passphrase("Retype new password: ", &secondTry); if( blob_compare(pPassphrase, &secondTry) ){ printf("Passphrases do not match. Try again...\n"); }else{ break; } } blob_reset(&secondTry); |
︙ | ︙ | |||
172 173 174 175 176 177 178 | ** ** %fossil user password USERNAME ?PASSWORD? ** ** Change the web access password for a user. */ void user_cmd(void){ int n; | | | > | | | | | 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 | ** ** %fossil user password USERNAME ?PASSWORD? ** ** Change the web access password for a user. */ void user_cmd(void){ int n; db_find_and_open_repository(0, 0); if( g.argc<3 ){ usage("capabilities|default|list|new|password ..."); } n = strlen(g.argv[2]); if( n>=2 && strncmp(g.argv[2],"new",n)==0 ){ Blob passwd, login, caps, contact; char *zPw; blob_init(&caps, db_get("default-perms", "u"), -1); if( g.argc>=4 ){ blob_init(&login, g.argv[3], -1); }else{ prompt_user("login: ", &login); } if( db_exists("SELECT 1 FROM user WHERE login=%B", &login) ){ fossil_fatal("user %b already exists", &login); } if( g.argc>=5 ){ blob_init(&contact, g.argv[4], -1); }else{ prompt_user("contact-info: ", &contact); } if( g.argc>=6 ){ blob_init(&passwd, g.argv[5], -1); }else{ prompt_for_password("password: ", &passwd, 1); } zPw = sha1_shared_secret(blob_str(&passwd), blob_str(&login), 0); db_multi_exec( "INSERT INTO user(login,pw,cap,info,mtime)" "VALUES(%B,%Q,%B,%B,now())", &login, zPw, &caps, &contact ); free(zPw); }else if( n>=2 && strncmp(g.argv[2],"default",n)==0 ){ user_select(); if( g.argc==3 ){ printf("%s\n", g.zLogin); }else{ |
︙ | ︙ | |||
239 240 241 242 243 244 245 | uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", g.argv[3]); if( uid==0 ){ fossil_fatal("no such user: %s", g.argv[3]); } if( g.argc==5 ){ blob_init(&pw, g.argv[4], -1); }else{ | | | | > | | | 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 | uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", g.argv[3]); if( uid==0 ){ fossil_fatal("no such user: %s", g.argv[3]); } if( g.argc==5 ){ blob_init(&pw, g.argv[4], -1); }else{ zPrompt = mprintf("New password for %s: ", g.argv[3]); prompt_for_password(zPrompt, &pw, 1); } if( blob_size(&pw)==0 ){ printf("password unchanged\n"); }else{ char *zSecret = sha1_shared_secret(blob_str(&pw), g.argv[3], 0); db_multi_exec("UPDATE user SET pw=%Q, mtime=now() WHERE uid=%d", zSecret, uid); free(zSecret); } }else if( n>=2 && strncmp(g.argv[2],"capabilities",2)==0 ){ int uid; if( g.argc!=4 && g.argc!=5 ){ usage("user capabilities USERNAME ?PERMISSIONS?"); } uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", g.argv[3]); if( uid==0 ){ fossil_fatal("no such user: %s", g.argv[3]); } if( g.argc==5 ){ db_multi_exec( "UPDATE user SET cap=%Q, mtime=now() WHERE uid=%d", g.argv[4], uid ); } printf("%s\n", db_text(0, "SELECT cap FROM user WHERE uid=%d", uid)); }else{ fossil_panic("user subcommand should be one of: " "capabilities default list new password"); } |
︙ | ︙ | |||
337 338 339 340 341 342 343 | g.zLogin = mprintf("%s", db_column_text(&s, 1)); } db_finalize(&s); } if( g.userUid==0 ){ db_multi_exec( | | | < < < < < < < < < < < < < < < < < | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 | g.zLogin = mprintf("%s", db_column_text(&s, 1)); } db_finalize(&s); } if( g.userUid==0 ){ db_multi_exec( "INSERT INTO user(login, pw, cap, info, mtime)" "VALUES('anonymous', '', 'cfghjkmnoqw', '', now())" ); g.userUid = db_last_insert_rowid(); g.zLogin = "anonymous"; } } /* ** COMMAND: test-hash-passwords ** ** Usage: %fossil test-hash-passwords REPOSITORY ** ** Convert all local password storage to use a SHA1 hash of the password ** rather than cleartext. Passwords that are already stored as the SHA1 ** has are unchanged. */ void user_hash_passwords_cmd(void){ if( g.argc!=3 ) usage("REPOSITORY"); db_open_repository(g.argv[2]); sqlite3_create_function(g.db, "shared_secret", 2, SQLITE_UTF8, 0, sha1_shared_secret_sql_function, 0, 0); db_multi_exec( "UPDATE user SET pw=shared_secret(pw,login), mtime=now()" " WHERE length(pw)>0 AND length(pw)!=40" ); } /* ** WEBPAGE: access_log ** ** y=N 1: success only. 2: failure only. 3: both ** n=N Number of entries to show ** o=N Skip this many entries */ void access_log_page(void){ int y = atoi(PD("y","3")); int n = atoi(PD("n","50")); int skip = atoi(PD("o","0")); Blob sql; Stmt q; int cnt = 0; int rc; login_check_credentials(); if( !g.okAdmin ){ login_needed(); return; } create_accesslog_table(); if( P("delall") && P("delallbtn") ){ db_multi_exec("DELETE FROM accesslog"); cgi_redirectf("%s/access_log?y=%d&n=%d&o=%o", g.zTop, y, n, skip); return; } if( P("delanon") && P("delanonbtn") ){ db_multi_exec("DELETE FROM accesslog WHERE uname='anonymous'"); cgi_redirectf("%s/access_log?y=%d&n=%d&o=%o", g.zTop, y, n, skip); return; } if( P("delfail") && P("delfailbtn") ){ db_multi_exec("DELETE FROM accesslog WHERE NOT success"); cgi_redirectf("%s/access_log?y=%d&n=%d&o=%o", g.zTop, y, n, skip); return; } if( P("delold") && P("deloldbtn") ){ db_multi_exec("DELETE FROM accesslog WHERE rowid in" "(SELECT rowid FROM accesslog ORDER BY rowid DESC" " LIMIT -1 OFFSET 200)"); cgi_redirectf("%s/access_log?y=%d&n=%d", g.zTop, y, n); return; } style_header("Access Log"); blob_zero(&sql); blob_append(&sql, "SELECT uname, ipaddr, datetime(mtime, 'localtime'), success" " FROM accesslog", -1 ); if( y==1 ){ blob_append(&sql, " WHERE success", -1); }else if( y==2 ){ blob_append(&sql, " WHERE NOT success", -1); } blob_appendf(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); if( skip ){ style_submenu_element("Newer", "Newer entries", "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip>=n ? 
skip-n : 0, n, y); } rc = db_prepare_ignore_error(&q, blob_str(&sql)); @ <center><table border="1" cellpadding="5"> @ <tr><th width="33%%">Date</th><th width="34%%">User</th> @ <th width="33%%">IP Address</th></tr> while( rc==SQLITE_OK && db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zIP = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); int bSuccess = db_column_int(&q, 3); cnt++; if( cnt>n ){ style_submenu_element("Older", "Older entries", "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip+n, n, y); break; } if( bSuccess ){ @ <tr> }else{ @ <tr bgcolor="#ffacc0"> } @ <td>%s(zDate)</td><td>%h(zName)</td><td>%h(zIP)</td></tr> } if( skip>0 || cnt>n ){ style_submenu_element("All", "All entries", "%s/access_log?n=10000000", g.zTop); } @ </table></center> db_finalize(&q); @ <hr> @ <form method="post" action="%s(g.zTop)/access_log"> @ <input type="checkbox" name="delold"> @ Delete all but the most recent 200 entries</input> @ <input type="submit" name="deloldbtn" value="Delete"></input> @ </form> @ <form method="post" action="%s(g.zTop)/access_log"> @ <input type="checkbox" name="delanon"> @ Delete all entries for user "anonymous"</input> @ <input type="submit" name="delanonbtn" value="Delete"></input> @ </form> @ <form method="post" action="%s(g.zTop)/access_log"> @ <input type="checkbox" name="delfail"> @ Delete all failed login attempts</input> @ <input type="submit" name="delfailbtn" value="Delete"></input> @ </form> @ <form method="post" action="%s(g.zTop)/access_log"> @ <input type="checkbox" name="delall"> @ Delete all entries</input> @ <input type="submit" name="delallbtn" value="Delete"></input> @ </form> style_footer(); } |
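The test-hash-passwords command above does its conversion entirely in SQL: it registers shared_secret() as an application-defined SQL function and then runs one UPDATE over the user table. The registration mechanism is part of SQLite's public API. The sketch below registers a deliberately trivial stand-in (it merely concatenates and is not a hash) just to show the shape of the call; reproducing Fossil's real sha1_shared_secret() is out of scope here.

#include <sqlite3.h>

/* A toy two-argument SQL function, obfuscate(pw, login) -> 'login:pw'.
** Placeholder only: the real shared_secret() returns a SHA1-based hash. */
static void obfuscate_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  const unsigned char *zPw    = sqlite3_value_text(argv[0]);
  const unsigned char *zLogin = sqlite3_value_text(argv[1]);
  char zBuf[256];
  (void)argc;
  if( zPw==0 || zLogin==0 ){ sqlite3_result_null(ctx); return; }
  sqlite3_snprintf(sizeof(zBuf), zBuf, "%s:%s",
                   (const char*)zLogin, (const char*)zPw);
  sqlite3_result_text(ctx, zBuf, -1, SQLITE_TRANSIENT);
}

/* Register the function, then rewrite every stored password with it,
** mirroring the single UPDATE used by test-hash-passwords above. */
static int hash_all_passwords(sqlite3 *db){
  sqlite3_create_function(db, "obfuscate", 2, SQLITE_UTF8, 0,
                          obfuscate_func, 0, 0);
  return sqlite3_exec(db,
    "UPDATE user SET pw=obfuscate(pw,login) WHERE length(pw)>0",
    0, 0, 0);
}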
Changes to src/verify.c.
︙ | ︙ | |||
33 34 35 36 37 38 39 | ** without error. ** ** Panic if anything goes wrong. If this procedure returns it means ** that everything is OK. */ static void verify_rid(int rid){ Blob uuid, hash, content; | | | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | ** without error. ** ** Panic if anything goes wrong. If this procedure returns it means ** that everything is OK. */ static void verify_rid(int rid){ Blob uuid, hash, content; if( content_size(rid, 0)<0 ){ return; /* No way to verify phantoms */ } blob_zero(&uuid); db_blob(&uuid, "SELECT uuid FROM blob WHERE rid=%d", rid); if( blob_size(&uuid)!=UUID_SIZE ){ fossil_panic("not a valid rid: %d", rid); } |
︙ | ︙ |
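The verify_rid() routine above panics unless the stored content still hashes back to the artifact's UUID. The check itself reduces to "SHA1(content) must equal the 40-character hex UUID". A standalone sketch, using OpenSSL's SHA1() purely for illustration (Fossil uses its own sha1sum_*() helpers):

#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Return 0 if the content hashes to zUuid (40 lowercase hex digits),
** non-zero otherwise.  Illustrative only; error handling omitted. */
int content_matches_uuid(const unsigned char *aContent, size_t n,
                         const char *zUuid){
  unsigned char digest[SHA_DIGEST_LENGTH];
  char zHex[SHA_DIGEST_LENGTH*2+1];
  int i;
  SHA1(aContent, n, digest);
  for(i=0; i<SHA_DIGEST_LENGTH; i++){
    sprintf(&zHex[i*2], "%02x", digest[i]);
  }
  return strcmp(zHex, zUuid);
}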
Changes to src/vfile.c.
︙ | ︙ | |||
48 49 50 51 52 53 54 | /* ** Given a UUID, return the corresponding record ID. If the UUID ** does not exist, then return 0. ** ** For this routine, the UUID must be exact. For a match against ** user input with mixed case, use resolve_uuid(). ** | | | > | | < < < < < < < < < < < < < < < < < < < < < < < < | | < | | > > > > > > > | > > > > | > > > > > > > > > > | | | | > > > > > > | | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 | /* ** Given a UUID, return the corresponding record ID. If the UUID ** does not exist, then return 0. ** ** For this routine, the UUID must be exact. For a match against ** user input with mixed case, use resolve_uuid(). ** ** If the UUID is not found and phantomize is 1 or 2, then attempt to ** create a phantom record. A private phantom is created for 2 and ** a public phantom is created for 1. */ int uuid_to_rid(const char *zUuid, int phantomize){ int rid, sz; char z[UUID_SIZE+1]; sz = strlen(zUuid); if( sz!=UUID_SIZE || !validate16(zUuid, sz) ){ return 0; } memcpy(z, zUuid, UUID_SIZE+1); canonical16(z, sz); rid = fast_uuid_to_rid(z); if( rid==0 && phantomize ){ rid = content_new(zUuid, phantomize-1); } return rid; } /* ** Build a catalog of all files in a checkin. */ void vfile_build(int vid){ int rid, size; Stmt ins, ridq; Manifest *p; ManifestFile *pFile; db_begin_transaction(); p = manifest_get(vid, CFTYPE_MANIFEST); if( p==0 ) return; db_multi_exec("DELETE FROM vfile WHERE vid=%d", vid); db_prepare(&ins, "INSERT INTO vfile(vid,isexe,rid,mrid,pathname) " " VALUES(:vid,:isexe,:id,:id,:name)"); db_prepare(&ridq, "SELECT rid,size FROM blob WHERE uuid=:uuid"); db_bind_int(&ins, ":vid", vid); manifest_file_rewind(p); while( (pFile = manifest_file_next(p,0))!=0 ){ if( pFile->zUuid==0 || uuid_is_shunned(pFile->zUuid) ) continue; db_bind_text(&ridq, ":uuid", pFile->zUuid); if( db_step(&ridq)==SQLITE_ROW ){ rid = db_column_int(&ridq, 0); size = db_column_int(&ridq, 0); }else{ rid = 0; size = 0; } db_reset(&ridq); if( rid==0 || size<0 ){ fossil_warning("content missing for %s", pFile->zName); continue; } db_bind_int(&ins, ":isexe", manifest_file_mperm(pFile)); db_bind_int(&ins, ":id", rid); db_bind_text(&ins, ":name", pFile->zName); db_step(&ins); db_reset(&ins); } db_finalize(&ridq); db_finalize(&ins); manifest_destroy(p); db_end_transaction(0); } /* ** Check the file signature of the disk image for every VFILE of vid. ** ** Set the VFILE.CHNGED field on every file that has changed. Also ** set VFILE.CHNGED on every folder that contains a file or folder ** that has changed. ** ** If VFILE.DELETED is null or if VFILE.RID is zero, then we can assume ** the file has changed without having the check the on-disk image. ** ** If the size of the file has changed, then we assume that it has ** changed. If the mtime of the file has not changed and useSha1sum is false ** and the mtime-changes setting is true (the default) then we assume that ** the file has not changed. 
If the mtime has changed, we go ahead and ** double-check that the file has changed by looking at its SHA1 sum. */ void vfile_check_signature(int vid, int notFileIsFatal, int useSha1sum){ int nErr = 0; Stmt q; Blob fileCksum, origCksum; int checkMtime = useSha1sum==0 && db_get_boolean("mtime-changes", 1); db_begin_transaction(); db_prepare(&q, "SELECT id, %Q || pathname," " vfile.mrid, deleted, chnged, uuid, size, mtime" " FROM vfile LEFT JOIN blob ON vfile.mrid=blob.rid" " WHERE vid=%d ", g.zLocalRoot, vid); while( db_step(&q)==SQLITE_ROW ){ int id, rid, isDeleted; const char *zName; int chnged = 0; int oldChnged; i64 oldMtime; i64 currentMtime; id = db_column_int(&q, 0); zName = db_column_text(&q, 1); rid = db_column_int(&q, 2); isDeleted = db_column_int(&q, 3); oldChnged = db_column_int(&q, 4); oldMtime = db_column_int64(&q, 7); if( isDeleted ){ chnged = 1; }else if( !file_isfile(zName) && file_size(0)>=0 ){ if( notFileIsFatal ){ fossil_warning("not an ordinary file: %s", zName); nErr++; } chnged = 1; }else if( oldChnged>=2 ){ chnged = oldChnged; }else if( rid==0 ){ chnged = 1; } if( chnged!=1 ){ i64 origSize = db_column_int64(&q, 6); currentMtime = file_mtime(0); if( origSize!=file_size(0) ){ /* A file size change is definitive - the file has changed. No ** need to check the sha1sum */ chnged = 1; } } if( chnged!=1 && (checkMtime==0 || currentMtime!=oldMtime) ){ db_ephemeral_blob(&q, 5, &origCksum); if( sha1sum_file(zName, &fileCksum) ){ blob_zero(&fileCksum); } if( blob_compare(&fileCksum, &origCksum) ){ chnged = 1; }else if( currentMtime!=oldMtime ){ db_multi_exec("UPDATE vfile SET mtime=%lld WHERE id=%d", currentMtime, id); } blob_reset(&origCksum); blob_reset(&fileCksum); } if( chnged!=oldChnged && (chnged || !checkMtime) ){ db_multi_exec("UPDATE vfile SET chnged=%d WHERE id=%d", chnged, id); } } db_finalize(&q); if( nErr ) fossil_fatal("abort due to prior errors"); db_end_transaction(0); } |
︙ | ︙ | |||
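The vfile_check_signature() logic above avoids re-hashing every file on every command: a changed size is a definite change, an unchanged mtime (when the mtime-changes setting is on and a full hash comparison was not explicitly requested) is treated as unchanged, and only the remaining cases fall back to a SHA1 comparison. A reduced sketch of that decision, assuming POSIX stat(); the recorded size and mtime would come from the VFILE table:

#include <sys/stat.h>

#define FILE_UNCHANGED  0
#define FILE_CHANGED    1
#define FILE_NEED_HASH  2   /* caller must compare SHA1 sums */

/* Decide how much work is needed to classify one managed file. */
int classify_file(const char *zPath, long long recSize, long long recMtime,
                  int trustMtime){
  struct stat st;
  if( stat(zPath, &st)!=0 ) return FILE_CHANGED;        /* missing file */
  if( (long long)st.st_size!=recSize ) return FILE_CHANGED;
  if( trustMtime && (long long)st.st_mtime==recMtime ) return FILE_UNCHANGED;
  return FILE_NEED_HASH;   /* same size, different or untrusted mtime */
}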
213 214 215 216 217 218 219 | int promptFlag /* Prompt user to confirm overwrites */ ){ Stmt q; Blob content; int nRepos = strlen(g.zLocalRoot); if( vid>0 && id==0 ){ | | | | > > > > > > > > > > | < | | | | | | | | | | | | | > > < > | 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | int promptFlag /* Prompt user to confirm overwrites */ ){ Stmt q; Blob content; int nRepos = strlen(g.zLocalRoot); if( vid>0 && id==0 ){ db_prepare(&q, "SELECT id, %Q || pathname, mrid, isexe" " FROM vfile" " WHERE vid=%d AND mrid>0", g.zLocalRoot, vid); }else{ assert( vid==0 && id>0 ); db_prepare(&q, "SELECT id, %Q || pathname, mrid, isexe" " FROM vfile" " WHERE id=%d AND mrid>0", g.zLocalRoot, id); } while( db_step(&q)==SQLITE_ROW ){ int id, rid, isExe; const char *zName; id = db_column_int(&q, 0); zName = db_column_text(&q, 1); rid = db_column_int(&q, 2); isExe = db_column_int(&q, 3); content_get(rid, &content); if( file_is_the_same(&content, zName) ){ blob_reset(&content); if( file_setexe(zName, isExe) ){ db_multi_exec("UPDATE vfile SET mtime=%lld WHERE id=%d", file_mtime(zName), id); } continue; } if( promptFlag && file_size(zName)>=0 ){ Blob ans; char *zMsg; char cReply; zMsg = mprintf("overwrite %s (a=always/y/N)? ", zName); prompt_user(zMsg, &ans); free(zMsg); cReply = blob_str(&ans)[0]; blob_reset(&ans); if( cReply=='a' || cReply=='A' ){ promptFlag = 0; cReply = 'y'; } if( cReply=='n' || cReply=='N' ){ blob_reset(&content); continue; } } if( verbose ) printf("%s\n", &zName[nRepos]); blob_write_to_file(&content, zName); file_setexe(zName, isExe); blob_reset(&content); db_multi_exec("UPDATE vfile SET mtime=%lld WHERE id=%d", file_mtime(zName), id); } db_finalize(&q); } |
︙ | ︙ | |||
275 276 277 278 279 280 281 282 283 284 285 286 287 288 | zName = db_column_text(&q, 0); unlink(zName); } db_finalize(&q); db_multi_exec("UPDATE vfile SET mtime=NULL WHERE vid=%d AND mrid>0", vid); } /* ** Load into table SFILE the name of every ordinary file in ** the directory pPath. Omit the first nPrefix characters of ** of pPath when inserting into the SFILE table. ** ** Subdirectories are scanned recursively. | > > > > > > > > > > > > > > > > > > > > > | > > > > > > | > | < > > > > > > > > > > > > > > > > > > | > | > | | > > > | > > > > | > | | > | > > > > > > | > | | | | > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > | > > | > | > > | | > > | > > > > | | 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 | zName = db_column_text(&q, 0); unlink(zName); } db_finalize(&q); db_multi_exec("UPDATE vfile SET mtime=NULL WHERE vid=%d AND mrid>0", vid); } /* ** Check to see if the directory named in zPath is the top of a checkout. ** In other words, check to see if directory pPath contains a file named ** "_FOSSIL_" or ".fos". Return true or false. */ int vfile_top_of_checkout(const char *zPath){ char *zFile; int fileFound = 0; zFile = mprintf("%s/_FOSSIL_", zPath); fileFound = file_size(zFile)>=1024; fossil_free(zFile); if( !fileFound ){ zFile = mprintf("%s/.fos", zPath); fileFound = file_size(zFile)>=1024; fossil_free(zFile); } return fileFound; } /* ** Load into table SFILE the name of every ordinary file in ** the directory pPath. Omit the first nPrefix characters of ** of pPath when inserting into the SFILE table. ** ** Subdirectories are scanned recursively. ** Omit files named in VFILE. ** ** Files whose names begin with "." are omitted unless allFlag is true. ** ** Any files or directories that match the glob pattern pIgnore are ** excluded from the scan. Name matching occurs after the first ** nPrefix characters are elided from the filename. 
*/ void vfile_scan(Blob *pPath, int nPrefix, int allFlag, Glob *pIgnore){ DIR *d; int origSize; const char *zDir; struct dirent *pEntry; int skipAll = 0; static Stmt ins; static int depth = 0; origSize = blob_size(pPath); if( pIgnore ){ blob_appendf(pPath, "/"); if( glob_match(pIgnore, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; blob_resize(pPath, origSize); } if( skipAll ) return; if( depth==0 ){ db_prepare(&ins, "INSERT OR IGNORE INTO sfile(x) SELECT :file" " WHERE NOT EXISTS(SELECT 1 FROM vfile WHERE pathname=:file)" ); } depth++; zDir = blob_str(pPath); d = opendir(zDir); if( d ){ while( (pEntry=readdir(d))!=0 ){ char *zPath; if( pEntry->d_name[0]=='.' ){ if( !allFlag ) continue; if( pEntry->d_name[1]==0 ) continue; if( pEntry->d_name[1]=='.' && pEntry->d_name[2]==0 ) continue; } blob_appendf(pPath, "/%s", pEntry->d_name); zPath = blob_str(pPath); if( glob_match(pIgnore, &zPath[nPrefix+1]) ){ /* do nothing */ }else if( file_isdir(zPath)==1 ){ if( !vfile_top_of_checkout(zPath) ){ vfile_scan(pPath, nPrefix, allFlag, pIgnore); } }else if( file_isfile(zPath) ){ db_bind_text(&ins, ":file", &zPath[nPrefix+1]); db_step(&ins); db_reset(&ins); } blob_resize(pPath, origSize); } closedir(d); } depth--; if( depth==0 ){ db_finalize(&ins); } } /* ** Compute an aggregate MD5 checksum over the disk image of every ** file in vid. The file names are part of the checksum. The resulting ** checksum is the same as is expected on the R-card of a manifest. ** ** This function operates differently if the Global.aCommitFile ** variable is not NULL. In that case, the disk image is used for ** each file in aCommitFile[] and the repository image ** is used for all others). ** ** Newly added files that are not contained in the repository are ** omitted from the checksum if they are not in Global.aCommitFile[]. ** ** Newly deleted files are included in the checksum if they are not ** part of Global.aCommitFile[] ** ** Renamed files use their new name if they are in Global.aCommitFile[] ** and their original name if they are not in Global.aCommitFile[] ** ** Return the resulting checksum in blob pOut. 
*/ void vfile_aggregate_checksum_disk(int vid, Blob *pOut){ FILE *in; Stmt q; char zBuf[4096]; db_must_be_within_tree(); db_prepare(&q, "SELECT %Q || pathname, pathname, origname, file_is_selected(id), rid" " FROM vfile" " WHERE (NOT deleted OR NOT file_is_selected(id)) AND vid=%d" " ORDER BY pathname /*scan*/", g.zLocalRoot, vid ); md5sum_init(); while( db_step(&q)==SQLITE_ROW ){ const char *zFullpath = db_column_text(&q, 0); const char *zName = db_column_text(&q, 1); int isSelected = db_column_int(&q, 3); if( isSelected ){ md5sum_step_text(zName, -1); in = fopen(zFullpath,"rb"); if( in==0 ){ md5sum_step_text(" 0\n", -1); continue; } fseek(in, 0L, SEEK_END); sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", ftell(in)); fseek(in, 0L, SEEK_SET); md5sum_step_text(zBuf, -1); /*printf("%s %s %s",md5sum_current_state(),zName,zBuf); fflush(stdout);*/ for(;;){ int n; n = fread(zBuf, 1, sizeof(zBuf), in); if( n<=0 ) break; md5sum_step_text(zBuf, n); } fclose(in); }else{ int rid = db_column_int(&q, 4); const char *zOrigName = db_column_text(&q, 2); char zBuf[100]; Blob file; if( zOrigName ) zName = zOrigName; if( rid>0 ){ md5sum_step_text(zName, -1); blob_zero(&file); content_get(rid, &file); sqlite3_snprintf(sizeof(zBuf), zBuf, " %d\n", blob_size(&file)); md5sum_step_text(zBuf, -1); md5sum_step_blob(&file); blob_reset(&file); } } } db_finalize(&q); md5sum_finish(pOut); } /* ** Do a file-by-file comparison of the content of the repository and ** the working check-out on disk. Report any errors. */ void vfile_compare_repository_to_disk(int vid){ int rc; Stmt q; Blob disk, repo; db_must_be_within_tree(); db_prepare(&q, "SELECT %Q || pathname, pathname, rid FROM vfile" " WHERE NOT deleted AND vid=%d AND file_is_selected(id)", g.zLocalRoot, vid ); md5sum_init(); while( db_step(&q)==SQLITE_ROW ){ const char *zFullpath = db_column_text(&q, 0); const char *zName = db_column_text(&q, 1); int rid = db_column_int(&q, 2); blob_zero(&disk); rc = blob_read_from_file(&disk, zFullpath); if( rc<0 ){ printf("ERROR: cannot read file [%s]\n", zFullpath); blob_reset(&disk); continue; } blob_zero(&repo); content_get(rid, &repo); if( blob_size(&repo)!=blob_size(&disk) ){ printf("ERROR: [%s] is %d bytes on disk but %d in the repository\n", zName, blob_size(&disk), blob_size(&repo)); blob_reset(&disk); blob_reset(&repo); continue; } if( blob_compare(&repo, &disk) ){ printf("ERROR: [%s] is different on disk compared to the repository\n", zName); } blob_reset(&disk); blob_reset(&repo); } db_finalize(&q); } /* ** Compute an aggregate MD5 checksum over the repository image of every ** file in vid. The file names are part of the checksum. The resulting ** checksum is suitable for the R-card of a manifest. ** ** Return the resulting checksum in blob pOut. 
*/ void vfile_aggregate_checksum_repository(int vid, Blob *pOut){ Blob file; Stmt q; char zBuf[100]; db_must_be_within_tree(); db_prepare(&q, "SELECT pathname, origname, rid, file_is_selected(id)" " FROM vfile" " WHERE (NOT deleted OR NOT file_is_selected(id))" " AND rid>0 AND vid=%d" " ORDER BY pathname /*scan*/", vid); blob_zero(&file); md5sum_init(); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zOrigName = db_column_text(&q, 1); int rid = db_column_int(&q, 2); int isSelected = db_column_int(&q, 3); if( zOrigName && !isSelected ) zName = zOrigName; md5sum_step_text(zName, -1); content_get(rid, &file); sqlite3_snprintf(sizeof(zBuf), zBuf, " %d\n", blob_size(&file)); md5sum_step_text(zBuf, -1); /*printf("%s %s %s",md5sum_current_state(),zName,zBuf); fflush(stdout);*/ md5sum_step_blob(&file); blob_reset(&file); } db_finalize(&q); md5sum_finish(pOut); } /* ** Compute an aggregate MD5 checksum over the repository image of every ** file in manifest vid. The file names are part of the checksum. The ** resulting checksum is suitable for use as the R-card of a manifest. ** ** Return the resulting checksum in blob pOut. ** ** If pManOut is not NULL then fill it with the checksum found in the ** "R" card near the end of the manifest. ** ** In a well-formed manifest, the two checksums computed here, pOut and ** pManOut, should be identical. */ void vfile_aggregate_checksum_manifest(int vid, Blob *pOut, Blob *pManOut){ int fid; Blob file; Manifest *pManifest; ManifestFile *pFile; char zBuf[100]; blob_zero(pOut); if( pManOut ){ blob_zero(pManOut); } db_must_be_within_tree(); pManifest = manifest_get(vid, CFTYPE_MANIFEST); if( pManifest==0 ){ fossil_panic("manifest file (%d) is malformed", vid); } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ if( pFile->zUuid==0 ) continue; fid = uuid_to_rid(pFile->zUuid, 0); md5sum_step_text(pFile->zName, -1); content_get(fid, &file); sqlite3_snprintf(sizeof(zBuf), zBuf, " %d\n", blob_size(&file)); md5sum_step_text(zBuf, -1); md5sum_step_blob(&file); blob_reset(&file); } if( pManOut ){ if( pManifest->zRepoCksum ){ blob_append(pManOut, pManifest->zRepoCksum, -1); |
︙ | ︙ |
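The aggregate-checksum routines above all feed the same byte stream into MD5: for each file, in sorted pathname order, the pathname, then a space, the decimal size and a newline, then the raw content. That shared framing is what makes the disk image, the repository image, and the manifest's R-card comparable. A sketch of the framing step, using OpenSSL's MD5_* functions in place of Fossil's md5sum_*() helpers:

#include <openssl/md5.h>
#include <stdio.h>
#include <string.h>

/* Fold one file into a running R-card style checksum:
**   <pathname> SP <size> NL <content>                            */
void rcard_step(MD5_CTX *pCtx, const char *zName,
                const unsigned char *aContent, long nContent){
  char zSize[40];
  MD5_Update(pCtx, zName, strlen(zName));
  snprintf(zSize, sizeof(zSize), " %ld\n", nContent);
  MD5_Update(pCtx, zSize, strlen(zSize));
  MD5_Update(pCtx, aContent, (size_t)nContent);
}

/* Usage: MD5_Init(&ctx); call rcard_step() for every file in pathname
** order; MD5_Final(digest, &ctx); then hex-encode the 16-byte digest. */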
Changes to src/wiki.c.
︙ | ︙ | |||
79 80 81 82 83 84 85 | ** WEBPAGE: index ** WEBPAGE: not_found */ void home_page(void){ char *zPageName = db_get("project-name",0); char *zIndexPage = db_get("index-page",0); login_check_credentials(); | < < < > | > > > | | | | 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | ** WEBPAGE: index ** WEBPAGE: not_found */ void home_page(void){ char *zPageName = db_get("project-name",0); char *zIndexPage = db_get("index-page",0); login_check_credentials(); if( zIndexPage ){ const char *zPathInfo = P("PATH_INFO"); while( zIndexPage[0]=='/' ) zIndexPage++; while( zPathInfo[0]=='/' ) zPathInfo++; if( strcmp(zIndexPage, zPathInfo)==0 ) zIndexPage = 0; } if( zIndexPage ){ cgi_redirectf("%s/%s", g.zTop, zIndexPage); } if( !g.okRdWiki ){ cgi_redirectf("%s/login?g=%s/home", g.zTop, g.zTop); } if( zPageName ){ login_check_credentials(); g.zExtra = zPageName; cgi_set_parameter_nocopy("name", g.zExtra); g.isHome = 1; wiki_page(); return; } style_header("Home"); @ <p>This is a stub home-page for the project. @ To fill in this page, first go to @ <a href="%s(g.zTop)/setup_config">setup/config</a> @ and establish a "Project Name". Then create a @ wiki page with that name. The content of that wiki page @ will be displayed in place of this message.</p> style_footer(); } /* ** Return true if the given pagename is the name of the sandbox */ static int is_sandbox(const char *zPagename){ return fossil_stricmp(zPagename,"sandbox")==0 || fossil_stricmp(zPagename,"sand box")==0; } /* ** WEBPAGE: wiki ** URL: /wiki?name=PAGENAME */ void wiki_page(void){ |
︙ | ︙ | |||
139 140 141 142 143 144 145 | if( !g.okRdWiki ){ login_needed(); return; } zPageName = P("name"); if( zPageName==0 ){ style_header("Wiki"); @ <ul> { char *zHomePageName = db_get("project-name",0); if( zHomePageName ){ | | | | | | | | | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | if( !g.okRdWiki ){ login_needed(); return; } zPageName = P("name"); if( zPageName==0 ){ style_header("Wiki"); @ <ul> { char *zHomePageName = db_get("project-name",0); if( zHomePageName ){ @ <li> <a href="%s(g.zTop)/wiki?name=%t(zHomePageName)"> @ %h(zHomePageName)</a> wiki home page.</li> } } @ <li> <a href="%s(g.zTop)/timeline?y=w">Recent changes</a> to wiki @ pages. </li> @ <li> <a href="%s(g.zTop)/wiki_rules">Formatting rules</a> for @ wiki.</li> @ <li> Use the <a href="%s(g.zTop)/wiki?name=Sandbox">Sandbox</a> @ to experiment.</li> if( g.okNewWiki ){ @ <li> Create a <a href="%s(g.zTop)/wikinew">new wiki page</a>.</li> if( g.okWrite ){ @ <li> Create a <a href="%s(g.zTop)/eventedit">new event</a>.</li> } } @ <li> <a href="%s(g.zTop)/wcontent">List of All Wiki Pages</a> @ available on this server.</li> @ <li> <form method="get" action="%s(g.zTop)/wfind"><div> @ Search wiki titles: <input type="text" name="title"/> @ <input type="submit" /></div></form> @ </li> @ </ul> style_footer(); return; } |
︙ | ︙ | |||
302 303 304 305 306 307 308 | int nrid; blob_zero(&wiki); db_begin_transaction(); if( isSandbox ){ db_set("sandbox",zBody,0); }else{ login_verify_csrf_secret(); | | < | | | 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 | int nrid; blob_zero(&wiki); db_begin_transaction(); if( isSandbox ){ db_set("sandbox",zBody,0); }else{ login_verify_csrf_secret(); zDate = date_in_standard_format("now"); blob_appendf(&wiki, "D %s\n", zDate); free(zDate); blob_appendf(&wiki, "L %F\n", zPageName); if( rid ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&wiki, "P %s\n", zUuid); free(zUuid); } if( g.zLogin ){ blob_appendf(&wiki, "U %F\n", g.zLogin); } blob_appendf(&wiki, "W %d\n%s\n", strlen(zBody), zBody); md5sum_blob(&wiki, &cksum); blob_appendf(&wiki, "Z %b\n", &cksum); blob_reset(&cksum); nrid = content_put(&wiki); db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); manifest_crosslink(nrid, &wiki); assert( blob_is_reset(&wiki) ); content_deltify(rid, nrid, 0); } db_end_transaction(0); cgi_redirectf("wiki?name=%T", zPageName); } if( P("cancel")!=0 ){ cgi_redirectf("wiki?name=%T", zPageName); |
︙ | ︙ | |||
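Each committed wiki edit above is serialized as a small structured artifact: a D card (timestamp), an L card (page name), an optional P card (parent version), a U card (user), a W card giving the body length followed by the body, and a trailing Z card holding an MD5 of everything before it. A rough sketch of that assembly; the plain snprintf() formatting here is a simplified stand-in for Fossil's blob_appendf() with its %F and %b conversions:

#include <stdio.h>
#include <string.h>

/* Append the control cards for one wiki edit into zOut.  Assumes zOut
** is large enough for the body plus a few hundred bytes of cards; no
** truncation handling in this sketch. */
int wiki_cards(char *zOut, size_t nOut, const char *zDate, const char *zPage,
               const char *zParentUuid, const char *zUser, const char *zBody){
  size_t n = 0;
  n += snprintf(zOut+n, nOut-n, "D %s\n", zDate);
  n += snprintf(zOut+n, nOut-n, "L %s\n", zPage);
  if( zParentUuid ) n += snprintf(zOut+n, nOut-n, "P %s\n", zParentUuid);
  if( zUser )       n += snprintf(zOut+n, nOut-n, "U %s\n", zUser);
  n += snprintf(zOut+n, nOut-n, "W %d\n%s\n", (int)strlen(zBody), zBody);
  return (int)n;  /* caller appends "Z <md5>\n" after hashing zOut[0..n-1] */
}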
350 351 352 353 354 355 356 | blob_reset(&wiki); } for(n=2, z=zBody; z[0]; z++){ if( z[0]=='\n' ) n++; } if( n<20 ) n = 20; if( n>40 ) n = 40; | | | 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 | blob_reset(&wiki); } for(n=2, z=zBody; z[0]; z++){ if( z[0]=='\n' ) n++; } if( n<20 ) n = 20; if( n>40 ) n = 40; @ <form method="post" action="%s(g.zTop)/wikiedit"><div> login_insert_csrf_secret(); @ <input type="hidden" name="name" value="%h(zPageName)" /> @ <textarea name="w" class="wikiedit" cols="80" @ rows="%d(n)" wrap="virtual">%h(zBody)</textarea> @ <br /> @ <input type="submit" name="preview" value="Preview Your Changes" /> @ <input type="submit" name="submit" value="Apply These Changes" /> |
︙ | ︙ | |||
385 386 387 388 389 390 391 | zName = PD("name",""); if( zName[0] && wiki_name_is_wellformed((const unsigned char *)zName) ){ cgi_redirectf("wikiedit?name=%T", zName); } style_header("Create A New Wiki Page"); @ <p>Rules for wiki page names:</p> well_formed_wiki_name_rules(); | | | 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 | zName = PD("name",""); if( zName[0] && wiki_name_is_wellformed((const unsigned char *)zName) ){ cgi_redirectf("wikiedit?name=%T", zName); } style_header("Create A New Wiki Page"); @ <p>Rules for wiki page names:</p> well_formed_wiki_name_rules(); @ <form method="post" action="%s(g.zTop)/wikinew"> @ <p>Name of new wiki page: @ <input style="width: 35;" type="text" name="name" value="%h(zName)" /> @ <input type="submit" value="Create" /> @ </p></form> if( zName[0] ){ @ <p><span class="wikiError"> @ "%h(zName)" is not a valid wiki page name!</span></p> |
︙ | ︙ | |||
475 476 477 478 479 480 481 | pWiki = manifest_get(rid, CFTYPE_WIKI); if( pWiki ){ blob_append(&body, pWiki->zWiki, -1); manifest_destroy(pWiki); } blob_zero(&wiki); db_begin_transaction(); | | < | | | 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 | pWiki = manifest_get(rid, CFTYPE_WIKI); if( pWiki ){ blob_append(&body, pWiki->zWiki, -1); manifest_destroy(pWiki); } blob_zero(&wiki); db_begin_transaction(); zDate = date_in_standard_format("now"); blob_appendf(&wiki, "D %s\n", zDate); blob_appendf(&wiki, "L %F\n", zPageName); if( rid ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&wiki, "P %s\n", zUuid); free(zUuid); } if( g.zLogin ){ blob_appendf(&wiki, "U %F\n", g.zLogin); } appendRemark(&body); blob_appendf(&wiki, "W %d\n%s\n", blob_size(&body), blob_str(&body)); md5sum_blob(&wiki, &cksum); blob_appendf(&wiki, "Z %b\n", &cksum); blob_reset(&cksum); nrid = content_put(&wiki); db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); manifest_crosslink(nrid, &wiki); assert( blob_is_reset(&wiki) ); content_deltify(rid, nrid, 0); db_end_transaction(0); } cgi_redirectf("wiki?name=%T", zPageName); } if( P("cancel")!=0 ){ cgi_redirectf("wiki?name=%T", zPageName); |
︙ | ︙ | |||
517 518 519 520 521 522 523 | appendRemark(&preview); @ Preview:<hr> wiki_convert(&preview, 0, 0); @ <hr> blob_reset(&preview); } zUser = PD("u", g.zLogin); | | | 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 | appendRemark(&preview); @ Preview:<hr> wiki_convert(&preview, 0, 0); @ <hr> blob_reset(&preview); } zUser = PD("u", g.zLogin); @ <form method="post" action="%s(g.zTop)/wikiappend"> login_insert_csrf_secret(); @ <input type="hidden" name="name" value="%h(zPageName)" /> @ Your Name: @ <input type="text" name="u" size="20" value="%h(zUser)" /><br /> @ Comment to append:<br /> @ <textarea name="r" class="wikiedit" cols="80" @ rows="10" wrap="virtual">%h(PD("r",""))</textarea> |
︙ | ︙ | |||
576 577 578 579 580 581 582 | " UNION SELECT attachid FROM attachment" " WHERE target=%Q)" "ORDER BY mtime DESC", timeline_query_for_www(), zPageName, zPageName); db_prepare(&q, zSQL); free(zSQL); zWikiPageName = zPageName; | | | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 | " UNION SELECT attachid FROM attachment" " WHERE target=%Q)" "ORDER BY mtime DESC", timeline_query_for_www(), zPageName, zPageName); db_prepare(&q, zSQL); free(zSQL); zWikiPageName = zPageName; www_print_timeline(&q, TIMELINE_ARTID, 0, 0, wiki_history_extra); db_finalize(&q); style_footer(); } /* ** WEBPAGE: wdiff ** URL: /whistory?name=PAGENAME&a=RID1&b=RID2 |
︙ | ︙ | |||
691 692 693 694 695 696 697 | @ <ul> db_prepare(&q, "SELECT substr(tagname, 6, 1000) FROM tag WHERE tagname like 'wiki-%%%q%%'" " ORDER BY lower(tagname) /*sort*/" , zTitle); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); | | | 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 | @ <ul> db_prepare(&q, "SELECT substr(tagname, 6, 1000) FROM tag WHERE tagname like 'wiki-%%%q%%'" " ORDER BY lower(tagname) /*sort*/" , zTitle); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); @ <li><a href="%s(g.zTop)/wiki?name=%T(zName)">%h(zName)</a></li> } db_finalize(&q); @ </ul> style_footer(); } /* |
︙ | ︙ | |||
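The wiki title search above needs nothing more than a LIKE over the tag table: every wiki page owns a tag named "wiki-<pagename>", so stripping the first five characters of matching tag names yields the page titles. A minimal sketch of the same query through the SQLite C API, using the table and tag naming shown in the code above:

#include <sqlite3.h>
#include <stdio.h>

/* Print every wiki page whose title contains zTitle. */
int find_wiki_pages(sqlite3 *db, const char *zTitle){
  sqlite3_stmt *pStmt;
  char zPattern[256];
  int rc = sqlite3_prepare_v2(db,
     "SELECT substr(tagname, 6) FROM tag"
     " WHERE tagname LIKE 'wiki-%' AND tagname LIKE ?1"
     " ORDER BY lower(tagname)", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  snprintf(zPattern, sizeof(zPattern), "wiki-%%%s%%", zTitle);
  sqlite3_bind_text(pStmt, 1, zPattern, -1, SQLITE_TRANSIENT);
  while( sqlite3_step(pStmt)==SQLITE_ROW ){
    printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
  }
  return sqlite3_finalize(pStmt);
}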
797 798 799 800 801 802 803 | fossil_fatal("no such wiki page: %s", zPageName); } if( rid!=0 && isNew ){ fossil_fatal("wiki page %s already exists", zPageName); } blob_zero(&wiki); | | < | | < | 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 | fossil_fatal("no such wiki page: %s", zPageName); } if( rid!=0 && isNew ){ fossil_fatal("wiki page %s already exists", zPageName); } blob_zero(&wiki); zDate = date_in_standard_format("now"); blob_appendf(&wiki, "D %s\n", zDate); free(zDate); blob_appendf(&wiki, "L %F\n", zPageName ); if( rid ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&wiki, "P %s\n", zUuid); free(zUuid); } user_select(); if( g.zLogin ){ blob_appendf(&wiki, "U %F\n", g.zLogin); } blob_appendf( &wiki, "W %d\n%s\n", blob_size(pContent), blob_str(pContent) ); md5sum_blob(&wiki, &cksum); blob_appendf(&wiki, "Z %b\n", &cksum); blob_reset(&cksum); db_begin_transaction(); nrid = content_put( &wiki); db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); manifest_crosslink(nrid,&wiki); assert( blob_is_reset(&wiki) ); content_deltify(rid,nrid,0); db_end_transaction(0); return 1; } /* ** COMMAND: wiki ** ** Usage: %fossil wiki (export|create|commit|list) WikiName |
︙ | ︙ | |||
877 878 879 880 881 882 883 | ** %fossil wiki diff ?ARTIFACT? ?-f infile[=stdin]? EntryName ** ** Diffs the local copy of a page with a given version (defaulting ** to the head version). */ void wiki_cmd(void){ int n; | | | 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 | ** %fossil wiki diff ?ARTIFACT? ?-f infile[=stdin]? EntryName ** ** Diffs the local copy of a page with a given version (defaulting ** to the head version). */ void wiki_cmd(void){ int n; db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto wiki_cmd_usage; } n = strlen(g.argv[2]); if( n==0 ){ goto wiki_cmd_usage; } |
︙ | ︙ |
Changes to src/wikiformat.c.
︙ | ︙ | |||
52 53 54 55 56 57 58 | #define ATTR_HSPACE 15 #define ATTR_ID 16 #define ATTR_NAME 17 #define ATTR_ROWSPAN 18 #define ATTR_SIZE 19 #define ATTR_SRC 20 #define ATTR_START 21 | > | | | | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 | #define ATTR_HSPACE 15 #define ATTR_ID 16 #define ATTR_NAME 17 #define ATTR_ROWSPAN 18 #define ATTR_SIZE 19 #define ATTR_SRC 20 #define ATTR_START 21 #define ATTR_TARGET 22 #define ATTR_TYPE 23 #define ATTR_VALIGN 24 #define ATTR_VALUE 25 #define ATTR_VSPACE 26 #define ATTR_WIDTH 27 #define AMSK_ALIGN 0x0000001 #define AMSK_ALT 0x0000002 #define AMSK_BGCOLOR 0x0000004 #define AMSK_BORDER 0x0000008 #define AMSK_CELLPADDING 0x0000010 #define AMSK_CELLSPACING 0x0000020 #define AMSK_CLEAR 0x0000040 |
︙ | ︙ | |||
83 84 85 86 87 88 89 90 91 92 93 94 95 96 | #define AMSK_START 0x0080000 #define AMSK_TYPE 0x0100000 #define AMSK_VALIGN 0x0200000 #define AMSK_VALUE 0x0400000 #define AMSK_VSPACE 0x0800000 #define AMSK_WIDTH 0x1000000 #define AMSK_CLASS 0x2000000 static const struct AllowedAttribute { const char *zName; unsigned int iMask; } aAttribute[] = { { 0, 0 }, { "align", AMSK_ALIGN, }, | > | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | #define AMSK_START 0x0080000 #define AMSK_TYPE 0x0100000 #define AMSK_VALIGN 0x0200000 #define AMSK_VALUE 0x0400000 #define AMSK_VSPACE 0x0800000 #define AMSK_WIDTH 0x1000000 #define AMSK_CLASS 0x2000000 #define AMSK_TARGET 0x4000000 static const struct AllowedAttribute { const char *zName; unsigned int iMask; } aAttribute[] = { { 0, 0 }, { "align", AMSK_ALIGN, }, |
︙ | ︙ | |||
110 111 112 113 114 115 116 117 118 119 120 121 122 123 | { "hspace", AMSK_HSPACE, }, { "id", AMSK_ID, }, { "name", AMSK_NAME, }, { "rowspan", AMSK_ROWSPAN, }, { "size", AMSK_SIZE, }, { "src", AMSK_SRC, }, { "start", AMSK_START, }, { "type", AMSK_TYPE, }, { "valign", AMSK_VALIGN, }, { "value", AMSK_VALUE, }, { "vspace", AMSK_VSPACE, }, { "width", AMSK_WIDTH, }, }; | > | 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | { "hspace", AMSK_HSPACE, }, { "id", AMSK_ID, }, { "name", AMSK_NAME, }, { "rowspan", AMSK_ROWSPAN, }, { "size", AMSK_SIZE, }, { "src", AMSK_SRC, }, { "start", AMSK_START, }, { "target", AMSK_TARGET, }, { "type", AMSK_TYPE, }, { "valign", AMSK_VALIGN, }, { "value", AMSK_VALUE, }, { "vspace", AMSK_VSPACE, }, { "width", AMSK_WIDTH, }, }; |
︙ | ︙ | |||
236 237 238 239 240 241 242 | const char *zName; /* Name of the markup */ char iCode; /* The MARKUP_* code */ short int iType; /* The MUTYPE_* code */ int allowedAttr; /* Allowed attributes on this markup */ } aMarkup[] = { { 0, MARKUP_INVALID, 0, 0 }, { "a", MARKUP_A, MUTYPE_HYPERLINK, | | | 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | const char *zName; /* Name of the markup */ char iCode; /* The MARKUP_* code */ short int iType; /* The MUTYPE_* code */ int allowedAttr; /* Allowed attributes on this markup */ } aMarkup[] = { { 0, MARKUP_INVALID, 0, 0 }, { "a", MARKUP_A, MUTYPE_HYPERLINK, AMSK_HREF|AMSK_NAME|AMSK_CLASS|AMSK_TARGET }, { "address", MARKUP_ADDRESS, MUTYPE_BLOCK, 0 }, { "b", MARKUP_B, MUTYPE_FONT, 0 }, { "big", MARKUP_BIG, MUTYPE_FONT, 0 }, { "blockquote", MARKUP_BLOCKQUOTE, MUTYPE_BLOCK, 0 }, { "br", MARKUP_BR, MUTYPE_SINGLE, AMSK_CLEAR }, { "center", MARKUP_CENTER, MUTYPE_BLOCK, 0 }, { "cite", MARKUP_CITE, MUTYPE_FONT, 0 }, |
︙ | ︙ | |||
299 300 301 302 303 304 305 | AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS }, { "tfoot", MARKUP_TFOOT, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, { "th", MARKUP_TH, MUTYPE_TD, AMSK_ALIGN|AMSK_BGCOLOR|AMSK_COLSPAN| AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS }, { "thead", MARKUP_THEAD, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, { "tr", MARKUP_TR, MUTYPE_TR, | | | 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 | AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS }, { "tfoot", MARKUP_TFOOT, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, { "th", MARKUP_TH, MUTYPE_TD, AMSK_ALIGN|AMSK_BGCOLOR|AMSK_COLSPAN| AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS }, { "thead", MARKUP_THEAD, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, { "tr", MARKUP_TR, MUTYPE_TR, AMSK_ALIGN|AMSK_BGCOLOR|AMSK_VALIGN|AMSK_CLASS }, { "tt", MARKUP_TT, MUTYPE_FONT, 0 }, { "u", MARKUP_U, MUTYPE_FONT, 0 }, { "ul", MARKUP_UL, MUTYPE_LIST, AMSK_TYPE|AMSK_COMPACT }, { "var", MARKUP_VAR, MUTYPE_FONT, 0 }, { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, AMSK_ID|AMSK_TYPE }, }; |
︙ | ︙ | |||
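The ATTR_*/AMSK_* pairs above implement a whitelist: each attribute owns a single bit, each markup element stores the OR of the bits it accepts, and validation is a single AND test. The newly allowed "target" attribute simply claims the next free bit (0x4000000) and is OR-ed into the <a> entry. A compressed sketch of the same idea, not Fossil's actual tables:

#include <stdio.h>

#define AMSK_HREF    0x0000001
#define AMSK_NAME    0x0000002
#define AMSK_CLASS   0x0000004
#define AMSK_TARGET  0x0000008   /* newly whitelisted attribute */

struct Markup { const char *zName; unsigned allowedAttr; };

static const struct Markup aMarkup[] = {
  { "a",  AMSK_HREF|AMSK_NAME|AMSK_CLASS|AMSK_TARGET },
  { "br", 0 },
};

/* Return non-zero if attribute bit attrMask may appear on element i. */
int attr_allowed(int i, unsigned attrMask){
  return (aMarkup[i].allowedAttr & attrMask)!=0;
}

int main(void){
  printf("target on <a>:  %d\n", attr_allowed(0, AMSK_TARGET));  /* 1 */
  printf("target on <br>: %d\n", attr_allowed(1, AMSK_TARGET));  /* 0 */
  return 0;
}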
772 773 774 775 776 777 778 | }else{ blob_appendf(pOut, "<%s", aMarkup[p->iCode].zName); for(i=0; i<p->nAttr; i++){ blob_appendf(pOut, " %s", aAttribute[p->aAttr[i].iACode].zName); if( p->aAttr[i].zValue ){ const char *zVal = p->aAttr[i].zValue; if( p->aAttr[i].iACode==ATTR_SRC && zVal[0]=='/' ){ | | | 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 | }else{ blob_appendf(pOut, "<%s", aMarkup[p->iCode].zName); for(i=0; i<p->nAttr; i++){ blob_appendf(pOut, " %s", aAttribute[p->aAttr[i].iACode].zName); if( p->aAttr[i].zValue ){ const char *zVal = p->aAttr[i].zValue; if( p->aAttr[i].iACode==ATTR_SRC && zVal[0]=='/' ){ blob_appendf(pOut, "=\"%s%s\"", g.zTop, zVal); }else{ blob_appendf(pOut, "=\"%s\"", zVal); } } } if (p->iType & MUTYPE_SINGLE){ blob_append(pOut, " /", 2); |
︙ | ︙ | |||
934 935 936 937 938 939 940 941 942 943 944 945 946 947 | */ static int is_valid_uuid(const char *z){ int n = strlen(z); if( n<4 || n>UUID_SIZE ) return 0; if( !validate16(z, n) ) return 0; return 1; } /* ** zTarget is guaranteed to be a UUID. It might be the UUID of a ticket. ** If it is, store in *pClosed a true or false depending on whether or not ** the ticket is closed and return true. If zTarget ** is not the UUID of a ticket, return false. */ | > > > > > > > > > > > > > > > > | 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 | */ static int is_valid_uuid(const char *z){ int n = strlen(z); if( n<4 || n>UUID_SIZE ) return 0; if( !validate16(z, n) ) return 0; return 1; } /* ** Return TRUE if a UUID corresponds to an artifact in this ** repository. */ static int in_this_repo(const char *zUuid){ static Stmt q; int rc; db_static_prepare(&q, "SELECT 1 FROM blob WHERE uuid>=:u AND +uuid GLOB (:u || '*')" ); db_bind_text(&q, ":u", zUuid); rc = db_step(&q); db_reset(&q); return rc==SQLITE_ROW; } /* ** zTarget is guaranteed to be a UUID. It might be the UUID of a ticket. ** If it is, store in *pClosed a true or false depending on whether or not ** the ticket is closed and return true. If zTarget ** is not the UUID of a ticket, return false. */ |
︙ | ︙ | |||
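The new in_this_repo() check accepts a full or partial UUID and asks whether any artifact matches. The "uuid>=:u AND +uuid GLOB (:u || '*')" predicate is the interesting part: the range comparison lets SQLite seek into the uuid index, the GLOB confirms the prefix, and the unary "+" keeps the query planner from trying to drive the index with the GLOB term instead. A sketch of the same prefix probe against a blob(uuid) table:

#include <sqlite3.h>

/* Return 1 if some artifact UUID starts with zPrefix, else 0. */
int uuid_prefix_exists(sqlite3 *db, const char *zPrefix){
  sqlite3_stmt *pStmt;
  int found = 0;
  if( sqlite3_prepare_v2(db,
        "SELECT 1 FROM blob"
        " WHERE uuid>=?1 AND +uuid GLOB (?1 || '*')"
        " LIMIT 1", -1, &pStmt, 0)!=SQLITE_OK ){
    return 0;
  }
  sqlite3_bind_text(pStmt, 1, zPrefix, -1, SQLITE_TRANSIENT);
  found = sqlite3_step(pStmt)==SQLITE_ROW;
  sqlite3_finalize(pStmt);
  return found;
}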
1023 1024 1025 1026 1027 1028 1029 | || strncmp(zTarget, "ftp:", 4)==0 || strncmp(zTarget, "mailto:", 7)==0 ){ blob_appendf(p->pOut, "<a href=\"%s\">", zTarget); /* zTerm = "⟾</a>"; // doesn't work on windows */ }else if( zTarget[0]=='/' ){ if( 1 /* g.okHistory */ ){ | | | | | | | | | > > | > > > | > | | | | | | 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 | || strncmp(zTarget, "ftp:", 4)==0 || strncmp(zTarget, "mailto:", 7)==0 ){ blob_appendf(p->pOut, "<a href=\"%s\">", zTarget); /* zTerm = "⟾</a>"; // doesn't work on windows */ }else if( zTarget[0]=='/' ){ if( 1 /* g.okHistory */ ){ blob_appendf(p->pOut, "<a href=\"%s%h\">", g.zTop, zTarget); }else{ zTerm = ""; } }else if( zTarget[0]=='.' || zTarget[0]=='#' ){ if( 1 /* g.okHistory */ ){ blob_appendf(p->pOut, "<a href=\"%h\">", zTarget); }else{ zTerm = ""; } }else if( is_valid_uuid(zTarget) ){ int isClosed = 0; if( is_ticket(zTarget, &isClosed) ){ /* Special display processing for tickets. Display the hyperlink ** as crossed out if the ticket is closed. */ if( isClosed ){ if( g.okHistory ){ blob_appendf(p->pOut, "<a href=\"%s/info/%s\"><span class=\"wikiTagCancelled\">[", g.zTop, zTarget ); zTerm = "]</span></a>"; }else{ blob_appendf(p->pOut,"<span class=\"wikiTagCancelled\">["); zTerm = "]</span>"; } }else{ if( g.okHistory ){ blob_appendf(p->pOut,"<a href=\"%s/info/%s\">[", g.zTop, zTarget ); zTerm = "]</a>"; }else{ blob_appendf(p->pOut, "["); zTerm = "]"; } } }else if( !in_this_repo(zTarget) ){ blob_appendf(p->pOut, "<span class=\"brokenlink\">[", zTarget); zTerm = "]</span>"; }else if( g.okHistory ){ blob_appendf(p->pOut, "<a href=\"%s/info/%s\">[", g.zTop, zTarget); zTerm = "]</a>"; } }else if( strlen(zTarget)>=10 && fossil_isdigit(zTarget[0]) && zTarget[4]=='-' && db_int(0, "SELECT datetime(%Q) NOT NULL", zTarget) ){ blob_appendf(p->pOut, "<a href=\"%s/timeline?c=%T\">", g.zTop, zTarget); }else if( strncmp(zTarget, "wiki:", 5)==0 && wiki_name_is_wellformed((const unsigned char*)zTarget) ){ zTarget += 5; blob_appendf(p->pOut, "<a href=\"%s/wiki?name=%T\">", g.zTop, zTarget); }else if( wiki_name_is_wellformed((const unsigned char *)zTarget) ){ blob_appendf(p->pOut, "<a href=\"%s/wiki?name=%T\">", g.zTop, zTarget); }else{ blob_appendf(p->pOut, "<span class=\"brokenlink\">[%h]</span>", zTarget); zTerm = ""; } assert( strlen(zTerm)<nClose ); sqlite3_snprintf(nClose, zClose, "%s", zTerm); } /* ** Check to see if the given parsed markup is the correct ** </verbatim> tag. */ static int endVerbatim(Renderer *p, ParsedMarkup *pMarkup){ |
︙ | ︙ |
Changes to src/winhttp.c.
︙ | ︙ | |||
26 27 28 29 30 31 32 | /* ** The HttpRequest structure holds information about each incoming ** HTTP request. */ typedef struct HttpRequest HttpRequest; struct HttpRequest { | | | | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | /* ** The HttpRequest structure holds information about each incoming ** HTTP request. */ typedef struct HttpRequest HttpRequest; struct HttpRequest { int id; /* ID counter */ SOCKET s; /* Socket on which to receive data */ SOCKADDR_IN addr; /* Address from which data is coming */ const char *zOptions; /* --notfound and/or --localauth options */ }; /* ** Prefix for a temporary file. */ static char *zTempPrefix; /* ** Look at the HTTP header contained in zHdr. Find the content ** length and return it. Return 0 if there is no Content-Length: ** header line. */ static int find_content_length(const char *zHdr){ while( *zHdr ){ if( zHdr[0]=='\n' ){ if( zHdr[1]=='\r' ) return 0; if( fossil_strnicmp(&zHdr[1], "content-length:", 15)==0 ){ return atoi(&zHdr[17]); } } zHdr++; } return 0; } |
︙ | ︙ | |||
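find_content_length() above walks the raw request header looking for a line that begins with "content-length:". An equivalent standalone version, assuming POSIX strncasecmp(), is sketched here:

#include <strings.h>   /* strncasecmp() */
#include <stdlib.h>

/* Return the Content-Length announced in an HTTP header block,
** or 0 if the header is absent.  zHdr is NUL-terminated text. */
int header_content_length(const char *zHdr){
  const char *z;
  for(z=zHdr; *z; z++){
    if( *z=='\n' && strncasecmp(z+1, "content-length:", 15)==0 ){
      return atoi(z+16);
    }
  }
  return 0;
}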
69 70 71 72 73 74 75 | int wanted = 0; char *z; char zRequestFName[100]; char zReplyFName[100]; char zCmd[2000]; /* Command-line to process the request */ char zHdr[2000]; /* The HTTP request header */ | > | > | | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 | int wanted = 0; char *z; char zRequestFName[100]; char zReplyFName[100]; char zCmd[2000]; /* Command-line to process the request */ char zHdr[2000]; /* The HTTP request header */ sqlite3_snprintf(sizeof(zRequestFName), zRequestFName, "%s_in%d.txt", zTempPrefix, p->id); sqlite3_snprintf(sizeof(zReplyFName), zReplyFName, "%s_out%d.txt", zTempPrefix, p->id); amt = 0; while( amt<sizeof(zHdr) ){ got = recv(p->s, &zHdr[amt], sizeof(zHdr)-1-amt, 0); if( got==SOCKET_ERROR ) goto end_request; if( got==0 ){ wanted = 0; break; |
︙ | ︙ | |||
103 104 105 106 107 108 109 | }else{ break; } wanted -= got; } fclose(out); out = 0; | | | | | 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | }else{ break; } wanted -= got; } fclose(out); out = 0; sqlite3_snprintf(sizeof(zCmd), zCmd, "\"%s\" http \"%s\" %s %s %s --nossl%s", fossil_nameofexe(), g.zRepositoryName, zRequestFName, zReplyFName, inet_ntoa(p->addr.sin_addr), p->zOptions ); fossil_system(zCmd); in = fopen(zReplyFName, "rb"); if( in ){ while( (got = fread(zHdr, 1, sizeof(zHdr), in))>0 ){ send(p->s, zHdr, got, 0); } |
︙ | ︙ | |||
140 141 142 143 144 145 146 | int flags /* One or more HTTP_SERVER_ flags */ ){ WSADATA wd; SOCKET s = INVALID_SOCKET; SOCKADDR_IN addr; int idCnt = 0; int iPort = mnPort; | | > | > | | | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | int flags /* One or more HTTP_SERVER_ flags */ ){ WSADATA wd; SOCKET s = INVALID_SOCKET; SOCKADDR_IN addr; int idCnt = 0; int iPort = mnPort; Blob options; if( zStopper ) unlink(zStopper); blob_zero(&options); if( zNotFound ){ blob_appendf(&options, " --notfound %s", zNotFound); } if( g.useLocalauth ){ blob_appendf(&options, " --localauth"); } if( WSAStartup(MAKEWORD(1,1), &wd) ){ fossil_fatal("unable to initialize winsock"); } while( iPort<=mxPort ){ s = socket(AF_INET, SOCK_STREAM, 0); if( s==INVALID_SOCKET ){ |
︙ | ︙ | |||
209 210 211 212 213 214 215 | closesocket(s); fossil_fatal("error from accept()"); } p = fossil_malloc( sizeof(*p) ); p->id = ++idCnt; p->s = client; p->addr = client_addr; | | | 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 | closesocket(s); fossil_fatal("error from accept()"); } p = fossil_malloc( sizeof(*p) ); p->id = ++idCnt; p->s = client; p->addr = client_addr; p->zOptions = blob_str(&options); _beginthread(win32_process_one_http_request, 0, (void*)p); } closesocket(s); WSACleanup(); } #endif /* _WIN32 -- This code is for win32 only */ |
Changes to src/xfer.c.
︙ | ︙ | |||
25 26 27 28 29 30 31 | ** a client or a server that is participating in xfer. */ typedef struct Xfer Xfer; struct Xfer { Blob *pIn; /* Input text from the other side */ Blob *pOut; /* Compose our reply here */ Blob line; /* The current line of input */ | | > > | | | 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | ** a client or a server that is participating in xfer. */ typedef struct Xfer Xfer; struct Xfer { Blob *pIn; /* Input text from the other side */ Blob *pOut; /* Compose our reply here */ Blob line; /* The current line of input */ Blob aToken[6]; /* Tokenized version of line */ Blob err; /* Error message text */ int nToken; /* Number of tokens in line */ int nIGotSent; /* Number of "igot" cards sent */ int nGimmeSent; /* Number of gimme cards sent */ int nFileSent; /* Number of files sent */ int nDeltaSent; /* Number of deltas sent */ int nFileRcvd; /* Number of files received */ int nDeltaRcvd; /* Number of deltas received */ int nDanglingFile; /* Number of dangling deltas received */ int mxSend; /* Stop sending "file" with pOut reaches this size */ u8 syncPrivate; /* True to enable syncing private content */ u8 nextIsPrivate; /* If true, next "file" received is a private */ }; /* ** The input blob contains a UUID. Convert it into a record ID. ** Create a phantom record if no prior record exists and ** phantomize is true. ** ** Compare to uuid_to_rid(). This routine takes a blob argument ** and does less error checking. */ static int rid_from_uuid(Blob *pUuid, int phantomize, int isPrivate){ static Stmt q; int rid; db_static_prepare(&q, "SELECT rid FROM blob WHERE uuid=:uuid"); db_bind_str(&q, ":uuid", pUuid); if( db_step(&q)==SQLITE_ROW ){ rid = db_column_int(&q, 0); }else{ rid = 0; } db_reset(&q); if( rid==0 && phantomize ){ rid = content_new(blob_str(pUuid), isPrivate); } return rid; } /* ** Remember that the other side of the connection already has a copy ** of the file rid. |
︙ | ︙ | |||
104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 | ** be public and is therefore removed from the "private" table. */ static void xfer_accept_file(Xfer *pXfer, int cloneFlag){ int n; int rid; int srcid = 0; Blob content, hash; if( pXfer->nToken<3 || pXfer->nToken>4 || !blob_is_uuid(&pXfer->aToken[1]) || !blob_is_int(&pXfer->aToken[pXfer->nToken-1], &n) || n<0 || (pXfer->nToken==4 && !blob_is_uuid(&pXfer->aToken[2])) ){ blob_appendf(&pXfer->err, "malformed file line"); return; } blob_zero(&content); blob_zero(&hash); blob_extract(pXfer->pIn, n, &content); if( !cloneFlag && uuid_is_shunned(blob_str(&pXfer->aToken[1])) ){ /* Ignore files that have been shunned */ return; } if( cloneFlag ){ if( pXfer->nToken==4 ){ | > > > > > > > | | > | | > | | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 | ** be public and is therefore removed from the "private" table. */ static void xfer_accept_file(Xfer *pXfer, int cloneFlag){ int n; int rid; int srcid = 0; Blob content, hash; int isPriv; isPriv = pXfer->nextIsPrivate; pXfer->nextIsPrivate = 0; if( pXfer->nToken<3 || pXfer->nToken>4 || !blob_is_uuid(&pXfer->aToken[1]) || !blob_is_int(&pXfer->aToken[pXfer->nToken-1], &n) || n<0 || (pXfer->nToken==4 && !blob_is_uuid(&pXfer->aToken[2])) ){ blob_appendf(&pXfer->err, "malformed file line"); return; } blob_zero(&content); blob_zero(&hash); blob_extract(pXfer->pIn, n, &content); if( !cloneFlag && uuid_is_shunned(blob_str(&pXfer->aToken[1])) ){ /* Ignore files that have been shunned */ return; } if( isPriv && !g.okPrivate ){ /* Do not accept private files if not authorized */ return; } if( cloneFlag ){ if( pXfer->nToken==4 ){ srcid = rid_from_uuid(&pXfer->aToken[2], 1, isPriv); pXfer->nDeltaRcvd++; }else{ srcid = 0; pXfer->nFileRcvd++; } rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, 0, isPriv); remote_has(rid); blob_reset(&content); return; } if( pXfer->nToken==4 ){ Blob src, next; srcid = rid_from_uuid(&pXfer->aToken[2], 1, isPriv); if( content_get(srcid, &src)==0 ){ rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, 0, isPriv); pXfer->nDanglingFile++; db_multi_exec("DELETE FROM phantom WHERE rid=%d", rid); if( !isPriv ) content_make_public(rid); return; } pXfer->nDeltaRcvd++; blob_delta_apply(&src, &content, &next); blob_reset(&src); blob_reset(&content); content = next; }else{ pXfer->nFileRcvd++; } sha1sum_blob(&content, &hash); if( !blob_eq_str(&pXfer->aToken[1], blob_str(&hash), -1) ){ blob_appendf(&pXfer->err, "content does not match sha1 hash"); } rid = content_put_ex(&content, blob_str(&hash), 0, 0, isPriv); blob_reset(&hash); if( rid==0 ){ blob_appendf(&pXfer->err, "%s", g.zErrMsg); blob_reset(&content); }else{ if( !isPriv ) content_make_public(rid); 
manifest_crosslink(rid, &content); } assert( blob_is_reset(&content) ); remote_has(rid); } /* ** The aToken[0..nToken-1] blob array is a parse of a "cfile" line ** message. This routine finishes parsing that message and does ** a record insert of the file. The difference between "file" and ** "cfile" is that with "cfile" the content is already compressed. ** ** The file line is in one of the following two forms: ** ** cfile UUID USIZE CSIZE \n CONTENT ** cfile UUID DELTASRC USIZE CSIZE \n CONTENT ** ** The content is CSIZE bytes immediately following the newline. ** If DELTASRC exists, then the CONTENT is a delta against the ** content of DELTASRC. ** ** The original size of the UUID artifact is USIZE. ** ** If any error occurs, write a message into pErr which has already ** be initialized to an empty string. ** ** Any artifact successfully received by this routine is considered to ** be public and is therefore removed from the "private" table. */ static void xfer_accept_compressed_file(Xfer *pXfer){ int szC; /* CSIZE */ int szU; /* USIZE */ int rid; int srcid = 0; Blob content; int isPriv; isPriv = pXfer->nextIsPrivate; pXfer->nextIsPrivate = 0; if( pXfer->nToken<4 || pXfer->nToken>5 || !blob_is_uuid(&pXfer->aToken[1]) || !blob_is_int(&pXfer->aToken[pXfer->nToken-2], &szU) || !blob_is_int(&pXfer->aToken[pXfer->nToken-1], &szC) || szC<0 || szU<0 || (pXfer->nToken==5 && !blob_is_uuid(&pXfer->aToken[2])) ){ blob_appendf(&pXfer->err, "malformed cfile line"); return; } if( isPriv && !g.okPrivate ){ /* Do not accept private files if not authorized */ return; } blob_zero(&content); blob_extract(pXfer->pIn, szC, &content); if( uuid_is_shunned(blob_str(&pXfer->aToken[1])) ){ /* Ignore files that have been shunned */ return; } if( pXfer->nToken==5 ){ srcid = rid_from_uuid(&pXfer->aToken[2], 1, isPriv); pXfer->nDeltaRcvd++; }else{ srcid = 0; pXfer->nFileRcvd++; } rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, szC, isPriv); remote_has(rid); blob_reset(&content); } /* ** Try to send a file as a delta against its parent. ** If successful, return the number of bytes in the delta. ** If we cannot generate an appropriate delta, then send ** nothing and return zero. ** ** Never send a delta against a private artifact. */ static int send_delta_parent( Xfer *pXfer, /* The transfer context */ int rid, /* record id of the file to send */ int isPrivate, /* True if rid is a private artifact */ Blob *pContent, /* The content of the file to send */ Blob *pUuid /* The UUID of the file to send */ ){ static const char *azQuery[] = { "SELECT pid FROM plink x" " WHERE cid=%d" " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)" |
︙ | ︙ | |||
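The "file" and "cfile" cards described above carry artifacts over the sync protocol: "file UUID SIZE\n" followed by SIZE bytes of content, "file UUID DELTASRC SIZE\n" when the payload is a delta against another artifact, and "cfile ... USIZE CSIZE\n" when the payload is still compressed; a bare "private\n" card immediately before marks the next artifact as private. A sketch of emitting the uncompressed form; writing to a FILE* stream here stands in for appending to Fossil's outbound Blob:

#include <stdio.h>

/* Write one "file" card to the outbound stream.  zDeltaSrc may be NULL
** for a full-content payload.  Sketch only. */
void send_file_card(FILE *out, const char *zUuid, const char *zDeltaSrc,
                    const unsigned char *aPayload, int nPayload,
                    int isPrivate){
  if( isPrivate ) fputs("private\n", out);       /* flags the next card */
  if( zDeltaSrc ){
    fprintf(out, "file %s %s %d\n", zUuid, zDeltaSrc, nPayload);
  }else{
    fprintf(out, "file %s %d\n", zUuid, nPayload);
  }
  fwrite(aPayload, 1, (size_t)nPayload, out);
}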
203 204 205 206 207 208 209 | Blob src, delta; int size = 0; int srcId = 0; for(i=0; srcId==0 && i<count(azQuery); i++){ srcId = db_int(0, azQuery[i], rid); } | > > | > > < > | > > > | 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 | Blob src, delta; int size = 0; int srcId = 0; for(i=0; srcId==0 && i<count(azQuery); i++){ srcId = db_int(0, azQuery[i], rid); } if( srcId>0 && (pXfer->syncPrivate || !content_is_private(srcId)) && content_get(srcId, &src) ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", srcId); blob_delta_create(&src, pContent, &delta); size = blob_size(&delta); if( size>=blob_size(pContent)-50 ){ size = 0; }else if( uuid_is_shunned(zUuid) ){ size = 0; }else{ if( isPrivate ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "file %b %s %d\n", pUuid, zUuid, size); blob_append(pXfer->pOut, blob_buffer(&delta), size); } blob_reset(&delta); free(zUuid); blob_reset(&src); } return size; } /* ** Try to send a file as a native delta. ** If successful, return the number of bytes in the delta. ** If we cannot generate an appropriate delta, then send ** nothing and return zero. ** ** Never send a delta against a private artifact. */ static int send_delta_native( Xfer *pXfer, /* The transfer context */ int rid, /* record id of the file to send */ int isPrivate, /* True if rid is a private artifact */ Blob *pUuid /* The UUID of the file to send */ ){ Blob src, delta; int size = 0; int srcId; srcId = db_int(0, "SELECT srcid FROM delta WHERE rid=%d", rid); if( srcId>0 && (pXfer->syncPrivate || !content_is_private(srcId)) ){ blob_zero(&src); db_blob(&src, "SELECT uuid FROM blob WHERE rid=%d", srcId); if( uuid_is_shunned(blob_str(&src)) ){ blob_reset(&src); return 0; } blob_zero(&delta); db_blob(&delta, "SELECT content FROM blob WHERE rid=%d", rid); blob_uncompress(&delta, &delta); if( isPrivate ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "file %b %b %d\n", pUuid, &src, blob_size(&delta)); blob_append(pXfer->pOut, blob_buffer(&delta), blob_size(&delta)); size = blob_size(&delta); blob_reset(&delta); blob_reset(&src); }else{ |
︙ | ︙ | |||
279 280 281 282 283 284 285 286 | ** It should never be the case that rid is a private artifact. But ** as a precaution, this routine does check on rid and if it is private ** this routine becomes a no-op. */ static void send_file(Xfer *pXfer, int rid, Blob *pUuid, int nativeDelta){ Blob content, uuid; int size = 0; | > | | 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 | ** It should never be the case that rid is a private artifact. But ** as a precaution, this routine does check on rid and if it is private ** this routine becomes a no-op. */ static void send_file(Xfer *pXfer, int rid, Blob *pUuid, int nativeDelta){ Blob content, uuid; int size = 0; int isPriv = content_is_private(rid); if( pXfer->syncPrivate==0 && isPriv ) return; if( db_exists("SELECT 1 FROM onremote WHERE rid=%d", rid) ){ return; } blob_zero(&uuid); db_blob(&uuid, "SELECT uuid FROM blob WHERE rid=%d AND size>=0", rid); if( blob_size(&uuid)==0 ){ return; |
︙ | ︙ | |||
302 303 304 305 306 307 308 | pUuid = &uuid; } if( uuid_is_shunned(blob_str(pUuid)) ){ blob_reset(&uuid); return; } if( pXfer->mxSend<=blob_size(pXfer->pOut) ){ | > | | | > > > > | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > < > | | > | | 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 | pUuid = &uuid; } if( uuid_is_shunned(blob_str(pUuid)) ){ blob_reset(&uuid); return; } if( pXfer->mxSend<=blob_size(pXfer->pOut) ){ const char *zFormat = isPriv ? "igot %b 1\n" : "igot %b\n"; blob_appendf(pXfer->pOut, zFormat, pUuid); pXfer->nIGotSent++; blob_reset(&uuid); return; } if( nativeDelta ){ size = send_delta_native(pXfer, rid, isPriv, pUuid); if( size ){ pXfer->nDeltaSent++; } } if( size==0 ){ content_get(rid, &content); if( !nativeDelta && blob_size(&content)>100 ){ size = send_delta_parent(pXfer, rid, isPriv, &content, pUuid); } if( size==0 ){ int size = blob_size(&content); if( isPriv ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "file %b %d\n", pUuid, size); blob_append(pXfer->pOut, blob_buffer(&content), size); pXfer->nFileSent++; }else{ pXfer->nDeltaSent++; } } remote_has(rid); blob_reset(&uuid); #if 0 if( blob_buffer(pXfer->pOut)[blob_size(pXfer->pOut)-1]!='\n' ){ blob_appendf(pXfer->pOut, "\n", 1); } #endif } /* ** Send the file identified by rid as a compressed artifact. Basically, ** send the content exactly as it appears in the BLOB table using ** a "cfile" card. */ static void send_compressed_file(Xfer *pXfer, int rid){ const char *zContent; const char *zUuid; const char *zDelta; int szU; int szC; int rc; int isPrivate; static Stmt q1; isPrivate = content_is_private(rid); if( isPrivate && pXfer->syncPrivate==0 ) return; db_static_prepare(&q1, "SELECT uuid, size, content," " (SELECT uuid FROM delta, blob" " WHERE delta.rid=:rid AND delta.srcid=blob.rid)" " FROM blob" " WHERE rid=:rid" " AND size>=0" " AND NOT EXISTS(SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)" ); db_bind_int(&q1, ":rid", rid); rc = db_step(&q1); if( rc==SQLITE_ROW ){ zUuid = db_column_text(&q1, 0); szU = db_column_int(&q1, 1); szC = db_column_bytes(&q1, 2); zContent = db_column_raw(&q1, 2); zDelta = db_column_text(&q1, 3); if( isPrivate ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "cfile %s ", zUuid); if( zDelta ){ blob_appendf(pXfer->pOut, "%s ", zDelta); pXfer->nDeltaSent++; }else{ pXfer->nFileSent++; } blob_appendf(pXfer->pOut, "%d %d\n", szU, szC); blob_append(pXfer->pOut, zContent, szC); if( blob_buffer(pXfer->pOut)[blob_size(pXfer->pOut)-1]!='\n' ){ blob_appendf(pXfer->pOut, "\n", 1); } } db_reset(&q1); } /* ** Send a gimme message for every phantom. ** ** Except: do not request shunned artifacts. And do not request ** private artifacts if we are not doing a private transfer. */ static void request_phantoms(Xfer *pXfer, int maxReq){ Stmt q; db_prepare(&q, "SELECT uuid FROM phantom JOIN blob USING(rid)" " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid) %s", (pXfer->syncPrivate ? 
"" : " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)") ); while( db_step(&q)==SQLITE_ROW && maxReq-- > 0 ){ const char *zUuid = db_column_text(&q, 0); blob_appendf(pXfer->pOut, "gimme %s\n", zUuid); pXfer->nGimmeSent++; } db_finalize(&q); |
︙ | ︙ | |||
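The routines above build the sync reply out of plain-text "cards": a one-line header, optionally followed by raw artifact content. As a quick illustration of the header formats named in the comments ("file UUID SIZE", "cfile UUID USIZE CSIZE", "igot UUID ?ISPRIVATE?", "gimme UUID"), here is a standalone sketch; it is not Fossil's own code, the artifact ID and sizes are invented, and on the wire a "file"/"cfile" header is followed immediately by SIZE (or CSIZE) bytes of content.

    #include <stdio.h>

    /* Print the card headers described in the comments above.  The 40-char
    ** artifact ID below is a made-up placeholder. */
    int main(void){
      const char *zUuid = "0000000000000000000000000000000000000000";
      int szContent = 1234;     /* uncompressed size */
      int szCompr   = 567;      /* compressed size, used only by "cfile" */

      printf("file %s %d\n", zUuid, szContent);           /* artifact, whole */
      printf("cfile %s %d %d\n", zUuid, szContent, szCompr); /* artifact, as stored */
      printf("igot %s\n", zUuid);                         /* "I have this artifact" */
      printf("igot %s 1\n", zUuid);                       /* same, artifact is private */
      printf("gimme %s\n", zUuid);                        /* "please send this artifact" */
      return 0;
    }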
402 403 404 405 406 407 408 409 410 411 412 413 414 415 | int rc = -1; char *zLogin = blob_terminate(pLogin); defossilize(zLogin); if( strcmp(zLogin, "nobody")==0 || strcmp(zLogin,"anonymous")==0 ){ return 0; /* Anybody is allowed to sync as "nobody" or "anonymous" */ } db_prepare(&q, "SELECT pw, cap, uid FROM user" " WHERE login=%Q" " AND login NOT IN ('anonymous','nobody','developer','reader')" " AND length(pw)>0", zLogin ); | > > > | 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 | int rc = -1; char *zLogin = blob_terminate(pLogin); defossilize(zLogin); if( strcmp(zLogin, "nobody")==0 || strcmp(zLogin,"anonymous")==0 ){ return 0; /* Anybody is allowed to sync as "nobody" or "anonymous" */ } if( fossil_strcmp(P("REMOTE_USER"), zLogin)==0 ){ return 0; /* Accept Basic Authorization */ } db_prepare(&q, "SELECT pw, cap, uid FROM user" " WHERE login=%Q" " AND login NOT IN ('anonymous','nobody','developer','reader')" " AND length(pw)>0", zLogin ); |
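The new branch above accepts a login card without a signature check when the name matches the user that the HTTP layer has already authenticated (the CGI REMOTE_USER value). A minimal standalone sketch of that shortcut follows; it is an illustration only, and it reads the variable with plain getenv() to stay self-contained, whereas the real code goes through Fossil's own CGI parameter layer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Accept the login if the web server already authenticated this user. */
    static int login_matches_http_auth(const char *zLogin){
      const char *zRemote = getenv("REMOTE_USER");   /* set by the HTTP server */
      return zRemote!=0 && zLogin!=0 && strcmp(zRemote, zLogin)==0;
    }

    int main(void){
      printf("accept without signature check: %d\n",
             login_matches_http_auth("venkat"));
      return 0;
    }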
︙ | ︙ | |||
429 430 431 432 433 434 435 | blob_reset(&combined); if( rc!=0 && szPw!=40 ){ /* If this server stores cleartext passwords and the password did not ** match, then perhaps the client is sending SHA1 passwords. Try ** again with the SHA1 password. */ const char *zPw = db_column_text(&q, 0); | | | < < < | 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 | blob_reset(&combined); if( rc!=0 && szPw!=40 ){ /* If this server stores cleartext passwords and the password did not ** match, then perhaps the client is sending SHA1 passwords. Try ** again with the SHA1 password. */ const char *zPw = db_column_text(&q, 0); char *zSecret = sha1_shared_secret(zPw, blob_str(pLogin), 0); blob_zero(&combined); blob_copy(&combined, pNonce); blob_append(&combined, zSecret, -1); free(zSecret); sha1sum_blob(&combined, &hash); rc = blob_compare(&hash, pSig); blob_reset(&hash); blob_reset(&combined); } if( rc==0 ){ const char *zCap; zCap = db_column_text(&q, 1); login_set_capabilities(zCap, 0); g.userUid = db_column_int(&q, 2); g.zLogin = mprintf("%b", pLogin); g.zNonce = mprintf("%b", pNonce); } } db_finalize(&q); return rc; } /* |
︙ | ︙ | |||
481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 | ** Check to see if the number of unclustered entries is greater than ** 100 and if it is, form a new cluster. Unclustered phantoms do not ** count toward the 100 total. And phantoms are never added to a new ** cluster. */ void create_cluster(void){ Blob cluster, cksum; Stmt q; int nUncl; int nRow = 0; /* We should not ever get any private artifacts in the unclustered table. ** But if we do (because of a bug) now is a good time to delete them. */ db_multi_exec( "DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)" ); nUncl = db_int(0, "SELECT count(*) FROM unclustered /*scan*/" " WHERE NOT EXISTS(SELECT 1 FROM phantom" " WHERE rid=unclustered.rid)"); if( nUncl>=100 ){ blob_zero(&cluster); db_prepare(&q, "SELECT uuid FROM unclustered, blob" " WHERE NOT EXISTS(SELECT 1 FROM phantom" " WHERE rid=unclustered.rid)" " AND unclustered.rid=blob.rid" " AND NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" " ORDER BY 1"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&cluster, "M %s\n", db_column_text(&q, 0)); nRow++; if( nRow>=800 && nUncl>nRow+100 ){ md5sum_blob(&cluster, &cksum); blob_appendf(&cluster, "Z %b\n", &cksum); blob_reset(&cksum); | > > > | > | > > > > | > > > > > > > > > > > > > > > > > | | | 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 | ** Check to see if the number of unclustered entries is greater than ** 100 and if it is, form a new cluster. Unclustered phantoms do not ** count toward the 100 total. And phantoms are never added to a new ** cluster. */ void create_cluster(void){ Blob cluster, cksum; Blob deleteWhere; Stmt q; int nUncl; int nRow = 0; int rid; /* We should not ever get any private artifacts in the unclustered table. ** But if we do (because of a bug) now is a good time to delete them. 
*/ db_multi_exec( "DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)" ); nUncl = db_int(0, "SELECT count(*) FROM unclustered /*scan*/" " WHERE NOT EXISTS(SELECT 1 FROM phantom" " WHERE rid=unclustered.rid)"); if( nUncl>=100 ){ blob_zero(&cluster); blob_zero(&deleteWhere); db_prepare(&q, "SELECT uuid FROM unclustered, blob" " WHERE NOT EXISTS(SELECT 1 FROM phantom" " WHERE rid=unclustered.rid)" " AND unclustered.rid=blob.rid" " AND NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" " ORDER BY 1"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(&cluster, "M %s\n", db_column_text(&q, 0)); nRow++; if( nRow>=800 && nUncl>nRow+100 ){ md5sum_blob(&cluster, &cksum); blob_appendf(&cluster, "Z %b\n", &cksum); blob_reset(&cksum); rid = content_put(&cluster); blob_reset(&cluster); nUncl -= nRow; nRow = 0; blob_appendf(&deleteWhere, ",%d", rid); } } db_finalize(&q); db_multi_exec( "DELETE FROM unclustered WHERE rid NOT IN (0 %s)", blob_str(&deleteWhere) ); blob_reset(&deleteWhere); if( nRow>0 ){ md5sum_blob(&cluster, &cksum); blob_appendf(&cluster, "Z %b\n", &cksum); blob_reset(&cksum); content_put(&cluster); blob_reset(&cluster); } } } /* ** Send igot messages for every private artifact */ static int send_private(Xfer *pXfer){ int cnt = 0; Stmt q; if( pXfer->syncPrivate ){ db_prepare(&q, "SELECT uuid FROM private JOIN blob USING(rid)"); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pXfer->pOut, "igot %s 1\n", db_column_text(&q,0)); cnt++; } db_finalize(&q); } return cnt; } /* ** Send an igot message for every entry in unclustered table. ** Return the number of cards sent. */ static int send_unclustered(Xfer *pXfer){ Stmt q; int cnt = 0; db_prepare(&q, "SELECT uuid FROM unclustered JOIN blob USING(rid)" " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=blob.rid)" " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)" ); while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pXfer->pOut, "igot %s\n", db_column_text(&q, 0)); cnt++; } db_finalize(&q); return cnt; |
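The clustering logic above groups unclustered artifacts into cluster artifacts: a run of "M &lt;artifact-id&gt;" cards (at most roughly 800 per cluster) closed by a "Z" card carrying an MD5 checksum of the cluster text. The standalone sketch below shows only the batching shape; the checksum is left as a placeholder, the IDs are fabricated, and the real routine also keeps the last batch from becoming too small before splitting.

    #include <stdio.h>

    #define MX_PER_CLUSTER 800     /* cf. the nRow>=800 test above */

    int main(void){
      char zId[64];
      int i, nInCluster = 0;
      for(i=0; i<2000; i++){                       /* pretend 2000 unclustered artifacts */
        snprintf(zId, sizeof(zId), "%040d", i);    /* fake 40-character artifact ID */
        printf("M %s\n", zId);
        if( ++nInCluster>=MX_PER_CLUSTER ){
          printf("Z <md5-of-the-M-cards-above>\n");  /* close this cluster */
          nInCluster = 0;
        }
      }
      if( nInCluster>0 ) printf("Z <md5-of-the-M-cards-above>\n");
      return 0;
    }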
︙ | ︙ | |||
566 567 568 569 570 571 572 | while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pXfer->pOut, "igot %s\n", db_column_text(&q, 0)); } db_finalize(&q); } /* | | > > > | > > > > > > > > | 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 | while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pXfer->pOut, "igot %s\n", db_column_text(&q, 0)); } db_finalize(&q); } /* ** Send a single old-style config card for configuration item zName. ** ** This routine and the functionality it implements is scheduled for ** removal on 2012-05-01. */ static void send_legacy_config_card(Xfer *pXfer, const char *zName){ if( zName[0]!='@' ){ Blob val; blob_zero(&val); db_blob(&val, "SELECT value FROM config WHERE name=%Q", zName); if( blob_size(&val)>0 ){ blob_appendf(pXfer->pOut, "config %s %d\n", zName, blob_size(&val)); blob_append(pXfer->pOut, blob_buffer(&val), blob_size(&val)); blob_reset(&val); blob_append(pXfer->pOut, "\n", 1); } }else{ Blob content; blob_zero(&content); configure_render_special_name(zName, &content); blob_appendf(pXfer->pOut, "config %s %d\n%s\n", zName, blob_size(&content), blob_str(&content)); blob_reset(&content); } } /* ** Called when there is an attempt to transfer private content to and ** from a server without authorization. */ static void server_private_xfer_not_authorized(void){ @ error not\sauthorized\sto\ssync\sprivate\scontent } /* ** If this variable is set, disable login checks. Used for debugging ** only. */ static int disableLogin = 0; |
︙ | ︙ | |||
626 627 628 629 630 631 632 633 634 635 636 637 638 | char *zNow; if( strcmp(PD("REQUEST_METHOD","POST"),"POST") ){ fossil_redirect_home(); } g.zLogin = "anonymous"; login_set_anon_nobody_capabilities(); memset(&xfer, 0, sizeof(xfer)); blobarray_zero(xfer.aToken, count(xfer.aToken)); cgi_set_content_type(g.zContentType); blob_zero(&xfer.err); xfer.pIn = &g.cgiIn; xfer.pOut = cgi_output_blob(); | > > > > > | > | 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 | char *zNow; if( strcmp(PD("REQUEST_METHOD","POST"),"POST") ){ fossil_redirect_home(); } g.zLogin = "anonymous"; login_set_anon_nobody_capabilities(); login_check_credentials(); memset(&xfer, 0, sizeof(xfer)); blobarray_zero(xfer.aToken, count(xfer.aToken)); cgi_set_content_type(g.zContentType); if( db_schema_is_outofdate() ){ @ error database\sschema\sis\sout-of-date\son\sthe\sserver. return; } blob_zero(&xfer.err); xfer.pIn = &g.cgiIn; xfer.pOut = cgi_output_blob(); xfer.mxSend = db_get_int("max-download", 5000000); g.xferPanic = 1; db_begin_transaction(); db_multi_exec( "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" ); manifest_crosslink_begin(); while( blob_line(xfer.pIn, &xfer.line) ){ if( blob_buffer(&xfer.line)[0]=='#' ) continue; if( blob_size(&xfer.line)==0 ) continue; xfer.nToken = blob_tokenize(&xfer.line, xfer.aToken, count(xfer.aToken)); /* file UUID SIZE \n CONTENT ** file UUID DELTASRC SIZE \n CONTENT ** ** Accept a file from the client. */ |
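The "onremote" TEMP table created above records every artifact that the peer is known to hold during this exchange, so send_file() can skip it for the rest of the session. A standalone sketch of that bookkeeping follows, using a fixed-size bitmap keyed by record ID as a stand-in for the SQL table; the names and sizes are invented.

    #include <stdio.h>

    #define MX_RID 100000
    static unsigned char aOnRemote[MX_RID/8 + 1];

    /* Remember that the peer now has this record ID. */
    static void remote_has(int rid){ aOnRemote[rid>>3] |= (unsigned char)(1 << (rid&7)); }

    /* True if we already know the peer has this record ID. */
    static int remote_already_has(int rid){ return (aOnRemote[rid>>3] >> (rid&7)) & 1; }

    int main(void){
      remote_has(42);
      printf("rid 42 known to peer: %d, rid 43: %d\n",
             remote_already_has(42), remote_already_has(43));
      return 0;
    }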
︙ | ︙ | |||
664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 | if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* gimme UUID ** ** Client is requesting a file. Send it. */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ nGimme++; if( isPull ){ | > > > > > > > > > > > > > > > > > > > > > | | | > | > | > > > > > | 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 | if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* cfile UUID USIZE CSIZE \n CONTENT ** cfile UUID DELTASRC USIZE CSIZE \n CONTENT ** ** Accept a file from the client. */ if( blob_eq(&xfer.aToken[0], "cfile") ){ if( !isPush ){ cgi_reset_content(); @ error not\sauthorized\sto\swrite nErr++; break; } xfer_accept_compressed_file(&xfer); if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* gimme UUID ** ** Client is requesting a file. Send it. */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ nGimme++; if( isPull ){ int rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid ){ send_file(&xfer, rid, &xfer.aToken[1], deltaFlag); } } }else /* igot UUID ?ISPRIVATE? ** ** Client announces that it has a particular file. If the ISPRIVATE ** argument exists and is non-zero, then the file is a private file. */ if( xfer.nToken>=2 && blob_eq(&xfer.aToken[0], "igot") && blob_is_uuid(&xfer.aToken[1]) ){ if( isPush ){ if( xfer.nToken==2 || blob_eq(&xfer.aToken[2],"1")==0 ){ rid_from_uuid(&xfer.aToken[1], 1, 0); }else if( g.okPrivate ){ rid_from_uuid(&xfer.aToken[1], 1, 1); }else{ server_private_xfer_not_authorized(); } } }else /* pull SERVERCODE PROJECTCODE ** push SERVERCODE PROJECTCODE ** |
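Each card the server reads above is one line that gets tokenized and dispatched on its first word. The sketch below reproduces just that loop structure for a single fabricated "igot" line; it is illustrative only and leaves out the error handling and the real card semantics.

    #include <stdio.h>
    #include <string.h>

    /* Split one card line on spaces and branch on the first token. */
    int main(void){
      char zLine[] = "igot 0000000000000000000000000000000000000000 1";
      char *azToken[8];
      int nToken = 0;
      for(char *z=strtok(zLine, " "); z && nToken<8; z=strtok(0, " ")){
        azToken[nToken++] = z;
      }
      if( nToken>=2 && strcmp(azToken[0], "igot")==0 ){
        int isPrivate = (nToken>=3 && strcmp(azToken[2], "1")==0);
        printf("peer has artifact %s%s\n", azToken[1], isPrivate ? " (private)" : "");
      }else if( nToken==2 && strcmp(azToken[0], "gimme")==0 ){
        printf("peer wants artifact %s\n", azToken[1]);
      }else{
        printf("unknown card: %s\n", nToken ? azToken[0] : "(empty)");
      }
      return 0;
    }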
︙ | ︙ | |||
761 762 763 764 765 766 767 768 769 770 | break; } if( xfer.nToken==3 && blob_is_int(&xfer.aToken[1], &iVers) && iVers>=2 ){ int seqno, max; blob_is_int(&xfer.aToken[2], &seqno); max = db_int(0, "SELECT max(rid) FROM blob"); while( xfer.mxSend>blob_size(xfer.pOut) && seqno<=max ){ | > > > > > > | > | 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 | break; } if( xfer.nToken==3 && blob_is_int(&xfer.aToken[1], &iVers) && iVers>=2 ){ int seqno, max; if( iVers>=3 ){ cgi_set_content_type("application/x-fossil-uncompressed"); } blob_is_int(&xfer.aToken[2], &seqno); max = db_int(0, "SELECT max(rid) FROM blob"); while( xfer.mxSend>blob_size(xfer.pOut) && seqno<=max ){ if( iVers>=3 ){ send_compressed_file(&xfer, seqno); }else{ send_file(&xfer, seqno, 0, 1); } seqno++; } if( seqno>=max ) seqno = 0; @ clone_seqno %d(seqno) }else{ isClone = 1; isPull = 1; |
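For a version-3 clone the server walks record IDs in order, stops when the reply reaches its size budget, and tells the client where to resume with a "clone_seqno" card; a value of zero means everything has been sent. The sketch below imitates that pacing with invented sizes; the 5,000,000-byte budget mirrors the "max-download" default set earlier in this file.

    #include <stdio.h>

    int main(void){
      int seqno = 1, maxRid = 5000;       /* pretend the repository has 5000 blobs */
      long nOut = 0, mxSend = 5000000;    /* outbound byte budget per round trip */
      while( nOut<mxSend && seqno<=maxRid ){
        nOut += 1200;                     /* pretend each artifact adds ~1200 bytes */
        seqno++;
      }
      if( seqno>=maxRid ) seqno = 0;      /* zero (or less) means "all sent" */
      printf("clone_seqno %d\n", seqno);  /* client resumes here next cycle */
      return 0;
    }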
︙ | ︙ | |||
786 787 788 789 790 791 792 | ** Check for a valid login. This has to happen before anything else. ** The client can send multiple logins. Permissions are cumulative. */ if( blob_eq(&xfer.aToken[0], "login") && xfer.nToken==4 ){ if( disableLogin ){ | | | 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 | ** Check for a valid login. This has to happen before anything else. ** The client can send multiple logins. Permissions are cumulative. */ if( blob_eq(&xfer.aToken[0], "login") && xfer.nToken==4 ){ if( disableLogin ){ g.okRead = g.okWrite = g.okPrivate = g.okAdmin = 1; }else{ if( check_tail_hash(&xfer.aToken[2], xfer.pIn) || check_login(&xfer.aToken[1], &xfer.aToken[2], &xfer.aToken[3]) ){ cgi_reset_content(); @ error login\sfailed nErr++; |
︙ | ︙ | |||
808 809 810 811 812 813 814 | ** Request a configuration value */ if( blob_eq(&xfer.aToken[0], "reqconfig") && xfer.nToken==2 ){ if( g.okRead ){ char *zName = blob_str(&xfer.aToken[1]); | > > > > > > | > | | 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 | ** Request a configuration value */ if( blob_eq(&xfer.aToken[0], "reqconfig") && xfer.nToken==2 ){ if( g.okRead ){ char *zName = blob_str(&xfer.aToken[1]); if( zName[0]=='/' ){ /* New style configuration transfer */ int groupMask = configure_name_to_mask(&zName[1], 0); if( !g.okAdmin ) groupMask &= ~CONFIGSET_USER; if( !g.okRdAddr ) groupMask &= ~CONFIGSET_ADDR; configure_send_group(xfer.pOut, groupMask, 0); }else if( configure_is_exportable(zName) ){ /* Old style configuration transfer */ send_legacy_config_card(&xfer, zName); } } }else /* config NAME SIZE \n CONTENT ** ** Receive a configuration value from the client. This is only |
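The new-style "reqconfig /name" handling above maps each configuration area to a bit in a group mask and clears the bits the requesting user may not read before anything is sent. The sketch below shows only that mask filtering; the bit values are arbitrary stand-ins, not Fossil's real CONFIGSET constants.

    #include <stdio.h>

    #define CONFIGSET_SKIN  0x01   /* illustrative values only */
    #define CONFIGSET_TKT   0x02
    #define CONFIGSET_USER  0x04
    #define CONFIGSET_ADDR  0x08

    int main(void){
      int groupMask = CONFIGSET_SKIN|CONFIGSET_USER|CONFIGSET_ADDR;  /* requested */
      int okAdmin = 0, okRdAddr = 0;                                 /* caller's rights */
      if( !okAdmin )  groupMask &= ~CONFIGSET_USER;   /* user table needs admin rights */
      if( !okRdAddr ) groupMask &= ~CONFIGSET_ADDR;   /* needs the address-reading right */
      printf("mask actually sent: 0x%02x\n", groupMask);
      return 0;
    }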
︙ | ︙ | |||
831 832 833 834 835 836 837 | blob_extract(xfer.pIn, size, &content); if( !g.okAdmin ){ cgi_reset_content(); @ error not\sauthorized\sto\spush\sconfiguration nErr++; break; } | < < < < < < < < < < < < < < < < < < < < < < | | | | | < | 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 | blob_extract(xfer.pIn, size, &content); if( !g.okAdmin ){ cgi_reset_content(); @ error not\sauthorized\sto\spush\sconfiguration nErr++; break; } if( !recvConfig && zName[0]=='@' ){ configure_prepare_to_receive(0); recvConfig = 1; } configure_receive(zName, &content, CONFIGSET_ALL); blob_reset(&content); blob_seek(xfer.pIn, 1, BLOB_SEEK_CUR); }else /* cookie TEXT |
︙ | ︙ | |||
886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 | ** back several different cookies to the server. The server should be ** prepared to sift through the cookies and pick the one that it wants. */ if( blob_eq(&xfer.aToken[0], "cookie") && xfer.nToken==2 ){ /* Process the cookie */ }else /* Unknown message */ { cgi_reset_content(); @ error bad\scommand:\s%F(blob_str(&xfer.line)) } blobarray_reset(xfer.aToken, xfer.nToken); } if( isPush ){ request_phantoms(&xfer, 500); } if( isClone && nGimme==0 ){ /* The initial "clone" message from client to server contains no ** "gimme" cards. On that initial message, send the client an "igot" ** card for every artifact currently in the respository. This will ** cause the client to create phantoms for all artifacts, which will ** in turn make sure that the entire repository is sent efficiently ** and expeditiously. */ send_all(&xfer); }else if( isPull ){ create_cluster(); send_unclustered(&xfer); } if( recvConfig ){ configure_finalize_receive(); } manifest_crosslink_end(); /* Send the server timestamp last, in case prior processing happened | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 | ** back several different cookies to the server. The server should be ** prepared to sift through the cookies and pick the one that it wants. */ if( blob_eq(&xfer.aToken[0], "cookie") && xfer.nToken==2 ){ /* Process the cookie */ }else /* private ** ** This card indicates that the next "file" or "cfile" will contain ** private content. */ if( blob_eq(&xfer.aToken[0], "private") ){ if( !g.okPrivate ){ server_private_xfer_not_authorized(); }else{ xfer.nextIsPrivate = 1; } }else /* pragma NAME VALUE... ** ** The client issue pragmas to try to influence the behavior of the ** server. These are requests only. Unknown pragmas are silently ** ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma send-private ** ** If the user has the "x" privilege (which must be set explicitly - ** it is not automatic with "a" or "s") then this pragma causes ** private information to be pulled in addition to public records. */ if( blob_eq(&xfer.aToken[1], "send-private") ){ login_check_credentials(); if( !g.okPrivate ){ server_private_xfer_not_authorized(); }else{ xfer.syncPrivate = 1; } } }else /* Unknown message */ { cgi_reset_content(); @ error bad\scommand:\s%F(blob_str(&xfer.line)) } blobarray_reset(xfer.aToken, xfer.nToken); } if( isPush ){ request_phantoms(&xfer, 500); } if( isClone && nGimme==0 ){ /* The initial "clone" message from client to server contains no ** "gimme" cards. On that initial message, send the client an "igot" ** card for every artifact currently in the respository. This will ** cause the client to create phantoms for all artifacts, which will ** in turn make sure that the entire repository is sent efficiently ** and expeditiously. 
*/ send_all(&xfer); if( xfer.syncPrivate ) send_private(&xfer); }else if( isPull ){ create_cluster(); send_unclustered(&xfer); if( xfer.syncPrivate ) send_private(&xfer); } if( recvConfig ){ configure_finalize_receive(); } manifest_crosslink_end(); /* Send the server timestamp last, in case prior processing happened |
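Two flags govern the private-content handling added above: "pragma send-private" turns private syncing on for the whole session (and requires the explicit "x" privilege), while a bare "private" card marks only the very next "file" or "cfile" as private. The standalone sketch below walks a fabricated card sequence through that small state machine.

    #include <stdio.h>
    #include <string.h>

    int main(void){
      const char *azCard[] = { "pragma send-private", "private", "file", "file" };
      int syncPrivate = 0, nextIsPrivate = 0;
      for(int i=0; i<4; i++){
        const char *zCard = azCard[i];
        if( strcmp(zCard, "pragma send-private")==0 ){
          syncPrivate = 1;                   /* session-wide, needs the "x" privilege */
        }else if( strcmp(zCard, "private")==0 ){
          nextIsPrivate = 1;                 /* applies to the next content card only */
        }else if( strcmp(zCard, "file")==0 ){
          printf("file card: %s\n", nextIsPrivate ? "private" : "public");
          nextIsPrivate = 0;                 /* the flag is consumed by one card */
        }
      }
      printf("session-wide private sync: %d\n", syncPrivate);
      return 0;
    }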
︙ | ︙ | |||
947 948 949 950 951 952 953 954 955 956 | ** server in gdb: ** ** gdb fossil ** r test-xfer out.txt */ void cmd_test_xfer(void){ int notUsed; if( g.argc!=2 && g.argc!=3 ){ usage("?MESSAGEFILE?"); } | > < | 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 | ** server in gdb: ** ** gdb fossil ** r test-xfer out.txt */ void cmd_test_xfer(void){ int notUsed; db_find_and_open_repository(0,0); if( g.argc!=2 && g.argc!=3 ){ usage("?MESSAGEFILE?"); } blob_zero(&g.cgiIn); blob_read_from_file(&g.cgiIn, g.argc==2 ? "-" : g.argv[2]); disableLogin = 1; page_xfer(); printf("%s\n", cgi_extract_content(¬Used)); } |
︙ | ︙ | |||
973 974 975 976 977 978 979 | ** Sync to the host identified in g.urlName and g.urlPath. This ** routine is called by the client. ** ** Records are pushed to the server if pushFlag is true. Records ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ | | > | > | > > > > > > > > | | 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 | ** Sync to the host identified in g.urlName and g.urlPath. This ** routine is called by the client. ** ** Records are pushed to the server if pushFlag is true. Records ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ int client_sync( int pushFlag, /* True to do a push (or a sync) */ int pullFlag, /* True to do a pull (or a sync) */ int cloneFlag, /* True if this is a clone */ int privateFlag, /* True to exchange private branches */ int configRcvMask, /* Receive these configuration items */ int configSendMask /* Send these configuration items */ ){ int go = 1; /* Loop until zero */ int nCardSent = 0; /* Number of cards sent */ int nCardRcvd = 0; /* Number of cards received */ int nCycle = 0; /* Number of round trips to the server */ int size; /* Size of a config value */ int nFileSend = 0; int origConfigRcvMask; /* Original value of configRcvMask */ int nFileRecv; /* Number of files received */ int mxPhantomReq = 200; /* Max number of phantoms to request per comm */ const char *zCookie; /* Server cookie */ i64 nSent, nRcvd; /* Bytes sent and received (after compression) */ int cloneSeqno = 1; /* Sequence number for clones */ Blob send; /* Text we are sending to the server */ Blob recv; /* Reply we got back from the server */ Xfer xfer; /* Transfer data */ int pctDone; /* Percentage done with a message */ int lastPctDone = -1; /* Last displayed pctDone */ double rArrivalTime; /* Time at which a message arrived */ const char *zSCode = db_get("server-code", "x"); const char *zPCode = db_get("project-code", 0); int nErr = 0; /* Number of errors */ if( db_get_boolean("dont-push", 0) ) pushFlag = 0; if( pushFlag + pullFlag + cloneFlag == 0 && configRcvMask==0 && configSendMask==0 ) return 0; transport_stats(0, 0, 1); socket_global_init(); memset(&xfer, 0, sizeof(xfer)); xfer.pIn = &recv; xfer.pOut = &send; xfer.mxSend = db_get_int("max-upload", 250000); if( privateFlag ){ g.okPrivate = 1; xfer.syncPrivate = 1; } assert( pushFlag | pullFlag | cloneFlag | configRcvMask | configSendMask ); db_begin_transaction(); db_record_repository_filename(0); db_multi_exec( "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" ); blobarray_zero(xfer.aToken, count(xfer.aToken)); blob_zero(&send); blob_zero(&recv); blob_zero(&xfer.err); blob_zero(&xfer.line); origConfigRcvMask = 0; /* Send the send-private pragma if we are trying to sync private data */ if( privateFlag ) blob_append(&send, "pragma send-private\n", -1); /* ** Always begin with a clone, pull, or push message */ if( cloneFlag ){ blob_appendf(&send, "clone 3 %d\n", cloneSeqno); pushFlag = 0; pullFlag = 0; nCardSent++; /* TBD: Request all transferable configuration values */ content_enable_dephantomize(0); }else if( pullFlag ){ blob_appendf(&send, "pull %s %s\n", zSCode, zPCode); |
︙ | ︙ | |||
1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 | */ if( pullFlag || (cloneFlag && cloneSeqno==1) ){ request_phantoms(&xfer, mxPhantomReq); } if( pushFlag ){ send_unsent(&xfer); nCardSent += send_unclustered(&xfer); } /* Send configuration parameter requests. On a clone, delay sending ** this until the second cycle since the login card might fail on ** the first cycle. */ if( configRcvMask && (cloneFlag==0 || nCycle>0) ){ const char *zName; zName = configure_first_name(configRcvMask); while( zName ){ blob_appendf(&send, "reqconfig %s\n", zName); zName = configure_next_name(configRcvMask); nCardSent++; } | > | > > > | > | | | | | | > > > | 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 | */ if( pullFlag || (cloneFlag && cloneSeqno==1) ){ request_phantoms(&xfer, mxPhantomReq); } if( pushFlag ){ send_unsent(&xfer); nCardSent += send_unclustered(&xfer); if( privateFlag ) send_private(&xfer); } /* Send configuration parameter requests. On a clone, delay sending ** this until the second cycle since the login card might fail on ** the first cycle. */ if( configRcvMask && (cloneFlag==0 || nCycle>0) ){ const char *zName; zName = configure_first_name(configRcvMask); while( zName ){ blob_appendf(&send, "reqconfig %s\n", zName); zName = configure_next_name(configRcvMask); nCardSent++; } if( (configRcvMask & (CONFIGSET_USER|CONFIGSET_TKT))!=0 && (configRcvMask & CONFIGSET_OLDFORMAT)!=0 ){ int overwrite = (configRcvMask & CONFIGSET_OVERWRITE)!=0; configure_prepare_to_receive(overwrite); } origConfigRcvMask = configRcvMask; configRcvMask = 0; } /* Send configuration parameters being pushed */ if( configSendMask ){ if( configSendMask & CONFIGSET_OLDFORMAT ){ const char *zName; zName = configure_first_name(configSendMask); while( zName ){ send_legacy_config_card(&xfer, zName); zName = configure_next_name(configSendMask); nCardSent++; } }else{ nCardSent += configure_send_group(xfer.pOut, configSendMask, 0); } configSendMask = 0; } /* Append randomness to the end of the message. This makes all ** messages unique so that that the login-card nonce will always ** be unique. |
︙ | ︙ | |||
1124 1125 1126 1127 1128 1129 1130 | xfer.nDeltaSent = 0; xfer.nGimmeSent = 0; xfer.nIGotSent = 0; if( !g.cgiOutput && !g.fQuiet ){ printf("waiting for server..."); } fflush(stdout); | | > > > > > > < < < < < | | | | | > > > > > | 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 | xfer.nDeltaSent = 0; xfer.nGimmeSent = 0; xfer.nIGotSent = 0; if( !g.cgiOutput && !g.fQuiet ){ printf("waiting for server..."); } fflush(stdout); if( http_exchange(&send, &recv, cloneFlag==0 || nCycle>0) ){ nErr++; break; } lastPctDone = -1; blob_reset(&send); rArrivalTime = db_double(0.0, "SELECT julianday('now')"); /* Send the send-private pragma if we are trying to sync private data */ if( privateFlag ) blob_append(&send, "pragma send-private\n", -1); /* Begin constructing the next message (which might never be ** sent) by beginning with the pull or push cards */ if( pullFlag ){ blob_appendf(&send, "pull %s %s\n", zSCode, zPCode); nCardSent++; } if( pushFlag ){ blob_appendf(&send, "push %s %s\n", zSCode, zPCode); nCardSent++; } go = 0; /* Process the reply that came back from the server */ while( blob_line(&recv, &xfer.line) ){ if( blob_buffer(&xfer.line)[0]=='#' ){ const char *zLine = blob_buffer(&xfer.line); if( memcmp(zLine, "# timestamp ", 12)==0 ){ char zTime[20]; double rDiff; sqlite3_snprintf(sizeof(zTime), zTime, "%.19s", &zLine[12]); rDiff = db_double(9e99, "SELECT julianday('%q') - %.17g", zTime, rArrivalTime); if( rDiff>9e98 || rDiff<-9e98 ) rDiff = 0.0; if( (rDiff*24.0*3600.0) > 10.0 ){ fossil_warning("*** time skew *** server is fast by %s", db_timespan_name(rDiff)); g.clockSkewSeen = 1; }else if( rDiff*24.0*3600.0 < -(blob_size(&recv)/5000.0 + 20.0) ){ fossil_warning("*** time skew *** server is slow by %s", db_timespan_name(-rDiff)); g.clockSkewSeen = 1; } } nCardRcvd++; continue; } xfer.nToken = blob_tokenize(&xfer.line, xfer.aToken, count(xfer.aToken)); nCardRcvd++; if( !g.cgiOutput && !g.fQuiet && recv.nUsed>0 ){ pctDone = (recv.iCursor*100)/recv.nUsed; if( pctDone!=lastPctDone ){ |
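The clock-skew check above compares the server's "# timestamp" card against the local arrival time as a difference in julian days, converts it to seconds, and warns if the server is more than about 10 seconds fast, or slow by more than an allowance that grows with the size of the reply. The arithmetic is small enough to show standalone; the numbers below are invented.

    #include <stdio.h>

    int main(void){
      double rDiff = 0.00020;                       /* server minus client, in days */
      double skewSec = rDiff*24.0*3600.0;           /* 0.00020 day = 17.28 seconds */
      long nRecvBytes = 250000;                     /* size of the reply just received */
      double allowance = nRecvBytes/5000.0 + 20.0;  /* slow-server allowance, as above */
      if( skewSec > 10.0 ){
        printf("server is fast by %.1f seconds\n", skewSec);
      }else if( skewSec < -allowance ){
        printf("server is slow by %.1f seconds\n", -skewSec);
      }else{
        printf("clocks agree closely enough\n");
      }
      return 0;
    }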
︙ | ︙ | |||
1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 | ** file UUID DELTASRC SIZE \n CONTENT ** ** Receive a file transmitted from the server. */ if( blob_eq(&xfer.aToken[0],"file") ){ xfer_accept_file(&xfer, cloneFlag); }else /* gimme UUID ** ** Server is requesting a file. If the file is a manifest, assume ** that the server will also want to know all of the content files ** associated with the manifest and send those too. */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ if( pushFlag ){ | > > > > > > > > > | | > > > > | > > | | > > | | 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 | ** file UUID DELTASRC SIZE \n CONTENT ** ** Receive a file transmitted from the server. */ if( blob_eq(&xfer.aToken[0],"file") ){ xfer_accept_file(&xfer, cloneFlag); }else /* cfile UUID USIZE CSIZE \n CONTENT ** cfile UUID DELTASRC USIZE CSIZE \n CONTENT ** ** Receive a compressed file transmitted from the server. */ if( blob_eq(&xfer.aToken[0],"cfile") ){ xfer_accept_compressed_file(&xfer); }else /* gimme UUID ** ** Server is requesting a file. If the file is a manifest, assume ** that the server will also want to know all of the content files ** associated with the manifest and send those too. */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ if( pushFlag ){ int rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid ) send_file(&xfer, rid, &xfer.aToken[1], 0); } }else /* igot UUID ?PRIVATEFLAG? ** ** Server announces that it has a particular file. If this is ** not a file that we have and we are pulling, then create a ** phantom to cause this file to be requested on the next cycle. ** Always remember that the server has this file so that we do ** not transmit it by accident. ** ** If the PRIVATE argument exists and is 1, then the file is ** private. Pretend it does not exists if we are not pulling ** private files. */ if( xfer.nToken>=2 && blob_eq(&xfer.aToken[0], "igot") && blob_is_uuid(&xfer.aToken[1]) ){ int rid; int isPriv = xfer.nToken>=3 && blob_eq(&xfer.aToken[2],"1"); rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid>0 ){ if( !isPriv ) content_make_public(rid); }else if( isPriv && !g.okPrivate ){ /* ignore private files */ }else if( pullFlag || cloneFlag ){ rid = content_new(blob_str(&xfer.aToken[1]), isPriv); if( rid ) newPhantom = 1; } remote_has(rid); }else /* push SERVERCODE PRODUCTCODE |
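The client-side "igot" handling above boils down to a small decision: an artifact we already hold may get made public, an unknown private artifact is ignored unless private syncing is allowed, and an unknown artifact becomes a phantom (a placeholder record that triggers a "gimme" on the next cycle) when we are pulling or cloning. The sketch below restates that decision table in plain C; the inputs are invented.

    #include <stdio.h>

    static const char *igot_decision(int haveIt, int isPriv, int pulling, int okPrivate){
      if( haveIt )               return isPriv ? "keep as-is" : "mark public";
      if( isPriv && !okPrivate ) return "ignore (private, not authorized)";
      if( pulling )              return "create phantom, gimme next cycle";
      return "do nothing this cycle";
    }

    int main(void){
      printf("%s\n", igot_decision(0, 0, 1, 0));
      printf("%s\n", igot_decision(0, 1, 1, 0));
      printf("%s\n", igot_decision(1, 0, 1, 0));
      return 0;
    }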
︙ | ︙ | |||
1243 1244 1245 1246 1247 1248 1249 | if( blob_eq_str(&xfer.aToken[1], zSCode, -1) ){ fossil_fatal("server loop"); } if( zPCode==0 ){ zPCode = mprintf("%b", &xfer.aToken[2]); db_set("project-code", zPCode, 0); } | | > > > | < < < < < < < < < < < < < < < < < < < < < < < < < > > > > > > > > > > > | 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 | if( blob_eq_str(&xfer.aToken[1], zSCode, -1) ){ fossil_fatal("server loop"); } if( zPCode==0 ){ zPCode = mprintf("%b", &xfer.aToken[2]); db_set("project-code", zPCode, 0); } if( cloneSeqno>0 ) blob_appendf(&send, "clone 3 %d\n", cloneSeqno); nCardSent++; }else /* config NAME SIZE \n CONTENT ** ** Receive a configuration value from the server. ** ** The received configuration setting is silently ignored if it was ** not requested by a prior "reqconfig" sent from client to server. */ if( blob_eq(&xfer.aToken[0],"config") && xfer.nToken==3 && blob_is_int(&xfer.aToken[2], &size) ){ const char *zName = blob_str(&xfer.aToken[1]); Blob content; blob_zero(&content); blob_extract(xfer.pIn, size, &content); g.okAdmin = g.okRdAddr = 1; configure_receive(zName, &content, origConfigRcvMask); nCardSent++; blob_reset(&content); blob_seek(xfer.pIn, 1, BLOB_SEEK_CUR); }else /* cookie TEXT ** ** The server might include a cookie in its reply. The client ** should remember this cookie and send it back to the server ** in its next query. ** ** Each cookie received overwrites the prior cookie from the ** same server. */ if( blob_eq(&xfer.aToken[0], "cookie") && xfer.nToken==2 ){ db_set("cookie", blob_str(&xfer.aToken[1]), 0); }else /* private ** ** This card indicates that the next "file" or "cfile" will contain ** private content. */ if( blob_eq(&xfer.aToken[0], "private") ){ xfer.nextIsPrivate = 1; }else /* clone_seqno N ** ** When doing a clone, the server tries to send all of its artifacts ** in sequence. This card indicates the sequence number of the next ** blob that needs to be sent. If N<=0 that indicates that all blobs ** have been sent. |
︙ | ︙ | |||
1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 | ** to the next cycle. */ if( blob_eq(&xfer.aToken[0],"message") && xfer.nToken==2 ){ char *zMsg = blob_terminate(&xfer.aToken[1]); defossilize(zMsg); if( zMsg ) fossil_print("\rServer says: %s\n", zMsg); }else /* error MESSAGE ** ** Report an error and abandon the sync session. ** ** Except, when cloning we will sometimes get an error on the ** first message exchange because the project-code is unknown | > > > > > > > > > | 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 | ** to the next cycle. */ if( blob_eq(&xfer.aToken[0],"message") && xfer.nToken==2 ){ char *zMsg = blob_terminate(&xfer.aToken[1]); defossilize(zMsg); if( zMsg ) fossil_print("\rServer says: %s\n", zMsg); }else /* pragma NAME VALUE... ** ** The server can send pragmas to try to convey meta-information to ** the client. These are informational only. Unknown pragmas are ** silently ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ }else /* error MESSAGE ** ** Report an error and abandon the sync session. ** ** Except, when cloning we will sometimes get an error on the ** first message exchange because the project-code is unknown |
︙ | ︙ | |||
1350 1351 1352 1353 1354 1355 1356 | if( nCycle<2 ){ if( !g.dontKeepUrl ) db_unset("last-sync-pw", 0); go = 1; } }else{ blob_appendf(&xfer.err, "\rserver says: %s", zMsg); } | | > > < > | > > | | > > | > > | 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 | if( nCycle<2 ){ if( !g.dontKeepUrl ) db_unset("last-sync-pw", 0); go = 1; } }else{ blob_appendf(&xfer.err, "\rserver says: %s", zMsg); } fossil_warning("\rError: %s", zMsg); nErr++; break; } }else /* Unknown message */ if( xfer.nToken>0 ){ if( blob_str(&xfer.aToken[0])[0]=='<' ){ fossil_warning( "server replies with HTML instead of fossil sync protocol:\n%b", &recv ); nErr++; break; } blob_appendf(&xfer.err, "unknown command: [%b]", &xfer.aToken[0]); } if( blob_size(&xfer.err) ){ fossil_warning("%b", &xfer.err); nErr++; break; } blobarray_reset(xfer.aToken, xfer.nToken); blob_reset(&xfer.line); } if( (configRcvMask & (CONFIGSET_USER|CONFIGSET_TKT))!=0 && (configRcvMask & CONFIGSET_OLDFORMAT)!=0 ){ configure_finalize_receive(); } origConfigRcvMask = 0; if( nCardRcvd>0 ){ fossil_print(zValueFormat, "Received:", blob_size(&recv), nCardRcvd, xfer.nFileRcvd, xfer.nDeltaRcvd + xfer.nDanglingFile); |
︙ | ︙ | |||
1409 1410 1411 1412 1413 1414 1415 | if( xfer.nFileSent+xfer.nDeltaSent>0 ){ go = 1; } /* If this is a clone, the go at least two rounds */ if( cloneFlag && nCycle==1 ) go = 1; | | > > > > | | > | 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 | if( xfer.nFileSent+xfer.nDeltaSent>0 ){ go = 1; } /* If this is a clone, the go at least two rounds */ if( cloneFlag && nCycle==1 ) go = 1; /* Stop the cycle if the server sends a "clone_seqno 0" card and ** we have gone at least two rounds. Always go at least two rounds ** on a clone in order to be sure to retrieve the configuration ** information which is only sent on the second round. */ if( cloneSeqno<=0 && nCycle>1 ) go = 0; }; transport_stats(&nSent, &nRcvd, 1); fossil_print("Total network traffic: %lld bytes sent, %lld bytes received\n", nSent, nRcvd); transport_close(); transport_global_shutdown(); db_multi_exec("DROP TABLE onremote"); manifest_crosslink_end(); content_enable_dephantomize(1); db_end_transaction(0); return nErr; } |
Changes to src/zip.c.
︙ | ︙ | |||
102 103 104 105 106 107 108 | for(j=0; j<nDir; j++){ if( strcmp(zName, azDir[j])==0 ) break; } if( j>=nDir ){ nDir++; azDir = fossil_realloc(azDir, sizeof(azDir[0])*nDir); azDir[j] = mprintf("%s", zName); | | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 | for(j=0; j<nDir; j++){ if( strcmp(zName, azDir[j])==0 ) break; } if( j>=nDir ){ nDir++; azDir = fossil_realloc(azDir, sizeof(azDir[0])*nDir); azDir[j] = mprintf("%s", zName); zip_add_file(zName, 0, 0); } zName[i+1] = c; } } } /* ** Append a single file to a growing ZIP archive. ** ** pFile is the file to be appended. zName is the name ** that the file should be saved as. */ void zip_add_file(const char *zName, const Blob *pFile, int isExe){ z_stream stream; int nameLen; int toOut = 0; int iStart; int iCRC = 0; int nByte = 0; int nByteCompr = 0; |
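zip_add_folders() above makes sure every parent directory of a file gets its own entry in the archive before the file itself. The standalone sketch below shows the same scan over a fabricated path: at each '/', the prefix (ending in the slash) is emitted as a folder entry. The real code also de-duplicates folder names it has already added, which is omitted here.

    #include <stdio.h>

    int main(void){
      char zName[] = "src/sub/dir/file.c";          /* made-up path */
      for(size_t i=0; zName[i]; i++){
        if( zName[i]=='/' ){
          char cSaved = zName[i+1];
          zName[i+1] = 0;                           /* truncate just after the '/' */
          printf("add folder entry: %s\n", zName);
          zName[i+1] = cSaved;                      /* restore the path */
        }
      }
      printf("add file entry:   %s\n", zName);
      return 0;
    }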
︙ | ︙ | |||
137 138 139 140 141 142 143 | char zOutBuf[100000]; /* Fill in as much of the header as we know. */ nBlob = pFile ? blob_size(pFile) : 0; if( nBlob>0 ){ iMethod = 8; | | | 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 | char zOutBuf[100000]; /* Fill in as much of the header as we know. */ nBlob = pFile ? blob_size(pFile) : 0; if( nBlob>0 ){ iMethod = 8; iMode = isExe ? 0100755 : 0100644; }else{ iMethod = 0; iMode = 040755; } nameLen = strlen(zName); memset(zHdr, 0, sizeof(zHdr)); put32(&zHdr[0], 0x04034b50); |
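The header change above is small: the new isExe flag selects 0100755 instead of 0100644 as the unix mode recorded for the entry, and all multi-byte ZIP header fields are written little-endian. The sketch below shows just those two pieces, using the 0x04034b50 local-file-header signature that appears in the code; it is illustrative only.

    #include <stdio.h>

    /* Write a 32-bit value little-endian, as ZIP headers require. */
    static void put32(unsigned char *z, unsigned int v){
      z[0] = v & 0xff;
      z[1] = (v>>8) & 0xff;
      z[2] = (v>>16) & 0xff;
      z[3] = (v>>24) & 0xff;
    }

    int main(void){
      unsigned char zHdr[4];
      int isExe = 1;
      int iMode = isExe ? 0100755 : 0100644;   /* -rwxr-xr-x vs -rw-r--r-- */
      put32(zHdr, 0x04034b50);                 /* "PK\3\4" local file header magic */
      printf("signature bytes: %02x %02x %02x %02x, mode %o\n",
             zHdr[0], zHdr[1], zHdr[2], zHdr[3], iMode);
      return 0;
    }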
︙ | ︙ | |||
285 286 287 288 289 290 291 | if( g.argc<3 ){ usage("ARCHIVE FILE...."); } zip_open(); for(i=3; i<g.argc; i++){ blob_zero(&file); blob_read_from_file(&file, g.argv[i]); | | | 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 | if( g.argc<3 ){ usage("ARCHIVE FILE...."); } zip_open(); for(i=3; i<g.argc; i++){ blob_zero(&file); blob_read_from_file(&file, g.argv[i]); zip_add_file(g.argv[i], &file, file_isexe(g.argv[i])); blob_reset(&file); } zip_close(&zip); blob_write_to_file(&zip, g.argv[2]); } /* |
︙ | ︙ | |||
339 340 341 342 343 344 345 | if( pManifest ){ char *zName; zip_set_timedate(pManifest->rDate); if( db_get_boolean("manifest", 0) ){ blob_append(&filename, "manifest", -1); zName = blob_str(&filename); zip_add_folders(zName); | | | | | 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 | if( pManifest ){ char *zName; zip_set_timedate(pManifest->rDate); if( db_get_boolean("manifest", 0) ){ blob_append(&filename, "manifest", -1); zName = blob_str(&filename); zip_add_folders(zName); zip_add_file(zName, &mfile, 0); sha1sum_blob(&mfile, &hash); blob_reset(&mfile); blob_append(&hash, "\n", 1); blob_resize(&filename, nPrefix); blob_append(&filename, "manifest.uuid", -1); zName = blob_str(&filename); zip_add_file(zName, &hash, 0); blob_reset(&hash); } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ int fid = uuid_to_rid(pFile->zUuid, 0); if( fid ){ content_get(fid, &file); blob_resize(&filename, nPrefix); blob_append(&filename, pFile->zName, -1); zName = blob_str(&filename); zip_add_folders(zName); zip_add_file(zName, &file, manifest_file_mperm(pFile)); blob_reset(&file); } } }else{ blob_reset(&mfile); } manifest_destroy(pManifest); |
︙ | ︙ | |||
386 387 388 389 390 391 392 | ** the artifact ID of the check-in. */ void baseline_zip_cmd(void){ int rid; Blob zip; const char *zName; zName = find_option("name", 0, 1); | | | 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 | ** the artifact ID of the check-in. */ void baseline_zip_cmd(void){ int rid; Blob zip; const char *zName; zName = find_option("name", 0, 1); db_find_and_open_repository(0, 0); if( g.argc!=4 ){ usage("VERSION OUTPUTFILE"); } rid = name_to_rid(g.argv[2]); if( zName==0 ){ zName = db_text("default-name", "SELECT replace(%Q,' ','_') " |
︙ | ︙ |
Changes to test/delta1.test.
1 2 3 4 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or | | | | | < < < < < < > < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the delta mechanism. # # Use test script files as the basis for this test. # # For each test, copy the file intact to "./t1". Make # some random changes in "./t2". Then call test-delta on the # two files to make sure that deltas between these two files # work properly. # set filelist [glob $testdir/*] foreach f $filelist { if {[file isdir $f]} continue set base [file root [file tail $f]] set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { write_file t2 [random_changes $f1 1 1 0 0.1] fossil test-delta t1 t2 test delta-$base-$i-1 {$RESULT=="ok"} write_file t2 [random_changes $f1 1 1 0 0.2] |
︙ | ︙ |
Added test/file1.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | # # Copyright (c) 2011 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # File utilities # proc simplify-name {testname args} { set i 1 foreach {path result} $args { fossil test-simplify-name $path test simplify-name-$testname.$i {$::RESULT=="\[$path\] -> \[$result\]"} incr i } } simplify-name 100 . . .// . .. .. ..///// .. simplify-name 101 {} {} / / ///////// / ././././ . simplify-name 102 x x /x /x ///x //x simplify-name 103 a/b a/b /a/b /a/b a///b a/b ///a///b///// //a/b simplify-name 104 a/b/../c/ a/c /a/b/../c /a/c /a/b//../c /a/c /a/b/..///c /a/c simplify-name 105 a/b/../../x/y x/y /a/b/../../x/y /x/y simplify-name 106 a/b/../../../x/y ../x/y /a/b/../../../x/y /../x/y simplify-name 107 a/./b/.././../x/y x/y a//.//b//..//.//..//x//y/// x/y |
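The new test drives "fossil test-simplify-name" over paths that mix duplicate slashes, "." and ".." components. As a rough sketch of the behaviour the expected results encode, here is a standalone approximation: it collapses runs of '/', drops "." components, and folds "name/.." pairs while keeping any leading ".." that cannot be folded. The test command itself is the authority on edge cases; this code is for illustration only.

    #include <stdio.h>
    #include <string.h>

    static void simplify(const char *zIn, char *zOut){
      const char *aStart[64]; size_t aLen[64];
      int n = 0, isAbs = (zIn[0]=='/');
      const char *p = zIn;
      while( *p ){
        while( *p=='/' ) p++;                       /* collapse slashes */
        const char *s = p;
        while( *p && *p!='/' ) p++;
        size_t len = (size_t)(p - s);
        if( len==0 || (len==1 && s[0]=='.') ) continue;      /* drop "." */
        if( len==2 && s[0]=='.' && s[1]=='.' && n>0
         && !(aLen[n-1]==2 && aStart[n-1][0]=='.' && aStart[n-1][1]=='.') ){
          n--;                                                /* fold "name/.." */
          continue;
        }
        if( n<64 ){ aStart[n] = s; aLen[n] = len; n++; }
      }
      char *o = zOut;
      if( isAbs ) *o++ = '/';
      for(int i=0; i<n; i++){
        if( i>0 ) *o++ = '/';
        memcpy(o, aStart[i], aLen[i]); o += aLen[i];
      }
      if( o==zOut && zIn[0] ) *o++ = '.';           /* e.g. "./" reduces to "." */
      *o = 0;
    }

    int main(void){
      const char *azTest[] = { "a/b/../c/", "/a/b/../../../x/y", "..////", "a//.//b" };
      char zResult[256];
      for(int i=0; i<4; i++){
        simplify(azTest[i], zResult);
        printf("[%s] -> [%s]\n", azTest[i], zResult);
      }
      return 0;
    }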
Added test/graph-test-1.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | <title>Graph Test One</title> This page contains examples a list of URLs of timelines with interesting graphs. Click on all URLs, one by one, to verify the correct operation of the graph drawing logic. * <a href="../../../timeline?n=20&y=ci&b=2010-11-08" target="testwindow"> 20-element timeline, check-ins only, before 2010-11-08</a> * <a href="../../../timeline?n=20&y=ci&b=2010-11-08&ng" target="testwindow"> 20-element timeline, check-ins only, no graph, before 2010-11-08</a> * <a href="../../../timeline?n=20&y=ci&b=2010-11-08&fc" target="testwindow"> 20-element timeline, check-ins only, file changes, before 2010-11-08</a> * <a href="../../../timeline?n=40&y=ci&b=2010-11-08" target="testwindow"> 40-element timeline, check-ins only, before 2010-11-08</a> * <a href="../../../timeline?n=1000&y=ci&b=2010-11-08" target="testwindow"> 1000-element timeline, check-ins only, before 2010-11-08</a> * <a href="../../../timeline?n=10&c=2010-11-07+10:23:00" target="testwindow"> 10-elements circa 2010-11-07 10:23:00, with dividers</a> * <a href="../../../timeline?n=10&c=2010-11-07+10:23:00&nd" target="testwindow"> 10-elements circa 2010-11-07 10:23:00, without dividers</a> * <a href="../../../timeline?f=3ea66260b5555" target="testwindow"> Parents and children of check-in 3ea66260b5555</a> * <a href="../../../timeline?d=e5fe4164f74a7576&p=e5fe4164f74a7576&n=3" target="testwindow">multiple merge descenders from the penultimate node. 
</a> * <a href="../../../timeline?y=ci&a=2010-12-20" target="testwindow"> multiple branch risers.</a> * <a href="../../../timeline?y=ci&a=2010-12-20&n=18" target="testwindow"> multiple branch risers, n=18.</a> * <a href="../../../timeline?y=ci&a=2010-12-20&n=9" target="testwindow"> multiple branch risers, n=9.</a> * <a href="../../../timeline?r=experimental" target="testwindow"> Experimental branch using and related check-ins.</a> * <a href="../../../timeline?r=experimental&mionly" target="testwindow"> Experimental branch using and merge-ins only.</a> * <a href="../../../timeline?t=experimental" target="testwindow"> Experimental branch check-ins only.</a> * <a href="../../../timeline?r=experimental&n=1000" target="testwindow"> Experimental branch using and related check-ins - 1000 elements.</a> * <a href="../../../timeline?r=release" target="testwindow"> Check-ins tagged "release" and related check-ins</a> * <a href="../../../timeline?r=release&mionly" target="testwindow"> Check-ins tagged "release" and merge-ins</a> * <a href="../../../timeline?t=release" target="testwindow"> Only check-ins tagged "release"</a> * <a href="../../../finfo?name=Makefile" target="testwindow"> History of source file "Makefile".</a> * <a href="../../../timeline?a=1970-01-01" target="testwindow"> 20 elements after 1970-01-01.</a> * <a href="../../../timeline?n=100000000&y=ci" target="testwindow"> All check-ins - a huge graph.</a> * <a href="../../../timeline?f=8dfed953f7530442" target="testwindow"> This malformed commit has a merge parent which is not a valid checkin.</a> * <a href="../../../timeline?from=e663bac6f7&to=a298a0e2f9" target="testwindow"> From e663bac6f7 to a298a0e2f9 by shortest path.</a> * <a href="../../../timeline?from=e663bac6f7&to=a298a0e2f9&nomerge" target="testwindow"> From e663bac6f7 to a298a0e2f9 without merge links.</a> * <a href="../../../timeline?me=e663bac6f7&you=a298a0e2f9" target="testwindow"> Common ancestor path of e663bac6f7 to a298a0e2f9.</a> * <a href="../../../timeline?f=65dd90fb95a2af55"> Merge on the same branch does not result in a leaf. </a> External: * <a href="http://www.sqlite.org/src/timeline?c=2010-09-29&nd" target="testwindow">Timewarp due to a mis-configured system clock.</a> |
Changes to test/merge1.test.
1 2 3 4 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or | | | | | < < < < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # |
︙ | ︙ | |||
75 76 77 78 79 80 81 | 111 - This is line one OF the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { | | | > > | | | > > | | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 | 111 - This is line one OF the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 111 - This is line ONE of the demo program - 1111 ======= COMMON ANCESTOR content follows ============================ 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== 111 - This is line one OF the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 111 - This is line one OF the demo program - 1111 ======= COMMON ANCESTOR content follows ============================ 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== 111 - This is line ONE of the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } fossil test-3 t1 t3 t2 a32 test merge1-2.1 {[same_file t32 a32]} |
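The updated expected outputs above encode the new three-section conflict block: the local (checked-out) lines come first, then the common-ancestor lines, then the merged-in lines, bracketed by the four marker lines shown in the test data. As a standalone illustration of that layout (not the merge algorithm itself), the sketch below emits one such block from three invented text fragments, with the marker text copied from the expectations above.

    #include <stdio.h>

    static void emit_conflict(const char *zLocal, const char *zAncestor,
                              const char *zMergeIn){
      printf("<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<\n");
      printf("%s", zLocal);
      printf("======= COMMON ANCESTOR content follows ============================\n");
      printf("%s", zAncestor);
      printf("======= MERGED IN content follows ==================================\n");
      printf("%s", zMergeIn);
      printf(">>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n");
    }

    int main(void){
      emit_conflict("111 - local edit\n",
                    "111 - original line\n",
                    "111 - other edit\n");
      return 0;
    }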
︙ | ︙ | |||
156 157 158 159 160 161 162 | write_file_indented t3 { 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { | | | > > | | | > > | | 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 | write_file_indented t3 { 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< ======= COMMON ANCESTOR content follows ============================ 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 ======= COMMON ANCESTOR content follows ============================ 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } fossil test-3 t1 t3 t2 a32 test merge1-4.1 {[same_file t32 a32]} |
︙ | ︙ | |||
291 292 293 294 295 296 297 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd | | | > > > > > > > > > | | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< efgh 2 ijkl 2 mnop 2 qrst uvwx yzAB 2 CDEF 2 GHIJ 2 ======= COMMON ANCESTOR content follows ============================ efgh ijkl mnop qrst uvwx yzAB CDEF GHIJ ======= MERGED IN content follows ================================== efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } fossil test-3 t1 t2 t3 a23 test merge1-7.1 {[same_file t23 a23]} |
︙ | ︙ | |||
350 351 352 353 354 355 356 357 358 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd efgh 2 ijkl 2 | > < | | > > > > > > > > > > > | | 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< efgh 2 ijkl 2 mnop qrst uvwx yzAB 2 CDEF 2 GHIJ 2 ======= COMMON ANCESTOR content follows ============================ efgh ijkl mnop qrst uvwx yzAB CDEF GHIJ ======= MERGED IN content follows ================================== efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } fossil test-3 t1 t2 t3 a23 test merge1-7.2 {[same_file t23 a23]} |
Changes to test/merge2.test.
1 2 3 4 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or | | | | | < < < < < < > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the delta mechanism. # set filelist [glob $testdir/*] foreach f $filelist { if {[file isdir $f]} continue set base [file root [file tail $f]] set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { expr {srand($i*2)} write_file t2 [set f2 [random_changes $f1 2 4 0 0.1]] expr {srand($i*2+1)} |
︙ | ︙ |
Changes to test/merge3.test.
1 2 3 4 | # # Copyright (c) 2009 D. Richard Hipp # # This program is free software; you can redistribute it and/or | | | | | < < < < < < | > > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | # # Copyright (c) 2009 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the 3-way merge # proc merge-test {testid basis v1 v2 result} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n fossil test-3-way-merge t1 t2 t3 t4 set x [read_file t4] regsub -all {<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <+} $x \ {MINE:} x regsub -all {======= COMMON ANCESTOR content follows =+} $x {COM:} x regsub -all {======= MERGED IN content follows =+} $x {YOURS:} x regsub -all {>>>>>>> END MERGE CONFLICT >+} $x {END} x set x [split [string trim $x] \n] set result [string trim $result] if {$x!=$result} { protOut " Expected \[$result\]" protOut " Got \[$x\]" test merge3-$testid 0 } else { |
︙ | ︙ | |||
66 67 68 69 70 71 72 | merge-test 3 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { | | | | | | | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | merge-test 3 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b COM: 3 4 5 YOURS: 3 4 5c END 6 7 8 9 } merge-test 4 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b 6b COM: 3 4 5 6 YOURS: 3 4 5c 6 END 7 8 9 } merge-test 5 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6c 7c 8 9 } { 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8 9 } merge-test 6 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8 9 } { 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8b 9 } merge-test 7 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8c 9 } { 1 2 MINE: 3b 4b 5b 6b 7 8b COM: 3 4 5 6 7 8 YOURS: 3 4 5c 6c 7c 8c END 9 } merge-test 8 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9b } { 1 2 3 4 5c 6c 7c 8c 9 } { 1 2 MINE: 3b 4b 5b 6b 7 8b 9b COM: 3 4 5 6 7 8 9 YOURS: 3 4 5c 6c 7c 8c 9 END } merge-test 9 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3 4 5c 6c 7c 8 9 |
︙ | ︙ | |||
139 140 141 142 143 144 145 | merge-test 11 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { | | | 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 | merge-test 11 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { 1 2 MINE: 3b 4b COM: 3 4 YOURS: 3b 4c END 5 6c 7c 8b 9b } merge-test 12 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b4b 5 6 7 8b 9b } { 1 2 3b4b 5 6c 7c 8 9 |
︙ | ︙ | |||
194 195 196 197 198 199 200 | merge-test 24 { 1 2 3 4 5 6 7 8 9 } { 1 6 7 8 9 } { 1 2 3 4 9 } { | | | | 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | merge-test 24 { 1 2 3 4 5 6 7 8 9 } { 1 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } merge-test 25 { 1 2 3 4 5 6 7 8 9 } { 1 7 8 9 } { 1 2 3 9 } { 1 MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } merge-test 30 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 } { |
︙ | ︙ | |||
249 250 251 252 253 254 255 | merge-test 34 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 6 7 8 9 } { | | | | 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 | merge-test 34 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 9 } merge-test 35 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 7 8 9 } { 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END 9 } merge-test 40 { 2 3 4 5 6 7 8 } { 3 4 5 6 7 8 } { |
︙ | ︙ | |||
304 305 306 307 308 309 310 | merge-test 44 { 2 3 4 5 6 7 8 } { 6 7 8 } { 2 3 4 } { | | | | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 | merge-test 44 { 2 3 4 5 6 7 8 } { 6 7 8 } { 2 3 4 } { MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } merge-test 45 { 2 3 4 5 6 7 8 } { 7 8 } { 2 3 } { MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } merge-test 50 { 2 3 4 5 6 7 8 } { 2 3 4 5 6 7 } { |
︙ | ︙ | |||
358 359 360 361 362 363 364 | merge-test 54 { 2 3 4 5 6 7 8 } { 2 3 4 } { 6 7 8 } { | | | | 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 | merge-test 54 { 2 3 4 5 6 7 8 } { 2 3 4 } { 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END } merge-test 55 { 2 3 4 5 6 7 8 } { 2 3 } { 7 8 } { MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END } merge-test 60 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3 4 5 6 7 8 9 } { |
︙ | ︙ | |||
413 414 415 416 417 418 419 | merge-test 64 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { | | | | 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 | merge-test 64 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } merge-test 65 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 2 3 9 } { 1 MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } merge-test 70 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 } { |
︙ | ︙ | |||
468 469 470 471 472 473 474 | merge-test 74 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { | | | | 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 | merge-test 74 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END 9 } merge-test 75 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END 9 } merge-test 80 { 2 3 4 5 6 7 8 } { 2b 3 4 5 6 7 8 } { |
︙ | ︙ | |||
523 524 525 526 527 528 529 | merge-test 84 { 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { | | | | 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 | merge-test 84 { 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } merge-test 85 { 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6b 7 8 } { 2 3 } { MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } merge-test 90 { 2 3 4 5 6 7 8 } { 2 3 4 5 6 7 } { |
︙ | ︙ | |||
578 579 580 581 582 583 584 | merge-test 94 { 2 3 4 5 6 7 8 } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { | | | | 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 | merge-test 94 { 2 3 4 5 6 7 8 } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END } merge-test 95 { 2 3 4 5 6 7 8 } { 2 3 } { 2b 3b 4b 5b 6b 7 8 } { MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END } merge-test 100 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3 4 5 7 8 9 a b c d e } { |
︙ | ︙ | |||
624 625 626 627 628 629 630 | merge-test 103 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { | | | | 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 | merge-test 103 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 MINE: 9b COM: 9 YOURS: 9b a b c d e END } merge-test 104 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 MINE: 9b a b c d e COM: 9 YOURS: 9b END } |
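The merge3 cases above all follow the same pattern: build a common ancestor and two edited versions, run fossil's built-in three-way merge, and normalize the conflict banners down to the short MINE:/COM:/YOURS:/END tokens before comparing. A minimal stand-alone sketch of one case (merge-test 3), assuming a fossil binary on PATH and reusing the write_file/read_file helpers from test/tester.tcl:

  # Reproduce merge-test 3 by hand; overlapping edits to the same region
  # come back bracketed by the BEGIN/END MERGE CONFLICT banners.
  write_file t1 "1\n2\n3\n4\n5\n6\n7\n8\n9\n"      ;# common ancestor
  write_file t2 "1\n2\n3b\n4b\n5b\n6\n7\n8\n9\n"   ;# local version
  write_file t3 "1\n2\n3\n4\n5c\n6\n7\n8\n9\n"     ;# version being merged in
  exec fossil test-3-way-merge t1 t2 t3 t4
  puts [read_file t4]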
Changes to test/merge4.test.
1 2 3 4 | # # Copyright (c) 2009 D. Richard Hipp # # This program is free software; you can redistribute it and/or | | | | | < < < < < < | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | # # Copyright (c) 2009 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the 3-way merge # proc merge-test {testid basis v1 v2 result1 result2} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n fossil test-3-way-merge t1 t2 t3 t4 fossil test-3-way-merge t1 t3 t2 t5 set x [read_file t4] regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<<} $x {>} x regsub -all {=======.*=======} $x {=} x regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $x {<} x set x [split [string trim $x] \n] set y [read_file t5] regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<<} $y {>} y regsub -all {=======.*=======} $y {=} y regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $y {<} y set y [split [string trim $y] \n] set result1 [string trim $result1] if {$x!=$result1} { protOut " Expected \[$result1\]" protOut " Got \[$x\]" test merge3-$testid 0 } else { |
︙ | ︙ |
Added test/merge5.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 | # # Copyright (c) 2010 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. 
# # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the "merge" command # # Verify the results of a check-out # proc checkout-test {testid expected_content} { set flist {} foreach {status filename} [exec $::fossilexe ls -l] { if {$status!="DELETED"} {lappend flist $filename} } eval fossil sha1sum [lsort $flist] global RESULT regsub -all {\n *} [string trim $expected_content] "\n " expected regsub -all {\n *} [string trim $RESULT] "\n " result if {$result!=$expected} { protOut " Expected:\n $expected" protOut " Got:\n $result" test merge5-$testid 0 } else { test merge5-$testid 1 } } catch {exec $::fossilexe info} res puts res=$res if {![regexp {not within an open checkout} $res]} { puts stderr "Cannot run this test within an open checkout" return } # Construct a test repository # exec sqlite3 m5.fossil <$testdir/${testfile}_repo.sql fossil rebuild m5.fossil fossil open m5.fossil fossil update baseline checkout-test 10 { da5c8346496f3421cb58f84b6e59e9531d9d424d one.txt ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt } # Update to the tip of the trunk # fossil update trunk checkout-test 20 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Merge in the change that adds file four.txt # fossil merge br1 checkout-test 30 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Undo the merge. Verify restoration of former state. # fossil undo checkout-test 40 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Now switch to br1 then merge in the trunk. Verify that the result # is the same as merging br1 into trunk. # fossil update br1 fossil merge trunk checkout-test 50 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil undo fossil update trunk checkout-test 60 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Go back to the tip of the trunk and merge branch br1 again. This # time do a check-in of the merge. Verify that the same content appears # after the merge. 
# fossil update chng3 fossil merge br1 checkout-test 70 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil commit -tag m1 -m {merge with br1} -nosign -f checkout-test 71 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil update chng3 checkout-test 72 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil update m1 checkout-test 73 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Merge br2 into the trunk. br2 contains some independent change to the # two.txt file. Verify that these are merge in correctly. # fossil update m1 fossil merge br2 checkout-test 80 { 8f09bc55a60eb8ca06f10a3b577aafa869b31695 five.txt 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt 68eeee8b843eaea76e33d3911f416b745d0e5e5c two.txt } fossil undo checkout-test 81 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Now merge trunk into br2. Verify that the same set of changes result. # fossil update br2 fossil merge trunk checkout-test 90 { 8f09bc55a60eb8ca06f10a3b577aafa869b31695 five.txt 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt 68eeee8b843eaea76e33d3911f416b745d0e5e5c two.txt } fossil undo checkout-test 91 { 8f09bc55a60eb8ca06f10a3b577aafa869b31695 five.txt da5c8346496f3421cb58f84b6e59e9531d9d424d one.txt ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt 85286cb3bc6d9e6f2f586eb5532f6065678f75b9 two.txt } # Starting from chng3, merge in br4. The one file is deleted from br4, so # the merge should cause the one file to disappear from the checkout. # fossil update chng3 checkout-test 100 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil merge br4 checkout-test 101 { 6e167b139c294bed560e2e30b352361b101e1f39 four.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil undo checkout-test 102 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Do the same merge of br4 into chng3, but this time check it in as a new # branch. # fossil update chng3 fossil merge br4 fossil commit -nosign -branch br4-b -m {merge in br4} -tag m2 checkout-test 110 { 6e167b139c294bed560e2e30b352361b101e1f39 four.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } # Branches br1 and br4 both add file four.txt. So if we merge them together, # the version of file four.txt in the original should be preserved. 
# fossil update br1 checkout-test 120 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt da5c8346496f3421cb58f84b6e59e9531d9d424d one.txt ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt } fossil merge br4 checkout-test 121 { 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt } fossil undo fossil update br4 checkout-test 122 { 6e167b139c294bed560e2e30b352361b101e1f39 four.txt ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt } fossil merge br1 checkout-test 123 { 6e167b139c294bed560e2e30b352361b101e1f39 four.txt ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt } fossil undo # Merge br5 (which includes a file rename) into chng3 # fossil update chng3 checkout-test 130 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil merge br5 checkout-test 131 { 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt } fossil undo checkout-test 132 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil merge br5 checkout-test 133 { 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt } fossil commit -nosign -m {merge with rename} -branch {trunk+br5} checkout-test 134 { 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt } fossil update chng3 checkout-test 135 { 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt } fossil update trunk+br5 checkout-test 136 { 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt } # Merge the chng3 check-in into br5, verifying that the change to two.txt # from chng3 are applies to two-rename.txt in br5. # fossil update br5 checkout-test 140 { e866bb885d5184cba497cfb6a4eb281688519521 one.txt e09593950837f76e70ca2f8ff2272ae3df0ba017 three.txt 5ebb3c9ad50740a7382902657b84a6105c32fc7b two-rename.txt } fossil merge chng3 checkout-test 141 { 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt } fossil commit -nosign -m {change to two} -branch br5-2 checkout-test 142 { 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt } |
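Rather than creating its history with live commits, merge5.test above bootstraps its working repository from the SQL dump added next, then verifies every merge with per-file SHA1 sums. A minimal sketch of that same bootstrap sequence, assuming sqlite3 and fossil are on PATH and the commands run in an empty scratch directory:

  # Bootstrap the canned test repository the way merge5.test does.
  exec sqlite3 m5.fossil < test/merge5_repo.sql   ;# load the dump into a new repository file
  exec fossil rebuild m5.fossil                   ;# rebuild the derived tables from the artifacts
  exec fossil open m5.fossil                      ;# open a checkout in the current directory
  exec fossil update baseline                     ;# move the checkout to the "baseline" check-in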
Added test/merge5_repo.sql.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 | PRAGMA foreign_keys=OFF; BEGIN TRANSACTION; CREATE TABLE blob( rid INTEGER PRIMARY KEY, rcvid INTEGER, size INTEGER, uuid TEXT UNIQUE NOT NULL, content BLOB, CHECK( length(uuid)==40 AND rid>0 ) ); INSERT INTO "blob" VALUES(1,1,160,'85d5623f065ba80b20a035dcb4f9aea045215406',X'000000A0789C1DC8BD0E82301000E0BD4F71334993FE1CB465D507300617C352AED7D0208D011C787BA3E3F75DA0D47294F81A775EDFC739EE34332DB2547105A3B492DA48AD078D7D1B7A6BC54DDC21A14E9E52F059A9C928E4E0950AC133658FC6B118A099B6586986068EED5397DFECE72AFF80463C206DB37802C69CACCB31B8EC983B424D2D2361B04645EE92F802B1932DED'); INSERT INTO "blob" VALUES(2,2,10,'da5c8346496f3421cb58f84b6e59e9531d9d424d',X'0000000A789C73E472E272E672E172E5020008CA0182'); INSERT INTO "blob" VALUES(3,2,10,'ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4',X'0000000A789CF3E6F2E1F2E5F2E3F2E7020009F601B4'); INSERT INTO "blob" VALUES(4,2,10,'278a402316510f6ae4a77186796a6bde78c7dbc1',X'0000000A789C73E372E7F2E0F2E4F2E202000960019B'); INSERT INTO "blob" 
VALUES(5,2,325,'69cc5363f67013c09bf506e3fe9533c32d0bc1ef',X'000000AF789C05C1C10AC2300C00D07BBF621FB0439A365B3B2F0351F426CA2EB24BBAA4AC3005D7FD3FBE47645EC3B96191B91EEBAE3AD7F22D47E16DAEB96C5AEFE3B5F5030E00388DF6D1DADF1048A84397A1A3C401120283235992CF9195C1135AF2D09967B308E788392FA187B8B0042F2E893AC60E803499A9917D35EF2670C09853E0A8092893EF859CB33D6A8414351AFCF0ED72FA03D5E231F2'); INSERT INTO "blob" VALUES(6,3,190,'3bfa21d09f8718adc3195848d57114e1fbf564fe',X'000000BE789C8DCC3B0EC2300C00D03DA7C88E2AF9A32475678E5016C49238B6400206D285DB036261E400EFED2301C2843421AE9816A02595B0C65DAF9BC52CAA89337B2E80AC20CD1364633749CCCAD4A1299AFF24A7F15D003ECB78DEA656875D2FF7FFB77088FD710EC758BA6095590C082BEAECEC4AC58CDEBEA2617801EE253190'); INSERT INTO "blob" VALUES(7,4,10,'35815cf5804e8933eab64ae34e00bbb381be72c5',X'0000000A789C0BE00AE40AE20AE60AE102000A8C01CD'); INSERT INTO "blob" VALUES(8,4,364,'341db367709dc7bc5046ab4fda4f7d9423118cba',X'0000016C789C35CE3B4E44310C85E1FEAE221B1864C78F24B4206A84A0413476EC888A918641B07CEE0CA23D3A9FF4DF158B78FB5CC7AFD3CDF9E7BCDD970A0807AC07C467945BE05B94EDA1FC1F0A4947994B3A70F64194E6CA96C409E0EED4D1B3D57931C78FBC9230999D5879E822AE385DFAEAEC9A327208618CE0CAB193F3FB29FF50C6BEE0885635B0D1C21EBED886C1EA821C6694348D2FE8FB7825B57563A8842A084B2DD95AC3AE6DA8A947B63E5BF8C4EDB1E8985348696903A409C3978026AD4B0F4DAA01FB31D7F65422C1AC73A4A3D5BD7CEF07016B16BBB5C9DB4B89D3FBF65A6C7873C33052034A6D94E82CAED85BA4ACED1765FC64F9'); INSERT INTO "blob" VALUES(9,5,294,'9223b4cad40c0bc997b747fe4eee586559a43815',X'00000126789C8D8E310E83300C45774E91B955243B714860EE11E85275B16322A4AA0C0186DEBEA00E5DD9FEF2FE7B37E300C1A2B38803861EA8F7D40CE62295E73C194FA8E2DB18A1D31C2507A096858A3295A81D398F98B2B0918A07B67CDE769FA7B99DB92AAFE379D1BFF7B9FC8231EC2FF630AF759B5FE7DD77A3756A1EC68D0C3E69424DA20E3460804C259172F15E52F305DF884EE5'); INSERT INTO "blob" VALUES(10,6,10,'8f09bc55a60eb8ca06f10a3b577aafa869b31695',X'0000000A789C0BE50AE30AE78AE08AE402000B2201E6'); INSERT INTO "blob" VALUES(11,6,426,'3d5bce2e44a6b39205eb514c581aa41d86f48540',X'00000116789C458ECB4A04311444F7F98AAC075BEE23B949DA8DA01F203A832082E471C30C6A2FBA47D1BF37BA7157A7288A23DDBCCC3736B7F6BCF5D3A75E9EBFCEE6D612204C4813E21EFD0C7E266AD78F177C9829C4EC8018C52374C9EA7208182524C9529A8658432B15CD9D9554AB67E12E01902BA4D23D8872D7E4992B538331D46EEE6D23AA320E1CE4C6B1A5CCDDA973181C6BF705CCDEEECA9A977AB43B5B56FAE5EDFB7D1AD1EE064C7F90377D3B2DFADF9CD78FE575E0C1B6F5689E6C57284961887A14A89A628B340C2825E118C020A6F0E0AE7E00A7744B2A'); INSERT INTO "blob" VALUES(12,7,10,'c6536e40b3d4755ead483c2e0fb2f16ecb751b6a',X'0000000A789C33E032E432E232E632E1020006CC012D'); INSERT INTO "blob" VALUES(13,7,424,'b95701e0b73e5ebf452a9a7ae00360bf7fd187fd',X'000001A8789C4590414F83310886EFDFAFE879C90C9496B65E359E8DD18BF10285668B3A936F33EABFB753136FBCC003BC5C05317B3A1EF79F17A7CFD3721D22206C316E11EF315F025FC6B8DC84B7839FEBC124F74A8953E3412962D75C474DCA9E9BB74C68CD524C3691BF91A17326F6044A964ACE2E962AF5E830340E64EF5A322ACB444EBBD57FF7B8CD21D8AC44362C34B09A8E244D60D48CC944C8A94B3A431F6F3F482C55124442CE0883C5939482954B6361352FB517D38ECB6DE0D6FB3C8A061740EAD0746460A771B6409DA2C16CF4B1DC85311D4D9AA4E4DEB21924D10AAA0834AD132CF761A3AB1CFA2E6C82AE74D6C7AFD7ED0CC3668AED8F90A3BFEC0FFE9F39ADEF87E7291F82ADBBE5312895341D420637C74AB5CD6F5185E23AC41B2FDF3224768A'); INSERT INTO "blob" VALUES(14,8,11,'6f525ab779ad66e24474d845c5fb7938be42d50d',X'0000000B789C73E4728AE072E672E172E502000C1801DA'); INSERT INTO "blob" 
VALUES(15,8,319,'c19f6ae71a35cddef0d5de3e1e6109c4378c8215',X'000000E0789C0DCA314E04310C05D03EA7C80158E41F27F6646890587A844048681B274E986A4762B6A0E2ECF0EA977FC3B63EC5BED9F56B1C97E3B65F8EFD3AEE6F3FB7708E894027A413F086B292AE00F6C7ED2E7DAC4917CB94185240536C6453C5225AC5A4F9D0A5ABB78EF012A5F65E58788A12B8536DB3900C9EA316E6CEC9E93F8E195EA361641573481EA5D6E900235B422FD49435BC47FFDEC2672469A9CEC98AAEE685672ECD45B28FEA8B590BA99D9FBD3DFC010C0C3CFF'); INSERT INTO "blob" VALUES(16,9,11,'b262fee89ed8a27a23a5e09d3917e0bebe22cd24',X'0000000B789C73E372E7F288E0F2E4F2E202000C5A01F3'); INSERT INTO "blob" VALUES(17,9,319,'41a02f2b2277d771d85e7f1202c82d4d6675662f',X'000000D8789C0DCE3B4E03410C00D07E4EB107D820DB63CF8F668B488882068890E8BCB6478806258A142ACE4E0EF0A4C77FE96983F56DDC7E1EAEBFD7745C08100E4807C47794017590206CDFAB8F7062C7EE958A63CD139BEF93B52BCC26C8AE9A239BF265A39715CFC3B0CFA25151B3987B4C70F1C8815110BA71AECD1AA1A4D785F5AE4929B07397CADE729F6462358A36F3745AFCF2953E17C9AD30766AA5CE3DD8E27EA1BD548FD80D9C533E3F9FE6C7E33F8FCE3AED'); INSERT INTO "blob" VALUES(18,10,187,'6c22c86ee8a603f5e18171efb6ff571b59e35cf3',X'000000BB789C8DCCAD0EC230140650DFA7A8274BFAB5B4B7ADE611862198EEFE30038261787B9843A24F724E3E068409710266E41E6A4776B33FC878A967342B4309236516510B924593420B42E363A2CA3522FF24D76D5FA803FBB2BDEF13AF8F1BFEAEDCD9CB7375176F45D8961C858C9651BED86C080189381621F701B249318A'); INSERT INTO "blob" VALUES(19,11,187,'8b9f82378ea7a5f9ef8e80d31929f6dfab988acc',X'000000BB789C8DCC3D0EC2300C06D03DA7C88E22D95F623BEACC11CA8258F24B1718280BB747DD18790778670F620A8CC0BCB22C9497486EF5A75EDEC3272E84890A987533EE59864D06A165F4D4554D54317F92DB7E2CB6408E65FF3C42DB9E77FC5DB98BEFAFCD5DBD0C9E604819AC31D5A489F2906AA5C5AAA4C57D01A0C52E74'); INSERT INTO "blob" VALUES(20,12,11,'64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b',X'0000000B789CF3E6F2E1F2E5F28BE0F2E702000CA7020C'); INSERT INTO "blob" VALUES(21,12,321,'5a71ecd64585155b076a4f1e4d489eb5fe3d05f4',X'00000141789C3D8F314E04310C45FB9C221758643B719C6C0BA246081A44E3C40E53ED48BB23C1F1095B50B8F8FA7A4FDF8F716C7AF9F2DBE7EDD8D76D57F787E3E7084F9100E18474427C433E239D81C273DC2FF73E96C9C4DA459A5A294E394BB69A79F0ECD252ED9EC9186C21FFD258B256E52189608E31357772B6B91823504E15B596C1FD0FFADEEF48A742D3BD36B7AA244A49D9A1596A280EDDBB130DA31C5E6246059AD489444C04ADB2CB44021A952CAF99C265D9C2EBB2226371186AA4C35704D7C252C0B06A1FE13DDA750B1F9111E6C4543CF1C4A1136606582F0FE406D424FC021EAB59DA'); INSERT INTO "blob" VALUES(22,13,187,'a2c1640c98b3059a83df0b597d673ad073b782de',X'000000BB789C8DCC310EC2300C00C03DAFC88E22D9899DA49D794259108B93D874818176E1F7805818994FBAA38F80103006C40579C63863718B3F0CD9D5B314D43E327165646E50B290A1D2A03A6963D334808D7E92CBF65D207E96ED790B7DBD5FD3DF953BF9F158DDD96B1EB1D91B92285649B917AD5627A8066842EE05469C30D4'); INSERT INTO "blob" VALUES(23,14,10,'6e167b139c294bed560e2e30b352361b101e1f39',X'0000000A789C33E532E332E7B2E0B2E4020007620146'); INSERT INTO "blob" VALUES(24,14,388,'8015e80a2391d4497530b5d0f36689f359e374ba',X'00000184789C4590314FC4300C85F7FE8ACC2715D971E234C7086246E8585017277174086851DBD3C1BF27BD85CDCFF6A7F7EC075396F97B5CE749EFC7554A19D749AFE35AE7CBD23D1A0B083DDA1EF184FE8874F4DC3D997D78B7FD6C86153924A4986D74498B6750AB0489BC25C684808A956263B6F3A27A83B45857309660B960A08A4349D54914A88347574448298BDBA1EB7C436C18C48125648F5059D449083870882C9C8A862187923276CF8663CE9E982A0740CA1053F5C04A55A327CA640BB445ADDD8B69E611497373AF425407B0FB19A145C6C04CDDC91CD222533E9B83498BDBF5FAFBD5B7D21C9AE86F4256FD7C9FF4BFB32D97E9A3C9D7F6DA73F7666A96907CB351CB9A23D89CAB44675B50715CA8FB033C856E21'); INSERT INTO "blob" 
VALUES(25,15,11,'e866bb885d5184cba497cfb6a4eb281688519521',X'0000000B789C73E472E272E672E1728DE002000AF201DA'); INSERT INTO "blob" VALUES(26,15,372,'b882b17dc56af1e0297bf8a57c9d15050215c64a',X'00000174789C458E414B04310C85EFF32B7A5E1869D23669BD2A9E45D68B78499BD41175166657D47F6F7711841CF2F2DE47DE8D6B8BAC2FF67C3CAC36DD3AF4E067C019600FE97A0CD274E7867775FA3E39CB44B5E69C34418EAD4A2CDC7A25895631030D074A4218C869D9EC0F528C0A451949814387ACB54729E2FB88471509169AC433F475B820C859A2C70094C077128BC20C99B8905055E3DC586B83E9DE51692D050A9DD84368BED49E3C59E85652082DA0FA11B43E3D38EB297266D4D414C427E30E56C2E8D0A1F9D4A7BDDBD54DD6B6B89DAB5B3AEBE3CFC73C56B71B62BE0839DAFBEB6AFF97D3F6B9BE0DF9E8745BA6A7F1463B76C4AC68218D9A1AA216C2D822D4C63CFD0205E667AF'); INSERT INTO "blob" VALUES(27,16,11,'e09593950837f76e70ca2f8ff2272ae3df0ba017',X'0000000B789CF38EE0F2E1F2E5F2E3F2E702000DB2020C'); INSERT INTO "blob" VALUES(28,16,341,'9df70ff13b57e70ea2de63f655eb72df7b83121a',X'000000EF789C0DCD3D4E03311040E1DEA7F001089AF17AFE360D52525220040D8A846C8FAD5004A4CD8AE5F86CF99AEFD173B8CDA7D8FD6BBDDCD7EBD2FBF1725FFA77B9F5BDB79F708E09100E980E886F4833F28C80BF4FFE90C69C444B86342113C2E0D273114165312E5CBD8B36F1DA306E71B71ED7BF35BCC4AA9A2A8A37E232B04332A9430B493347028284D43897F01AD5D4B077A326640C5981477611A324FB7A84F7E8CB357C441BD51C9407541D0A669433FBA05C26622E14A6069F1B1FFF019008411F'); INSERT INTO "blob" VALUES(29,17,11,'5ebb3c9ad50740a7382902657b84a6105c32fc7b',X'0000000B789C73E372E7F2E0F28CE4F2E202000C0101F4'); INSERT INTO "blob" VALUES(30,17,322,'94bd3e6b45cc15e86cc76ecfd2207fddbb675b15',X'00000142789C3DCE314E44310C04D0FE9F221758643B711C6F0BA246081A441327B696825D69F9121C9F8010F5F8CDF836F97CDB5F3FF6CFCBE1EAE7FEEEDB5D224038201D109F908F588F85B6FB7439FBCDFEB5276FB59AB5C693B19561BDA88CB0DA8B1B35AC2B4165C245F6D3D5FF10286B56869625A4BAC0E8142D8248A87B9E01D601E507FDBFF22BD9CDF2D03E19A44097DC48812A8BB5D22B028F4C31C4B687A4330422301BCB5AF04ED36B8ECAAB436885D63212F6ED31E56985D1EB2A96A22DFA8800CC6C485A88FAF69CE6F5B4BD24556990111A87AECBEEEAB50C5F8EAB308FED1BE1CB587B'); INSERT INTO "blob" VALUES(31,18,11,'85286cb3bc6d9e6f2f586eb5532f6065678f75b9',X'0000000B789C73E372E7F2E0F2E4F28AE402000BA301F4'); INSERT INTO "blob" VALUES(32,18,360,'333c4d56d218735e20565944b50202f24827e3b8',X'00000168789C25CF3D4E43311004E0DEA7F00582BCDE1FAFD382A8118206D1ECDA5E250D91C213E1F83C9476349F34F398D73C6F9FDFDBED929E722D500E500F006F20472A47C4F49CE3FCB31EB6DF2D6B94EE83D9A42CD76145028AA1736B66612ADD11A4F36E2E5F77328D8722097509A40AC35943C965715F9D11669F5469EE643B5DD71DADB927D067AB32A161804E0FB26E259481A6192E1C46FFE876B94FE3AA321C7DC8EC4BA206AB2C67C61A5284A56934F69E5E324EF6B1EA223271ECB5F0DE031AAC604630558294A9A4D76C4B54753F568B8B16F4D5102946AD4EA1BAD27B9ED753FAC80338F6EBB37A31F568AD888A556519A32B78FA037C076278'); CREATE TABLE delta( rid INTEGER PRIMARY KEY, srcid INTEGER NOT NULL REFERENCES blob ); INSERT INTO "delta" VALUES(5,8); INSERT INTO "delta" VALUES(11,32); INSERT INTO "delta" VALUES(15,17); INSERT INTO "delta" VALUES(17,21); INSERT INTO "delta" VALUES(28,30); CREATE TABLE rcvfrom( rcvid INTEGER PRIMARY KEY, uid INTEGER REFERENCES user, mtime DATETIME, nonce TEXT UNIQUE, ipaddr TEXT ); INSERT INTO "rcvfrom" VALUES(1,1,2455542.12469579,NULL,NULL); INSERT INTO "rcvfrom" VALUES(2,1,2455542.12638891,NULL,NULL); INSERT INTO "rcvfrom" VALUES(3,1,2455542.12705387,NULL,'127.0.0.1'); INSERT INTO "rcvfrom" VALUES(4,1,2455542.12796271,NULL,NULL); INSERT INTO "rcvfrom" VALUES(5,1,2455542.12818105,NULL,'127.0.0.1'); INSERT INTO "rcvfrom" VALUES(6,1,2455542.12873891,NULL,NULL); INSERT INTO 
"rcvfrom" VALUES(7,1,2455542.1294266,NULL,NULL); INSERT INTO "rcvfrom" VALUES(8,1,2455542.12999842,NULL,NULL); INSERT INTO "rcvfrom" VALUES(9,1,2455542.13015219,NULL,NULL); INSERT INTO "rcvfrom" VALUES(10,1,2455542.13073123,NULL,'127.0.0.1'); INSERT INTO "rcvfrom" VALUES(11,1,2455542.13090438,NULL,'127.0.0.1'); INSERT INTO "rcvfrom" VALUES(12,1,2455542.1333612,NULL,NULL); INSERT INTO "rcvfrom" VALUES(13,1,2455542.13353873,NULL,'127.0.0.1'); INSERT INTO "rcvfrom" VALUES(14,1,2455542.13467894,NULL,NULL); INSERT INTO "rcvfrom" VALUES(15,1,2455542.13571911,NULL,NULL); INSERT INTO "rcvfrom" VALUES(16,1,2455542.13623403,NULL,NULL); INSERT INTO "rcvfrom" VALUES(17,1,2455542.13660848,NULL,NULL); INSERT INTO "rcvfrom" VALUES(18,1,2455542.19483546,NULL,NULL); CREATE TABLE user( uid INTEGER PRIMARY KEY, login TEXT, pw TEXT, cap TEXT, cookie TEXT, ipaddr TEXT, cexpire DATETIME, info TEXT, photo BLOB ); INSERT INTO "user" VALUES(1,'drh','efbbd0','s',NULL,NULL,NULL,'',NULL); INSERT INTO "user" VALUES(2,'anonymous','42568D1800352E28','ghmncz',NULL,NULL,NULL,'Anon',NULL); INSERT INTO "user" VALUES(3,'nobody','','jor',NULL,NULL,NULL,'Nobody',NULL); INSERT INTO "user" VALUES(4,'developer','','dei',NULL,NULL,NULL,'Dev',NULL); INSERT INTO "user" VALUES(5,'reader','','kptw',NULL,NULL,NULL,'Reader',NULL); CREATE TABLE config( name TEXT PRIMARY KEY NOT NULL, value CLOB, CHECK( typeof(name)='text' AND length(name)>=1 ) ); INSERT INTO "config" VALUES('server-code','48d0adf1afd58dae13c5b70ff7aff208fb8b9f53'); INSERT INTO "config" VALUES('project-code','8201587e22864f2fe6b8eb623626de6f9f747058'); INSERT INTO "config" VALUES('autosync','1'); INSERT INTO "config" VALUES('localauth','0'); INSERT INTO "config" VALUES('project-name','Test Case 1'); INSERT INTO "config" VALUES('project-description','Fossil repository for testing'); INSERT INTO "config" VALUES('index-page','/timeline'); INSERT INTO "config" VALUES('content-schema','1'); INSERT INTO "config" VALUES('aux-schema','2010-11-24'); CREATE TABLE shun(uuid UNIQUE); CREATE TABLE private(rid INTEGER PRIMARY KEY); CREATE TABLE reportfmt( rn integer primary key, owner text, title text, cols text, sqlcode text ); INSERT INTO "reportfmt" VALUES(1,NULL,'All Tickets','#ffffff Key: #f2dcdc Active #e8e8e8 Review #cfe8bd Fixed #bde5d6 Tested #cacae5 Deferred #c8c8c8 Closed','SELECT CASE WHEN status IN (''Open'',''Verified'') THEN ''#f2dcdc'' WHEN status=''Review'' THEN ''#e8e8e8'' WHEN status=''Fixed'' THEN ''#cfe8bd'' WHEN status=''Tested'' THEN ''#bde5d6'' WHEN status=''Deferred'' THEN ''#cacae5'' ELSE ''#c8c8c8'' END AS ''bgcolor'', substr(tkt_uuid,1,10) AS ''#'', datetime(tkt_mtime) AS ''mtime'', type, status, subsystem, title FROM ticket'); CREATE TABLE concealed( hash TEXT PRIMARY KEY, content TEXT ); CREATE TABLE filename( fnid INTEGER PRIMARY KEY, name TEXT UNIQUE ); INSERT INTO "filename" VALUES(1,'four.txt'); INSERT INTO "filename" VALUES(2,'one.txt'); INSERT INTO "filename" VALUES(3,'three.txt'); INSERT INTO "filename" VALUES(4,'two.txt'); INSERT INTO "filename" VALUES(5,'five.txt'); INSERT INTO "filename" VALUES(6,'six.txt'); INSERT INTO "filename" VALUES(7,'two-rename.txt'); CREATE TABLE mlink( mid INTEGER REFERENCES blob, pid INTEGER REFERENCES blob, fid INTEGER REFERENCES blob, fnid INTEGER REFERENCES filename, pfnid INTEGER REFERENCES filename ); INSERT INTO "mlink" VALUES(8,0,7,1,0); INSERT INTO "mlink" VALUES(5,0,2,2,0); INSERT INTO "mlink" VALUES(5,0,3,3,0); INSERT INTO "mlink" VALUES(5,0,4,4,0); INSERT INTO "mlink" VALUES(11,0,10,5,0); INSERT INTO "mlink" 
VALUES(13,0,12,6,0); INSERT INTO "mlink" VALUES(21,3,20,3,0); INSERT INTO "mlink" VALUES(17,4,16,4,0); INSERT INTO "mlink" VALUES(15,2,14,2,0); INSERT INTO "mlink" VALUES(24,0,23,1,0); INSERT INTO "mlink" VALUES(24,2,0,2,0); INSERT INTO "mlink" VALUES(26,2,25,2,0); INSERT INTO "mlink" VALUES(30,4,29,7,0); INSERT INTO "mlink" VALUES(28,3,27,3,0); INSERT INTO "mlink" VALUES(28,4,4,7,4); INSERT INTO "mlink" VALUES(28,4,0,4,0); INSERT INTO "mlink" VALUES(32,4,31,4,0); CREATE TABLE plink( pid INTEGER REFERENCES blob, cid INTEGER REFERENCES blob, isprim BOOLEAN, mtime DATETIME, UNIQUE(pid, cid) ); INSERT INTO "plink" VALUES(5,8,1,2455542.12795139); INSERT INTO "plink" VALUES(1,5,1,2455542.12638889); INSERT INTO "plink" VALUES(5,11,1,2455542.12873843); INSERT INTO "plink" VALUES(5,13,1,2455542.1294213); INSERT INTO "plink" VALUES(17,21,1,2455542.13335648); INSERT INTO "plink" VALUES(15,17,1,2455542.13015046); INSERT INTO "plink" VALUES(5,15,1,2455542.12998843); INSERT INTO "plink" VALUES(5,24,1,2455542.13467593); INSERT INTO "plink" VALUES(5,26,1,2455542.13571759); INSERT INTO "plink" VALUES(28,30,1,2455542.13659722); INSERT INTO "plink" VALUES(26,28,1,2455542.13622685); INSERT INTO "plink" VALUES(11,32,1,2455542.19482639); CREATE TABLE event( type TEXT, mtime DATETIME, objid INTEGER PRIMARY KEY, tagid INTEGER, uid INTEGER REFERENCES user, bgcolor TEXT, euser TEXT, user TEXT, ecomment TEXT, comment TEXT, brief TEXT, omtime DATETIME ); INSERT INTO "event" VALUES('ci',2455542.1246875,1,NULL,NULL,NULL,NULL,'drh',NULL,'initial empty check-in',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.12638889,5,NULL,NULL,NULL,NULL,'drh',NULL,'add three initial files',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.12795139,8,NULL,NULL,NULL,NULL,'drh',NULL,'add four.txt',NULL,2455542.12795139); INSERT INTO "event" VALUES('ci',2455542.12873843,11,NULL,NULL,NULL,NULL,'drh',NULL,'add five.txt',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.1294213,13,NULL,NULL,NULL,NULL,'drh',NULL,'add six.txt',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.12998843,15,NULL,NULL,NULL,NULL,'drh',NULL,'changes to one.txt',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.13015046,17,NULL,NULL,NULL,NULL,'drh',NULL,'changes to two.txt',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.13335648,21,NULL,NULL,NULL,NULL,'drh',NULL,'changes to three.txt',NULL,2455542.13335648); INSERT INTO "event" VALUES('ci',2455542.13467593,24,NULL,NULL,NULL,NULL,'drh',NULL,'drop one; add new four',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.13571759,26,NULL,NULL,NULL,NULL,'drh',NULL,'change one',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.13622685,28,NULL,NULL,NULL,NULL,'drh',NULL,'edit three; rename two',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.13659722,30,NULL,NULL,NULL,NULL,'drh',NULL,'edit two-rename',NULL,NULL); INSERT INTO "event" VALUES('ci',2455542.19482639,32,NULL,NULL,NULL,NULL,'drh',NULL,'edit two',NULL,NULL); CREATE TABLE phantom( rid INTEGER PRIMARY KEY ); CREATE TABLE orphan( rid INTEGER PRIMARY KEY, baseline INTEGER ); CREATE TABLE unclustered( rid INTEGER PRIMARY KEY ); INSERT INTO "unclustered" VALUES(1); INSERT INTO "unclustered" VALUES(2); INSERT INTO "unclustered" VALUES(3); INSERT INTO "unclustered" VALUES(4); INSERT INTO "unclustered" VALUES(5); INSERT INTO "unclustered" VALUES(6); INSERT INTO "unclustered" VALUES(7); INSERT INTO "unclustered" VALUES(8); INSERT INTO "unclustered" VALUES(9); INSERT INTO "unclustered" VALUES(10); INSERT INTO "unclustered" VALUES(11); INSERT INTO 
"unclustered" VALUES(12); INSERT INTO "unclustered" VALUES(13); INSERT INTO "unclustered" VALUES(14); INSERT INTO "unclustered" VALUES(15); INSERT INTO "unclustered" VALUES(16); INSERT INTO "unclustered" VALUES(17); INSERT INTO "unclustered" VALUES(18); INSERT INTO "unclustered" VALUES(19); INSERT INTO "unclustered" VALUES(20); INSERT INTO "unclustered" VALUES(21); INSERT INTO "unclustered" VALUES(22); INSERT INTO "unclustered" VALUES(23); INSERT INTO "unclustered" VALUES(24); INSERT INTO "unclustered" VALUES(25); INSERT INTO "unclustered" VALUES(26); INSERT INTO "unclustered" VALUES(27); INSERT INTO "unclustered" VALUES(28); INSERT INTO "unclustered" VALUES(29); INSERT INTO "unclustered" VALUES(30); INSERT INTO "unclustered" VALUES(31); INSERT INTO "unclustered" VALUES(32); CREATE TABLE unsent( rid INTEGER PRIMARY KEY ); INSERT INTO "unsent" VALUES(31); INSERT INTO "unsent" VALUES(32); CREATE TABLE tag( tagid INTEGER PRIMARY KEY, tagname TEXT UNIQUE ); INSERT INTO "tag" VALUES(1,'bgcolor'); INSERT INTO "tag" VALUES(2,'comment'); INSERT INTO "tag" VALUES(3,'user'); INSERT INTO "tag" VALUES(4,'date'); INSERT INTO "tag" VALUES(5,'hidden'); INSERT INTO "tag" VALUES(6,'private'); INSERT INTO "tag" VALUES(7,'cluster'); INSERT INTO "tag" VALUES(8,'branch'); INSERT INTO "tag" VALUES(9,'closed'); INSERT INTO "tag" VALUES(10,'sym-trunk'); INSERT INTO "tag" VALUES(11,'sym-baseline'); INSERT INTO "tag" VALUES(12,'sym-br1'); INSERT INTO "tag" VALUES(13,'sym-br2'); INSERT INTO "tag" VALUES(14,'sym-br3'); INSERT INTO "tag" VALUES(15,'sym-chng1'); INSERT INTO "tag" VALUES(16,'sym-chng2'); INSERT INTO "tag" VALUES(17,'sym-chng3'); INSERT INTO "tag" VALUES(18,'sym-br4'); INSERT INTO "tag" VALUES(19,'sym-br5'); CREATE TABLE tagxref( tagid INTEGER REFERENCES tag, tagtype INTEGER, srcid INTEGER REFERENCES blob, origid INTEGER REFERENCES blob, value TEXT, mtime TIMESTAMP, rid INTEGER REFERENCE blob, UNIQUE(rid, tagid) ); INSERT INTO "tagxref" VALUES(8,2,1,1,'trunk',2455542.1246875,1); INSERT INTO "tagxref" VALUES(10,2,1,1,NULL,2455542.1246875,1); INSERT INTO "tagxref" VALUES(4,1,6,5,'2010-12-11 15:02:00',2455542.12704861,5); INSERT INTO "tagxref" VALUES(11,1,6,5,NULL,2455542.12704861,5); INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,5); INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,5); INSERT INTO "tagxref" VALUES(8,2,9,8,'br1',2455542.1281713,8); INSERT INTO "tagxref" VALUES(12,2,9,8,NULL,2455542.1281713,8); INSERT INTO "tagxref" VALUES(4,1,9,8,'2010-12-11 15:04:15',2455542.1281713,8); INSERT INTO "tagxref" VALUES(10,0,9,8,NULL,2455542.1281713,8); INSERT INTO "tagxref" VALUES(8,2,11,11,'br2',2455542.12873843,11); INSERT INTO "tagxref" VALUES(13,2,11,11,NULL,2455542.12873843,11); INSERT INTO "tagxref" VALUES(11,0,11,11,NULL,2455542.12873843,11); INSERT INTO "tagxref" VALUES(10,0,11,11,NULL,2455542.12873843,11); INSERT INTO "tagxref" VALUES(8,2,13,13,'br3',2455542.1294213,13); INSERT INTO "tagxref" VALUES(14,2,13,13,NULL,2455542.1294213,13); INSERT INTO "tagxref" VALUES(11,0,13,13,NULL,2455542.1294213,13); INSERT INTO "tagxref" VALUES(10,0,13,13,NULL,2455542.1294213,13); INSERT INTO "tagxref" VALUES(4,1,18,15,'2010-12-11 15:07:11',2455542.13072917,15); INSERT INTO "tagxref" VALUES(15,1,18,15,NULL,2455542.13072917,15); INSERT INTO "tagxref" VALUES(4,1,19,17,'2010-12-11 15:07:25',2455542.13090278,17); INSERT INTO "tagxref" VALUES(16,1,19,17,NULL,2455542.13090278,17); INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,15); INSERT INTO "tagxref" 
VALUES(8,2,0,1,'trunk',2455542.1246875,17); INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,21); INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,15); INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,17); INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,21); INSERT INTO "tagxref" VALUES(4,1,22,21,'2010-12-11 15:12:02',2455542.13353009,21); INSERT INTO "tagxref" VALUES(17,1,22,21,NULL,2455542.13353009,21); INSERT INTO "tagxref" VALUES(8,2,24,24,'br4',2455542.13467593,24); INSERT INTO "tagxref" VALUES(18,2,24,24,NULL,2455542.13467593,24); INSERT INTO "tagxref" VALUES(11,0,24,24,NULL,2455542.13467593,24); INSERT INTO "tagxref" VALUES(10,0,24,24,NULL,2455542.13467593,24); INSERT INTO "tagxref" VALUES(8,2,26,26,'br5',2455542.13571759,26); INSERT INTO "tagxref" VALUES(19,2,26,26,NULL,2455542.13571759,26); INSERT INTO "tagxref" VALUES(11,0,26,26,NULL,2455542.13571759,26); INSERT INTO "tagxref" VALUES(10,0,26,26,NULL,2455542.13571759,26); INSERT INTO "tagxref" VALUES(8,2,0,26,'br5',2455542.13571759,28); INSERT INTO "tagxref" VALUES(8,2,0,26,'br5',2455542.13571759,30); INSERT INTO "tagxref" VALUES(19,2,0,26,NULL,2455542.13571759,28); INSERT INTO "tagxref" VALUES(19,2,0,26,NULL,2455542.13571759,30); INSERT INTO "tagxref" VALUES(8,2,0,11,'br2',2455542.12873843,32); INSERT INTO "tagxref" VALUES(13,2,0,11,NULL,2455542.12873843,32); CREATE TABLE backlink( target TEXT, srctype INT, srcid INT, mtime TIMESTAMP, UNIQUE(target, srctype, srcid) ); CREATE TABLE attachment( attachid INTEGER PRIMARY KEY, isLatest BOOLEAN DEFAULT 0, mtime TIMESTAMP, src TEXT, target TEXT, filename TEXT, comment TEXT, user TEXT ); CREATE TABLE ticket( -- Do not change any column that begins with tkt_ tkt_id INTEGER PRIMARY KEY, tkt_uuid TEXT UNIQUE, tkt_mtime DATE, -- Add as many field as required below this line type TEXT, status TEXT, subsystem TEXT, priority TEXT, severity TEXT, foundin TEXT, private_contact TEXT, resolution TEXT, title TEXT, comment TEXT ); CREATE INDEX delta_i1 ON delta(srcid); CREATE INDEX mlink_i1 ON mlink(mid); CREATE INDEX mlink_i2 ON mlink(fnid); CREATE INDEX mlink_i3 ON mlink(fid); CREATE INDEX mlink_i4 ON mlink(pid); CREATE INDEX plink_i2 ON plink(cid,pid); CREATE INDEX event_i1 ON event(mtime); CREATE INDEX orphan_baseline ON orphan(baseline); CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime); CREATE INDEX backlink_src ON backlink(srcid, srctype); CREATE INDEX attachment_idx1 ON attachment(target, filename, mtime); CREATE INDEX attachment_idx2 ON attachment(src); COMMIT; |
Added test/release-checklist.wiki.
<title>Release Checklist</title>

This file describes the testing procedures for Fossil prior to an
official release.

<ol>
<li><p>
From a private directory (not the source tree) run
"<b>tclsh $SRC/test/tester.tcl $FOSSIL</b>" where $FOSSIL is the name of
the executable under test and $SRC is the source tree.  Verify that there
are no errors.

<li><p>
Click on each of the links in the [./graph-test-1.wiki] document and
verify that all graphs are rendered correctly.

<li><p>
Compile for all of the following platforms:
<ol type="a">
<li> Linux x86
<li> Linux x86_64
<li> Mac x86
<li> Mac x86_64
<li> Windows (mingw)
<li> OpenBSD
</ol>

<li><p>
Run at least one occurrence of the following commands on every platform:
<ol type="a">
<li> <b>fossil rebuild</b>
<li> <b>fossil sync</b>
<li> <b>fossil test-integrity</b>
</ol>

<li><p>
Run the following commands on Linux and verify no major memory leaks and
no run-time errors or warnings (except for the well-known jump on an
uninitialized value that occurs within zlib).
<ol type="a">
<li> <b>valgrind fossil rebuild</b>
<li> <b>valgrind fossil sync</b>
</ol>

<li><p>
Inspect all code changes since the previous release, paying particular
attention to the following details:
<ol type="a">
<li> Can a malicious HTTP request cause a buffer overrun?
<li> Can a malicious HTTP request expose privileged information to
     unauthorized users?
</ol>

<li><p>
Use the release candidate version of fossil in production on the
[http://www.fossil-scm.org/] website for at least 48 hours (without
incident) prior to making the release official.
</ol>

<hr>

Upon successful completion of all tests above, tag the release candidate
with the "release" tag and set its background color to "#d0c0ff".
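The first checklist item and the platform smoke tests lend themselves to a small driver script. A rough Tcl sketch (the $SRC and $FOSSIL values are placeholders you must point at your own source tree and release-candidate binary):

  # Hedged sketch of the automated portion of the checklist above.
  set SRC    /path/to/fossil-source          ;# placeholder: fossil source tree
  set FOSSIL /path/to/fossil                 ;# placeholder: executable under test
  exec tclsh $SRC/test/tester.tcl $FOSSIL >&@ stdout    ;# step 1: run the full test suite
  exec $FOSSIL rebuild /path/to/test.fossil >&@ stdout  ;# step 4a, against an existing repository
  exec $FOSSIL test-integrity >&@ stdout                ;# step 4c, run from inside a checkout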
Changes to test/tester.tcl.
1 2 3 4 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or | | | | | < < < < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 | # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # |
︙ | ︙ | |||
89 90 91 92 93 94 95 96 97 98 99 100 101 102 | } protOut $cmd flush stdout set rc [catch {eval exec $cmd} result] global RESULT CODE set CODE $rc set RESULT $result } # Read a file into memory. # proc read_file {filename} { set in [open $filename r] | > | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | } protOut $cmd flush stdout set rc [catch {eval exec $cmd} result] global RESULT CODE set CODE $rc if {$rc} {puts "ERROR: $result"} set RESULT $result } # Read a file into memory. # proc read_file {filename} { set in [open $filename r] |
︙ | ︙ | |||
126 127 128 129 130 131 132 133 | set y [read_file $b] regsub -all { +\n} $y \n y return [expr {$x==$y}] } # Perform a test # proc test {name expr} { | > | > | 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 | set y [read_file $b] regsub -all { +\n} $y \n y return [expr {$x==$y}] } # Perform a test # set test_count 0 proc test {name expr} { global bad_test test_count incr test_count set r [uplevel 1 [list expr $expr]] if {$r} { protOut "test $name OK" } else { protOut "test $name FAILED!" lappend bad_test $name if {$::HALT} exit |
︙ | ︙ | |||
199 200 201 202 203 204 205 206 207 208 | append out \n$line } return [string range $out 1 end] } protInit $fossilexe foreach testfile $argv { protOut "***** $testfile ******" source $testdir/$testfile.test } | > > > > > > > | > > > > | 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | append out \n$line } return [string range $out 1 end] } protInit $fossilexe foreach testfile $argv { set dir [file root [file tail $testfile]] file delete -force $dir file mkdir $dir set origwd [pwd] cd $dir protOut "***** $testfile ******" source $testdir/$testfile.test protOut "***** End of $testfile: [llength $bad_test] errors so far ******" cd $origwd } set nErr [llength $bad_test] protOut "***** Final result: $nErr errors out of $test_count tests" if {$nErr>0} { protOut "***** Failures: $bad_test" } |
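With the driver changes above, each *.test file now runs in its own scratch directory and contributes to the test_count/bad_test totals printed at the end. A hypothetical new test script only needs the helpers shown in the earlier hunks; for example (the file name and commands below are illustrative, not part of this check-in):

  # Hypothetical test/example.test using the tester.tcl helpers:
  # the "fossil" proc wraps [exec $::fossilexe ...] and records $RESULT/$CODE,
  # and "test" bumps test_count and collects failures in bad_test.
  fossil new example.fossil
  test example-1 {$::CODE == 0}                     ;# repository creation succeeded
  fossil open example.fossil
  fossil ls
  test example-2 {[string trim $::RESULT] eq ""}    ;# a fresh checkout manages no files yet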
Changes to win/Makefile.PellesCGMake.
|
| | > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh src/makemake.tcl" # to regenerate this file. # # HowTo # ----- # # This is a Makefile to compile fossil with PellesC from # http://www.smorgasbordet.com/pellesc/index.htm # In addition to the Compiler environment, you need |
︙ | ︙ | |||
22 23 24 25 26 27 28 | # Windows XP SP 2 # and # PellesC 6.00.4 # gmake 3.80 # zlib sources 1.2.5 # Windows 7 Home Premium # | < | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 | # Windows XP SP 2 # and # PellesC 6.00.4 # gmake 3.80 # zlib sources 1.2.5 # Windows 7 Home Premium # # PellesCDir=c:\Programme\PellesC # Select between 32/64 bit code, default is 32 bit #TARGETVERSION=64 |
︙ | ︙ | |||
56 57 58 59 60 61 62 | LINK=$(PellesCDir)/bin/polink.exe LINKFLAGS=-subsystem:console -machine:$(TARGETMACHINE_LN) /LIBPATH:$(PellesCDir)\lib\win$(TARGETEXTEND) /LIBPATH:$(PellesCDir)\lib kernel32.lib advapi32.lib delayimp$(TARGETEXTEND).lib Wsock32.lib Crtmt$(TARGETEXTEND).lib # define standard C-compiler and flags, used to compile # the fossil binary. Some special definitions follow for # special files follow CC=$(PellesCDir)\bin\pocc.exe | | > > > > > > | < < < < < < < > > > < < | | < | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 | LINK=$(PellesCDir)/bin/polink.exe LINKFLAGS=-subsystem:console -machine:$(TARGETMACHINE_LN) /LIBPATH:$(PellesCDir)\lib\win$(TARGETEXTEND) /LIBPATH:$(PellesCDir)\lib kernel32.lib advapi32.lib delayimp$(TARGETEXTEND).lib Wsock32.lib Crtmt$(TARGETEXTEND).lib # define standard C-compiler and flags, used to compile # the fossil binary. Some special definitions follow for # special files follow CC=$(PellesCDir)\bin\pocc.exe DEFINES=-D_pgmptr=g.argv[0] CCFLAGS=-T$(TARGETMACHINE_CC)-coff -Ot -W2 -Gd -Go -Ze -MT $(DEFINES) INCLUDE=/I $(PellesCDir)\Include\Win /I $(PellesCDir)\Include /I $(ZLIBSRCDIR) /I $(SRCDIR) # define commands for building the windows resource files RESOURCE=fossil.res RC=$(PellesCDir)\bin\porc.exe RCFLAGS=$(INCLUDE) -D__POCC__=1 -D_M_X$(TARGETVERSION) # define the special utilities files, needed to generate # the automatically generated source files UTILS=translate.exe mkindex.exe makeheaders.exe UTILS_OBJ=$(UTILS:.exe=.obj) UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c)) # define the sqlite files, which need special flags on compile SQLITESRC=sqlite3.c ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf)) SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj)) SQLITEDEFINES=-DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -Dlocaltime=fossil_localtime -DSQLITE_ENABLE_LOCKING_STYLE=0 # define the sqlite shell files, which need special flags on compile SQLITESHELLSRC=shell.c ORIGSQLITESHELLSRC=$(foreach sf,$(SQLITESHELLSRC),$(SRCDIR)$(sf)) SQLITESHELLOBJ=$(foreach sf,$(SQLITESHELLSRC),$(sf:.c=.obj)) SQLITESHELLDEFINES=-Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 # define the th scripting files, which need special flags on compile THSRC=th.c th_lang.c ORIGTHSRC=$(foreach sf,$(THSRC),$(SRCDIR)$(sf)) THOBJ=$(foreach sf,$(THSRC),$(sf:.c=.obj)) # define the zlib files, needed by this compile ZLIBSRC=adler32.c compress.c crc32.c deflate.c gzclose.c gzlib.c gzread.c gzwrite.c infback.c inffast.c inflate.c inftrees.c trees.c uncompr.c zutil.c ORIGZLIBSRC=$(foreach sf,$(ZLIBSRC),$(ZLIBSRCDIR)$(sf)) ZLIBOBJ=$(foreach sf,$(ZLIBSRC),$(sf:.c=.obj)) # define all fossil sources, using the standard compile and # source generation. 
These are all files in SRCDIR, which are not # mentioned as special files above: ORIGSRC=$(filter-out $(UTILS_SRC) $(ORIGTHSRC) $(ORIGSQLITESRC) $(ORIGSQLITESHELLSRC),$(wildcard $(SRCDIR)*.c)) SRC=$(subst $(SRCDIR),,$(ORIGSRC)) TRANSLATEDSRC=$(SRC:.c=_.c) TRANSLATEDOBJ=$(TRANSLATEDSRC:.c=.obj) # main target file is the application APPLICATION=fossil.exe # define the standard make target .PHONY: default default: page_index.h headers $(APPLICATION) # symbolic target to generate the source generate utils .PHONY: utils utils: $(UTILS) # link utils $(UTILS) version.exe: %.exe: %.obj $(LINK) $(LINKFLAGS) -out:"$@" $< # compiling standard fossil utils $(UTILS_OBJ): %.obj: $(SRCDIR)%.c $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" # compile special windows utils version.obj: $(WINDIR)version.c $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" # generate the translated c-source files $(TRANSLATEDSRC): %_.c: $(SRCDIR)%.c translate.exe translate.exe $< >$@ # generate the index source, containing all web references,.. page_index.h: $(TRANSLATEDSRC) mkindex.exe mkindex.exe $(TRANSLATEDSRC) >$@ # extracting version info from manifest VERSION.h: version.exe ..\manifest.uuid ..\manifest version.exe ..\manifest.uuid ..\manifest > $@ # generate the simplified headers headers: makeheaders.exe page_index.h VERSION.h ../src/sqlite3.h ../src/th.h VERSION.h makeheaders.exe $(foreach ts,$(TRANSLATEDSRC),$(ts):$(ts:_.c=.h)) ../src/sqlite3.h ../src/th.h VERSION.h echo Done >$@ # compile C sources with relevant options $(TRANSLATEDOBJ): %_.obj: %_.c %.h $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" $(SQLITEOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)%.h $(CC) $(CCFLAGS) $(SQLITEDEFINES) $(INCLUDE) "$<" -Fo"$@" $(SQLITESHELLOBJ): %.obj: $(SRCDIR)%.c $(CC) $(CCFLAGS) $(SQLITESHELLDEFINES) $(INCLUDE) "$<" -Fo"$@" $(THOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)th.h $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" $(ZLIBOBJ): %.obj: $(ZLIBSRCDIR)%.c $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" # create the windows resource with icon and version info $(RESOURCE): %.res: ../win/%.rc ../win/*.ico $(RC) $(RCFLAGS) $< -Fo"$@" # link the application $(APPLICATION): $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) headers $(RESOURCE) $(LINK) $(LINKFLAGS) -out:"$@" $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) $(RESOURCE) # cleanup .PHONY: clean clean: del /F $(TRANSLATEDOBJ) $(SQLITEOBJ) $(THOBJ) $(ZLIBOBJ) $(UTILS_OBJ) version.obj del /F $(TRANSLATEDSRC) del /F *.h headers del /F $(RESOURCE) .PHONY: clobber clobber: clean del /F *.exe |
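The generated makefile above (like the Makefile.dmc diff that follows) carries the "DO NOT EDIT" banner: changes belong in src/makemake.tcl, after which the files are regenerated. Per that banner, the regeneration step from the top of the source tree is simply:

  # Regenerate the generated makefiles after editing src/makemake.tcl
  # (exactly the command named in the banner above).
  exec tclsh src/makemake.tcl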
Changes to win/Makefile.dmc.
1 2 3 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this | | < < < < | | > > | | > | | | | < < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh src/makemake.tcl" # to regenerate this file. B = .. SRCDIR = $B\src OBJDIR = . O = .obj E = .exe # Maybe DMDIR, SSL or INCL needs adjustment DMDIR = c:\DM INCL = -I. -I$(SRCDIR) -I$B\win\include -I$(DMDIR)\extra\include #SSL = -DFOSSIL_ENABLE_SSL=1 SSL = CFLAGS = -o BCC = $(DMDIR)\bin\dmc $(CFLAGS) TCC = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL) LIBS = $(DMDIR)\extra\lib\ zlib wsock32 SQLITE_OPTIONS = -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_ENABLE_STAT2 -Dlocaltime=fossil_localtime -DSQLITE_ENABLE_LOCKING_STYLE=0 SRC = add_.c allrepo_.c attach_.c bag_.c bisect_.c blob_.c branch_.c browse_.c captcha_.c cgi_.c checkin_.c checkout_.c clearsign_.c clone_.c comformat_.c configure_.c content_.c db_.c delta_.c deltacmd_.c descendants_.c diff_.c diffcmd_.c doc_.c encode_.c event_.c export_.c file_.c finfo_.c glob_.c graph_.c gzip_.c http_.c http_socket_.c http_ssl_.c http_transport_.c import_.c info_.c leaf_.c login_.c main_.c manifest_.c md5_.c merge_.c merge3_.c name_.c path_.c pivot_.c popen_.c pqueue_.c printf_.c rebuild_.c report_.c rss_.c schema_.c search_.c setup_.c sha1_.c shun_.c skins_.c sqlcmd_.c stash_.c stat_.c style_.c sync_.c tag_.c tar_.c th_main_.c timeline_.c tkt_.c tktsetup_.c undo_.c update_.c url_.c user_.c verify_.c vfile_.c wiki_.c wikiformat_.c winhttp_.c xfer_.c zip_.c OBJ = $(OBJDIR)\add$O $(OBJDIR)\allrepo$O $(OBJDIR)\attach$O $(OBJDIR)\bag$O $(OBJDIR)\bisect$O $(OBJDIR)\blob$O $(OBJDIR)\branch$O $(OBJDIR)\browse$O $(OBJDIR)\captcha$O $(OBJDIR)\cgi$O $(OBJDIR)\checkin$O $(OBJDIR)\checkout$O $(OBJDIR)\clearsign$O $(OBJDIR)\clone$O $(OBJDIR)\comformat$O $(OBJDIR)\configure$O $(OBJDIR)\content$O $(OBJDIR)\db$O $(OBJDIR)\delta$O $(OBJDIR)\deltacmd$O $(OBJDIR)\descendants$O $(OBJDIR)\diff$O $(OBJDIR)\diffcmd$O $(OBJDIR)\doc$O $(OBJDIR)\encode$O $(OBJDIR)\event$O $(OBJDIR)\export$O $(OBJDIR)\file$O $(OBJDIR)\finfo$O $(OBJDIR)\glob$O $(OBJDIR)\graph$O $(OBJDIR)\gzip$O $(OBJDIR)\http$O $(OBJDIR)\http_socket$O $(OBJDIR)\http_ssl$O $(OBJDIR)\http_transport$O $(OBJDIR)\import$O $(OBJDIR)\info$O $(OBJDIR)\leaf$O $(OBJDIR)\login$O $(OBJDIR)\main$O $(OBJDIR)\manifest$O $(OBJDIR)\md5$O $(OBJDIR)\merge$O $(OBJDIR)\merge3$O $(OBJDIR)\name$O $(OBJDIR)\path$O $(OBJDIR)\pivot$O $(OBJDIR)\popen$O $(OBJDIR)\pqueue$O $(OBJDIR)\printf$O $(OBJDIR)\rebuild$O $(OBJDIR)\report$O $(OBJDIR)\rss$O $(OBJDIR)\schema$O $(OBJDIR)\search$O $(OBJDIR)\setup$O $(OBJDIR)\sha1$O $(OBJDIR)\shun$O $(OBJDIR)\skins$O $(OBJDIR)\sqlcmd$O $(OBJDIR)\stash$O $(OBJDIR)\stat$O $(OBJDIR)\style$O $(OBJDIR)\sync$O $(OBJDIR)\tag$O $(OBJDIR)\tar$O $(OBJDIR)\th_main$O $(OBJDIR)\timeline$O $(OBJDIR)\tkt$O $(OBJDIR)\tktsetup$O $(OBJDIR)\undo$O $(OBJDIR)\update$O $(OBJDIR)\url$O $(OBJDIR)\user$O $(OBJDIR)\verify$O $(OBJDIR)\vfile$O $(OBJDIR)\wiki$O $(OBJDIR)\wikiformat$O $(OBJDIR)\winhttp$O $(OBJDIR)\xfer$O $(OBJDIR)\zip$O $(OBJDIR)\shell$O $(OBJDIR)\sqlite3$O $(OBJDIR)\th$O $(OBJDIR)\th_lang$O RC=$(DMDIR)\bin\rcc RCFLAGS=-32 -w1 -I$(SRCDIR) /D__DMC__ APPNAME = $(OBJDIR)\fossil$(E) all: $(APPNAME) 
$(APPNAME) : translate$E mkindex$E headers $(OBJ) $(OBJDIR)\link cd $(OBJDIR) $(DMDIR)\bin\link @link $(OBJDIR)\fossil.res: $B\win\fossil.rc $(RC) $(RCFLAGS) -o$@ $** $(OBJDIR)\link: $B\win\Makefile.dmc $(OBJDIR)\fossil.res +echo add allrepo attach bag bisect blob branch browse captcha cgi checkin checkout clearsign clone comformat configure content db delta deltacmd descendants diff diffcmd doc encode event export file finfo glob graph gzip http http_socket http_ssl http_transport import info leaf login main manifest md5 merge merge3 name path pivot popen pqueue printf rebuild report rss schema search setup sha1 shun skins sqlcmd stash stat style sync tag tar th_main timeline tkt tktsetup undo update url user verify vfile wiki wikiformat winhttp xfer zip shell sqlite3 th th_lang > $@ +echo fossil >> $@ +echo fossil >> $@ +echo $(LIBS) >> $@ +echo. >> $@ +echo fossil >> $@ translate$E: $(SRCDIR)\translate.c $(BCC) -o$@ $** makeheaders$E: $(SRCDIR)\makeheaders.c $(BCC) -o$@ $** mkindex$E: $(SRCDIR)\mkindex.c $(BCC) -o$@ $** version$E: $B\win\version.c $(BCC) -o$@ $** $(OBJDIR)\shell$O : $(SRCDIR)\shell.c $(TCC) -o$@ -c -Dmain=sqlite3_shell $(SQLITE_OPTIONS) $** $(OBJDIR)\sqlite3$O : $(SRCDIR)\sqlite3.c $(TCC) -o$@ -c $(SQLITE_OPTIONS) $** $(OBJDIR)\th$O : $(SRCDIR)\th.c $(TCC) -o$@ -c $** $(OBJDIR)\th_lang$O : $(SRCDIR)\th_lang.c $(TCC) -o$@ -c $** |
︙ | ︙ | |||
111 112 113 114 115 116 117 118 119 120 121 122 123 124 | +translate$E $** > $@ $(OBJDIR)\bag$O : bag_.c bag.h $(TCC) -o$@ -c bag_.c bag_.c : $(SRCDIR)\bag.c +translate$E $** > $@ $(OBJDIR)\blob$O : blob_.c blob.h $(TCC) -o$@ -c blob_.c blob_.c : $(SRCDIR)\blob.c +translate$E $** > $@ | > > > > > > | 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 | +translate$E $** > $@ $(OBJDIR)\bag$O : bag_.c bag.h $(TCC) -o$@ -c bag_.c bag_.c : $(SRCDIR)\bag.c +translate$E $** > $@ $(OBJDIR)\bisect$O : bisect_.c bisect.h $(TCC) -o$@ -c bisect_.c bisect_.c : $(SRCDIR)\bisect.c +translate$E $** > $@ $(OBJDIR)\blob$O : blob_.c blob.h $(TCC) -o$@ -c blob_.c blob_.c : $(SRCDIR)\blob.c +translate$E $** > $@ |
︙ | ︙ | |||
226 227 228 229 230 231 232 | $(OBJDIR)\doc$O : doc_.c doc.h $(TCC) -o$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c +translate$E $** > $@ | < < < < < < > > > > > > > > > > > > > > > > > > | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 | $(OBJDIR)\doc$O : doc_.c doc.h $(TCC) -o$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c +translate$E $** > $@ $(OBJDIR)\encode$O : encode_.c encode.h $(TCC) -o$@ -c encode_.c encode_.c : $(SRCDIR)\encode.c +translate$E $** > $@ $(OBJDIR)\event$O : event_.c event.h $(TCC) -o$@ -c event_.c event_.c : $(SRCDIR)\event.c +translate$E $** > $@ $(OBJDIR)\export$O : export_.c export.h $(TCC) -o$@ -c export_.c export_.c : $(SRCDIR)\export.c +translate$E $** > $@ $(OBJDIR)\file$O : file_.c file.h $(TCC) -o$@ -c file_.c file_.c : $(SRCDIR)\file.c +translate$E $** > $@ $(OBJDIR)\finfo$O : finfo_.c finfo.h $(TCC) -o$@ -c finfo_.c finfo_.c : $(SRCDIR)\finfo.c +translate$E $** > $@ $(OBJDIR)\glob$O : glob_.c glob.h $(TCC) -o$@ -c glob_.c glob_.c : $(SRCDIR)\glob.c +translate$E $** > $@ $(OBJDIR)\graph$O : graph_.c graph.h $(TCC) -o$@ -c graph_.c graph_.c : $(SRCDIR)\graph.c +translate$E $** > $@ $(OBJDIR)\gzip$O : gzip_.c gzip.h $(TCC) -o$@ -c gzip_.c gzip_.c : $(SRCDIR)\gzip.c +translate$E $** > $@ $(OBJDIR)\http$O : http_.c http.h $(TCC) -o$@ -c http_.c http_.c : $(SRCDIR)\http.c +translate$E $** > $@ |
︙ | ︙ | |||
285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 | +translate$E $** > $@ $(OBJDIR)\http_transport$O : http_transport_.c http_transport.h $(TCC) -o$@ -c http_transport_.c http_transport_.c : $(SRCDIR)\http_transport.c +translate$E $** > $@ $(OBJDIR)\info$O : info_.c info.h $(TCC) -o$@ -c info_.c info_.c : $(SRCDIR)\info.c +translate$E $** > $@ $(OBJDIR)\login$O : login_.c login.h $(TCC) -o$@ -c login_.c login_.c : $(SRCDIR)\login.c +translate$E $** > $@ | > > > > > > > > > > > > | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | +translate$E $** > $@ $(OBJDIR)\http_transport$O : http_transport_.c http_transport.h $(TCC) -o$@ -c http_transport_.c http_transport_.c : $(SRCDIR)\http_transport.c +translate$E $** > $@ $(OBJDIR)\import$O : import_.c import.h $(TCC) -o$@ -c import_.c import_.c : $(SRCDIR)\import.c +translate$E $** > $@ $(OBJDIR)\info$O : info_.c info.h $(TCC) -o$@ -c info_.c info_.c : $(SRCDIR)\info.c +translate$E $** > $@ $(OBJDIR)\leaf$O : leaf_.c leaf.h $(TCC) -o$@ -c leaf_.c leaf_.c : $(SRCDIR)\leaf.c +translate$E $** > $@ $(OBJDIR)\login$O : login_.c login.h $(TCC) -o$@ -c login_.c login_.c : $(SRCDIR)\login.c +translate$E $** > $@ |
︙ | ︙ | |||
333 334 335 336 337 338 339 340 341 342 343 344 345 346 | +translate$E $** > $@ $(OBJDIR)\name$O : name_.c name.h $(TCC) -o$@ -c name_.c name_.c : $(SRCDIR)\name.c +translate$E $** > $@ $(OBJDIR)\pivot$O : pivot_.c pivot.h $(TCC) -o$@ -c pivot_.c pivot_.c : $(SRCDIR)\pivot.c +translate$E $** > $@ | > > > > > > | 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 | +translate$E $** > $@ $(OBJDIR)\name$O : name_.c name.h $(TCC) -o$@ -c name_.c name_.c : $(SRCDIR)\name.c +translate$E $** > $@ $(OBJDIR)\path$O : path_.c path.h $(TCC) -o$@ -c path_.c path_.c : $(SRCDIR)\path.c +translate$E $** > $@ $(OBJDIR)\pivot$O : pivot_.c pivot.h $(TCC) -o$@ -c pivot_.c pivot_.c : $(SRCDIR)\pivot.c +translate$E $** > $@ |
︙ | ︙ | |||
411 412 413 414 415 416 417 418 419 420 421 422 423 424 | +translate$E $** > $@ $(OBJDIR)\skins$O : skins_.c skins.h $(TCC) -o$@ -c skins_.c skins_.c : $(SRCDIR)\skins.c +translate$E $** > $@ $(OBJDIR)\stat$O : stat_.c stat.h $(TCC) -o$@ -c stat_.c stat_.c : $(SRCDIR)\stat.c +translate$E $** > $@ | > > > > > > > > > > > > | 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 | +translate$E $** > $@ $(OBJDIR)\skins$O : skins_.c skins.h $(TCC) -o$@ -c skins_.c skins_.c : $(SRCDIR)\skins.c +translate$E $** > $@ $(OBJDIR)\sqlcmd$O : sqlcmd_.c sqlcmd.h $(TCC) -o$@ -c sqlcmd_.c sqlcmd_.c : $(SRCDIR)\sqlcmd.c +translate$E $** > $@ $(OBJDIR)\stash$O : stash_.c stash.h $(TCC) -o$@ -c stash_.c stash_.c : $(SRCDIR)\stash.c +translate$E $** > $@ $(OBJDIR)\stat$O : stat_.c stat.h $(TCC) -o$@ -c stat_.c stat_.c : $(SRCDIR)\stat.c +translate$E $** > $@ |
︙ | ︙ | |||
435 436 437 438 439 440 441 442 443 444 445 446 447 448 | +translate$E $** > $@ $(OBJDIR)\tag$O : tag_.c tag.h $(TCC) -o$@ -c tag_.c tag_.c : $(SRCDIR)\tag.c +translate$E $** > $@ $(OBJDIR)\th_main$O : th_main_.c th_main.h $(TCC) -o$@ -c th_main_.c th_main_.c : $(SRCDIR)\th_main.c +translate$E $** > $@ | > > > > > > | 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 | +translate$E $** > $@ $(OBJDIR)\tag$O : tag_.c tag.h $(TCC) -o$@ -c tag_.c tag_.c : $(SRCDIR)\tag.c +translate$E $** > $@ $(OBJDIR)\tar$O : tar_.c tar.h $(TCC) -o$@ -c tar_.c tar_.c : $(SRCDIR)\tar.c +translate$E $** > $@ $(OBJDIR)\th_main$O : th_main_.c th_main.h $(TCC) -o$@ -c th_main_.c th_main_.c : $(SRCDIR)\th_main.c +translate$E $** > $@ |
︙ | ︙ | |||
527 528 529 530 531 532 533 | $(OBJDIR)\zip$O : zip_.c zip.h $(TCC) -o$@ -c zip_.c zip_.c : $(SRCDIR)\zip.c +translate$E $** > $@ headers: makeheaders$E page_index.h VERSION.h | | | 578 579 580 581 582 583 584 585 586 | $(OBJDIR)\zip$O : zip_.c zip.h $(TCC) -o$@ -c zip_.c zip_.c : $(SRCDIR)\zip.c +translate$E $** > $@ headers: makeheaders$E page_index.h VERSION.h +makeheaders$E add_.c:add.h allrepo_.c:allrepo.h attach_.c:attach.h bag_.c:bag.h bisect_.c:bisect.h blob_.c:blob.h branch_.c:branch.h browse_.c:browse.h captcha_.c:captcha.h cgi_.c:cgi.h checkin_.c:checkin.h checkout_.c:checkout.h clearsign_.c:clearsign.h clone_.c:clone.h comformat_.c:comformat.h configure_.c:configure.h content_.c:content.h db_.c:db.h delta_.c:delta.h deltacmd_.c:deltacmd.h descendants_.c:descendants.h diff_.c:diff.h diffcmd_.c:diffcmd.h doc_.c:doc.h encode_.c:encode.h event_.c:event.h export_.c:export.h file_.c:file.h finfo_.c:finfo.h glob_.c:glob.h graph_.c:graph.h gzip_.c:gzip.h http_.c:http.h http_socket_.c:http_socket.h http_ssl_.c:http_ssl.h http_transport_.c:http_transport.h import_.c:import.h info_.c:info.h leaf_.c:leaf.h login_.c:login.h main_.c:main.h manifest_.c:manifest.h md5_.c:md5.h merge_.c:merge.h merge3_.c:merge3.h name_.c:name.h path_.c:path.h pivot_.c:pivot.h popen_.c:popen.h pqueue_.c:pqueue.h printf_.c:printf.h rebuild_.c:rebuild.h report_.c:report.h rss_.c:rss.h schema_.c:schema.h search_.c:search.h setup_.c:setup.h sha1_.c:sha1.h shun_.c:shun.h skins_.c:skins.h sqlcmd_.c:sqlcmd.h stash_.c:stash.h stat_.c:stat.h style_.c:style.h sync_.c:sync.h tag_.c:tag.h tar_.c:tar.h th_main_.c:th_main.h timeline_.c:timeline.h tkt_.c:tkt.h tktsetup_.c:tktsetup.h undo_.c:undo.h update_.c:update.h url_.c:url.h user_.c:user.h verify_.c:verify.h vfile_.c:vfile.h wiki_.c:wiki.h wikiformat_.c:wikiformat.h winhttp_.c:winhttp.h xfer_.c:xfer.h zip_.c:zip.h $(SRCDIR)\sqlite3.h $(SRCDIR)\th.h VERSION.h @copy /Y nul: headers |
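A minimal way to exercise this makefile, assuming Digital Mars C is installed under c:\DM as the DMDIR default expects and its make (or a compatible make) is on the PATH, is to run it from the win\ directory so that the B = .. relative paths resolve:

   cd win
   make -f Makefile.dmc

If the compiler, zlib or OpenSSL live elsewhere, adjust DMDIR, INCL, SSL and LIBS near the top of the file first.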
Added win/Makefile.mingw.

424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 | #!/usr/bin/make # # This is a makefile for us on windows using mingw. # #### The toplevel directory of the source tree. Fossil can be built # in a directory that is separate from the source tree. Just change # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # # OBJDIR = wbld #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # BCC = gcc #### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) # # FOSSIL_ENABLE_SSL=1 #### The directory in which the zlib compression library is installed. # # ZLIBDIR = /programs/gnuwin32 #### C Compile and options for use in building executables that # will run on the target platform. This is usually the same # as BCC, unless you are cross-compiling. This C compiler builds # the finished binary for fossil. The BCC compiler above is used # for building intermediate code-generator tools. 
# TCC = gcc -Os -Wall -L$(ZLIBDIR)/lib -I$(ZLIBDIR)/include # With HTTPS support ifdef FOSSIL_ENABLE_SSL TCC += -static -DFOSSIL_ENABLE_SSL=1 endif #### Extra arguments for linking the finished binary. Fossil needs # to link against the Z-Lib compression library. There are no # other dependencies. We sometimes add the -static option here # so that we can build a static executable that will run in a # chroot jail. # #LIB = -lz -lws2_32 # OpenSSL: ifdef FOSSIL_ENABLE_SSL LIB += -lssl -lcrypto -lgdi32 endif LIB += -lmingwex -lz -lws2_32 #### Tcl shell for use in running the fossil testsuite. This is only # used for testing. If you do not run # TCLSH = tclsh #### Nullsoft installer makensis location # MAKENSIS = "c:\Program Files\NSIS\makensis.exe" #### Include a configuration file that can override any one of these settings. # -include config.w32 # STOP HERE # You should not need to change anything below this line #-------------------------------------------------------- XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ $(SRCDIR)/bag.c \ $(SRCDIR)/bisect.c \ $(SRCDIR)/blob.c \ $(SRCDIR)/branch.c \ $(SRCDIR)/browse.c \ $(SRCDIR)/captcha.c \ $(SRCDIR)/cgi.c \ $(SRCDIR)/checkin.c \ $(SRCDIR)/checkout.c \ $(SRCDIR)/clearsign.c \ $(SRCDIR)/clone.c \ $(SRCDIR)/comformat.c \ $(SRCDIR)/configure.c \ $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ $(SRCDIR)/http_transport.c \ $(SRCDIR)/import.c \ $(SRCDIR)/info.c \ $(SRCDIR)/leaf.c \ $(SRCDIR)/login.c \ $(SRCDIR)/main.c \ $(SRCDIR)/manifest.c \ $(SRCDIR)/md5.c \ $(SRCDIR)/merge.c \ $(SRCDIR)/merge3.c \ $(SRCDIR)/name.c \ $(SRCDIR)/path.c \ $(SRCDIR)/pivot.c \ $(SRCDIR)/popen.c \ $(SRCDIR)/pqueue.c \ $(SRCDIR)/printf.c \ $(SRCDIR)/rebuild.c \ $(SRCDIR)/report.c \ $(SRCDIR)/rss.c \ $(SRCDIR)/schema.c \ $(SRCDIR)/search.c \ $(SRCDIR)/setup.c \ $(SRCDIR)/sha1.c \ $(SRCDIR)/shun.c \ $(SRCDIR)/skins.c \ $(SRCDIR)/sqlcmd.c \ $(SRCDIR)/stash.c \ $(SRCDIR)/stat.c \ $(SRCDIR)/style.c \ $(SRCDIR)/sync.c \ $(SRCDIR)/tag.c \ $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ $(SRCDIR)/wiki.c \ $(SRCDIR)/wikiformat.c \ $(SRCDIR)/winhttp.c \ $(SRCDIR)/xfer.c \ $(SRCDIR)/zip.c TRANS_SRC = \ $(OBJDIR)/add_.c \ $(OBJDIR)/allrepo_.c \ $(OBJDIR)/attach_.c \ $(OBJDIR)/bag_.c \ $(OBJDIR)/bisect_.c \ $(OBJDIR)/blob_.c \ $(OBJDIR)/branch_.c \ $(OBJDIR)/browse_.c \ $(OBJDIR)/captcha_.c \ $(OBJDIR)/cgi_.c \ $(OBJDIR)/checkin_.c \ $(OBJDIR)/checkout_.c \ $(OBJDIR)/clearsign_.c \ $(OBJDIR)/clone_.c \ $(OBJDIR)/comformat_.c \ $(OBJDIR)/configure_.c \ $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ $(OBJDIR)/http_transport_.c \ $(OBJDIR)/import_.c \ 
$(OBJDIR)/info_.c \ $(OBJDIR)/leaf_.c \ $(OBJDIR)/login_.c \ $(OBJDIR)/main_.c \ $(OBJDIR)/manifest_.c \ $(OBJDIR)/md5_.c \ $(OBJDIR)/merge_.c \ $(OBJDIR)/merge3_.c \ $(OBJDIR)/name_.c \ $(OBJDIR)/path_.c \ $(OBJDIR)/pivot_.c \ $(OBJDIR)/popen_.c \ $(OBJDIR)/pqueue_.c \ $(OBJDIR)/printf_.c \ $(OBJDIR)/rebuild_.c \ $(OBJDIR)/report_.c \ $(OBJDIR)/rss_.c \ $(OBJDIR)/schema_.c \ $(OBJDIR)/search_.c \ $(OBJDIR)/setup_.c \ $(OBJDIR)/sha1_.c \ $(OBJDIR)/shun_.c \ $(OBJDIR)/skins_.c \ $(OBJDIR)/sqlcmd_.c \ $(OBJDIR)/stash_.c \ $(OBJDIR)/stat_.c \ $(OBJDIR)/style_.c \ $(OBJDIR)/sync_.c \ $(OBJDIR)/tag_.c \ $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ $(OBJDIR)/wiki_.c \ $(OBJDIR)/wikiformat_.c \ $(OBJDIR)/winhttp_.c \ $(OBJDIR)/xfer_.c \ $(OBJDIR)/zip_.c OBJ = \ $(OBJDIR)/add.o \ $(OBJDIR)/allrepo.o \ $(OBJDIR)/attach.o \ $(OBJDIR)/bag.o \ $(OBJDIR)/bisect.o \ $(OBJDIR)/blob.o \ $(OBJDIR)/branch.o \ $(OBJDIR)/browse.o \ $(OBJDIR)/captcha.o \ $(OBJDIR)/cgi.o \ $(OBJDIR)/checkin.o \ $(OBJDIR)/checkout.o \ $(OBJDIR)/clearsign.o \ $(OBJDIR)/clone.o \ $(OBJDIR)/comformat.o \ $(OBJDIR)/configure.o \ $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ $(OBJDIR)/http_transport.o \ $(OBJDIR)/import.o \ $(OBJDIR)/info.o \ $(OBJDIR)/leaf.o \ $(OBJDIR)/login.o \ $(OBJDIR)/main.o \ $(OBJDIR)/manifest.o \ $(OBJDIR)/md5.o \ $(OBJDIR)/merge.o \ $(OBJDIR)/merge3.o \ $(OBJDIR)/name.o \ $(OBJDIR)/path.o \ $(OBJDIR)/pivot.o \ $(OBJDIR)/popen.o \ $(OBJDIR)/pqueue.o \ $(OBJDIR)/printf.o \ $(OBJDIR)/rebuild.o \ $(OBJDIR)/report.o \ $(OBJDIR)/rss.o \ $(OBJDIR)/schema.o \ $(OBJDIR)/search.o \ $(OBJDIR)/setup.o \ $(OBJDIR)/sha1.o \ $(OBJDIR)/shun.o \ $(OBJDIR)/skins.o \ $(OBJDIR)/sqlcmd.o \ $(OBJDIR)/stash.o \ $(OBJDIR)/stat.o \ $(OBJDIR)/style.o \ $(OBJDIR)/sync.o \ $(OBJDIR)/tag.o \ $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ $(OBJDIR)/wiki.o \ $(OBJDIR)/wikiformat.o \ $(OBJDIR)/winhttp.o \ $(OBJDIR)/xfer.o \ $(OBJDIR)/zip.o APPNAME = fossil.exe TRANSLATE = $(subst /,\\,$(OBJDIR)/translate.exe) MAKEHEADERS = $(subst /,\\,$(OBJDIR)/makeheaders.exe) MKINDEX = $(subst /,\\,$(OBJDIR)/mkindex.exe) VERSION = $(subst /,\\,$(OBJDIR)/version.exe) all: $(OBJDIR) $(APPNAME) $(OBJDIR)/icon.o: $(SRCDIR)/../win/icon.rc cp $(SRCDIR)/../win/icon.rc $(OBJDIR) windres $(OBJDIR)/icon.rc -o $(OBJDIR)/icon.o install: $(APPNAME) mv $(APPNAME) $(INSTALLDIR) $(OBJDIR): mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(BCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(BCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(BCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c $(VERSION): $(SRCDIR)/../win/version.c $(BCC) -o $(OBJDIR)/version $(SRCDIR)/../win/version.c # WARNING. DANGER. Running the testsuite modifies the repository the # build is done from, i.e. 
the checkout belongs to. Do not sync/push # the repository after running the tests. test: $(APPNAME) $(TCLSH) test/tester.tcl $(APPNAME) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(VERSION) $(VERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest >$(OBJDIR)/VERSION.h EXTRAOBJ = $(OBJDIR)/sqlite3.o $(OBJDIR)/shell.o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o $(APPNAME): $(OBJDIR)/headers $(OBJ) $(EXTRAOBJ) $(OBJDIR)/icon.o $(TCC) -o $(APPNAME) $(OBJ) $(EXTRAOBJ) $(LIB) $(OBJDIR)/icon.o # This rule prevents make from using its default rules to try build # an executable named "manifest" out of the file named "manifest.c" # $(SRCDIR)/../manifest: # noop # Requires msys to be installed in addition to the mingw, for the "rm" # command. "del" will not work here because it is not a separate command # but a MSDOS-shell builtin. # clean: rm -rf $(OBJDIR) $(APPNAME) setup: $(OBJDIR) $(APPNAME) $(MAKENSIS) ./fossil.nsi $(OBJDIR)/page_index.h: $(TRANS_SRC) $(OBJDIR)/mkindex $(MKINDEX) $(TRANS_SRC) >$@ $(OBJDIR)/headers: $(OBJDIR)/page_index.h $(OBJDIR)/makeheaders $(OBJDIR)/VERSION.h $(MAKEHEADERS) $(OBJDIR)/add_.c:$(OBJDIR)/add.h $(OBJDIR)/allrepo_.c:$(OBJDIR)/allrepo.h $(OBJDIR)/attach_.c:$(OBJDIR)/attach.h $(OBJDIR)/bag_.c:$(OBJDIR)/bag.h $(OBJDIR)/bisect_.c:$(OBJDIR)/bisect.h $(OBJDIR)/blob_.c:$(OBJDIR)/blob.h $(OBJDIR)/branch_.c:$(OBJDIR)/branch.h $(OBJDIR)/browse_.c:$(OBJDIR)/browse.h $(OBJDIR)/captcha_.c:$(OBJDIR)/captcha.h $(OBJDIR)/cgi_.c:$(OBJDIR)/cgi.h $(OBJDIR)/checkin_.c:$(OBJDIR)/checkin.h $(OBJDIR)/checkout_.c:$(OBJDIR)/checkout.h $(OBJDIR)/clearsign_.c:$(OBJDIR)/clearsign.h $(OBJDIR)/clone_.c:$(OBJDIR)/clone.h $(OBJDIR)/comformat_.c:$(OBJDIR)/comformat.h $(OBJDIR)/configure_.c:$(OBJDIR)/configure.h $(OBJDIR)/content_.c:$(OBJDIR)/content.h $(OBJDIR)/db_.c:$(OBJDIR)/db.h $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h $(OBJDIR)/event_.c:$(OBJDIR)/event.h $(OBJDIR)/export_.c:$(OBJDIR)/export.h $(OBJDIR)/file_.c:$(OBJDIR)/file.h $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h $(OBJDIR)/http_.c:$(OBJDIR)/http.h $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h $(OBJDIR)/http_transport_.c:$(OBJDIR)/http_transport.h $(OBJDIR)/import_.c:$(OBJDIR)/import.h $(OBJDIR)/info_.c:$(OBJDIR)/info.h $(OBJDIR)/leaf_.c:$(OBJDIR)/leaf.h $(OBJDIR)/login_.c:$(OBJDIR)/login.h $(OBJDIR)/main_.c:$(OBJDIR)/main.h $(OBJDIR)/manifest_.c:$(OBJDIR)/manifest.h $(OBJDIR)/md5_.c:$(OBJDIR)/md5.h $(OBJDIR)/merge_.c:$(OBJDIR)/merge.h $(OBJDIR)/merge3_.c:$(OBJDIR)/merge3.h $(OBJDIR)/name_.c:$(OBJDIR)/name.h $(OBJDIR)/path_.c:$(OBJDIR)/path.h $(OBJDIR)/pivot_.c:$(OBJDIR)/pivot.h $(OBJDIR)/popen_.c:$(OBJDIR)/popen.h $(OBJDIR)/pqueue_.c:$(OBJDIR)/pqueue.h $(OBJDIR)/printf_.c:$(OBJDIR)/printf.h $(OBJDIR)/rebuild_.c:$(OBJDIR)/rebuild.h $(OBJDIR)/report_.c:$(OBJDIR)/report.h $(OBJDIR)/rss_.c:$(OBJDIR)/rss.h $(OBJDIR)/schema_.c:$(OBJDIR)/schema.h $(OBJDIR)/search_.c:$(OBJDIR)/search.h $(OBJDIR)/setup_.c:$(OBJDIR)/setup.h $(OBJDIR)/sha1_.c:$(OBJDIR)/sha1.h $(OBJDIR)/shun_.c:$(OBJDIR)/shun.h $(OBJDIR)/skins_.c:$(OBJDIR)/skins.h $(OBJDIR)/sqlcmd_.c:$(OBJDIR)/sqlcmd.h $(OBJDIR)/stash_.c:$(OBJDIR)/stash.h $(OBJDIR)/stat_.c:$(OBJDIR)/stat.h 
$(OBJDIR)/style_.c:$(OBJDIR)/style.h $(OBJDIR)/sync_.c:$(OBJDIR)/sync.h $(OBJDIR)/tag_.c:$(OBJDIR)/tag.h $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h $(OBJDIR)/update_.c:$(OBJDIR)/update.h $(OBJDIR)/url_.c:$(OBJDIR)/url.h $(OBJDIR)/user_.c:$(OBJDIR)/user.h $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h $(OBJDIR)/wiki_.c:$(OBJDIR)/wiki.h $(OBJDIR)/wikiformat_.c:$(OBJDIR)/wikiformat.h $(OBJDIR)/winhttp_.c:$(OBJDIR)/winhttp.h $(OBJDIR)/xfer_.c:$(OBJDIR)/xfer.h $(OBJDIR)/zip_.c:$(OBJDIR)/zip.h $(SRCDIR)/sqlite3.h $(SRCDIR)/th.h $(OBJDIR)/VERSION.h echo Done >$(OBJDIR)/headers $(OBJDIR)/headers: Makefile Makefile: $(OBJDIR)/add_.c: $(SRCDIR)/add.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/add.c >$(OBJDIR)/add_.c $(OBJDIR)/add.o: $(OBJDIR)/add_.c $(OBJDIR)/add.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/add.o -c $(OBJDIR)/add_.c add.h: $(OBJDIR)/headers $(OBJDIR)/allrepo_.c: $(SRCDIR)/allrepo.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/allrepo.c >$(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.o: $(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/allrepo.o -c $(OBJDIR)/allrepo_.c allrepo.h: $(OBJDIR)/headers $(OBJDIR)/attach_.c: $(SRCDIR)/attach.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/attach.c >$(OBJDIR)/attach_.c $(OBJDIR)/attach.o: $(OBJDIR)/attach_.c $(OBJDIR)/attach.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/attach.o -c $(OBJDIR)/attach_.c attach.h: $(OBJDIR)/headers $(OBJDIR)/bag_.c: $(SRCDIR)/bag.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/bag.c >$(OBJDIR)/bag_.c $(OBJDIR)/bag.o: $(OBJDIR)/bag_.c $(OBJDIR)/bag.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/bag.o -c $(OBJDIR)/bag_.c bag.h: $(OBJDIR)/headers $(OBJDIR)/bisect_.c: $(SRCDIR)/bisect.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/bisect.c >$(OBJDIR)/bisect_.c $(OBJDIR)/bisect.o: $(OBJDIR)/bisect_.c $(OBJDIR)/bisect.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/bisect.o -c $(OBJDIR)/bisect_.c bisect.h: $(OBJDIR)/headers $(OBJDIR)/blob_.c: $(SRCDIR)/blob.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/blob.c >$(OBJDIR)/blob_.c $(OBJDIR)/blob.o: $(OBJDIR)/blob_.c $(OBJDIR)/blob.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/blob.o -c $(OBJDIR)/blob_.c blob.h: $(OBJDIR)/headers $(OBJDIR)/branch_.c: $(SRCDIR)/branch.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/branch.c >$(OBJDIR)/branch_.c $(OBJDIR)/branch.o: $(OBJDIR)/branch_.c $(OBJDIR)/branch.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/branch.o -c $(OBJDIR)/branch_.c branch.h: $(OBJDIR)/headers $(OBJDIR)/browse_.c: $(SRCDIR)/browse.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/browse.c >$(OBJDIR)/browse_.c $(OBJDIR)/browse.o: $(OBJDIR)/browse_.c $(OBJDIR)/browse.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/browse.o -c $(OBJDIR)/browse_.c browse.h: $(OBJDIR)/headers $(OBJDIR)/captcha_.c: $(SRCDIR)/captcha.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/captcha.c >$(OBJDIR)/captcha_.c $(OBJDIR)/captcha.o: $(OBJDIR)/captcha_.c $(OBJDIR)/captcha.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/captcha.o -c $(OBJDIR)/captcha_.c captcha.h: $(OBJDIR)/headers $(OBJDIR)/cgi_.c: $(SRCDIR)/cgi.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/cgi.c >$(OBJDIR)/cgi_.c $(OBJDIR)/cgi.o: $(OBJDIR)/cgi_.c $(OBJDIR)/cgi.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/cgi.o -c $(OBJDIR)/cgi_.c cgi.h: $(OBJDIR)/headers $(OBJDIR)/checkin_.c: $(SRCDIR)/checkin.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/checkin.c 
>$(OBJDIR)/checkin_.c $(OBJDIR)/checkin.o: $(OBJDIR)/checkin_.c $(OBJDIR)/checkin.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/checkin.o -c $(OBJDIR)/checkin_.c checkin.h: $(OBJDIR)/headers $(OBJDIR)/checkout_.c: $(SRCDIR)/checkout.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/checkout.c >$(OBJDIR)/checkout_.c $(OBJDIR)/checkout.o: $(OBJDIR)/checkout_.c $(OBJDIR)/checkout.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/checkout.o -c $(OBJDIR)/checkout_.c checkout.h: $(OBJDIR)/headers $(OBJDIR)/clearsign_.c: $(SRCDIR)/clearsign.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/clearsign.c >$(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.o: $(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/clearsign.o -c $(OBJDIR)/clearsign_.c clearsign.h: $(OBJDIR)/headers $(OBJDIR)/clone_.c: $(SRCDIR)/clone.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/clone.c >$(OBJDIR)/clone_.c $(OBJDIR)/clone.o: $(OBJDIR)/clone_.c $(OBJDIR)/clone.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/clone.o -c $(OBJDIR)/clone_.c clone.h: $(OBJDIR)/headers $(OBJDIR)/comformat_.c: $(SRCDIR)/comformat.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/comformat.c >$(OBJDIR)/comformat_.c $(OBJDIR)/comformat.o: $(OBJDIR)/comformat_.c $(OBJDIR)/comformat.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/comformat.o -c $(OBJDIR)/comformat_.c comformat.h: $(OBJDIR)/headers $(OBJDIR)/configure_.c: $(SRCDIR)/configure.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/configure.c >$(OBJDIR)/configure_.c $(OBJDIR)/configure.o: $(OBJDIR)/configure_.c $(OBJDIR)/configure.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/configure.o -c $(OBJDIR)/configure_.c configure.h: $(OBJDIR)/headers $(OBJDIR)/content_.c: $(SRCDIR)/content.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/content.c >$(OBJDIR)/content_.c $(OBJDIR)/content.o: $(OBJDIR)/content_.c $(OBJDIR)/content.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/content.o -c $(OBJDIR)/content_.c content.h: $(OBJDIR)/headers $(OBJDIR)/db_.c: $(SRCDIR)/db.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/db.c >$(OBJDIR)/db_.c $(OBJDIR)/db.o: $(OBJDIR)/db_.c $(OBJDIR)/db.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/db.o -c $(OBJDIR)/db_.c db.h: $(OBJDIR)/headers $(OBJDIR)/delta_.c: $(SRCDIR)/delta.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/delta.c >$(OBJDIR)/delta_.c $(OBJDIR)/delta.o: $(OBJDIR)/delta_.c $(OBJDIR)/delta.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/delta.o -c $(OBJDIR)/delta_.c delta.h: $(OBJDIR)/headers $(OBJDIR)/deltacmd_.c: $(SRCDIR)/deltacmd.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/deltacmd.c >$(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.o: $(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/deltacmd.o -c $(OBJDIR)/deltacmd_.c deltacmd.h: $(OBJDIR)/headers $(OBJDIR)/descendants_.c: $(SRCDIR)/descendants.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/descendants.c >$(OBJDIR)/descendants_.c $(OBJDIR)/descendants.o: $(OBJDIR)/descendants_.c $(OBJDIR)/descendants.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/descendants.o -c $(OBJDIR)/descendants_.c descendants.h: $(OBJDIR)/headers $(OBJDIR)/diff_.c: $(SRCDIR)/diff.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/diff.c >$(OBJDIR)/diff_.c $(OBJDIR)/diff.o: $(OBJDIR)/diff_.c $(OBJDIR)/diff.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diff.o -c $(OBJDIR)/diff_.c diff.h: $(OBJDIR)/headers $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/diffcmd.c >$(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c diffcmd.h: 
$(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/doc.c >$(OBJDIR)/doc_.c $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c doc.h: $(OBJDIR)/headers $(OBJDIR)/encode_.c: $(SRCDIR)/encode.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/encode.c >$(OBJDIR)/encode_.c $(OBJDIR)/encode.o: $(OBJDIR)/encode_.c $(OBJDIR)/encode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/encode.o -c $(OBJDIR)/encode_.c encode.h: $(OBJDIR)/headers $(OBJDIR)/event_.c: $(SRCDIR)/event.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/event.c >$(OBJDIR)/event_.c $(OBJDIR)/event.o: $(OBJDIR)/event_.c $(OBJDIR)/event.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/event.o -c $(OBJDIR)/event_.c event.h: $(OBJDIR)/headers $(OBJDIR)/export_.c: $(SRCDIR)/export.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/export.c >$(OBJDIR)/export_.c $(OBJDIR)/export.o: $(OBJDIR)/export_.c $(OBJDIR)/export.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/export.o -c $(OBJDIR)/export_.c export.h: $(OBJDIR)/headers $(OBJDIR)/file_.c: $(SRCDIR)/file.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/file.c >$(OBJDIR)/file_.c $(OBJDIR)/file.o: $(OBJDIR)/file_.c $(OBJDIR)/file.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/file.o -c $(OBJDIR)/file_.c file.h: $(OBJDIR)/headers $(OBJDIR)/finfo_.c: $(SRCDIR)/finfo.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/finfo.c >$(OBJDIR)/finfo_.c $(OBJDIR)/finfo.o: $(OBJDIR)/finfo_.c $(OBJDIR)/finfo.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/finfo.o -c $(OBJDIR)/finfo_.c finfo.h: $(OBJDIR)/headers $(OBJDIR)/glob_.c: $(SRCDIR)/glob.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/glob.c >$(OBJDIR)/glob_.c $(OBJDIR)/glob.o: $(OBJDIR)/glob_.c $(OBJDIR)/glob.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/glob.o -c $(OBJDIR)/glob_.c glob.h: $(OBJDIR)/headers $(OBJDIR)/graph_.c: $(SRCDIR)/graph.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/graph.c >$(OBJDIR)/graph_.c $(OBJDIR)/graph.o: $(OBJDIR)/graph_.c $(OBJDIR)/graph.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/graph.o -c $(OBJDIR)/graph_.c graph.h: $(OBJDIR)/headers $(OBJDIR)/gzip_.c: $(SRCDIR)/gzip.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/gzip.c >$(OBJDIR)/gzip_.c $(OBJDIR)/gzip.o: $(OBJDIR)/gzip_.c $(OBJDIR)/gzip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/gzip.o -c $(OBJDIR)/gzip_.c gzip.h: $(OBJDIR)/headers $(OBJDIR)/http_.c: $(SRCDIR)/http.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/http.c >$(OBJDIR)/http_.c $(OBJDIR)/http.o: $(OBJDIR)/http_.c $(OBJDIR)/http.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http.o -c $(OBJDIR)/http_.c http.h: $(OBJDIR)/headers $(OBJDIR)/http_socket_.c: $(SRCDIR)/http_socket.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/http_socket.c >$(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.o: $(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http_socket.o -c $(OBJDIR)/http_socket_.c http_socket.h: $(OBJDIR)/headers $(OBJDIR)/http_ssl_.c: $(SRCDIR)/http_ssl.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/http_ssl.c >$(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.o: $(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http_ssl.o -c $(OBJDIR)/http_ssl_.c http_ssl.h: $(OBJDIR)/headers $(OBJDIR)/http_transport_.c: $(SRCDIR)/http_transport.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/http_transport.c >$(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.o: $(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/http_transport.o -c $(OBJDIR)/http_transport_.c http_transport.h: 
$(OBJDIR)/headers $(OBJDIR)/import_.c: $(SRCDIR)/import.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/import.c >$(OBJDIR)/import_.c $(OBJDIR)/import.o: $(OBJDIR)/import_.c $(OBJDIR)/import.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/import.o -c $(OBJDIR)/import_.c import.h: $(OBJDIR)/headers $(OBJDIR)/info_.c: $(SRCDIR)/info.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/info.c >$(OBJDIR)/info_.c $(OBJDIR)/info.o: $(OBJDIR)/info_.c $(OBJDIR)/info.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/info.o -c $(OBJDIR)/info_.c info.h: $(OBJDIR)/headers $(OBJDIR)/leaf_.c: $(SRCDIR)/leaf.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/leaf.c >$(OBJDIR)/leaf_.c $(OBJDIR)/leaf.o: $(OBJDIR)/leaf_.c $(OBJDIR)/leaf.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/leaf.o -c $(OBJDIR)/leaf_.c leaf.h: $(OBJDIR)/headers $(OBJDIR)/login_.c: $(SRCDIR)/login.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/login.c >$(OBJDIR)/login_.c $(OBJDIR)/login.o: $(OBJDIR)/login_.c $(OBJDIR)/login.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/login.o -c $(OBJDIR)/login_.c login.h: $(OBJDIR)/headers $(OBJDIR)/main_.c: $(SRCDIR)/main.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/main.c >$(OBJDIR)/main_.c $(OBJDIR)/main.o: $(OBJDIR)/main_.c $(OBJDIR)/main.h $(OBJDIR)/page_index.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/main.o -c $(OBJDIR)/main_.c main.h: $(OBJDIR)/headers $(OBJDIR)/manifest_.c: $(SRCDIR)/manifest.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/manifest.c >$(OBJDIR)/manifest_.c $(OBJDIR)/manifest.o: $(OBJDIR)/manifest_.c $(OBJDIR)/manifest.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/manifest.o -c $(OBJDIR)/manifest_.c manifest.h: $(OBJDIR)/headers $(OBJDIR)/md5_.c: $(SRCDIR)/md5.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/md5.c >$(OBJDIR)/md5_.c $(OBJDIR)/md5.o: $(OBJDIR)/md5_.c $(OBJDIR)/md5.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/md5.o -c $(OBJDIR)/md5_.c md5.h: $(OBJDIR)/headers $(OBJDIR)/merge_.c: $(SRCDIR)/merge.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/merge.c >$(OBJDIR)/merge_.c $(OBJDIR)/merge.o: $(OBJDIR)/merge_.c $(OBJDIR)/merge.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/merge.o -c $(OBJDIR)/merge_.c merge.h: $(OBJDIR)/headers $(OBJDIR)/merge3_.c: $(SRCDIR)/merge3.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/merge3.c >$(OBJDIR)/merge3_.c $(OBJDIR)/merge3.o: $(OBJDIR)/merge3_.c $(OBJDIR)/merge3.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/merge3.o -c $(OBJDIR)/merge3_.c merge3.h: $(OBJDIR)/headers $(OBJDIR)/name_.c: $(SRCDIR)/name.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/name.c >$(OBJDIR)/name_.c $(OBJDIR)/name.o: $(OBJDIR)/name_.c $(OBJDIR)/name.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/name.o -c $(OBJDIR)/name_.c name.h: $(OBJDIR)/headers $(OBJDIR)/path_.c: $(SRCDIR)/path.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/path.c >$(OBJDIR)/path_.c $(OBJDIR)/path.o: $(OBJDIR)/path_.c $(OBJDIR)/path.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/path.o -c $(OBJDIR)/path_.c path.h: $(OBJDIR)/headers $(OBJDIR)/pivot_.c: $(SRCDIR)/pivot.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/pivot.c >$(OBJDIR)/pivot_.c $(OBJDIR)/pivot.o: $(OBJDIR)/pivot_.c $(OBJDIR)/pivot.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/pivot.o -c $(OBJDIR)/pivot_.c pivot.h: $(OBJDIR)/headers $(OBJDIR)/popen_.c: $(SRCDIR)/popen.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/popen.c >$(OBJDIR)/popen_.c $(OBJDIR)/popen.o: $(OBJDIR)/popen_.c $(OBJDIR)/popen.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/popen.o -c $(OBJDIR)/popen_.c popen.h: $(OBJDIR)/headers $(OBJDIR)/pqueue_.c: $(SRCDIR)/pqueue.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/pqueue.c >$(OBJDIR)/pqueue_.c 
$(OBJDIR)/pqueue.o: $(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/pqueue.o -c $(OBJDIR)/pqueue_.c pqueue.h: $(OBJDIR)/headers $(OBJDIR)/printf_.c: $(SRCDIR)/printf.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/printf.c >$(OBJDIR)/printf_.c $(OBJDIR)/printf.o: $(OBJDIR)/printf_.c $(OBJDIR)/printf.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/printf.o -c $(OBJDIR)/printf_.c printf.h: $(OBJDIR)/headers $(OBJDIR)/rebuild_.c: $(SRCDIR)/rebuild.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/rebuild.c >$(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.o: $(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/rebuild.o -c $(OBJDIR)/rebuild_.c rebuild.h: $(OBJDIR)/headers $(OBJDIR)/report_.c: $(SRCDIR)/report.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/report.c >$(OBJDIR)/report_.c $(OBJDIR)/report.o: $(OBJDIR)/report_.c $(OBJDIR)/report.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/report.o -c $(OBJDIR)/report_.c report.h: $(OBJDIR)/headers $(OBJDIR)/rss_.c: $(SRCDIR)/rss.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/rss.c >$(OBJDIR)/rss_.c $(OBJDIR)/rss.o: $(OBJDIR)/rss_.c $(OBJDIR)/rss.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/rss.o -c $(OBJDIR)/rss_.c rss.h: $(OBJDIR)/headers $(OBJDIR)/schema_.c: $(SRCDIR)/schema.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/schema.c >$(OBJDIR)/schema_.c $(OBJDIR)/schema.o: $(OBJDIR)/schema_.c $(OBJDIR)/schema.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/schema.o -c $(OBJDIR)/schema_.c schema.h: $(OBJDIR)/headers $(OBJDIR)/search_.c: $(SRCDIR)/search.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/search.c >$(OBJDIR)/search_.c $(OBJDIR)/search.o: $(OBJDIR)/search_.c $(OBJDIR)/search.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/search.o -c $(OBJDIR)/search_.c search.h: $(OBJDIR)/headers $(OBJDIR)/setup_.c: $(SRCDIR)/setup.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/setup.c >$(OBJDIR)/setup_.c $(OBJDIR)/setup.o: $(OBJDIR)/setup_.c $(OBJDIR)/setup.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/setup.o -c $(OBJDIR)/setup_.c setup.h: $(OBJDIR)/headers $(OBJDIR)/sha1_.c: $(SRCDIR)/sha1.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/sha1.c >$(OBJDIR)/sha1_.c $(OBJDIR)/sha1.o: $(OBJDIR)/sha1_.c $(OBJDIR)/sha1.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/sha1.o -c $(OBJDIR)/sha1_.c sha1.h: $(OBJDIR)/headers $(OBJDIR)/shun_.c: $(SRCDIR)/shun.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/shun.c >$(OBJDIR)/shun_.c $(OBJDIR)/shun.o: $(OBJDIR)/shun_.c $(OBJDIR)/shun.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/shun.o -c $(OBJDIR)/shun_.c shun.h: $(OBJDIR)/headers $(OBJDIR)/skins_.c: $(SRCDIR)/skins.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/skins.c >$(OBJDIR)/skins_.c $(OBJDIR)/skins.o: $(OBJDIR)/skins_.c $(OBJDIR)/skins.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/skins.o -c $(OBJDIR)/skins_.c skins.h: $(OBJDIR)/headers $(OBJDIR)/sqlcmd_.c: $(SRCDIR)/sqlcmd.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/sqlcmd.c >$(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.o: $(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/sqlcmd.o -c $(OBJDIR)/sqlcmd_.c sqlcmd.h: $(OBJDIR)/headers $(OBJDIR)/stash_.c: $(SRCDIR)/stash.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/stash.c >$(OBJDIR)/stash_.c $(OBJDIR)/stash.o: $(OBJDIR)/stash_.c $(OBJDIR)/stash.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/stash.o -c $(OBJDIR)/stash_.c stash.h: $(OBJDIR)/headers $(OBJDIR)/stat_.c: $(SRCDIR)/stat.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/stat.c >$(OBJDIR)/stat_.c $(OBJDIR)/stat.o: $(OBJDIR)/stat_.c $(OBJDIR)/stat.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/stat.o -c 
$(OBJDIR)/stat_.c stat.h: $(OBJDIR)/headers $(OBJDIR)/style_.c: $(SRCDIR)/style.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/style.c >$(OBJDIR)/style_.c $(OBJDIR)/style.o: $(OBJDIR)/style_.c $(OBJDIR)/style.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/style.o -c $(OBJDIR)/style_.c style.h: $(OBJDIR)/headers $(OBJDIR)/sync_.c: $(SRCDIR)/sync.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/sync.c >$(OBJDIR)/sync_.c $(OBJDIR)/sync.o: $(OBJDIR)/sync_.c $(OBJDIR)/sync.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/sync.o -c $(OBJDIR)/sync_.c sync.h: $(OBJDIR)/headers $(OBJDIR)/tag_.c: $(SRCDIR)/tag.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/tag.c >$(OBJDIR)/tag_.c $(OBJDIR)/tag.o: $(OBJDIR)/tag_.c $(OBJDIR)/tag.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tag.o -c $(OBJDIR)/tag_.c tag.h: $(OBJDIR)/headers $(OBJDIR)/tar_.c: $(SRCDIR)/tar.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/tar.c >$(OBJDIR)/tar_.c $(OBJDIR)/tar.o: $(OBJDIR)/tar_.c $(OBJDIR)/tar.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tar.o -c $(OBJDIR)/tar_.c tar.h: $(OBJDIR)/headers $(OBJDIR)/th_main_.c: $(SRCDIR)/th_main.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/th_main.c >$(OBJDIR)/th_main_.c $(OBJDIR)/th_main.o: $(OBJDIR)/th_main_.c $(OBJDIR)/th_main.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/th_main.o -c $(OBJDIR)/th_main_.c th_main.h: $(OBJDIR)/headers $(OBJDIR)/timeline_.c: $(SRCDIR)/timeline.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/timeline.c >$(OBJDIR)/timeline_.c $(OBJDIR)/timeline.o: $(OBJDIR)/timeline_.c $(OBJDIR)/timeline.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/timeline.o -c $(OBJDIR)/timeline_.c timeline.h: $(OBJDIR)/headers $(OBJDIR)/tkt_.c: $(SRCDIR)/tkt.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/tkt.c >$(OBJDIR)/tkt_.c $(OBJDIR)/tkt.o: $(OBJDIR)/tkt_.c $(OBJDIR)/tkt.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tkt.o -c $(OBJDIR)/tkt_.c tkt.h: $(OBJDIR)/headers $(OBJDIR)/tktsetup_.c: $(SRCDIR)/tktsetup.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/tktsetup.c >$(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.o: $(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/tktsetup.o -c $(OBJDIR)/tktsetup_.c tktsetup.h: $(OBJDIR)/headers $(OBJDIR)/undo_.c: $(SRCDIR)/undo.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/undo.c >$(OBJDIR)/undo_.c $(OBJDIR)/undo.o: $(OBJDIR)/undo_.c $(OBJDIR)/undo.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/undo.o -c $(OBJDIR)/undo_.c undo.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/update.c >$(OBJDIR)/update_.c $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c update.h: $(OBJDIR)/headers $(OBJDIR)/url_.c: $(SRCDIR)/url.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/url.c >$(OBJDIR)/url_.c $(OBJDIR)/url.o: $(OBJDIR)/url_.c $(OBJDIR)/url.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/url.o -c $(OBJDIR)/url_.c url.h: $(OBJDIR)/headers $(OBJDIR)/user_.c: $(SRCDIR)/user.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/user.c >$(OBJDIR)/user_.c $(OBJDIR)/user.o: $(OBJDIR)/user_.c $(OBJDIR)/user.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/user.o -c $(OBJDIR)/user_.c user.h: $(OBJDIR)/headers $(OBJDIR)/verify_.c: $(SRCDIR)/verify.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/verify.c >$(OBJDIR)/verify_.c $(OBJDIR)/verify.o: $(OBJDIR)/verify_.c $(OBJDIR)/verify.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/verify.o -c $(OBJDIR)/verify_.c verify.h: $(OBJDIR)/headers $(OBJDIR)/vfile_.c: $(SRCDIR)/vfile.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/vfile.c >$(OBJDIR)/vfile_.c 
$(OBJDIR)/vfile.o: $(OBJDIR)/vfile_.c $(OBJDIR)/vfile.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/vfile.o -c $(OBJDIR)/vfile_.c vfile.h: $(OBJDIR)/headers $(OBJDIR)/wiki_.c: $(SRCDIR)/wiki.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/wiki.c >$(OBJDIR)/wiki_.c $(OBJDIR)/wiki.o: $(OBJDIR)/wiki_.c $(OBJDIR)/wiki.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/wiki.o -c $(OBJDIR)/wiki_.c wiki.h: $(OBJDIR)/headers $(OBJDIR)/wikiformat_.c: $(SRCDIR)/wikiformat.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/wikiformat.c >$(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.o: $(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/wikiformat.o -c $(OBJDIR)/wikiformat_.c wikiformat.h: $(OBJDIR)/headers $(OBJDIR)/winhttp_.c: $(SRCDIR)/winhttp.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/winhttp.c >$(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.o: $(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/winhttp.o -c $(OBJDIR)/winhttp_.c winhttp.h: $(OBJDIR)/headers $(OBJDIR)/xfer_.c: $(SRCDIR)/xfer.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/xfer.c >$(OBJDIR)/xfer_.c $(OBJDIR)/xfer.o: $(OBJDIR)/xfer_.c $(OBJDIR)/xfer.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/xfer.o -c $(OBJDIR)/xfer_.c xfer.h: $(OBJDIR)/headers $(OBJDIR)/zip_.c: $(SRCDIR)/zip.c $(OBJDIR)/translate $(TRANSLATE) $(SRCDIR)/zip.c >$(OBJDIR)/zip_.c $(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c zip.h: $(OBJDIR)/headers $(OBJDIR)/sqlite3.o: $(SRCDIR)/sqlite3.c $(XTCC) -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_ENABLE_STAT2 -Dlocaltime=fossil_localtime -DSQLITE_ENABLE_LOCKING_STYLE=0 -c $(SRCDIR)/sqlite3.c -o $(OBJDIR)/sqlite3.o $(OBJDIR)/shell.o: $(SRCDIR)/shell.c $(XTCC) -Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 -c $(SRCDIR)/shell.c -o $(OBJDIR)/shell.o $(OBJDIR)/th.o: $(SRCDIR)/th.c $(XTCC) -I$(SRCDIR) -c $(SRCDIR)/th.c -o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c $(XTCC) -I$(SRCDIR) -c $(SRCDIR)/th_lang.c -o $(OBJDIR)/th_lang.o |
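A quick way to try the new mingw makefile, assuming a native MinGW gcc, the MSYS tools it relies on for rm and mkdir, and zlib under the ZLIBDIR location shown above, is to run it from the top of the source tree:

   make -f win/Makefile.mingw
   make -f win/Makefile.mingw FOSSIL_ENABLE_SSL=1

The second form is the optional HTTPS build and additionally needs MinGW builds of OpenSSL. Object files land in wbld/ and the finished fossil.exe is written to the current directory.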
Added win/Makefile.mingw32cross.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 | #!/usr/bin/make # This makefile is for use with the Mingw32 *cross compiler* on Linux # #### The toplevel directory of the source tree. Fossil can be built # in a directory that is separate from the source tree. Just change # the following to point from the build directory to the src/ folder. # SRCDIR = ./src OBJDIR = ./wobj #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # BCC = gcc -s -O2 #### The suffix to add to executable files. ".exe" for windows. # Nothing for unix. # E = .exe #### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) # FOSSIL_ENABLE_SSL=1 #### C Compile and options for use in building executables that # will run on the target platform. This is usually the same # as BCC, unless you are cross-compiling. This C compiler builds # the finished binary for fossil. The BCC compiler above is used # for building intermediate code-generator tools. # #TCC = gcc -O6 #TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage #TCC = gcc -g -Os -Wall #TCC = gcc -g -Os -Wall -DFOSSIL_I18N=0 -L/usr/local/lib -I/usr/local/include TCC=i686-pc-mingw32-gcc -O3 -s -Wall -DFOSSIL_I18N=0 # With HTTPS support ifdef FOSSIL_ENABLE_SSL TCC += -DFOSSIL_ENABLE_SSL=1 endif #### Extra arguments for linking the finished binary. Fossil needs # to link against the Z-Lib compression library. There are no # other dependencies. We sometimes add the -static option here # so that we can build a static executable that will run in a # chroot jail. # LIB=-lz -lssl -lcrypto -lwsock32 -lgdi32 CFLAGS=-O3 -s LDFLAGS=-s #### Tcl shell for use in running the fossil testsuite. # TCLSH = tclsh #### Include a configuration file that can override any one of these settings. # -include config.w32 # You should not need to change anything below this line ############################################################################### include $(SRCDIR)/main.mk |
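To cross-build from Linux with this makefile, the i686-pc-mingw32-gcc toolchain named in TCC above, together with MinGW builds of zlib and OpenSSL for the LIB line, needs to be on the PATH. Then, from the top of the source tree:

   make -f win/Makefile.mingw32cross

Because it simply includes src/main.mk, the same translate/mkindex/makeheaders steps sketched earlier run here as well, just with the cross compiler as TCC.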
Changes to win/Makefile.msc.
1 2 3 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this | | < > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 | # DO NOT EDIT # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh src/makemake.tcl" # to regenerate this file. B = .. SRCDIR = $B\src OBJDIR = . OX = . O = .obj E = .exe # Maybe MSCDIR, SSL, ZLIB, or INCL needs adjustment MSCDIR = c:\msc # Uncomment below for SSL support |
︙ | ︙ | |||
26 27 28 29 30 31 32 | #ZLIB = zdll.lib ZINCDIR = $(MSCDIR)\extra\include ZLIBDIR = $(MSCDIR)\extra\lib ZLIB = zlib.lib INCL = -I. -I$(SRCDIR) -I$B\win\include -I$(MSCDIR)\extra\include -I$(ZINCDIR) | < < < | > > | < > | | | | | | | > > > | | | | | | | | | > > > > > > | | | | | | | | | | | | | | | | | | | | > > > > > > > > > > > > | | > > > > > > | > > > > > > | | | | > > > > > > | > > > > > > | | | | | | | > > > > > > | | | | | | | | | | | | | > > > > > > > > > > > > | | | | > > > > > > | | | | | | | | | | | | | | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 | #ZLIB = zdll.lib ZINCDIR = $(MSCDIR)\extra\include ZLIBDIR = $(MSCDIR)\extra\lib ZLIB = zlib.lib INCL = -I. 
-I$(SRCDIR) -I$B\win\include -I$(MSCDIR)\extra\include -I$(ZINCDIR) CFLAGS = -nologo -MT -O2 BCC = $(CC) $(CFLAGS) TCC = $(CC) -c $(CFLAGS) $(MSCDEF) $(SSL) $(INCL) LIBS = $(ZLIB) ws2_32.lib $(SSLLIB) LIBDIR = -LIBPATH:$(MSCDIR)\extra\lib -LIBPATH:$(ZLIBDIR) SQLITE_OPTIONS = /DSQLITE_OMIT_LOAD_EXTENSION=1 /DSQLITE_THREADSAFE=0 /DSQLITE_DEFAULT_FILE_FORMAT=4 /DSQLITE_ENABLE_STAT2 /Dlocaltime=fossil_localtime /DSQLITE_ENABLE_LOCKING_STYLE=0 SRC = add_.c allrepo_.c attach_.c bag_.c bisect_.c blob_.c branch_.c browse_.c captcha_.c cgi_.c checkin_.c checkout_.c clearsign_.c clone_.c comformat_.c configure_.c content_.c db_.c delta_.c deltacmd_.c descendants_.c diff_.c diffcmd_.c doc_.c encode_.c event_.c export_.c file_.c finfo_.c glob_.c graph_.c gzip_.c http_.c http_socket_.c http_ssl_.c http_transport_.c import_.c info_.c leaf_.c login_.c main_.c manifest_.c md5_.c merge_.c merge3_.c name_.c path_.c pivot_.c popen_.c pqueue_.c printf_.c rebuild_.c report_.c rss_.c schema_.c search_.c setup_.c sha1_.c shun_.c skins_.c sqlcmd_.c stash_.c stat_.c style_.c sync_.c tag_.c tar_.c th_main_.c timeline_.c tkt_.c tktsetup_.c undo_.c update_.c url_.c user_.c verify_.c vfile_.c wiki_.c wikiformat_.c winhttp_.c xfer_.c zip_.c OBJ = $(OX)\add$O $(OX)\allrepo$O $(OX)\attach$O $(OX)\bag$O $(OX)\bisect$O $(OX)\blob$O $(OX)\branch$O $(OX)\browse$O $(OX)\captcha$O $(OX)\cgi$O $(OX)\checkin$O $(OX)\checkout$O $(OX)\clearsign$O $(OX)\clone$O $(OX)\comformat$O $(OX)\configure$O $(OX)\content$O $(OX)\db$O $(OX)\delta$O $(OX)\deltacmd$O $(OX)\descendants$O $(OX)\diff$O $(OX)\diffcmd$O $(OX)\doc$O $(OX)\encode$O $(OX)\event$O $(OX)\export$O $(OX)\file$O $(OX)\finfo$O $(OX)\glob$O $(OX)\graph$O $(OX)\gzip$O $(OX)\http$O $(OX)\http_socket$O $(OX)\http_ssl$O $(OX)\http_transport$O $(OX)\import$O $(OX)\info$O $(OX)\leaf$O $(OX)\login$O $(OX)\main$O $(OX)\manifest$O $(OX)\md5$O $(OX)\merge$O $(OX)\merge3$O $(OX)\name$O $(OX)\path$O $(OX)\pivot$O $(OX)\popen$O $(OX)\pqueue$O $(OX)\printf$O $(OX)\rebuild$O $(OX)\report$O $(OX)\rss$O $(OX)\schema$O $(OX)\search$O $(OX)\setup$O $(OX)\sha1$O $(OX)\shun$O $(OX)\skins$O $(OX)\sqlcmd$O $(OX)\stash$O $(OX)\stat$O $(OX)\style$O $(OX)\sync$O $(OX)\tag$O $(OX)\tar$O $(OX)\th_main$O $(OX)\timeline$O $(OX)\tkt$O $(OX)\tktsetup$O $(OX)\undo$O $(OX)\update$O $(OX)\url$O $(OX)\user$O $(OX)\verify$O $(OX)\vfile$O $(OX)\wiki$O $(OX)\wikiformat$O $(OX)\winhttp$O $(OX)\xfer$O $(OX)\zip$O $(OX)\shell$O $(OX)\sqlite3$O $(OX)\th$O $(OX)\th_lang$O APPNAME = $(OX)\fossil$(E) all: $(OX) $(APPNAME) $(APPNAME) : translate$E mkindex$E headers $(OBJ) $(OX)\linkopts cd $(OX) link -LINK -OUT:$@ $(LIBDIR) @linkopts $(OX)\linkopts: $B\win\Makefile.msc echo add allrepo attach bag bisect blob branch browse captcha cgi checkin checkout clearsign clone comformat configure content db delta deltacmd descendants diff diffcmd doc encode event export file finfo glob graph gzip http http_socket http_ssl http_transport import info leaf login main manifest md5 merge merge3 name path pivot popen pqueue printf rebuild report rss schema search setup sha1 shun skins sqlcmd stash stat style sync tag tar th_main timeline tkt tktsetup undo update url user verify vfile wiki wikiformat winhttp xfer zip sqlite3 th th_lang > $@ echo $(LIBS) >> $@ $(OX): @-mkdir $@ translate$E: $(SRCDIR)\translate.c $(BCC) $** makeheaders$E: $(SRCDIR)\makeheaders.c $(BCC) $** mkindex$E: $(SRCDIR)\mkindex.c $(BCC) $** version$E: $B\win\version.c $(BCC) $** $(OX)\shell$O : $(SRCDIR)\shell.c $(TCC) /Fo$@ /Dmain=sqlite3_shell $(SQLITE_OPTIONS) -c 
shell_.c $(OX)\sqlite3$O : $(SRCDIR)\sqlite3.c $(TCC) /Fo$@ -c $(SQLITE_OPTIONS) $** $(OX)\th$O : $(SRCDIR)\th.c $(TCC) /Fo$@ -c $** $(OX)\th_lang$O : $(SRCDIR)\th_lang.c $(TCC) /Fo$@ -c $** VERSION.h : version$E $B\manifest.uuid $B\manifest $** > $@ page_index.h: mkindex$E $(SRC) $** > $@ clean: -del $(OX)\*.obj -del *.obj *_.c *.h *.map -del headers linkopts realclean: -del $(APPNAME) translate$E mkindex$E makeheaders$E version$E $(OX)\add$O : add_.c add.h $(TCC) /Fo$@ -c add_.c add_.c : $(SRCDIR)\add.c translate$E $** > $@ $(OX)\allrepo$O : allrepo_.c allrepo.h $(TCC) /Fo$@ -c allrepo_.c allrepo_.c : $(SRCDIR)\allrepo.c translate$E $** > $@ $(OX)\attach$O : attach_.c attach.h $(TCC) /Fo$@ -c attach_.c attach_.c : $(SRCDIR)\attach.c translate$E $** > $@ $(OX)\bag$O : bag_.c bag.h $(TCC) /Fo$@ -c bag_.c bag_.c : $(SRCDIR)\bag.c translate$E $** > $@ $(OX)\bisect$O : bisect_.c bisect.h $(TCC) /Fo$@ -c bisect_.c bisect_.c : $(SRCDIR)\bisect.c translate$E $** > $@ $(OX)\blob$O : blob_.c blob.h $(TCC) /Fo$@ -c blob_.c blob_.c : $(SRCDIR)\blob.c translate$E $** > $@ $(OX)\branch$O : branch_.c branch.h $(TCC) /Fo$@ -c branch_.c branch_.c : $(SRCDIR)\branch.c translate$E $** > $@ $(OX)\browse$O : browse_.c browse.h $(TCC) /Fo$@ -c browse_.c browse_.c : $(SRCDIR)\browse.c translate$E $** > $@ $(OX)\captcha$O : captcha_.c captcha.h $(TCC) /Fo$@ -c captcha_.c captcha_.c : $(SRCDIR)\captcha.c translate$E $** > $@ $(OX)\cgi$O : cgi_.c cgi.h $(TCC) /Fo$@ -c cgi_.c cgi_.c : $(SRCDIR)\cgi.c translate$E $** > $@ $(OX)\checkin$O : checkin_.c checkin.h $(TCC) /Fo$@ -c checkin_.c checkin_.c : $(SRCDIR)\checkin.c translate$E $** > $@ $(OX)\checkout$O : checkout_.c checkout.h $(TCC) /Fo$@ -c checkout_.c checkout_.c : $(SRCDIR)\checkout.c translate$E $** > $@ $(OX)\clearsign$O : clearsign_.c clearsign.h $(TCC) /Fo$@ -c clearsign_.c clearsign_.c : $(SRCDIR)\clearsign.c translate$E $** > $@ $(OX)\clone$O : clone_.c clone.h $(TCC) /Fo$@ -c clone_.c clone_.c : $(SRCDIR)\clone.c translate$E $** > $@ $(OX)\comformat$O : comformat_.c comformat.h $(TCC) /Fo$@ -c comformat_.c comformat_.c : $(SRCDIR)\comformat.c translate$E $** > $@ $(OX)\configure$O : configure_.c configure.h $(TCC) /Fo$@ -c configure_.c configure_.c : $(SRCDIR)\configure.c translate$E $** > $@ $(OX)\content$O : content_.c content.h $(TCC) /Fo$@ -c content_.c content_.c : $(SRCDIR)\content.c translate$E $** > $@ $(OX)\db$O : db_.c db.h $(TCC) /Fo$@ -c db_.c db_.c : $(SRCDIR)\db.c translate$E $** > $@ $(OX)\delta$O : delta_.c delta.h $(TCC) /Fo$@ -c delta_.c delta_.c : $(SRCDIR)\delta.c translate$E $** > $@ $(OX)\deltacmd$O : deltacmd_.c deltacmd.h $(TCC) /Fo$@ -c deltacmd_.c deltacmd_.c : $(SRCDIR)\deltacmd.c translate$E $** > $@ $(OX)\descendants$O : descendants_.c descendants.h $(TCC) /Fo$@ -c descendants_.c descendants_.c : $(SRCDIR)\descendants.c translate$E $** > $@ $(OX)\diff$O : diff_.c diff.h $(TCC) /Fo$@ -c diff_.c diff_.c : $(SRCDIR)\diff.c translate$E $** > $@ $(OX)\diffcmd$O : diffcmd_.c diffcmd.h $(TCC) /Fo$@ -c diffcmd_.c diffcmd_.c : $(SRCDIR)\diffcmd.c translate$E $** > $@ $(OX)\doc$O : doc_.c doc.h $(TCC) /Fo$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c translate$E $** > $@ $(OX)\encode$O : encode_.c encode.h $(TCC) /Fo$@ -c encode_.c encode_.c : $(SRCDIR)\encode.c translate$E $** > $@ $(OX)\event$O : event_.c event.h $(TCC) /Fo$@ -c event_.c event_.c : $(SRCDIR)\event.c translate$E $** > $@ $(OX)\export$O : export_.c export.h $(TCC) /Fo$@ -c export_.c export_.c : $(SRCDIR)\export.c translate$E $** > $@ $(OX)\file$O : file_.c file.h $(TCC) 
/Fo$@ -c file_.c file_.c : $(SRCDIR)\file.c translate$E $** > $@ $(OX)\finfo$O : finfo_.c finfo.h $(TCC) /Fo$@ -c finfo_.c finfo_.c : $(SRCDIR)\finfo.c translate$E $** > $@ $(OX)\glob$O : glob_.c glob.h $(TCC) /Fo$@ -c glob_.c glob_.c : $(SRCDIR)\glob.c translate$E $** > $@ $(OX)\graph$O : graph_.c graph.h $(TCC) /Fo$@ -c graph_.c graph_.c : $(SRCDIR)\graph.c translate$E $** > $@ $(OX)\gzip$O : gzip_.c gzip.h $(TCC) /Fo$@ -c gzip_.c gzip_.c : $(SRCDIR)\gzip.c translate$E $** > $@ $(OX)\http$O : http_.c http.h $(TCC) /Fo$@ -c http_.c http_.c : $(SRCDIR)\http.c translate$E $** > $@ $(OX)\http_socket$O : http_socket_.c http_socket.h $(TCC) /Fo$@ -c http_socket_.c http_socket_.c : $(SRCDIR)\http_socket.c translate$E $** > $@ $(OX)\http_ssl$O : http_ssl_.c http_ssl.h $(TCC) /Fo$@ -c http_ssl_.c http_ssl_.c : $(SRCDIR)\http_ssl.c translate$E $** > $@ $(OX)\http_transport$O : http_transport_.c http_transport.h $(TCC) /Fo$@ -c http_transport_.c http_transport_.c : $(SRCDIR)\http_transport.c translate$E $** > $@ $(OX)\import$O : import_.c import.h $(TCC) /Fo$@ -c import_.c import_.c : $(SRCDIR)\import.c translate$E $** > $@ $(OX)\info$O : info_.c info.h $(TCC) /Fo$@ -c info_.c info_.c : $(SRCDIR)\info.c translate$E $** > $@ $(OX)\leaf$O : leaf_.c leaf.h $(TCC) /Fo$@ -c leaf_.c leaf_.c : $(SRCDIR)\leaf.c translate$E $** > $@ $(OX)\login$O : login_.c login.h $(TCC) /Fo$@ -c login_.c login_.c : $(SRCDIR)\login.c translate$E $** > $@ $(OX)\main$O : main_.c main.h $(TCC) /Fo$@ -c main_.c main_.c : $(SRCDIR)\main.c translate$E $** > $@ $(OX)\manifest$O : manifest_.c manifest.h $(TCC) /Fo$@ -c manifest_.c manifest_.c : $(SRCDIR)\manifest.c translate$E $** > $@ $(OX)\md5$O : md5_.c md5.h $(TCC) /Fo$@ -c md5_.c md5_.c : $(SRCDIR)\md5.c translate$E $** > $@ $(OX)\merge$O : merge_.c merge.h $(TCC) /Fo$@ -c merge_.c merge_.c : $(SRCDIR)\merge.c translate$E $** > $@ $(OX)\merge3$O : merge3_.c merge3.h $(TCC) /Fo$@ -c merge3_.c merge3_.c : $(SRCDIR)\merge3.c translate$E $** > $@ $(OX)\name$O : name_.c name.h $(TCC) /Fo$@ -c name_.c name_.c : $(SRCDIR)\name.c translate$E $** > $@ $(OX)\path$O : path_.c path.h $(TCC) /Fo$@ -c path_.c path_.c : $(SRCDIR)\path.c translate$E $** > $@ $(OX)\pivot$O : pivot_.c pivot.h $(TCC) /Fo$@ -c pivot_.c pivot_.c : $(SRCDIR)\pivot.c translate$E $** > $@ $(OX)\popen$O : popen_.c popen.h $(TCC) /Fo$@ -c popen_.c popen_.c : $(SRCDIR)\popen.c translate$E $** > $@ $(OX)\pqueue$O : pqueue_.c pqueue.h $(TCC) /Fo$@ -c pqueue_.c pqueue_.c : $(SRCDIR)\pqueue.c translate$E $** > $@ $(OX)\printf$O : printf_.c printf.h $(TCC) /Fo$@ -c printf_.c printf_.c : $(SRCDIR)\printf.c translate$E $** > $@ $(OX)\rebuild$O : rebuild_.c rebuild.h $(TCC) /Fo$@ -c rebuild_.c rebuild_.c : $(SRCDIR)\rebuild.c translate$E $** > $@ $(OX)\report$O : report_.c report.h $(TCC) /Fo$@ -c report_.c report_.c : $(SRCDIR)\report.c translate$E $** > $@ $(OX)\rss$O : rss_.c rss.h $(TCC) /Fo$@ -c rss_.c rss_.c : $(SRCDIR)\rss.c translate$E $** > $@ $(OX)\schema$O : schema_.c schema.h $(TCC) /Fo$@ -c schema_.c schema_.c : $(SRCDIR)\schema.c translate$E $** > $@ $(OX)\search$O : search_.c search.h $(TCC) /Fo$@ -c search_.c search_.c : $(SRCDIR)\search.c translate$E $** > $@ $(OX)\setup$O : setup_.c setup.h $(TCC) /Fo$@ -c setup_.c setup_.c : $(SRCDIR)\setup.c translate$E $** > $@ $(OX)\sha1$O : sha1_.c sha1.h $(TCC) /Fo$@ -c sha1_.c sha1_.c : $(SRCDIR)\sha1.c translate$E $** > $@ $(OX)\shun$O : shun_.c shun.h $(TCC) /Fo$@ -c shun_.c shun_.c : $(SRCDIR)\shun.c translate$E $** > $@ $(OX)\skins$O : skins_.c skins.h $(TCC) /Fo$@ 
-c skins_.c skins_.c : $(SRCDIR)\skins.c translate$E $** > $@ $(OX)\sqlcmd$O : sqlcmd_.c sqlcmd.h $(TCC) /Fo$@ -c sqlcmd_.c sqlcmd_.c : $(SRCDIR)\sqlcmd.c translate$E $** > $@ $(OX)\stash$O : stash_.c stash.h $(TCC) /Fo$@ -c stash_.c stash_.c : $(SRCDIR)\stash.c translate$E $** > $@ $(OX)\stat$O : stat_.c stat.h $(TCC) /Fo$@ -c stat_.c stat_.c : $(SRCDIR)\stat.c translate$E $** > $@ $(OX)\style$O : style_.c style.h $(TCC) /Fo$@ -c style_.c style_.c : $(SRCDIR)\style.c translate$E $** > $@ $(OX)\sync$O : sync_.c sync.h $(TCC) /Fo$@ -c sync_.c sync_.c : $(SRCDIR)\sync.c translate$E $** > $@ $(OX)\tag$O : tag_.c tag.h $(TCC) /Fo$@ -c tag_.c tag_.c : $(SRCDIR)\tag.c translate$E $** > $@ $(OX)\tar$O : tar_.c tar.h $(TCC) /Fo$@ -c tar_.c tar_.c : $(SRCDIR)\tar.c translate$E $** > $@ $(OX)\th_main$O : th_main_.c th_main.h $(TCC) /Fo$@ -c th_main_.c th_main_.c : $(SRCDIR)\th_main.c translate$E $** > $@ $(OX)\timeline$O : timeline_.c timeline.h $(TCC) /Fo$@ -c timeline_.c timeline_.c : $(SRCDIR)\timeline.c translate$E $** > $@ $(OX)\tkt$O : tkt_.c tkt.h $(TCC) /Fo$@ -c tkt_.c tkt_.c : $(SRCDIR)\tkt.c translate$E $** > $@ $(OX)\tktsetup$O : tktsetup_.c tktsetup.h $(TCC) /Fo$@ -c tktsetup_.c tktsetup_.c : $(SRCDIR)\tktsetup.c translate$E $** > $@ $(OX)\undo$O : undo_.c undo.h $(TCC) /Fo$@ -c undo_.c undo_.c : $(SRCDIR)\undo.c translate$E $** > $@ $(OX)\update$O : update_.c update.h $(TCC) /Fo$@ -c update_.c update_.c : $(SRCDIR)\update.c translate$E $** > $@ $(OX)\url$O : url_.c url.h $(TCC) /Fo$@ -c url_.c url_.c : $(SRCDIR)\url.c translate$E $** > $@ $(OX)\user$O : user_.c user.h $(TCC) /Fo$@ -c user_.c user_.c : $(SRCDIR)\user.c translate$E $** > $@ $(OX)\verify$O : verify_.c verify.h $(TCC) /Fo$@ -c verify_.c verify_.c : $(SRCDIR)\verify.c translate$E $** > $@ $(OX)\vfile$O : vfile_.c vfile.h $(TCC) /Fo$@ -c vfile_.c vfile_.c : $(SRCDIR)\vfile.c translate$E $** > $@ $(OX)\wiki$O : wiki_.c wiki.h $(TCC) /Fo$@ -c wiki_.c wiki_.c : $(SRCDIR)\wiki.c translate$E $** > $@ $(OX)\wikiformat$O : wikiformat_.c wikiformat.h $(TCC) /Fo$@ -c wikiformat_.c wikiformat_.c : $(SRCDIR)\wikiformat.c translate$E $** > $@ $(OX)\winhttp$O : winhttp_.c winhttp.h $(TCC) /Fo$@ -c winhttp_.c winhttp_.c : $(SRCDIR)\winhttp.c translate$E $** > $@ $(OX)\xfer$O : xfer_.c xfer.h $(TCC) /Fo$@ -c xfer_.c xfer_.c : $(SRCDIR)\xfer.c translate$E $** > $@ $(OX)\zip$O : zip_.c zip.h $(TCC) /Fo$@ -c zip_.c zip_.c : $(SRCDIR)\zip.c translate$E $** > $@ headers: makeheaders$E page_index.h VERSION.h makeheaders$E add_.c:add.h allrepo_.c:allrepo.h attach_.c:attach.h bag_.c:bag.h bisect_.c:bisect.h blob_.c:blob.h branch_.c:branch.h browse_.c:browse.h captcha_.c:captcha.h cgi_.c:cgi.h checkin_.c:checkin.h checkout_.c:checkout.h clearsign_.c:clearsign.h clone_.c:clone.h comformat_.c:comformat.h configure_.c:configure.h content_.c:content.h db_.c:db.h delta_.c:delta.h deltacmd_.c:deltacmd.h descendants_.c:descendants.h diff_.c:diff.h diffcmd_.c:diffcmd.h doc_.c:doc.h encode_.c:encode.h event_.c:event.h export_.c:export.h file_.c:file.h finfo_.c:finfo.h glob_.c:glob.h graph_.c:graph.h gzip_.c:gzip.h http_.c:http.h http_socket_.c:http_socket.h http_ssl_.c:http_ssl.h http_transport_.c:http_transport.h import_.c:import.h info_.c:info.h leaf_.c:leaf.h login_.c:login.h main_.c:main.h manifest_.c:manifest.h md5_.c:md5.h merge_.c:merge.h merge3_.c:merge3.h name_.c:name.h path_.c:path.h pivot_.c:pivot.h popen_.c:popen.h pqueue_.c:pqueue.h printf_.c:printf.h rebuild_.c:rebuild.h report_.c:report.h rss_.c:rss.h schema_.c:schema.h search_.c:search.h 
setup_.c:setup.h sha1_.c:sha1.h shun_.c:shun.h skins_.c:skins.h sqlcmd_.c:sqlcmd.h stash_.c:stash.h stat_.c:stat.h style_.c:style.h sync_.c:sync.h tag_.c:tag.h tar_.c:tar.h th_main_.c:th_main.h timeline_.c:timeline.h tkt_.c:tkt.h tktsetup_.c:tktsetup.h undo_.c:undo.h update_.c:update.h url_.c:url.h user_.c:user.h verify_.c:verify.h vfile_.c:vfile.h wiki_.c:wiki.h wikiformat_.c:wikiformat.h winhttp_.c:winhttp.h xfer_.c:xfer.h zip_.c:zip.h $(SRCDIR)\sqlite3.h $(SRCDIR)\th.h VERSION.h @copy /Y nul: headers |
Added win/icon.rc.
ID ICON "../win/fossil.ico"
Changes to win/version.c.
/*
** This C program exists to do the job that AWK would do for the unix
** makefile - to extract information from the "manifest" and "manifest.uuid"
** files for this project in order to generate the "VERSION.h" header file.
*/
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]){
  FILE *m,*u;
  char b[10240];
  u = fopen(argv[1],"r");
  fgets(b, sizeof(b)-1, u);
  b[strlen(b)-1] = 0;
︙ | ︙ |
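The remainder of win/version.c is elided above.  Purely as a hypothetical
sketch, a generator of this kind could read the check-in UUID from
manifest.uuid and the D-card date line from manifest, then emit #define
lines for the header.  The macro names MANIFEST_UUID and MANIFEST_DATE
below are illustrative assumptions, not a claim about what Fossil's
version.c actually emits.

/*
** Hypothetical sketch only: shows one way the two input files could be
** turned into a VERSION.h header.  Macro names are illustrative.
*/
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]){
  FILE *u, *m;
  char uuid[200];      /* first line of manifest.uuid: the check-in SHA1 */
  char line[10240];    /* one line of the manifest */

  if( argc<3 ) return 1;
  u = fopen(argv[1], "r");                 /* argv[1] = manifest.uuid */
  if( u==0 || fgets(uuid, sizeof(uuid), u)==0 ) return 1;
  uuid[strcspn(uuid, "\r\n")] = 0;         /* strip the trailing newline */
  fclose(u);
  printf("#define MANIFEST_UUID \"%s\"\n", uuid);

  m = fopen(argv[2], "r");                 /* argv[2] = manifest */
  if( m==0 ) return 1;
  while( fgets(line, sizeof(line), m) ){
    if( line[0]=='D' && line[1]==' ' ){    /* the "D" card holds the date */
      line[strcspn(line, "\r\n")] = 0;
      printf("#define MANIFEST_DATE \"%s\"\n", line+2);
      break;
    }
  }
  fclose(m);
  return 0;
}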
Changes to www/branching.wiki.
<title>Branching, Forking, Merging, and Tagging</title>

<h2>Background</h2>

In a simple and perfect world, the development of a project would proceed
linearly, as shown in figure 1.

<center><table border=1 cellpadding=10 hspace=10 vspace=10>
<tr><td align="center">
<img src="branch01.gif" width=280 height=68><br>
︙ | ︙ | |||
was created by making edits to check-in 1 and then committing those edits.
We say that 2 is a <i>child</i> of 1 and that 1 is a <i>parent</i> of 2.
Check-in 3 is derived from check-in 2, making 3 a child of 2.  We say that
3 is a <i>descendant</i> of both 1 and 2 and that 1 and 2 are both
<i>ancestors</i> of 3.

<a name="dag"></a>
<h2>DAGs</h2>

The graph of check-ins is a
[http://en.wikipedia.org/wiki/Directed_acyclic_graph | directed acyclic graph]
and so we will henceforth call it a <i>DAG</i>.  Check-in 1 is the
<i>root</i> of the DAG since it has no ancestors.  Check-in 4 is a
<i>leaf</i> of the DAG since it has no descendants.  (We will give a more
precise definition of "leaf" later.)

Alas, reality often interferes with the simple linear development of a
project.  Suppose two programmers make independent modifications to
check-in 2.  After both changes are checked in, we have a check-in graph
that looks like figure 2:
︙ | ︙ | |||
53 54 55 56 57 58 59 | tried to commit his changes, fossil would try to verify that check-in 2 was still a leaf. Fossil would see that check-in 3 had occurred and would abort Bob's commit attempt with a message "would fork". This allows Bob to do a "fossil update" which would pull in Alice's changes and merge them together with his own changes. After merging, Bob could then commit check-in 4 as a child of check-in 3 and the result would be a linear graph as shown in figure 1. This is how CVS works. This is also how fossil | | | | | 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | tried to commit his changes, fossil would try to verify that check-in 2 was still a leaf. Fossil would see that check-in 3 had occurred and would abort Bob's commit attempt with a message "would fork". This allows Bob to do a "fossil update" which would pull in Alice's changes and merge them together with his own changes. After merging, Bob could then commit check-in 4 as a child of check-in 3 and the result would be a linear graph as shown in figure 1. This is how CVS works. This is also how fossil works in [concepts.wiki#workflow | "autosync"] mode. But it might be that Bob is off-network when he does his commit, so he has no way of knowing that Alice has already committed her changes. Or, it could be that Bob has turned off "autosync" mode in Fossil. Or, maybe Bob just doesn't want to merge in Alice's changes before he has saved his own, so he forces the commit to occur using the "--force" option to the fossil <b>commit</b> command. For whatever reason, two commits against check-in 2 have occurred and now the DAG has two leaves. So which version of the project is the "latest" in the sense of having the most features and the most bug fixes? When there is more than one leaf in the graph, you don't really know. So we like to have graphs with a single leaf. To resolve this situation, Alice can use the fossil <b>merge</b> command to merge in Bob's changes in her local copy of check-in 3. Then she can commit the results as check-in 5. This results in a DAG as shown in figure 3. <center><table border=1 cellpadding=10 hspace=10 vspace=10> <tr><td align="center"> <img src="branch03.gif" width=282 height=152><br> Figure 3 </td></tr></table></center> |
︙ | ︙ | |||
108 109 110 111 112 113 114 | taken in figure 1 is better because it is much easier to visualize a linear line of development and because the merging happens automatically instead of as a separate manual step. We will not take sides in that debate. We will simply point out that fossil enables you to do it either way. <h2>Forking Versus Branching</h2> | | | 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | taken in figure 1 is better because it is much easier to visualize a linear line of development and because the merging happens automatically instead of as a separate manual step. We will not take sides in that debate. We will simply point out that fossil enables you to do it either way. <h2>Forking Versus Branching</h2> Having more than one leaf in the check-in DAG is usually considered undesirable, and so forks are usually either avoided entirely, as in figure 1, or else quickly resolved as shown in figure 3. But sometimes, one does want to have multiple leaves. For example, a project might have one leaf that is the latest version of the project under development and another leaf that is the latest version that has been tested. When multiple leaves are desirable, we call the phenomenon <i>branching</i> |
︙ | ︙ | |||
Here is a list of definitions of key terms:

<blockquote><dl>
<dt><b>Branch</b></dt>
<dd><p>A branch is a set of check-ins that have the same value for their
"branch" property.</p></dd>

<dt><b>Leaf</b></dt>
<dd><p>A leaf is a check-in that has no children in the same branch.</p></dd>

<dt><b>Closed Leaf</b></dt>
<dd><p>A closed leaf is a leaf that has the <b>closed</b> tag.  Such leaves
are intended to never be extended with descendants and hence are omitted
from lists of leaves in the command-line and web interface.</p></dd>

<dt><b>Open Leaf</b></dt>
︙ | ︙ | |||
but that child is in a different branch, so check-in 9 is a leaf.  Because
of the <b>closed</b> tag on check-in 9, it is a closed leaf.

Check-in 2 of figure 3 is considered a "fork" because it has two children
in the same branch.  Check-in 2 of figure 5 also has two children, but each
child is in a different branch, hence in figure 5, check-in 2 is considered
a "branch point".

<h2>Differences With Other DVCSes</h2>

Fossil keeps all check-ins on a single DAG.  Branches are identified with
tags.  This means that check-ins can be freely moved between branches
simply by altering the tags.  Most other DVCSes maintain a separate DAG
for each branch.
Changes to www/bugtheory.wiki.
<title>Bug-Tracking In Fossil</title>

<h2>Introduction</h2>

A bug-report in fossil is called a "ticket".  Tickets are tracked
separately from code check-ins.  Some other distributed bug-tracking
systems store tickets as files within the source tree and thereby leverage
the syncing and merging capabilities of the versioning system to sync and
merge tickets.  This approach is
︙ | ︙ |
Added www/build-icons/openbsd.gif.
cannot compute difference between binary files
Changes to www/build.wiki.
<title>Building and Installing Fossil</title>

<p>This page describes how to build and install Fossil.  The whole process
is designed to be very easy.</p>

<h2>0.0 Using A Pre-compiled Binary</h2>

<p>You can skip steps 1.0 and 2.0 below by downloading
︙ | ︙ | |||
39 40 41 42 43 44 45 | <li><p>Finally, click on the "Zip Archive" link. This link will build a ZIP archive of the complete source code and download it to your browser. </ol> <h2>2.0 Compiling</h2> | | > > > > > > > > > | > > > > > > > > > > | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | <li><p>Finally, click on the "Zip Archive" link. This link will build a ZIP archive of the complete source code and download it to your browser. </ol> <h2>2.0 Compiling</h2> <p>Follow these steps to compile on a unix platform (Linux, *BSD, MacOS, etc):</p> <ol> <li value="6"> <p>Create a directory to hold the source code. Then unzip the ZIP archive you downloaded into that directory. You should be in the top-level folder of that directory</p></li> <li><p><b>(Optional:)</b> Edit the Makefile to set it up like you want. You probably do not need to do anything. Do not be intimidated: There are less than 10 variables in the makefile that can be changed. The whole Makefile is only a few dozen lines long and most of those lines are comments.</p> <li><p>Type "<b>make</b>" </ol> <p>To build on windows, use an alternative makefile suitable for your particular build environment. The alternative windows makefiles are all found in the win/ subdirectory of the source tree. So, for example, if you want build using the [http://www.mingw.org/ | mingw/msys compiler package] for windows, then run "<b>make -f win/Makefile.mingw</b>" instead of just "<b>make</b>" in step 8 above.</p> <h2>3.0 Installing</h2> <ol> <li value="9"> <p>The finished binary is named "fossil" (or "fossil.exe" on windows). Put this binary in a directory that is somewhere on your PATH environment variable. It does not matter where.</p> <li> <p><b>(Optional:)</b> To uninstall, just delete the binary.</p> </ol> <h2>4.0 Additional Considerations</h2> </nowiki> * If the makefiles that come with Fossil do not work for you, or for some other reason you want to know how to build Fossil manually, then refer to the [./makefile.wiki | Fossil Build Process] document which describes in detail what the makefiles do behind the scenes. |
Added www/checkin.wiki.
<title>Checkin Checklist</title>

Before every checkin:

  1.  <b>fossil diff</b> → no stray changes

  2.  <b>fossil extra</b> → no unmanaged files need to be added.

  3.  The checkin will go onto the desired branch.

  4.  "Autosync" is enabled →
      <ol>
      <li> The checkin will not cause an unintentional fork.
      <li> The local system clock is set correctly.
      </ol>

Before every checkin to <b>trunk</b>:

  5.  No compiler warnings on the development machine.

  6.  Changes will not cause problems on a future <b>bisect</b>.
Added www/checkin_names.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | <title>Check-in Names</title> <table align="right" border="1" width="33%" cellpadding="10"> <tr><td> <h3>Executive Summary</h3> <p>A check-in can be identified using any of the following names: <ul> <li> SHA1 hash prefix <li> Tag or branchname <li> Timestamp: <i>YYYY-MM-DD HH:MM:SS</i> <li> <i>tag-name</i> <big><b>:</b></big> <i>timestamp</i> <li> Special names: <ul> <li> <b>tip</b> <li> <b>current</b> <li> <b>next</b> <li> <b>previous</b> <li> <b>ckout</b> </ul> </ul> </td></tr> </table> Many Fossil [/help|commands] and [./webui.wiki | web-interface] URLs accept check-in names as an argument. For example, the "[/help/info|info]" command accepts an optional check-in name to identify the specific checkout about which information is desired: <blockquote> <tt>fossil info</tt> <i>checkin-name</i> </blockquote> You are perhaps reading this page from the following URL: <blockquote> http://www.fossil-scm.org/fossil/doc/<b>trunk</b>/www/checkin_names.wiki </blockquote> The URL above is an example of an [./embeddeddoc.wiki | embedded documentation] page in Fossil. The bold term of the pathname is a check-in name that determines which version of the documentation to display. Fossil provides a variety of ways to specify a check-in. This document describes the various methods. <h2>Canonical Check-in Name</h2> The canonical name of a checkin is the SHA1 hash of its [./fileformat.wiki#manifest | manifest] expressed as a 40-character lowercase hexadecimal number. For example: <blockquote><pre> fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9 </pre></blockquote> The full 40-character SHA1 hash is unwieldy to remember and type, though, so Fossil also accepts a unique prefix of the hash, using any combination of upper and lower case letters, as long as the prefix is at least 4 characters long. Hence the following commands all accomplish the same thing as the above: <blockquote><pre> fossil info e5a734a19a9 fossil info E5a734A fossil info e5a7 </blockquote> Many web-interface screens identify check-ins by 10- or 16-character prefix of canonical name. <h2>Tags And Branch Names</h2> Using a tag or branch name where a check-in name is expected causes Fossil to choose the most recent check-in with that tag or branch name. So, for example, as of this writing the most recent check-in that is tagged with "release" is [d0753799e44]. 
So the command: <blockquote><pre> fossil info release </pre></blockquote> Results in the following input: <blockquote><pre> uuid: d0753799e447b795933e9f266233767d84aa1d84 2010-11-01 14:23:35 UTC parent: 4e1241f3236236187ad2a8f205323c05b98c9895 2010-10-31 21:51:11 UTC child: 4a094f46ade70bd9d1e4ffa48cbe94b4d3750aef 2010-11-01 18:52:37 UTC child: f4033ec09ee6bb2a73fa588c217527a1f311bd27 2010-11-01 23:38:34 UTC tags: trunk, release comment: Fix a typo in the file format documentation reported on the Tcl/Tk chatroom. (user: drh) </pre></blockquote> There are multiple check-ins that are tagged with "release" but (as of this writing) the [d0753799e44] check-in is the most recent so it is the one that is selected. Note that unlike other command DVCSes, a "branch" in Fossil is not anything special; it is simply a sequence of check-ins that share a common tag. So the same mechanism that resolves tag names also resolves branch names. Note also that there can (in theory) be an ambiguity between tag names and canonical names. Suppose, for example, you had a check-in with the canonical name deed28aa99a835f01fa06d5b4a41ecc2121bf419 and you also happened to have tagged a different check-in with "deed2". If you use the "deed2" name, does it choose the canonical name or the tag name? In such cases, you can prefix the tag name with "tag:". For example: <blockquote><tt> fossil info tag:deed2 </tt></blockquote> The "tag:deed2" name will refer to the most recent check-in tagged with "deed2" not to the check-in whose canonical name begins with "deed2". <h2>Timestamps</h2> A timestamp in one of the formats shown below means the most recent check-in that occurs no later than the timestamp given: * <i>YYYY-MM-DD</i> * <i>YYYY-MM-DD HH:MM</i> * <i>YYYY-MM-DD HH:MM:SS</i> The space between the day and the year can optionally be replaced by an uppercase <b>T</b> and the entire timestamp can optionally be followed by "<b>utc</b>". In its default configuration, Fossil interprets and displays all dates in Universal Coordinated Time (UTC). This tends to work the best for distributed projects where participants are scattered around the globe. But there is an option on the Admin/Timeline page of the web-interface to switch to local time. The "<b>utc</b>" suffix on an timestamp check-in name is meaningless if Fossil is in the default mode of using UTC for everything, but if Fossil has been switched to localtime mode, then the "<b>utc</b>" suffix means to interpret that particular timestamp using UTC instead localtime. As an example, consider the homepage for the Fossil website itself: <blockquote> http://www.fossil-scm.org/fossil/doc/<b>trunk</b>/www/index.wiki </blockquote> The bold component of that URL is a check-in name. To see what the Fossil website looked like on January 1, 2009, one has merely to change the URL to the following: <blockquote> http://www.fossil-scm.org/fossil/doc/<b>2009-01-01</b>/www/index.wiki </blockquote> <h2>Tag And Timestamp</h2> A check-in name can also take the form of a tag or branch name followed by a colon and then a timestamp. The combination means to take the most recent check-in with the given tag or branch which is not more recent than the timestamp. So, for example: <blockquote> fossil update trunk:2010-07-01T14:30 </blockquote> Would cause Fossil to update the working check-out to be the most recent check-in on the trunk that is not more recent that 14:30 (UTC) on July 1, 2010. <h2>Special Tags</h2> The tag "tip" means the most recent check-in. 
The "tip" tag is roughly equivalent to the timestamp tag "5000-01-01". If the command is being run from a working check-out (not against a bare repository) then a few extra tags apply. The "current" tag means the current check-out. The "next" tag means the youngest child of the current check-out. And the "previous" tag means the primary (non-merge) parent of the current check-out. <h2>Additional Examples</h2> To view the changes in the most recent check-in prior to the version currently checked out: <blockquote><pre> fossil diff --from previous --to current </pre></blockquote> Suppose you are of the habit of tagging each release with a "release" tag. Then to see everything that has changed on the trunk since the last release: <blockquote><pre> fossil diff --from release --to trunk </pre></blockquote> |
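Returning to the unique-prefix rule described near the top of this page (a
prefix of at least four hexadecimal characters, matched without regard to
case, must identify exactly one artifact), the idea can be illustrated with
a small self-contained sketch.  This is a conceptual illustration only, not
Fossil's implementation; the function name resolve_prefix is invented for
the example, while the two sample hashes are the ones used as examples
earlier on this page.

/* Conceptual illustration of unique-prefix matching; not Fossil's code. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Return the index of the single lowercase hash starting with prefix, or
** -1 if the prefix is too short, matches nothing, or is ambiguous. */
static int resolve_prefix(const char **hashes, int n, const char *prefix){
  int i, j, found = -1;
  if( strlen(prefix)<4 ) return -1;        /* prefixes must be >= 4 chars */
  for(i=0; i<n; i++){
    for(j=0; prefix[j]; j++){
      if( tolower((unsigned char)prefix[j]) != hashes[i][j] ) break;
    }
    if( prefix[j]==0 ){                    /* the whole prefix matched */
      if( found>=0 ) return -1;            /* ambiguous */
      found = i;
    }
  }
  return found;
}

int main(void){
  const char *hashes[] = {
    "e5a734a19a9826973e1d073b49dc2a16aa2308f9",
    "d0753799e447b795933e9f266233767d84aa1d84"
  };
  printf("%d\n", resolve_prefix(hashes, 2, "E5a734A"));   /* prints 0 */
  printf("%d\n", resolve_prefix(hashes, 2, "d075"));      /* prints 1 */
  return 0;
}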
Added www/contribute.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 | <title>Contributing To Fossil</title> Users are encouraged to contributed enhancements back to the Fossil project. This note outlines some of the procedures for making useful contributions. <h2>1.0 Contributor Agreement</h2> In order to accept your contributions, we <u>must</u> have a [./copyright-release.pdf | Contributor Agreement (PDF)] (or [./copyright-release.html | as HTML]) on file for you. We require this in order to maintain clear title to the Fossil code and prevent the introduction of code with incompatible licenses or other entanglements that might cause legal problems for Fossil users. Many larger companies and other lawyer-rich organizations require this as a precondition to using Fossil. If you do not wish to submit a Contributor Agreement, we would still welcome your suggestions and example code, but we will not use your code directly - we will be forced to reimplement your changes from scratch which might take longer. <h2>2.0 Submitting Patches</h2> Suggested changes or bug fixes can be submitted by creating a patch against the current source tree. Email patches to <a href="mailto:drh@sqlite.org">drh@sqlite.org</a>. Be sure to describe in detail what the patch does and which version of Fossil it is written against. A contributor agreement is not strictly necessary to submit a patch. However, without a contributor agreement on file, your patch will be used for reference only - it will not be applied to the code. This may delay acceptance of your patch. Your patches or changes might not be accepted even if you do have a contributor agreement on file. Please do not take this personally or as an affront to your coding ability. Sometimes patches are rejected because they seem to be taking the project in a direction that the architect does not want to go. Or, there might be an alternative implementation of the same feature being prepared separately. <h2>3.0 Check-in Privileges</h2> Check-in privileges are granted on a case-by-case basis. Your chances of getting check-in privileges are much improved if you have a history of submitting quality patches and/or making thoughtful posts on the [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/ | mailing list]. A contributor agreement is, of course, a prerequisite for check-in privileges.</p> Contributors are asked to make all non-trivial changes on a branch. The Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p> Contributors are required to following the [./checkin.wiki | pre-checkin checklist] prior to every checkin to the Fossil self-hosting repository. This checklist is short and succinct and should only require a few seconds to follow. Contributors should print out a copy of the pre-checkin checklist and keep it on a notecard beside their workstations, for quick reference. Contributors should review the [./style.wiki | Coding Style Guidelines] and mimic the coding style used through the rest of the Fossil source code. Your code should blend in. A third-party reader should be unable to distinguish your code from any other code in the source corpus. 
<h2>4.0 Testing</h2> Fossil has the beginnings of a [../test/release-checklist.wiki | release checklist] but this is an area that needs further work. (Your contributions here are welcomed!) Contributors with check-in privileges are expected to run the release checklist on any major changes they contribute, and if appropriate expand the checklist and/or the automated test scripts to cover their additions. <h2>5.0 See Also</h2> * [./build.wiki | How To Build And Install Fossil] * [./makefile.wiki | The Fossil Build Process] * [./tech_overview.wiki | A Technical Overview of Fossil] |
Changes to www/copyright-release.html.
1 | <h1 align="center"> | < | > > | < | < | < | > | > | < > > > > > > | | < < > > | > | > > | > > > > > > > > > | > | > > > > | > | | > > > > > > | < < > > | > > > | > > | > | | > > > > | > | | > | | > | > | | < | > | < | | | | > > | > > | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 | <h1 align="center"> Fossil SCM Contributor Agreement </h1> <p> This agreement applies to your contribution of material to the Fossil Software Configuration Management System ("Fossil") that is mananged by Hipp, Wyrick & Company, Inc. ("Hwaci") and sets out the intellectual property rights you grant to Hwaci in the contributed material. The terms "contribution" and "contributed material" mean any source code, object code, patch, tool, sample, graphic, specification, manual, documentation, or any other material posted, submitted, or uploaded by you to the Fossil project. The term "you" means the person identified and signing at the bottom of this document. If your contribution is on behalf of a company, the term "you" also means the company identified in the signature area below. <ol> <li><p> With respect to any worldwide copyrights, or copyright applications and registrations, in your contribution: <ul> <li> You hereby assign to Hwaci joint ownership, and to the extent that such assignment is or becomes invalid, ineffective or unenforceable, you hereby grant to Hwaci a perpetual, irrevocable, non-exclusive, worldwide, no-charge, royalty-free, unrestricted license to exercise all rights under those copyrights, including the right to sublicense. <li> You agree that both you and Hwaci can do all things in relation to your contribution as if each of us were the sole owners, and if one of us makes a derivative work of your contribution, the one who makes (or has made) the derivative work will be the sole owner of that derivative work. <li> You agree that you will not assert any moral rights in your contribution against Hwaci, Hwaci's licensees or transferees, or any other user or consumer of your contribution. <li> You agree that Hwaci may register a copyright in your contribution and exercise all ownership rights associated with it. <li> You agree that neither you nor Hwaci has any duty to consult with, obtain the consent of, or pay or render an accounting to the other for any use or distribution of your contribution. </ul> <li><p> With respect to any patents you own, or that you can license without payment to any third party, and which are relevant to your contribution, you hereby grant to Hwaci a perpetual, irrevocable, non-exclusive, worldwide, no-charge, royalty-free license to make, have made, use, sell, offer to sell, import, and otherwise transfer your contribution in whole or in part, alone or in combination with or included in any product, work or materials arising out of the Fossil project, and to sublicense these same rights. <li><p> Except as set out above, you keep all right, title, and interest in your contribution. The rights that you grant to Hwaci under this agreement are effective on the date that you first submitted your contribution to the Fossil project, even if your submission took place before the date that you sign this agreement. 
<li><p> You represent and warrant the following: <ul> <li> Your contribution is an original work and that you can legally grant the rights set out in this agreement. <li> Your contribution does not, to the best of your knowledge and belief, violate any third party's copyrights, trademarks, patents, or other intellectual property rights. <li> You are authorized to sign this agreement on behalf of your company (if appliable). </ul> </ol> <p>By filling in the following information and signing your name, you agree to be bound by all of the terms set forth in this agreement. Please print clearly.</p> <center> <p><table width="80%" border="1" cellpadding="0" cellspacing="0"> <tr><td width="20%" valign="top">Your name & email:</td><td width="80%"> <!-- Replace this line with your name and email --> <p> </td></tr> <tr><td valign="top">Company name:<br>(if applicable)</td><td> <!-- Replace this line with your company name --> <p> </td></tr> <tr><td valign="top">Postal address:</td><td> <!-- Replace this line and the next line with your postal address --> <p> </p><p> </p><p> </p> </td></tr> <tr><td valign="top">Signature:</td><td> <p> </td></tr> <tr><td valign="top">Date:</td><td> <p> </td></tr> </table> </center> <p>Send completed forms to: <blockquote> Hipp, Wyrick & Company, Inc.<br> 6200 Maple Cove Lane<br> Charlotte, NC 28269-1086<br> USA </p> |
Added www/copyright-release.pdf.
cannot compute difference between binary files
Changes to www/custom_ticket.wiki.
<title>Customizing The Ticket System</title>
<nowiki>

<h2>Introduction</h2>

<p>
This guide will explain how to add the "assigned_to" and "submitted_by"
fields to the ticket system in Fossil, as well as making the system more
useful.  You must have "admin" access to the repository to implement
these instructions.
</p>

<h2>First modify the TICKET table</h2><blockquote>
︙ | ︙ |
Changes to www/delta_encoder_algorithm.wiki.
<title>Fossil Delta Encoding Algorithm</title>
<nowiki>

<h2>Abstract</h2>

<p>A key component for the efficient storage of multiple revisions of a
file in fossil repositories is the use of delta-compression, i.e. to store
only the changes between revisions instead of the whole file.</p>

<p>This document describes the encoding algorithm used by Fossil to
︙ | ︙ |
Changes to www/delta_format.wiki.
<title>Fossil Delta Format</title>
<nowiki>

<h2>Abstract</h2>

<p>Fossil achieves efficient storage and low-bandwidth synchronization
through the use of delta-compression.  Instead of storing or transmitting
the complete content of an artifact, fossil stores or transmits only the
changes relative to a related artifact.
</p>
︙ | ︙ |
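Both of the documents above describe Fossil's delta-compression; their
actual encodings are specified in the elided portions.  Purely as a toy
illustration of the general copy/insert idea behind delta-compression (this
is not Fossil's delta format, and the DeltaOp layout below is invented for
the example), applying a delta conceptually looks like this:

/* Toy copy/insert delta applier -- NOT Fossil's actual delta format. */
#include <stdio.h>
#include <string.h>

typedef struct {
  int is_copy;        /* 1: copy from the source, 0: insert literal bytes */
  size_t offset;      /* source offset for a copy instruction */
  size_t len;         /* number of bytes to copy or insert */
  const char *lit;    /* literal bytes for an insert instruction */
} DeltaOp;

/* Reconstruct the target from the source plus a list of instructions. */
static size_t apply_delta(const char *src, const DeltaOp *ops, int nOp,
                          char *out){
  size_t n = 0;
  int i;
  for(i=0; i<nOp; i++){
    if( ops[i].is_copy ){
      memcpy(out+n, src+ops[i].offset, ops[i].len);
    }else{
      memcpy(out+n, ops[i].lit, ops[i].len);
    }
    n += ops[i].len;
  }
  out[n] = 0;
  return n;
}

int main(void){
  const char *original = "one two three";
  DeltaOp ops[] = {
    { 1, 0, 4, 0 },          /* copy "one "   from the original */
    { 0, 0, 3, "2.0" },      /* insert the new text "2.0"       */
    { 1, 7, 6, 0 }           /* copy " three" from the original */
  };
  char result[64];
  apply_delta(original, ops, 3, result);
  printf("%s\n", result);    /* prints "one 2.0 three" */
  return 0;
}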
Changes to www/faq.tcl.
1 2 3 4 5 6 7 8 9 10 11 12 13 | #!/usr/bin/tclsh # # Run this to generate the FAQ # set cnt 1 proc faq {question answer} { set ::faq($::cnt) [list [string trim $question] [string trim $answer]] incr ::cnt } faq { What GUIs are available for fossil? } { | | > | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 | #!/usr/bin/tclsh # # Run this to generate the FAQ # set cnt 1 proc faq {question answer} { set ::faq($::cnt) [list [string trim $question] [string trim $answer]] incr ::cnt } faq { What GUIs are available for fossil? } { The fossil executable comes with a [./webui.wiki | web-based GUI] built in. Just run: <blockquote> <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i> </blockquote> And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the <i>REPOSITORY-FILENAME</i> if you are within an open check-out.) } faq { What is the difference between a "branch" and a "fork"? } { This is a big question - too big to answer in a FAQ. Please read the <a href="branching.wiki">Branching, Forking, Merging, and Tagging</a> document. } faq { How do I create a new branch? } { There are lots of ways: When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add the option "--branch <i>BRANCH-NAME</i>" to make the new check-in be the first check-in for a new branch. You can also add the "--bgcolor <i>COLOR</i>" option to give the branch a specific background color on timelines. If you want to create a new branch whose initial content is the same as an existing check-in, use this command: <blockquote> <b>fossil [/help/branch|branch] new</b> <i>BRANCH-NAME BASIS</i> </blockquote> The <i>BRANCH-NAME</i> argument is the name of the new branch and the <i>BASIS</i> argument is the name of the check-in that the branch splits off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. On the "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button. } faq { How do I tag a check-in? } { There are several ways: When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add a tag to that check-in using the "--tag <i>TAGNAME</i>" command-line option. If you want add a tag to an existing check-in, you can use the <b>[/help/tag|tag]</b> command. For example: <blockquote> <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </blockquote> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the [./webui.wiki | web interface]. First locate the check-in that you what to tag on the tmline, then click on the link to go the detailed information page for that check-in. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. 
There are controls on the edit page that allow new tags to be added and existing tags to be removed. } faq { How do I create a private branch that won't get pushed back to the main repository. } { Use the <b>--private</b> command-line option on the <b>commit</b> command. The result will be a check-in which exists on |
︙ | ︙ | |||
82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 | you have everything in your private branch the way you want it, you can then merge your private branch back into the trunk and push. Only the final merge operation will appear in other repositories. It will seem as if all the changes that occurred on your private branch occurred in a single check-in. Of course, you can also keep your branch private forever simply by not merging the changes in the private branch back into the trunk. } faq { How can I delete inappropriate content from my fossil repository? } { See the article on [./shunning.wiki | "shunning"] for details. } faq { How do I make a clone of the fossil self-hosting repository? } { Any of the following commands should work: <blockquote><pre> | > > | | | | | > > > > > > > > | 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 | you have everything in your private branch the way you want it, you can then merge your private branch back into the trunk and push. Only the final merge operation will appear in other repositories. It will seem as if all the changes that occurred on your private branch occurred in a single check-in. Of course, you can also keep your branch private forever simply by not merging the changes in the private branch back into the trunk. [./private.wiki | Additional information] } faq { How can I delete inappropriate content from my fossil repository? } { See the article on [./shunning.wiki | "shunning"] for details. } faq { How do I make a clone of the fossil self-hosting repository? } { Any of the following commands should work: <blockquote><pre> fossil [/help/clone|clone] http://www.fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] http://www2.fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] http://www3.fossil-scm.org/site.cgi fossil.fossil </pre></blockquote> Once you have the repository cloned, you can open a local check-out as follows: <blockquote><pre> mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil </pre></blockquote> Thereafter you should be able to keep your local check-out up to date with the latest code in the public repository by typing: <blockquote><pre> fossil [/help/update|update] </pre></blockquote> } faq { How do I import or export content from and to other version control systems? } { Please see [./inout.wiki | Import And Export] } ############################################################################# # Code to actually generate the FAQ # puts "<title>Fossil FAQ</title>" puts "<h1 align=\"center\">Frequently Asked Questions</h1>\n" puts "<p>Note: See also <a href=\"qandc.wiki\">Questions and Criticisms</a>.\n" |
︙ | ︙ |
Changes to www/faq.wiki.
1 2 3 4 5 6 7 8 | <title>Fossil FAQ</title> <h1 align="center">Frequently Asked Questions</h1> <p>Note: See also <a href="qandc.wiki">Questions and Criticisms</a>. <ol> <li><a href="#q1">What GUIs are available for fossil?</a></li> <li><a href="#q2">What is the difference between a "branch" and a "fork"?</a></li> | | > | | | > | > | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > > | | | | | | | | | > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | <title>Fossil FAQ</title> <h1 align="center">Frequently Asked Questions</h1> <p>Note: See also <a href="qandc.wiki">Questions and Criticisms</a>. <ol> <li><a href="#q1">What GUIs are available for fossil?</a></li> <li><a href="#q2">What is the difference between a "branch" and a "fork"?</a></li> <li><a href="#q3">How do I create a new branch?</a></li> <li><a href="#q4">How do I tag a check-in?</a></li> <li><a href="#q5">How do I create a private branch that won't get pushed back to the main repository.</a></li> <li><a href="#q6">How can I delete inappropriate content from my fossil repository?</a></li> <li><a href="#q7">How do I make a clone of the fossil self-hosting repository?</a></li> <li><a href="#q8">How do I import or export content from and to other version control systems?</a></li> </ol> <hr> <a name="q1"></a> <p><b>(1) What GUIs are available for fossil?</b></p> <blockquote>The fossil executable comes with a [./webui.wiki | web-based GUI] built in. Just run: <blockquote> <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i> </blockquote> And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the <i>REPOSITORY-FILENAME</i> if you are within an open check-out.)</blockquote></li> <a name="q2"></a> <p><b>(2) What is the difference between a "branch" and a "fork"?</b></p> <blockquote>This is a big question - too big to answer in a FAQ. Please read the <a href="branching.wiki">Branching, Forking, Merging, and Tagging</a> document.</blockquote></li> <a name="q3"></a> <p><b>(3) How do I create a new branch?</b></p> <blockquote>There are lots of ways: When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add the option "--branch <i>BRANCH-NAME</i>" to make the new check-in be the first check-in for a new branch. You can also add the "--bgcolor <i>COLOR</i>" option to give the branch a specific background color on timelines. If you want to create a new branch whose initial content is the same as an existing check-in, use this command: <blockquote> <b>fossil [/help/branch|branch] new</b> <i>BRANCH-NAME BASIS</i> </blockquote> The <i>BRANCH-NAME</i> argument is the name of the new branch and the <i>BASIS</i> argument is the name of the check-in that the branch splits off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. 
Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. On the "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button.</blockquote></li> <a name="q4"></a> <p><b>(4) How do I tag a check-in?</b></p> <blockquote>There are several ways: When you are checking in a new change using the <b>[/help/commit|commit]</b> command, you can add a tag to that check-in using the "--tag <i>TAGNAME</i>" command-line option. If you want add a tag to an existing check-in, you can use the <b>[/help/tag|tag]</b> command. For example: <blockquote> <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </blockquote> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the [./webui.wiki | web interface]. First locate the check-in that you what to tag on the tmline, then click on the link to go the detailed information page for that check-in. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. There are controls on the edit page that allow new tags to be added and existing tags to be removed.</blockquote></li> <a name="q5"></a> <p><b>(5) How do I create a private branch that won't get pushed back to the main repository.</b></p> <blockquote>Use the <b>--private</b> command-line option on the <b>commit</b> command. The result will be a check-in which exists on your local repository only and is never pushed to other repositories. All descendents of a private check-in are also private. Unless you specify something different using the <b>--branch</b> and/or <b>--bgcolor</b> options, the new private check-in will be put on a branch named "private" with an orange background color. You can merge from the trunk into your private branch in order to keep your private branch in sync with the latest changes on the trunk. Once you have everything in your private branch the way you want it, you can then merge your private branch back into the trunk and push. Only the final merge operation will appear in other repositories. It will seem as if all the changes that occurred on your private branch occurred in a single check-in. Of course, you can also keep your branch private forever simply by not merging the changes in the private branch back into the trunk. 
[./private.wiki | Additional information]</blockquote></li> <a name="q6"></a> <p><b>(6) How can I delete inappropriate content from my fossil repository?</b></p> <blockquote>See the article on [./shunning.wiki | "shunning"] for details.</blockquote></li> <a name="q7"></a> <p><b>(7) How do I make a clone of the fossil self-hosting repository?</b></p> <blockquote>Any of the following commands should work: <blockquote><pre> fossil [/help/clone|clone] http://www.fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] http://www2.fossil-scm.org/ fossil.fossil fossil [/help/clone|clone] http://www3.fossil-scm.org/site.cgi fossil.fossil </pre></blockquote> Once you have the repository cloned, you can open a local check-out as follows: <blockquote><pre> mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil </pre></blockquote> Thereafter you should be able to keep your local check-out up to date with the latest code in the public repository by typing: <blockquote><pre> fossil [/help/update|update] </pre></blockquote></blockquote></li> <a name="q8"></a> <p><b>(8) How do I import or export content from and to other version control systems?</b></p> <blockquote>Please see [./inout.wiki | Import And Export]</blockquote></li> </ol> |
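Returning to question (3) above, a new branch can also be opened directly as part of an ordinary commit; the branch name, background color, and comment below are hypothetical: <blockquote> <b>fossil commit --branch</b> <i>my-experiment</i> <b>--bgcolor</b> <i>"#c0ffc0"</i> <b>-m</b> <i>"begin the experiment"</i> </blockquote>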
Changes to www/fileformat.wiki.
︙ | ︙ | |||
98 99 100 101 102 103 104 105 106 107 108 109 110 111 | <blockquote> <b>B</b> <i>baseline-manifest</i><br> <b>C</b> <i>checkin-comment</i><br> <b>D</b> <i>time-and-date-stamp</i><br> <b>F</b> <i>filename</i> <i>SHA1-hash</i> <i>permissions</i> <i>old-name</i><br> <b>P</b> <i>SHA1-hash</i>+<br> <b>R</b> <i>repository-checksum</i><br> <b>T</b> (<b>+</b>|<b>-</b>|<b>*</b>)<i>tag-name <b>*</b> ?value?</i><br> <b>U</b> <i>user-login</i><br> <b>Z</b> <i>manifest-checksum</i> </blockquote> A manifest may optionally have a single B-card. The B-card specifies | > | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | <blockquote> <b>B</b> <i>baseline-manifest</i><br> <b>C</b> <i>checkin-comment</i><br> <b>D</b> <i>time-and-date-stamp</i><br> <b>F</b> <i>filename</i> <i>SHA1-hash</i> <i>permissions</i> <i>old-name</i><br> <b>P</b> <i>SHA1-hash</i>+<br> <b>Q</b> (<b>+</b>|<b>-</b>)<i>SHA1-hash ?SHA1-hash?</i><br> <b>R</b> <i>repository-checksum</i><br> <b>T</b> (<b>+</b>|<b>-</b>|<b>*</b>)<i>tag-name <b>*</b> ?value?</i><br> <b>U</b> <i>user-login</i><br> <b>Z</b> <i>manifest-checksum</i> </blockquote> A manifest may optionally have a single B-card. The B-card specifies |
︙ | ︙ | |||
126 127 128 129 130 131 132 | space and newline, no other whitespace characters are allowed in the check-in comment. Nor are any unprintable characters allowed in the comment. A manifest must have exactly one D-card. The sole argument to the D-card is a date-time stamp in the ISO8601 format. The date and time should be in coordinated universal time (UTC). | | | > | 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 | space and newline, no other whitespace characters are allowed in the check-in comment. Nor are any unprintable characters allowed in the comment. A manifest must have exactly one D-card. The sole argument to the D-card is a date-time stamp in the ISO8601 format. The date and time should be in coordinated universal time (UTC). The format one of: <blockquote> <i>YYYY</i><b>-</b><i>MM</i><b>-</b><i>DD</i><b>T</b><i>HH</i><b>:</b><i>MM</i><b>:</b><i>SS</i><br> <i>YYYY</i><b>-</b><i>MM</i><b>-</b><i>DD</i><b>T</b><i>HH</i><b>:</b><i>MM</i><b>:</b><i>SS</i><b>.</b><i>SSS</i> </blockquote> A manifest has zero or more F-cards. Each F-card identifies a file that is part of the check-in. There are one, two, three, or four arguments. The first argument is the pathname of the file in the check-in relative to the root of the project file hierarchy. No ".." or "." directories are allowed |
︙ | ︙ | |||
167 168 169 170 171 172 173 174 175 176 177 178 179 180 | to the P-card must be unique to that line. The first predecessor is the direct ancestor of the manifest. Other arguments define manifests with which the first was merged to yield the current manifest. Most manifests have a P-card with a single argument. The first manifest in the project has no ancestors and thus has no P-card. A manifest may optionally have a single R-card. The R-card has a single argument which is the MD5 checksum of all files in the check-in except the manifest itself. The checksum is expressed as 32-characters of lowercase hexadecimal. The checksum is computed as follows: For each file in the check-in (except for the manifest itself) in strict sorted lexicographical order, take the pathname of the file relative to the root of the | > > > > > > > > > > > > > > > > > > > | 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 | to the P-card must be unique to that line. The first predecessor is the direct ancestor of the manifest. Other arguments define manifests with which the first was merged to yield the current manifest. Most manifests have a P-card with a single argument. The first manifest in the project has no ancestors and thus has no P-card. A manifest has zero or more Q-cards. A Q-card is similar to a P-card in that it defines a predecessor to the current check-in. But whereas a P-card defines the immediate ancestor or a merge ancestor, the Q-card is used to identify a single check-in or a small range of check-ins which were cherry-picked for inclusion in or exclusion from the current manifest. The first argument of the Q-card is the artifact ID of another manifest (the "target") which has had its changes included or excluded in the current manifest. The target is preceded by "+" or "-" to show inclusion or exclusion, respectively. The optional second argument to the Q-card is another manifest artifact ID which is the "baseline" for the cherry-pick. If omitted, the baseline is the primary parent of the target. The changes included or excluded consist of all changes moving from the baseline to the target. The Q-card was added to the interface specification on 2011-02-26. Older versions of Fossil will reject manifests that contain Q-cards. A manifest may optionally have a single R-card. The R-card has a single argument which is the MD5 checksum of all files in the check-in except the manifest itself. The checksum is expressed as 32-characters of lowercase hexadecimal. The checksum is computed as follows: For each file in the check-in (except for the manifest itself) in strict sorted lexicographical order, take the pathname of the file relative to the root of the |
︙ | ︙ | |||
197 198 199 200 201 202 203 | symbolic name when the check-in is intended to start a new branch. Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. | | | | 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 | symbolic name when the check-in is intended to start a new branch. Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. A manifest must have a single Z-card as its last line. The argument to the Z-card is a 32-character lowercase hexadecimal MD5 hash of all prior lines of the manifest up to and including the newline character that immediately precedes the "Z". The Z-card is a sanity check to prove that the manifest is well-formed and consistent. A sample manifest from Fossil itself can be seen [/artifact/28987096ac | here]. <a name="cluster"></a> |
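To make the card descriptions above concrete, here is a hypothetical manifest sketch. Every value in it is invented, the SHA1 and MD5 arguments are abbreviated (a real manifest carries full 40-character and 32-character lowercase hexadecimal values), and the comment uses the \s escape for spaces, so this is only a shape to keep in mind rather than a literally valid artifact. The Q-card line shows what a cherry-pick annotation would look like:

<blockquote><pre>
C Fix\sa\stypo\sin\sthe\sdocumentation
D 2011-05-19T00:03:57
F src/main.c a1b2c3d4...
F www/faq.wiki e5f6a7b8...
P 9c8d7e6f...
Q +0a1b2c3d...
R 1f2e3d4c...
U alice
Z 5a6b7c8d...
</pre></blockquote>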
︙ | ︙ | |||
243 244 245 246 247 248 249 | </blockquote> A cluster contains one or more "M" cards followed by a single "Z" line. Each M card has a single argument which is the artifact ID of another artifact in the repository. The Z card work exactly like the Z card of a manifest. The argument to the Z card is the lower-case hexadecimal representation of the MD5 checksum of all | | < | 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | </blockquote> A cluster contains one or more "M" cards followed by a single "Z" line. Each M card has a single argument which is the artifact ID of another artifact in the repository. The Z card work exactly like the Z card of a manifest. The argument to the Z card is the lower-case hexadecimal representation of the MD5 checksum of all prior cards in the cluster. The Z-card is required. An example cluster from Fossil can be seen [/artifact/d03dbdd73a2a8 | here]. <a name="ctrl"></a> <h2>3.0 Control Artifacts</h2> |
︙ | ︙ | |||
283 284 285 286 287 288 289 | The T card represents a [./branching.wiki#tags | tag or property] that is applied to some other artifact. The T card has two or three values. The second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag | | | | | | 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 | The T card represents a [./branching.wiki#tags | tag or property] that is applied to some other artifact. The T card has two or three values. The second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag is either "+", "-", or "*". The "+" means the tag should be added to the artifact. The "-" means the tag should be removed. The "*" character means the tag should be added to the artifact and all direct descendants (but not descendents through a merge) down to but not including the first descendant that contains a more recent "-" or "+" tag with the same name. The optional third argument is the value of the tag. A tag without a value is a boolean. When two or more tags with the same name are applied to the same artifact, the tag with the latest (most recent) date is used. Some tags have special meaning. The "comment" tag when applied to a check-in will override the check-in comment of that check-in for display purposes. The "user" tag overrides the name of the check-in user. The "date" tag overrides the check-in date. The "branch" tag sets the name of the branch that at check-in belongs to. Symbolic tags begin with the "sym-" prefix. The U card is the name of the user that created the control artifact. The Z card is the usual required artifact checksum. An example control artifacts can be seen [/info/9d302ccda8 | here]. <a name="wikichng"></a> <h2>4.0 Wiki Pages</h2> |
︙ | ︙ | |||
331 332 333 334 335 336 337 | <b>Z</b> <i>checksum</i> </blockquote> The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card gives the name of the wiki page. The U card specifies the login of the user who made this edit to the wiki page. The Z card is | | | 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 | <b>Z</b> <i>checksum</i> </blockquote> The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card gives the name of the wiki page. The U card specifies the login of the user who made this edit to the wiki page. The Z card is the usual checksum over the entire artifact and is required. The W card is used to specify the text of the wiki page. The argument to the W card is an integer which is the number of bytes of text in the wiki page. That text follows the newline character that terminates the W card. The wiki text is always followed by one extra newline. |
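For illustration, a wiki artifact might look something like the sketch below. The page name, user, date, and hashes are all invented and the hashes are abbreviated; the W argument of 7 assumes the count covers the page text "Hello." plus its own final newline but not the extra newline that follows it:

<blockquote><pre>
D 2011-05-19T00:10:00
L SamplePage
P 4d5e6f7a...
U alice
W 7
Hello.
Z 9e8d7c6b...
</pre></blockquote>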
︙ | ︙ | |||
358 359 360 361 362 363 364 | <b>K</b> <i>ticket-id</i><br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i> </blockquote> The D card is the usual date and time stamp and represents the point in time when the change was entered. The U card is the login of the | | | > | | 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 | <b>K</b> <i>ticket-id</i><br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i> </blockquote> The D card is the usual date and time stamp and represents the point in time when the change was entered. The U card is the login of the programmer who entered this change. The Z card is the required checksum over the entire artifact. Every ticket has a distinct ticket-id: 40-character lower-case hexadecimal number. The ticket-id is given in the K-card. A ticket exists if it contains one or more changes. The first "change" to a ticket is what brings the ticket into existence. J cards specify changes to the "value" of "fields" in the ticket. If the <i>value</i> parameter of the J card is omitted, then the field is set to an empty string. Each fossil server has a ticket configuration which specifies the fields its |
︙ | ︙ | |||
420 421 422 423 424 425 426 427 428 429 430 431 432 433 | A single D card is required to give the date and time when the attachment was applied. A single U card gives the name of the user who added the attachment. If an attachment is added anonymously, then the U card may be omitted. The Z card is the usual checksum over the rest of the attachment artifact. <a name="event"></a> <h2>7.0 Events</h2> An event artifact associates a timeline comment and a page of text (similar to a wiki page) with a point in time. Events can be used | > | 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 | A single D card is required to give the date and time when the attachment was applied. A single U card gives the name of the user who added the attachment. If an attachment is added anonymously, then the U card may be omitted. The Z card is the usual checksum over the rest of the attachment artifact. The Z card is required. <a name="event"></a> <h2>7.0 Events</h2> An event artifact associates a timeline comment and a page of text (similar to a wiki page) with a point in time. Events can be used |
︙ | ︙ | |||
481 482 483 484 485 486 487 | The optional U card gives the name of the user who entered the event. A single W card provides wiki text for the document associated with the event. The format of the W card is exactly the same as for a [#wikichng | wiki artifact]. | | | 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 | The optional U card gives the name of the user who entered the event. A single W card provides wiki text for the document associated with the event. The format of the W card is exactly the same as for a [#wikichng | wiki artifact]. The Z card is the required checksum over the rest of the artifact. <a name="summary"></a> <h2>8.0 Card Summary</h2> The following table summarizes the various kinds of cards that appear on Fossil artifacts:
︙ | ︙ | |||
610 611 612 613 614 615 616 617 618 619 620 621 622 623 | </tr> <tr> <td><b>P</b> <i>uuid ...</i></td> <td align=center><b>X</b></td> <td align=center> </td> <td align=center> </td> <td align=center><b>X</b></td> <td align=center> </td> <td align=center> </td> <td align=center> </td> </tr> <tr> <td><b>R</b> <i>md5sum</i></td> <td align=center><b>X</b></td> | > > > > > > > > > > | 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 | </tr> <tr> <td><b>P</b> <i>uuid ...</i></td> <td align=center><b>X</b></td> <td align=center> </td> <td align=center> </td> <td align=center><b>X</b></td> <td align=center> </td> <td align=center> </td> <td align=center> </td> </tr> <tr> <td><b>Q</b> (<b>+</b>|<b>-</b>)<i>uuid uuid</i></td> <td align=center><b>X</b></td> <td align=center> </td> <td align=center> </td> <td align=center> </td> <td align=center> </td> <td align=center> </td> <td align=center> </td> </tr> <tr> <td><b>R</b> <i>md5sum</i></td> <td align=center><b>X</b></td> |
︙ | ︙ |
Added www/fossil-v-git.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | <title>Fossil Versus Git</title> <h2>1.0 Don't Stress!</h2> If you start out using one DVCS and later decide you like the other better, it is [./inout.wiki | easy to change]. But it also helps to be informed about the differences between [http://git-scm.com | Git] and Fossil. See the table below for a high-level summary and the text that follows for more details. Keep in mind that you are reading this on a Fossil website, so the information here might be biased in favor of Fossil. Ask around with people who have used both Fossil and Git for other opinions. <h2>2.0 Executive Summary:</h2> <blockquote><center><table border=1 cellpadding=5> <tr><th width="50%">GIT</th><th width="50%">FOSSIL</th></tr> <tr><td>File versioning only</td> <td>Versioning, Tickets, Wiki, and Blog/News</td></tr> <tr><td>Sharding</td><td>Replicating</td></tr> <tr><td>Huge community</td><td>Road less traveled</td></tr> <tr><td>Complex</td><td>Intuitive</td></tr> <tr><td>Separate web tools</td><td>Integrated Web interface</td></tr> <tr><td>Lots of little tools</td><td>Single executable</td></tr> <tr><td>Pile-of-files repository</td><td>Single file repository</td></tr> <tr><td>Uses "<tt>rebase</tt>"</td><td>Immutable</td></tr> <tr><td>GPL</td><td>BSD</td></tr> </table></center></blockquote> <h2>3.0 Discussion</h2> <h3>3.1 Feature Set</h3> Git provides file versioning services only, whereas Fossil adds an integrated [./wikitheory.wiki | wiki], [./bugtheory.wiki | ticketing & bug tracking], [./embedddeddoc.wiki | embedded documentation], and [./event.wiki | News/Blog features]. These additional capabilities are available for Git as 3rd-party user-installed add-ons, but with Fossil they are integrated into the design. One way to describe Fossil is that it is "[https://github.com/ | github]-in-a-box". <h3>3.2 Sharding versus Replicating</h3> Git makes it easy for each repository in a project to hold a subset of the branches for that project. In fact, it is entirely possible and not uncommon for no repository in the project to hold all the different code versions for a project. Instead the information is distributed. Individual developers have one or more private branches. A hierarchy of integrators merge changes from individual developers into collaborative branches, until all the changes are merged together at the top-level master branch. And all of this can be accomplished without having to have all the code in any one repository. 
Developers or groups of developers can share only those branches that they want to share and keep other branches of the project private. This is analogous to sharding in a distributed database. Fossil allows private branches, but its default mode is to share everything. And so in a Fossil project, all repositories tend to contain all of the content at all times. This is analogous to replication in a distributed database. The Git model works best for large projects, like the Linux kernel for which Git was designed. Linus Torvalds does not need or want to see a thousand different branches, one for each contributor. Git allows intermediary "gate-keepers" to merge changes from multiple lower-level developers into a single branch and only present Linus with a handful of branches at a time. Git encourages a programming model where each developer works in his or her own branch and then merges changes up the hierarchy until they reach the master branch. Fossil is designed for smaller and non-hierarchical teams where all developers are operating directly on the master branch, or at most a small number of well defined branches. The [./concepts.wiki#workflow | autosync] mode of Fossil makes it easy for multiple developers to work on a single branch and maintain linear development on that branch and avoid needless forking and merging. <h3>3.3 Community</h3> Git has a huge user community. If following the herd and being like everybody else is important to you, then you should choose Git. Fossil is clearly the "road less traveled": <blockquote> Two roads diverged in a wood, and I —<br> I took the one less traveled by,<br> And that has made all the difference.<br> <small>- Robert Frost, <i>The Road Not Taken</i>, 1916</small> </blockquote> Among the advantages of Git's huge user community are that new team members may already be familiar with Git's operation and hence can bypass the VCS learning curve. Also, if you need an add-on tool or script of some kind, a Google search will likely turn up a suitable tool that you can just download and use. A huge community also means that somebody else has likely already encountered and fixed the bugs so that Git will work for you and your project as advertised. Among the advantages of the "road less traveled" is that your particular project will be a bigger percentage of the total user base, and is thus more likely to receive personal attention from the Fossil maintainers if you do encounter problems. <h3>3.4 Complexity</h3> Git is a complex system. It can be tricky to use and requires a fair amount of knowledge and experience to master. Fossil strives to be a much simpler system that can be learned and mastered much more quickly. Fossil strives to have fewer "gotchas" and quirks that can trip up a developer. The ideal VCS should just get out of the way of the developer and allow the developer to focus 100% of their thinking on the project under development. One should not have to stop and think about how to operate the VCS. Of course, no VCS is ideal. Every VCS requires the developer to think about version control to some extent. But one wants to minimize the thinking about version control. Git requires the developer to maintain a more complex mental model than most other DVCSes. Git takes longer to learn. And you have to spend more time thinking about what you are doing with Git. Fossil strives for simplicity. Fossil wants to be easy to learn and to require little thinking about how to operate it. 
Reports from the field indicate that Fossil is mostly successful at this effort. <h3>3.5 Web Interface</h3> Git has a web interface, but it requires a fair amount of setup and an external web server. Fossil comes with a fully functional [./webui.wiki | built-in web-server] and a really simple mechanism (the "<tt>fossil ui</tt>" command) to automatically start the web server and bring up a web browser to navigate it. The web interface for Fossil is not only easier to set up, it is also more powerful and easier to use. The web interface to Fossil is a practical replacement for the 3rd-party "GUI Tools" that users often employ to operate Git. <h3>3.6 Implementation Strategy</h3> Git consists of a collection of many little programs. Git needs to be "installed" using some kind of installer or package tool. Git can be tricky to install and get working, especially for users without administrative privileges. Fossil is a single self-contained executable. To "install" Fossil one has merely to download a precompiled binary and place that binary somewhere on $PATH. To uninstall Fossil, simply delete the binary. To upgrade Fossil, replace the old binary with a new one. Fossil is designed to be trivial to install, uninstall, and upgrade so that developers can spend more time working on their own projects and much less time configuring their version control system. <h3>3.7 Repository Storage</h3> A Git repository is a "pile-of-files" in the ".git" directory at the root of the working checkout. There is a one-to-one correspondence between repositories and working checkouts. A power-loss or system crash in the middle of Git operation can damage or corrupt the Git repository. A Fossil repository consists of a single disk file. A single Fossil repository can serve multiple simultaneous working checkouts. A Fossil repository is an SQLite database, so it is highly resistant to damage from a power-loss or system crash - incomplete transactions are simply rolled back after the system reboots. <h3>3.8 Audit Trail</h3> Git features the "rebase" command which can be used to change the sequence of check-ins in the repository. Rebase can be used to "clean up" a complex sequence of check-ins to make their intent easier for others to understand. From another point of view, rebase can be used to "rewrite history" - to do what [http://en.wikipedia.org/wiki/Winston_Smith | Winston Smith] did for a living in Orwell's novel [http://en.wikipedia.org/wiki/Nineteen_Eighty-Four | 1984]. Fossil deliberately avoids rewriting history. Fossil strives to follow the accountant's philosophy of never erasing anything. Mistakes are fixed by entering a correction, with an explanation of why the correction is needed. This can make the history of a project messy, but it also makes it more honest. The lack of a "rebase" function is considered a feature of Fossil, not a bug. <h3>3.9 License</h3> Both Git and Fossil are open-source. Git is under [http://www.gnu.org/licenses/gpl.html | GPL] whereas Fossil is under the [http://en.wikipedia.org/wiki/BSD_licenses | two-clause BSD license]. The difference should not be of concern to most users. However, some corporate lawyers have objections to using GPL products and are more comfortable with a BSD-style license.
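As a footnote to the installation discussion in section 3.6 above, a typical unix install is nothing more than a copy onto $PATH; the archive name and target directory below are hypothetical:

<blockquote><pre>
unzip fossil-linux-x86-20110519000357.zip
mv fossil ~/bin/
</pre></blockquote>

Upgrading later is the same copy, and uninstalling is simply removing the one file.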
Changes to www/index.wiki.
|
| | | < | > | < < > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 | <title>Fossil</title> <p align="center"> <font size="3"> <i>Simple, high-reliability, distributed software configuration management</i> </font> </p> <h3>Why Use Fossil?</h3> <table border="0" cellspacing="10" bgcolor="white" align="right" cellpadding="2"> <tr><td bgcolor="#446979"> <table border="0" cellpadding="10" bgcolor="white"> <tr><td> <ul> <li> [http://www.fossil-scm.org/download.html | Download] <li> [./quickstart.wiki | Quick Start] <li> [./build.wiki | Install] <li> [../COPYRIGHT-BSD2.txt | License] <li> [/timeline | Recent changes] <li> [./faq.wiki | FAQ] <li> [./permutedindex.wiki | Doc Index] <li> Mailing list <ul> <li> [http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users | sign-up] <li> [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives] <ul> </ul> </td></tr> |
︙ | ︙ | |||
48 49 50 51 52 53 54 | integrated package. 2. <b>Web Interface</b> - Fossil has a built-in and easy-to-use [./webui.wiki | web interface] that simplifies project tracking and promotes situational awareness. Simply type "fossil ui" from within any check-out and Fossil automatically opens your web browser in a page that gives detailed | > | > > > > > > > > > | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | integrated package. 2. <b>Web Interface</b> - Fossil has a built-in and easy-to-use [./webui.wiki | web interface] that simplifies project tracking and promotes situational awareness. Simply type "fossil ui" from within any check-out and Fossil automatically opens your web browser in a page that gives detailed [/timeline?n=100&y=ci | graphical history] and status information on that project. This entire website (except the [http://www.fossil-scm.org/download.html | download] page) is just a running instance of Fossil. The pages you see here are all [./wikitheory.wiki | wiki] or [./embeddeddoc.wiki | embedded documentation]. When you clone Fossil from one of its [./selfhost.wiki | self-hosting repositories], you get more than just source code - you get this entire website. 3. <b>Autosync</b> - Fossil supports [./concepts.wiki#workflow | "autosync" mode] which helps to keep projects moving forward by reducing the amount of needless [./branching.wiki | forking and merging] often associated with distributed projects. |
︙ | ︙ | |||
94 95 96 97 98 99 100 | the repository are consistent prior to each commit. In over three years of operation, no work has ever been lost after having been committed to a Fossil repository. <hr> <h3>Links For Fossil Users:</h3> | | > | 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | the repository are consistent prior to each commit. In over three years of operation, no work has ever been lost after having been committed to a Fossil repository. <hr> <h3>Links For Fossil Users:</h3> * [./reviews.wiki | Testimonials] from satisfied fossil users and [./quotes.wiki | Quotes] about Fossil and other DVCSes. * [./faq.wiki | FAQ] * The [./concepts.wiki | concepts] behind fossil * [./quickstart.wiki | Quick Start] guide to using fossil * [./qandc.wiki | Questions & Criticisms] directed at fossil. * [./build.wiki | Building And Installing] * Fossil supports [./embeddeddoc.wiki | embedded documentation] that is versioned along with project source code. |
︙ | ︙ | |||
129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 | * Documentation on the [http://www.sqliteconcepts.org/THManual.pdf | TH1 Script Language] used to configure the ticketing subsystem. * A free hosting server for Fossil repositories is available at [http://chiselapp.com/]. * How to [./server.wiki | set up a server] for your repository. * Customizing the [./custom_ticket.wiki | ticket system]. <h3>Links For Fossil Developer:</h3> * [./theory1.wiki | Thoughts On The Design Of Fossil]. * [./pop.wiki | Principles Of Operation] * The [./fileformat.wiki | file format] used by every content file stored in the repository. * The [./delta_format.wiki | format of deltas] used to efficiently store changes between file revisions. * The [./delta_encoder_algorithm.wiki | encoder algorithm] used to efficiently generate deltas. * The [./sync.wiki | synchronization protocol]. | > > > > > > | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | * Documentation on the [http://www.sqliteconcepts.org/THManual.pdf | TH1 Script Language] used to configure the ticketing subsystem. * A free hosting server for Fossil repositories is available at [http://chiselapp.com/]. * How to [./server.wiki | set up a server] for your repository. * Customizing the [./custom_ticket.wiki | ticket system]. * Methods to [./checkin_names.wiki | identify a specific check-in]. * [./inout.wiki | Import and export] from and to Git. * [./fossil-v-git.wiki | Fossil versus Git]. <h3>Links For Fossil Developer:</h3> * [./contribute.wiki | Contributing] code or documentation to the Fossil project. * [./theory1.wiki | Thoughts On The Design Of Fossil]. * [./pop.wiki | Principles Of Operation] * [./tech_overview.wiki | A Technical Overview Of Fossil]. * The [./fileformat.wiki | file format] used by every content file stored in the repository. * The [./delta_format.wiki | format of deltas] used to efficiently store changes between file revisions. * The [./delta_encoder_algorithm.wiki | encoder algorithm] used to efficiently generate deltas. * The [./sync.wiki | synchronization protocol]. |
Added www/inout.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | <title>Import And Export</title> Fossil has the ability to import and export repositories from and to [http://git-scm.com/ | Git]. And since most other version control systems will also import/export from Git, that means that you can import/export a Fossil repository to most version control systems using Git as an intermediary. <h2>Git → Fossil</h2> To import a Git repository into Fossil, run commands like this: <blockquote><pre> cd git-repo git fast-export --all | fossil import --git new-repo.fossil </pre></blockquote> In other words, simply pipe the output of the "git fast-export" command into the "fossil import --git" command. The 3rd argument to the "fossil import" command is the name of a new Fossil repository that is created to hold the Git content. The --git option is not actually required. The git-fast-export file format is currently the only VCS interchange format that Fossil understands. But future versions of Fossil might be enhanced to understand other VCS interchange formats, and so for compatibility, use of the --git option is recommended. <h2>Fossil → Git</h2> To convert a Fossil repository into a Git repository, run commands like this: <blockquote><pre> git init new-repo cd new-repo fossil export --git ../repo.fossil | git fast-import </pre></blockquote> In other words, create a new Git repository, then pipe the output from the "fossil export --git" command into the "git fast-import" command. Note that the "fossil export --git" command only exports the versioned files. Tickets and wiki and events are not exported, since Git does not understand those concepts. As with the "import" command, the --git option is not required since the git-fast-export file format is currently the only VCS interchange format that Fossil will generate. However, future versions of Fossil might add the ability to generate other VCS interchange formats, and so for compatibility, the use of the --git option is recommended.
Added www/makefile.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | <title>The Fossil Build Process</title> <h1>1.0 Introduction</h1> The build process for Fossil is tricky in that the source code needs to be processed by three different preprocessor programs before it is compiled. Most users will download a [http://www.fossil-scm.org/download.html | precompiled binary] so this is of no consequence to them, and even those who want to compile the code themselves can use one of the [./build.wiki | existing makefiles]. So must people do not need to be concerned with the build complexities of Fossil. But hard-core developers who desire a deep understanding of how Fossil is put together can benefit from reviewing this article. <h1>2.0 Source Code Tour</h1> The source code for Fossil is found in the [/dir?ci=trunk&name=src | src/] subdirectory of the source tree. The src/ subdirectory contains all code, including the code for the separate preprocessor programs. Each preprocessor program is a separate C program implemented in a single file of C source code. The three preprocessor programs are: 1. mkindex.c 2. translate.c 3. makeheaders.c Fossil makes use of [http://www.sqlite.org/ | SQLite] for on-disk storage. The SQLite implementation is contained in three source code files that do not participate in the preprocessing steps. These three files that implement SQLite are: 4. sqlite3.c 5. sqlite3.h 6. shell.c The sqlite3.c and sqlite3.h source files are byte-for-byte copies of a standard [http://www.sqlite.org/amalgamation.html | amalgamation]. The shell.c source file is code for the SQLite [http://www.sqlite.org/sqlite.html | command-line shell] that is used to help implement the [/help/sqlite3 | fossil sql] command. The shell.c source file has been modified slightly from the standard shell.c file in the SQLite release. Search for "Fossil Patch" in the shell.c source file of Fossil to see the changes. The TH1 script engine is implemented using files: 7. th.c 8. th.h These two files are imports like the SQLite source files, and so are not preprocessed. The src/ subdirectory also contains documentation about the makeheaders preprocessor program: 9. [../src/makeheaders.html | makeheaders.html] Click on the link to read this documentation. 
In addition there is a [http://www.tcl.tk/ | Tcl] script used to build the various makefiles: 10. makemake.tcl Running this Tcl script will automatically regenerate all makefiles. In order to add a new source file to the Fossil implementation, simply edit makemake.tcl to add the new filename, then rerun the script, and all of the makefiles for all targets will be rebuilt. Finally, there is one of the makefiles generated by makemake.tcl: 11. main.mk The main.mk makefile is invoked from the Makefile in the top-level directory. The main.mk is generated by makemake.tcl and should not be hand edited. Other makefiles generated by makemake.tcl are in other subdirectories (currently all in the win/ subdirectory). All the other files in the src/ subdirectory (79 files at the time of this writing) are C source code files that are subject to the preprocessing steps described below. In the sequel, we will call these other files "src.c" in order to have a convenient name. The reader should understand that whenever "src.c" or "src.h" is used in the text that follows, we really mean all (79) source files other than the exceptions described above. <h1>3.0 Automatically generated files</h1> The "VERSION.h" header file contains some C preprocessor macros that identify the version of Fossil that is to be built. The VERSION.h file is generated automatically from information extracted from the "manifest" and "manifest.uuid" source files in the root directory of the source tree. (The "manifest" and "manifest.uuid" files are automatically generated and updated by Fossil itself. See the [/help/setting | fossil set manifest] command for additional information.) Under unix, there is an AWK script that converts manifest and manifest.uuid into VERSION.h. For windows, there is a C program: win/version.c. (Unix builds could also use the version.c converter, if desired, but AWK seems easier there.) To run the VERSION.h generator, first compile the win/version.c source file into a command-line program (named "version.exe") then run: <blockquote><pre> version.exe manifest.uuid manifest VERSION.h </pre></blockquote> The pathnames in the above command might need to be adjusted to get the directories right. The point is that the manifest.uuid and manifest files in the root of the source tree are the first two arguments and the name of the generated VERSION.h file is the third and final argument. <h1>4.0 Preprocessing</h1> There are three preprocessors for the Fossil sources. The mkindex and translate preprocessors can be run in any order. The makeheaders preprocessor must be run after translate. <h2>4.1 The mkindex preprocessor</h2> The mkindex program scans the "src.c" source files looking for special comments that identify routines that implement various Fossil commands and web interface methods, together with their help text comments. The mkindex program generates some C code that Fossil uses in order to dispatch commands and HTTP requests and to show on-line help. Compile the mkindex program from the mkindex.c source file. Then run: <blockquote><pre> ./mkindex src.c >page_index.h </pre></blockquote> Note that "src.c" in the above is a stand-in for the (79) regular source files of Fossil - all source files except for the exceptions described in section 2.0 above. The output of the mkindex program is a header file that is #include-ed by the main.c source file during the final compilation step.
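For instance, on a unix system the mkindex step might look like the following; the compiler, the bld/ output directory, and the trailing "..." (standing in for the rest of the ordinary source files) are all illustrative choices rather than requirements of the build:

<blockquote><pre>
gcc -o bld/mkindex src/mkindex.c
bld/mkindex src/add.c src/blob.c src/checkin.c ... >bld/page_index.h
</pre></blockquote>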
<h2>4.2 The translate preprocessor</h2> The translate preprocessor looks for lines of source code that begin with "@" and converts those lines into string constants or (depending on context) into special "printf" operations for generating the output of an HTTP request. The translate preprocessor is a simple C program whose sources are in the translate.c source file. The translate preprocessor is run on each of the other ordinary source files separately, like this: <blockquote><pre> ./translate src.c >src_.c </pre></blockquote> In this case, the "src.c" file represents any single source file from the set of ordinary source files as described in section 2.0 above. Note that each source file is translated separately. By convention, the names of the translated source files are the names of the input sources with a single "_" character at the end. But a new makefile can use any naming convention it wants - the "_" is not critical to the build process. After being translated, the output files (the "src_.c" files) should be used for all subsequent preprocessing and compilation steps. <h2>4.3 The makeheaders preprocessor</h2> For each C source module "src.c", there is an automatically generated header module "src.h" that contains all of the datatype and procedure declarations needed by the source module. These header files are generated automatically by the makeheaders program. The sources to makeheaders are contained in a single file "makeheaders.c". Additional documentation on makeheaders can be found in [../src/makeheaders.html | src/makeheaders.html]. The makeheaders program is run once. It scans all input source files and generates header files for each one. Note that the sqlite3.c and shell.c source files are not scanned by makeheaders. Makeheaders only runs over "ordinary" source files, not the exceptional source files. However, makeheaders also uses some extra header files as input. The general format is like this: <blockquote><pre> makeheaders src_.c:src.h sqlite3.h th.h VERSION.h </pre></blockquote> In the example above the "src_.c" and "src.h" names represent all of the (79) ordinary C source files, each as a separate argument.
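To give a feel for what the translate step does, here is a hypothetical fragment written in the "@" style used throughout the Fossil sources, followed by a simplified sketch of the kind of code the preprocessor emits. The function name is invented and the generated code is abbreviated, so treat this only as an illustration of the idea, not as the exact output:

<blockquote><pre>
/* Before translation: an "@" block inside a page-generating function */
void hello_page(void){
  @ <h1>Hello</h1>
  @ <p>%d(2+3) things</p>
}

/* Roughly what translation produces: printf-style output calls */
void hello_page(void){
  cgi_printf("<h1>Hello</h1>\n");
  cgi_printf("<p>%d things</p>\n", 2+3);
}
</pre></blockquote>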
When compiling the shell.c source file, these macros are required: * -Dmain=sqlite3_main * -DSQLITE_OMIT_LOAD_EXTENSION=1 The "main()" routine in the shell must be changed into sqlite3_main() to prevent it from colliding with the real main() in Fossil, and to give Fossil an entry point to jump to when the [/help/sqlite3 | fossil sql] command is invoked. All the other source code files can be compiled without any special options. <h1>6.0 Linkage</h1> Fossil needs to be linked against [http://www.zlib.net | zlib]. If the HTTPS option is enabled, then it will also need to link against the appropriate SSL implementation. And, of course, Fossil needs to link against the standard C library. No other libraries or external dependencies are used.
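As a rough illustration of the final link step, assuming gcc, object files collected in bld/, and system copies of zlib and OpenSSL (all of these names are illustrative and will vary with your makefile and platform):

<blockquote><pre>
# without the HTTPS option
gcc -o fossil bld/*.o -lz

# with the HTTPS option enabled
gcc -o fossil bld/*.o -lz -lssl -lcrypto
</pre></blockquote>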
Changes to www/mkdownload.tcl.
1 2 | #!/usr/bin/tclsh # | | > > | | > > > | < < | | < | | | < < < < < | < | | | | | > | | | | | > > > > > > | > | | > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | #!/usr/bin/tclsh # # Run this script to build the "download.html" page. Also generate # the fossil_download_checksums.html page. # # set out [open download.html w] puts $out \ {<html> <head> <title>Fossil: Downloads</title> <link rel="stylesheet" href="/fossil/style.css" type="text/css" media="screen"> </head> <body> <div class="header"> <div class="logo"> <img src="/fossil/doc/tip/www/fossil_logo_small.gif" alt="logo"> </div> <div class="title">Fossil Downloads</div> </div> <div class="mainmenu"><a href='/fossil/doc/tip/www/index.wiki'>Home</a><a href='/fossil/timeline'>Timeline</a><a href='/fossil/brlist'>Branches</a><a href='/fossil/taglist'>Tags</a><a href='/fossil/reportlist'>Tickets</a><a href='/fossil/wiki'>Wiki</a><a href='/fossil/login'>Login</a></div> <div class="content"> <p> <center><font size=4> <b>To install Fossil →</b> download the stand-alone executable and put it on your $PATH. </font><p><small> RPMs available <a href="http://download.opensuse.org/repositories/home:/rmax:/fossil/"> here.</a> Cryptographic checksums for download files are <a href="http://www.hwaci.com/fossil_download_checksums.html">here</a>. </small></p> </center> <table cellpadding="10"> } # Find all all unique timestamps. 
# foreach file [glob -nocomplain download/fossil-*.zip] { if {[regexp {(\d+).zip$} $file all datetime] && [string length $datetime]>=14} { set adate($datetime) 1 } } # Do all dates from newest to oldest # foreach datetime [lsort -decr [array names adate]] { set dt [string range $datetime 0 3]-[string range $datetime 4 5]- append dt "[string range $datetime 6 7] " append dt "[string range $datetime 8 9]:[string range $datetime 10 11]:" append dt "[string range $datetime 12 13]" set link [string map {{ } +} $dt] set hr http://www.fossil-scm.org/fossil/timeline?c=$link&y=ci puts $out "<tr><td colspan=6 align=left><hr>" puts $out "<center><b><a href=\"$hr\">$dt</a></b></center>" puts $out "</td></tr>" foreach {prefix suffix img desc} { fossil-linux-x86 zip linux.gif {Linux x86} fossil-linux-amd64 zip linux64.gif {Linux x86_64} fossil-macosx-x86 zip mac.gif {Mac 10.5 x86} fossil-openbsd-x86 zip openbsd.gif {OpenBSD 4.7 x86} fossil-w32 zip win32.gif {Windows} fossil-src tar.gz src.gif {Source Tarball} } { set filename download/$prefix-$datetime.$suffix if {[file exists $filename]} { set size [file size $filename] set units bytes if {$size>1024*1024} { set size [format %.2f [expr {$size/(1024.0*1024.0)}]] set units MiB } elseif {$size>1024} { set size [format %.2f [expr {$size/(1024.0)}]] set units KiB } puts $out "<td align=center valign=bottom><a href=\"$filename\">" puts $out "<img src=\"build-icons/$img\" border=0><br>$desc</a><br>" puts $out "$size $units</td>" } else { puts $out "<td> </td>" } } puts $out "</tr>" if {[file exists download/releasenotes-$datetime.html]} { puts $out "<tr><td colspan=6 align=left>" set rn [open download/releasenotes-$datetime.html] puts $out "[read $rn]" close $rn puts $out "</td></tr>" } } puts $out "<tr><td colspan=5><hr></td></tr>" puts $out {</table> </body> </html> } close $out # Generate the checksum page # set out [open fossil_download_checksums.html w] puts $out {<html> <title>Fossil Download Checksums</title> <body> <h1 align="center">Checksums For Fossil Downloads</h1> <p>The following table shows the SHA1 checksums for the precompiled binaries available on the <a href="http://www.fossil-scm.org/download.html">Fossil website</a>.</p> <pre>} foreach file [lsort [glob -nocomplain download/fossil-*.zip]] { set sha1sum [lindex [exec sha1sum $file] 0] puts $out "$sha1sum [file tail $file]" } puts $out {</pre></body></html} close $out |
Added www/mkindex.tcl.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | #!/bin/sh # # Run this TCL script to generate a WIKI page that contains a # permuted index of the various documentation files. # # tclsh mkindex.tcl >permutedindex.wiki # set doclist { bugtheory.wiki {Bug Tracking In Fossil} branching.wiki {Branching, Forking, Merging, and Tagging} build.wiki {Building and Installing Fossil} checkin_names.wiki {Checkin And Version Names} copyright-release.html {Contributor License Agreement} concepts.wiki {Fossil Core Concepts} contribute.wiki {Contributing Code or Documentation To The Fossil Project} custom_ticket.wiki {Customizing The Ticket System} delta_encoder_algorithm.wiki {Fossil Delta Encoding Algorithm} delta_format.wiki {Fossil Delta Format} embeddeddoc.wiki {Embedded Project Documentation} event.wiki {Events} faq.wiki {Frequently Asked Questions} fileformat.wiki {Fossil File Format} fossil-v-git.wiki {Fossil Versus Git} index.wiki {Home Page} inout.wiki {Import And Export To And From Git} makefile.wiki {The Fossil Build Process} password.wiki {Password Management And Authentication} pop.wiki {Principles Of Operations} private.wiki {Creating, Syncing, and Deleting Private Branches} qandc.wiki {Questions And Criticisms} quickstart.wiki {Fossil Quick Start Guide} quotes.wiki {Quotes: What People Are Saying About Fossil, Git, and DVCSes in General} ../test/release-checklist.wiki {Pre-Release Testing Checklist} selfcheck.wiki {Fossil Repository Integrity Self Checks} selfhost.wiki {Fossil Self Hosting Repositories} server.wiki {How To Configure A Fossil Server} shunning.wiki {Shunning: Deleting Content From Fossil} stats.wiki {Performance Statistics} style.wiki {Source Code Style Guidelines} sync.wiki {The Fossil Sync Protocol} tech_overview.wiki {A Technical Overview Of The Design And Implementation Of Fossil} tech_overview.wiki {SQLite Databases Used By Fossil} theory1.wiki {Thoughts On The Design Of The Fossil DVCS} webui.wiki {The Fossil Web Interface} wikitheory.wiki {Wiki In Fossil} } set permindex {} set stopwords {fossil and a in of on the to are about used by} foreach {file title} $doclist { set n [llength $title] lappend permindex [list $title $file] for {set i 0} {$i<$n-1} {incr i} { set prefix [lrange $title 0 $i] set suffix [lrange $title [expr {$i+1}] end] set firstword [string tolower [lindex $suffix 0]] if {[lsearch $stopwords $firstword]<0} { lappend permindex [list "$suffix — $prefix" $file] } } } set permindex [lsort -dict $permindex] puts "<title>Permuted Index Of Fossil Documentation</title>" puts "<nowiki>" puts "<ul>" foreach entry $permindex { foreach {title file} $entry break puts "<li><a href=\"$file\">$title</a></li>" } puts "</ul>" |
Added www/permutedindex.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 | <title>Permuted Index Of Fossil Documentation</title> <nowiki> <ul> <li><a href="event.wiki">Events</a></li> <li><a href="tech_overview.wiki">A Technical Overview Of The Design And Implementation Of Fossil</a></li> <li><a href="copyright-release.html">Agreement — Contributor License</a></li> <li><a href="delta_encoder_algorithm.wiki">Algorithm — Fossil Delta Encoding</a></li> <li><a href="faq.wiki">Asked Questions — Frequently</a></li> <li><a href="password.wiki">Authentication — Password Management And</a></li> <li><a href="private.wiki">Branches — Creating, Syncing, and Deleting Private</a></li> <li><a href="branching.wiki">Branching, Forking, Merging, and Tagging</a></li> <li><a href="bugtheory.wiki">Bug Tracking In Fossil</a></li> <li><a href="makefile.wiki">Build Process — The Fossil</a></li> <li><a href="build.wiki">Building and Installing Fossil</a></li> <li><a href="checkin_names.wiki">Checkin And Version Names</a></li> <li><a href="../test/release-checklist.wiki">Checklist — Pre-Release Testing</a></li> <li><a href="selfcheck.wiki">Checks — Fossil Repository Integrity Self</a></li> <li><a href="contribute.wiki">Code or Documentation To The Fossil Project — Contributing</a></li> <li><a href="style.wiki">Code Style Guidelines — Source</a></li> <li><a href="concepts.wiki">Concepts — Fossil Core</a></li> <li><a href="server.wiki">Configure A Fossil Server — How To</a></li> <li><a href="shunning.wiki">Content From Fossil — Shunning: Deleting</a></li> <li><a href="contribute.wiki">Contributing Code or Documentation To The Fossil Project</a></li> <li><a href="copyright-release.html">Contributor License Agreement</a></li> <li><a href="concepts.wiki">Core Concepts — Fossil</a></li> <li><a href="private.wiki">Creating, Syncing, and Deleting Private Branches</a></li> <li><a href="qandc.wiki">Criticisms — Questions And</a></li> <li><a href="custom_ticket.wiki">Customizing The Ticket System</a></li> <li><a href="tech_overview.wiki">Databases Used By Fossil — SQLite</a></li> <li><a href="shunning.wiki">Deleting Content From Fossil — Shunning:</a></li> <li><a href="private.wiki">Deleting Private Branches — Creating, Syncing, and</a></li> <li><a href="delta_encoder_algorithm.wiki">Delta Encoding Algorithm — Fossil</a></li> <li><a href="delta_format.wiki">Delta Format — Fossil</a></li> <li><a href="tech_overview.wiki">Design And Implementation Of Fossil — A Technical Overview Of The</a></li> <li><a href="theory1.wiki">Design Of The Fossil DVCS — Thoughts On The</a></li> <li><a href="embeddeddoc.wiki">Documentation — Embedded Project</a></li> <li><a href="contribute.wiki">Documentation To The Fossil Project — Contributing Code or</a></li> <li><a href="theory1.wiki">DVCS — Thoughts On The Design Of The Fossil</a></li> <li><a href="quotes.wiki">DVCSes in General — Quotes: What People Are Saying About Fossil, Git, and</a></li> <li><a href="embeddeddoc.wiki">Embedded 
Project Documentation</a></li> <li><a href="delta_encoder_algorithm.wiki">Encoding Algorithm — Fossil Delta</a></li> <li><a href="inout.wiki">Export To And From Git — Import And</a></li> <li><a href="fileformat.wiki">File Format — Fossil</a></li> <li><a href="branching.wiki">Forking, Merging, and Tagging — Branching,</a></li> <li><a href="delta_format.wiki">Format — Fossil Delta</a></li> <li><a href="fileformat.wiki">Format — Fossil File</a></li> <li><a href="concepts.wiki">Fossil Core Concepts</a></li> <li><a href="delta_encoder_algorithm.wiki">Fossil Delta Encoding Algorithm</a></li> <li><a href="delta_format.wiki">Fossil Delta Format</a></li> <li><a href="fileformat.wiki">Fossil File Format</a></li> <li><a href="quickstart.wiki">Fossil Quick Start Guide</a></li> <li><a href="selfcheck.wiki">Fossil Repository Integrity Self Checks</a></li> <li><a href="selfhost.wiki">Fossil Self Hosting Repositories</a></li> <li><a href="fossil-v-git.wiki">Fossil Versus Git</a></li> <li><a href="quotes.wiki">Fossil, Git, and DVCSes in General — Quotes: What People Are Saying About</a></li> <li><a href="faq.wiki">Frequently Asked Questions</a></li> <li><a href="shunning.wiki">From Fossil — Shunning: Deleting Content</a></li> <li><a href="inout.wiki">From Git — Import And Export To And</a></li> <li><a href="quotes.wiki">General — Quotes: What People Are Saying About Fossil, Git, and DVCSes in</a></li> <li><a href="fossil-v-git.wiki">Git — Fossil Versus</a></li> <li><a href="inout.wiki">Git — Import And Export To And From</a></li> <li><a href="quotes.wiki">Git, and DVCSes in General — Quotes: What People Are Saying About Fossil,</a></li> <li><a href="quickstart.wiki">Guide — Fossil Quick Start</a></li> <li><a href="style.wiki">Guidelines — Source Code Style</a></li> <li><a href="index.wiki">Home Page</a></li> <li><a href="selfhost.wiki">Hosting Repositories — Fossil Self</a></li> <li><a href="server.wiki">How To Configure A Fossil Server</a></li> <li><a href="tech_overview.wiki">Implementation Of Fossil — A Technical Overview Of The Design And</a></li> <li><a href="inout.wiki">Import And Export To And From Git</a></li> <li><a href="build.wiki">Installing Fossil — Building and</a></li> <li><a href="selfcheck.wiki">Integrity Self Checks — Fossil Repository</a></li> <li><a href="webui.wiki">Interface — The Fossil Web</a></li> <li><a href="copyright-release.html">License Agreement — Contributor</a></li> <li><a href="password.wiki">Management And Authentication — Password</a></li> <li><a href="branching.wiki">Merging, and Tagging — Branching, Forking,</a></li> <li><a href="checkin_names.wiki">Names — Checkin And Version</a></li> <li><a href="pop.wiki">Operations — Principles Of</a></li> <li><a href="contribute.wiki">or Documentation To The Fossil Project — Contributing Code</a></li> <li><a href="tech_overview.wiki">Overview Of The Design And Implementation Of Fossil — A Technical</a></li> <li><a href="index.wiki">Page — Home</a></li> <li><a href="password.wiki">Password Management And Authentication</a></li> <li><a href="quotes.wiki">People Are Saying About Fossil, Git, and DVCSes in General — Quotes: What</a></li> <li><a href="stats.wiki">Performance Statistics</a></li> <li><a href="../test/release-checklist.wiki">Pre-Release Testing Checklist</a></li> <li><a href="pop.wiki">Principles Of Operations</a></li> <li><a href="private.wiki">Private Branches — Creating, Syncing, and Deleting</a></li> <li><a href="makefile.wiki">Process — The Fossil Build</a></li> <li><a href="contribute.wiki">Project — Contributing 
Code or Documentation To The Fossil</a></li> <li><a href="embeddeddoc.wiki">Project Documentation — Embedded</a></li> <li><a href="sync.wiki">Protocol — The Fossil Sync</a></li> <li><a href="faq.wiki">Questions — Frequently Asked</a></li> <li><a href="qandc.wiki">Questions And Criticisms</a></li> <li><a href="quickstart.wiki">Quick Start Guide — Fossil</a></li> <li><a href="quotes.wiki">Quotes: What People Are Saying About Fossil, Git, and DVCSes in General</a></li> <li><a href="selfhost.wiki">Repositories — Fossil Self Hosting</a></li> <li><a href="selfcheck.wiki">Repository Integrity Self Checks — Fossil</a></li> <li><a href="quotes.wiki">Saying About Fossil, Git, and DVCSes in General — Quotes: What People Are</a></li> <li><a href="selfcheck.wiki">Self Checks — Fossil Repository Integrity</a></li> <li><a href="selfhost.wiki">Self Hosting Repositories — Fossil</a></li> <li><a href="server.wiki">Server — How To Configure A Fossil</a></li> <li><a href="shunning.wiki">Shunning: Deleting Content From Fossil</a></li> <li><a href="style.wiki">Source Code Style Guidelines</a></li> <li><a href="tech_overview.wiki">SQLite Databases Used By Fossil</a></li> <li><a href="quickstart.wiki">Start Guide — Fossil Quick</a></li> <li><a href="stats.wiki">Statistics — Performance</a></li> <li><a href="style.wiki">Style Guidelines — Source Code</a></li> <li><a href="sync.wiki">Sync Protocol — The Fossil</a></li> <li><a href="private.wiki">Syncing, and Deleting Private Branches — Creating,</a></li> <li><a href="custom_ticket.wiki">System — Customizing The Ticket</a></li> <li><a href="branching.wiki">Tagging — Branching, Forking, Merging, and</a></li> <li><a href="tech_overview.wiki">Technical Overview Of The Design And Implementation Of Fossil — A</a></li> <li><a href="../test/release-checklist.wiki">Testing Checklist — Pre-Release</a></li> <li><a href="makefile.wiki">The Fossil Build Process</a></li> <li><a href="sync.wiki">The Fossil Sync Protocol</a></li> <li><a href="webui.wiki">The Fossil Web Interface</a></li> <li><a href="theory1.wiki">Thoughts On The Design Of The Fossil DVCS</a></li> <li><a href="custom_ticket.wiki">Ticket System — Customizing The</a></li> <li><a href="bugtheory.wiki">Tracking In Fossil — Bug</a></li> <li><a href="checkin_names.wiki">Version Names — Checkin And</a></li> <li><a href="fossil-v-git.wiki">Versus Git — Fossil</a></li> <li><a href="webui.wiki">Web Interface — The Fossil</a></li> <li><a href="quotes.wiki">What People Are Saying About Fossil, Git, and DVCSes in General — Quotes:</a></li> <li><a href="wikitheory.wiki">Wiki In Fossil</a></li> </ul> |
Added www/private.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | <title>Private Branches</title> By default, everything you check into a Fossil repository is shared to all clones of that repository. In Fossil, you don't push and pull individual branches; you push and pull everything all at once. But sometimes users want to keep some private work that is not shared with others. This might be a preliminary or experimental change that needs further refinement before it is shared and which might never be shared at all. To do this in Fossil, simply commit the change with the --private command-line option: <blockquote><pre> fossil commit --private </pre></blockquote> The --private option causes Fossil to put the check-in in a new branch named "private". That branch will not participate in subsequent clone, sync, push, or pull operations. The branch will remain on the one local repository where it was created. Note that you only use the --private option for the first checkin that creates the private branch. Additional checkins into the private branch remain private automatically. <h2>Publishing Private Changes</h2> After additional work, one might desire to publish the changes associated with a private branch. The usual way to do this is to merge those changes into a public branch. For example: <blockquote><pre> fossil update trunk fossil merge private fossil commit </pre></blockquote> The private branch remains private. (There is no way to convert a private branch into a public branch.) But all of the changes associated with the private branch are now folded into the public branch and are hence visible to other users of the project. <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform application and have separate repositories on your windows laptop, your linux desktop, and your iMac. You can transfer private branches between these machines by using the --private option on the "sync", "push", "pull", and "clone" commands. For example, if you are running "fossil server" on your linux box and you want to clone that repository to your Mac, including all private branches, use: <blockquote><pre> fossil clone --private http://user@linux.localnetwork:8080/ mac-clone.fossil </pre></blockquote> You'll have to supply a username and password in order for this to work. Fossil will not clone (or sync) private branches anonymously. Furthermore, you have to enable the "Private" capability (the "x" capability) in order to enable private branch syncing. The Admin ("a") and Superuser ("s") do <u>not</u> imply Private ("x"); you'll have to set "x" in addition to "a" or "s". This is a little extra work, but there is a purpose here: the interface is designed to make it very difficult to accidently sync a private branch to a public server. 
It is highly recommended that you leave the "x" capability turned off on all repositories used for collaboration (repositories to which many people push and pull) and only enable "x" for local repositories when you need to share private branches. Private branch sync only works if you use the --private command-line option. Private branches are never synced via the auto-sync mechanism. Once again, this restriction is designed to make it hard to accidentally push private branches beyond their intended audience. <h2>Purging Private Branches</h2> You can remove all private branches from a repository using this command: <blockquote><pre> fossil scrub --private </pre></blockquote> Note that the above is a permanent and irreversible change. You will be asked to confirm before continuing. Once the private branches are removed, they cannot be retrieved (unless you have synced them to another repository), so be careful with this command. <h2>Additional Notes</h2> All of the features above apply to <u>all</u> private branches in a single repository at once. There is no mechanism in Fossil (currently) that allows you to push, pull, clone, sync, or scrub an individual private branch within a repository that contains multiple private branches. |
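For reference, the complete life cycle of a private branch can be collected into one short sketch; every command shown here is taken directly from the sections above:

<blockquote><pre>
fossil commit --private      # first check-in; creates the "private" branch
fossil commit                # later check-ins stay on the private branch
fossil update trunk          # to publish the work, merge it into a public branch
fossil merge private
fossil commit
</pre></blockquote>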
Changes to www/quickstart.wiki.
1 | <title>Fossil Quick Start Guide</title> | < | < < | | | | < | | | | | | < < | | < < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | <title>Fossil Quick Start Guide</title> <h1 align="center">Fossil Quick Start</h1> <p>This is a guide to get you started using fossil quickly and painlessly.</p> <h2>Installing</h2> <p>Fossil is a single self-contained C program. You need to either download a <a href="http://www.fossil-scm.org/download.html">precompiled binary</a> or <a href="build.wiki">build it yourself</a> from sources. Install fossil by putting the fossil binary someplace on your PATH environment variable.</p> <a name="fslclone"></a> <h2>General Work Flow</h2> <p>Fossil works with repository files (a database with the project's complete history) and with checked-out local trees (the working directory you use to do your work). The workflow looks like this:</p> <ul> <li>Create or clone a repository file. ([/help/new|fossil new] or [/help/clone | fossil clone]) <li>Check out a local tree. ([/help/open | fossil open]) <li>Perform operations on the repository (including repository configuration). <li><em>Optionally</em> close the local tree. ([/help/close | fossil close], but this is rarely used.) </ul> <p>The following sections will give you a brief overview of these operations.</p> <h2>Starting A New Project</h2> <p>To start a new project with fossil, create a new empty repository this way: ([/help/new | more info]) </p> <blockquote> <b>fossil new </b><i> repository-filename</i> </blockquote> <h2>Cloning An Existing Repository</h2> <p>Most fossil operations interact with a repository that is on the local disk drive, not on a remote system. Hence, before accessing a remote repository it is necessary to make a local copy of that repository. Making a local copy of a remote repository is called "cloning".</p> <p>Clone a remote repository as follows: ([/help/clone | more info])</p> <blockquote> <b>fossil clone</b> <i>URL repository-filename</i> </blockquote> <p>The <i>URL</i> above is the http URL for the fossil repository you want to clone, and it may include a "user:password" part, e.g. |
︙ | ︙ | |||
80 81 82 83 84 85 86 | which in the example above is named "myclone.fossil". You can name your repositories anything you want. The ".fossil" suffix is not required.</p> <p>Note: If you are behind a restrictive firewall, you might need to <a href="#proxy">specify an HTTP proxy</a> to use.</p> | > > > > > > > | | | | | > | < | | | | | > | | | | | | | < < < | > | | | | | | | | | > | | > > | | < < | | | | > > | > > > > > > > > > > > > > > > > > > > > | | | | > | | | | | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 | which in the example above is named "myclone.fossil". You can name your repositories anything you want. The ".fossil" suffix is not required.</p> <p>Note: If you are behind a restrictive firewall, you might need to <a href="#proxy">specify an HTTP proxy</a> to use.</p> <h2>Importing From Another Version Control System</h2> <p>Rather than start a new project, or clone an existing Fossil project, you might prefer to <a href="./inout.wiki">import an existing Git project</a> into Fossil using the [/help/import | fossil import] command. <h2>Checking Out A Local Tree</h2> <p>To work on a project in fossil, you need to check out a local copy of the source tree. Create the directory you want to be the root of your tree and cd into that directory. Then do this: ([/help/open | more info])</p> <blockquote> <b>fossil open </b><i> repository-filename</i> </blockquote> <p>This leaves you with the newest version of the tree checked out. From anywhere underneath the root of your local tree, you can type commands like the following to find out the status of your local tree:</p> <blockquote> <b>[/help/info | fossil info]</b><br> <b>[/help/status | fossil status]</b><br> <b>[/help/changes | fossil changes]</b><br> <b>[/help/diff | fossil diff]</b><br> <b>[/help/timeline | fossil timeline]</b><br> <b>[/help/ls | fossil ls]</b><br> <b>[/help/branch | fossil branch]</b><br> </blockquote> <h2>Configuring Your Local Repository</h2> <p>When you create a new repository, either by cloning an existing project or create a new project of your own, you usually want to do some local configuration. This is easily accomplished using the web-server that is built into fossil. Start the fossil webserver like this: ([/help/ui | more info])</p> <blockquote> <b>fossil ui </b><i> repository-filename</i> </blockquote> <p>You can omit the <i>repository-filename</i> from the command above if you are inside a checked-out local tree.</p> <p>This starts a web server then automatically launches your web browser and makes it point to this web server. If your system has an unusual configuration, fossil might not be able to figure out how to start your web browser. 
In that case, first tell fossil where to find your web browser using a command like this:</p> <blockquote> <b>fossil setting web-browser </b><i> path-to-web-browser</i> </blockquote> <p>By default, fossil does not require a login for HTTP connections coming in from the IP loopback address 127.0.0.1. You can, and perhaps should, change this after you create a few users.</p> <p>When you are finished configuring, just press Control-C or use the <b>kill</b> command to shut down the mini-server.</p> <h2>Making Changes</h2> <p>To add new files to your project, or remove old files, use these commands:</p> <blockquote> <b>[/help/add | fossil add]</b> <i>file...</i><br> <b>[/help/rm | fossil rm]</b> <i>file...</i> </blockquote> <p>You can also edit files freely. Once you are ready to commit your changes, type:</p> <blockquote> <b>[/help/commit | fossil commit]</b> </blockquote> <p>You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable.</p> <h2>Sharing Changes</h2> <p>The changes you [/help/commit | commit] are only on your local repository. To share those changes with other repositories, do:</p> <blockquote> <b>[/help/push | fossil push]</b> <i>URL</i> </blockquote> <p>Where <i>URL</i> is the http: URL of the server repository you want to share your changes with. If you omit the <i>URL</i> argument, fossil will use whatever server you most recently synced with.</p> <p>The [/help/push | push] command only sends your changes to others. To receive changes from others, use [/help/pull | pull]. Or go both ways at once using [/help/sync | sync]:</p> <blockquote> <b>[/help/pull | fossil pull]</b> <i>URL</i><br> <b>[/help/sync | fossil sync]</b> <i>URL</i> </blockquote> <p>When you pull in changes from others, they go into your repository, not into your checked-out local tree. To get the changes into your local tree, use [/help/update | update]:</p> <blockquote> <b>[/help/update | fossil update]</b> <i>VERSION</i> </blockquote> <p>The <i>VERSION</i> can be the name of a branch or tag, any abbreviation of the 40-character artifact identifier for a particular check-in, or a date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on.</p> <h2>Branching And Merging</h2> <p>You can create branches by doing multiple commits off of the same base version. To merge two branches back together, first [/help/update | update] to the leaf of one branch. Then do a [/help/merge | merge] of the leaf of the other branch:</p> <blockquote> <b>[/help/merge | fossil merge]</b> <i>VERSION</i> </blockquote> <p>The <i>VERSION</i> can be any of the forms allowed for [/help/update | update]. After performing the merge, you will normally want to test it to make sure it does not break anything, then [/help/commit | commit] your changes. In the default configuration, the [/help/commit|commit] command will also automatically [/help/push|push] your changes, but that feature can be disabled. (More information about [./concepts.wiki#workflow|autosync] and how to disable it.)
Remember that your coworkers cannot see your changes until you commit and push them.</p> <p>The merge command has options to cherrypick individual changes, or to back out individual changes.</p> <p>If a merge or update doesn't work out (perhaps something breaks or there are many merge conflicts) then you can back out of it using:</p> <blockquote> <b>[/help/undo | fossil undo]</b> </blockquote> <p>This will back out the changes that the merge or update made to the working checkout. There is also a [/help/redo|redo] command if you undo by mistake. Undo and redo only work for changes that have not yet been checked in using commit and there is only a single level of undo/redo.</p> <a name="serversetup"></a> <h2>Setting Up A Server</h2> <p>The easiest way to set up a server is:</p> <blockquote> <b>[/help/server | fossil server]</b> <i>repository-filename</i> </blockquote> <p>Or</p> <blockquote> <b>[/help/ui | fossil ui]</b> <i>repository-filename</i> </blockquote> <p>The <b>ui</b> command is intended for accessing the web interface from a local desktop. The <b>ui</b> command binds to the loopback IP address only (and thus makes the web interface visible only on the local machine) and it automatically starts your web browser pointing at the server. For cross-machine collaboration, use the <b>server</b> command, |
︙ | ︙ | |||
275 276 277 278 279 280 281 | <p>Adjust the paths to suit your installation, of course. Notice that fossil runs as root. This is not required - you can run it as an unprivileged user. But it is more secure to run fossil as root. When you do run fossil as root, it automatically puts itself in a chroot jail in the same directory as the repository, then drops root privileges prior to reading any information from the request.</p> | > | | | 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 | <p>Adjust the paths to suit your installation, of course. Notice that fossil runs as root. This is not required - you can run it as an unprivileged user. But it is more secure to run fossil as root. When you do run fossil as root, it automatically puts itself in a chroot jail in the same directory as the repository, then drops root privileges prior to reading any information from the request.</p> <a name="proxy"></a> <h2>HTTP Proxies</h2> <p>If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using a command-line option on commands that use the network, <b>sync</b>, <b>clone</b>, <b>push</b>, and <b>pull</b>.</p> <blockquote> <b>fossil clone </b><i>URL</i> <b>--proxy</b> <i>Proxy-URL</i> </blockquote> <p>It is annoying to have to type in the proxy URL every time you sync your project, though, so you can make the proxy configuration persistent using the [/help/setting | setting] command:</p> <blockquote> <b>fossil setting proxy </b><i>Proxy-URL</i> </blockquote> <p>Or, you can set the "<b>http_proxy</b>" environment variable:</p> |
︙ | ︙ | |||
318 319 320 321 322 323 324 | is easily done on the command-line. For example, to sync with a co-worker's repository on your LAN, you might type:</p> <blockquote> <b>fossil sync http://192.168.1.36:8080/ --proxy off</b> </blockquote> | | < | < < < < < < < < | 341 342 343 344 345 346 347 348 349 350 351 352 | is easily done on the command-line. For example, to sync with a co-worker's repository on your LAN, you might type:</p> <blockquote> <b>fossil sync http://192.168.1.36:8080/ --proxy off</b> </blockquote> <h2>More Hints</h2> <p>A [/help | complete list of commands] is available. <p>Explore and have fun!</p> |
Added www/quotes.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git:</h2> <ol> <li>Git approaches the useability of iptables, which is to say, utterly unusable unless you have the manpage tattooed on you arm. <blockquote> <i>by mml at [http://news.ycombinator.com/item?id=1433387]</i> </blockquote> <li><nowiki>It's simplest to think of the state of your [git] repository as a point in a high-dimensional "code-space", in which branches are represented as n-dimensional membranes, mapping the spatial loci of successive commits onto the projected manifold of each cloned repository.</nowiki> <blockquote> <i>At [http://tartley.com/?p=1267]</i> </blockquote> <li>We've been using git and github for a few months now, and it's not intuitive... I'm hoping someone will make a set of standard wrappers/GUI for making git bearable. <blockquote> <i>maro at [http://news.ycombinator.com/item?id=1433387]</i> </blockquote> </ol> <h2>On The Usability Of Fossil:</h2> <ol> <li value=4> Fossil mesmerizes me with simplicity especially after I struggled to get a bug-tracking system to work with mercurial. <blockquote> <i>rawjeev at [http://stackoverflow.com/questions/156322/what-do-people-think-of-the-fossil-dvcs]</i> </blockquote> <li>Fossil is awesome!!! I have never seen an app like that before, such simplicity and flexibility!!! <blockquote> <i>zengr at [http://stackoverflow.com/questions/138621/best-version-control-for-lone-developer]</i> </blockquote> </ol> <h2>On Git Versus Fossil</h2> <ol> <li value=6> Just want to say thanks for fossil making my life easier.... Also <nowiki>[for]</nowiki> not having a misanthropic command line interface. <blockquote> <i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i> </blockquote> <li>We use it at a large university to manage code that small teams write. The runs everywhere, ease of installation and portability is something that seems to be a good fit with the environment we have (highly ditrobuted, sometimes very restrictive firewalls, OSX/Win/Linux). We are happy with it and teaching a Msc/Phd student (read complete novice) fossil has just been a smoother ride than Git was. <blockquote> <i>viablepanic at [http://www.reddit.com/r/programming/comments/bxcto/why_not_fossil_scm/]</i> </blockquote> </ol> |
Changes to www/reviews.wiki.
|
| > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | <title>Reviews</title> <b>External links:</b> * [http://sheddingbikes.com/posts/1276624594.html | Why I Use Fossil] by Zed A. Shaw. * [http://nixtu.blogspot.com/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] by Juho Vepsäläinen. * [http://blog.fupps.com/2010/12/04/exploring-the-fossil-dvcs | Exploring the Fossil DVCS] by Jan-Piet Mens. * [http://blog.mired.org/2011/02/fossil-sweet-spot-in-vcs-space.html | Fossil - a sweet spot in the VCS space] by Mike Meyer. <b>See Also:</b> * [./quotes.wiki | Short Quotes on Fossil, Git, And DVCSes] <b>Daniel writes on 2009-01-06:</b> <blockquote> The reasons I use fossil are that it's the only version control I have found that I can get working through the VERY annoying MS firewalls at work.. (albeit through an ntlm proxy) and I just love single .exe applications! </blockquote> <b>Joshua Paine on 2010-10-22:</b> <blockquote> With one of my several hats on, I'm in a small team using git. Another team member just checked some stuff into trunk that should have been on a branch. Nothing else had happened since, so in fossil I would have just edited that commit and put it on a new branch. In git that can't actually be done without danger once other people have pulled, so I had to create a new commit rolling back the changes, then branch and cherry pick the earlier changes, then figure out how to make my new branch shared instead of private. Just want to say thanks for fossil making my life easier on most of my projects, and being able to move commits to another branch after the fact and shared-by-default branches are good features. Also not having a misanthropic command line interface. </blockquote> <b>Stephen Beal writes on 2009-01-11:</b> <blockquote> Sometime in late 2007 I came across a link to fossil on <a href="http://www.sqlite.org/">sqlite.org</a>. It was a good thing I bookmarked it, because I was never able to find the link again (it might have been in a bug report or something). The |
︙ | ︙ |
Changes to www/selfcheck.wiki.
︙ | ︙ | |||
11 12 13 14 15 16 17 | Fossil has been hosting itself and many other projects for years now. Many bugs have been encountered. But, thanks in large part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.</p> <h2>Atomic Check-ins With Rollback</h2> | | | > > | > | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | Fossil has been hosting itself and many other projects for years now. Many bugs have been encountered. But, thanks in large part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.</p> <h2>Atomic Check-ins With Rollback</h2> The fossil repository is stored in an <a href="http://www.sqlite.org/">SQLite</a> database file. ([./tech_overview.wiki | Additional information] about the repository file format.) SQLite is very mature and stable and has been in wide-spread use for many years, so we are confident it will not cause repository corruption. SQLite databases do not corrupt even if a program crash, system crash, or power failure occurs in the middle of an update. If some kind of crash does occur in the middle of a change, then all the changes are rolled back the next time that the database is accessed. A check-in operation in fossil makes many changes to the repository database. But all these changes happen within a single transaction. If something goes wrong in the middle of the commit, even if that something is a power failure or OS crash, then the transaction is rolled back and the database is unchanged. <h2>Verification Of Delta Encodings Prior To Transaction Commit</h2> The content files that comprise the global state of a fossil repository are stored in the repository as a tree. The leaves of the tree are stored as zlib-compressed BLOBs. Interior nodes are deltas from their |
︙ | ︙ | |||
64 65 66 67 68 69 70 | files. <h2>Checksum Over All Files In A Check-in</h2> Manifest artifacts that define a check-in have two fields (the R-card and Z-card) that record MD5 hashes of the manifest itself and of all other files in the manifest. Prior to any check-in | | | | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 | files. <h2>Checksum Over All Files In A Check-in</h2> Manifest artifacts that define a check-in have two fields (the R-card and Z-card) that record MD5 hashes of the manifest itself and of all other files in the manifest. Prior to any check-in commit, these checksums are verified to ensure that the checkin agrees exactly with what is on disk. Similarly, the repository checksum is verified after a checkout to make sure that the entire repository was checked out correctly. Note that these added checks use a different hash (MD5 instead of SHA1) in order to avoid common-mode failures in the hash algorithm implementation. |
︙ | ︙ |
Changes to www/selfhost.wiki.
1 | <title>Fossil Self-Hosting Repositories</title> | < < | > | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 | <title>Fossil Self-Hosting Repositories</title> Fossil has self-hosted since 2007-07-21. As of this writing (2009-08-24) there are three publicly accessible repositories for the Fossil source code: 1. [http://www.fossil-scm.org/] 2. [http://www2.fossil-scm.org/] 3. [http://www3.fossil-scm.org/site.cgi] The canonical repository is (1). Repositories (2) and (3) automatically stay in synchronization with (1) via a <a href="http://en.wikipedia.org/wiki/Cron">cron job</a> that invokes "fossil sync" at regular intervals. Note that the two secondary repositories are more than just read-only mirrors. All three servers support full read/write capabilities. Changes (such as new tickets or wiki or check-ins) can be implemented on any of the three servers and those changes automatically propagate to the other two servers. Server (1) runs as a CGI script on a <a href="http://www.linode.com/">Linode 1024</a> located in Dallas, TX - on the same virtual machine that hosts <a href="http://www.sqlite.org/">SQLite</a> and over a dozen other smaller projects. This demonstrates that Fossil does not require much server power. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> Server (3) runs as a CGI script on a shared hosting account at <a href="http://www.he.net/">Hurricane Electric</a> in Fremont, CA. This server demonstrates the ability of Fossil to run on an economical shared-host web account with no privileges beyond port 80 HTTP access and CGI. It is not necessary to have a dedicated server to run Fossil. As far as we are aware, Fossil is the only full-featured configuration management system that can run in such a restricted environment. The CGI script that runs on the Hurricane Electric server is the same as the CGI script shown above, except that the pathnames are modified to suit the environment: <blockquote><pre> #!/home/hwaci/bin/fossil repository: /home/hwaci/fossil/fossil.fossil </pre></blockquote> Server (3) is synchronized with the canonical server (1) by running the following command via cron: <blockquote><pre> /home/hwaci/bin/fossil sync -R /home/hwaci/fossil/fossil.fossil </pre></blockquote> Server (2) is a <a href="http://www.linode.com/">Linode 512</a> located in Newark, NJ and set up just like the canonical server (1) with the addition of a cron job for synchronization as in server (3). |
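For illustration only, the cron job that drives this synchronization might look something like the following crontab entry (the half-hour interval shown is an assumption; any reasonable schedule works):

<blockquote><pre>
# sync the mirror with the canonical repository every 30 minutes
*/30 * * * * /home/hwaci/bin/fossil sync -R /home/hwaci/fossil/fossil.fossil
</pre></blockquote>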
Changes to www/server.wiki.
1 | <nowiki> | > < | > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 | <title>How To Configure A Fossil Server</title> <nowiki> <p>This guide is intended to help guide you in setting up a Fossil server.</p> <h2>Standalone server</h2><blockquote> The easiest way to set up a Fossil server is to use the <tt>server</tt> or <tt>ui</tt> command. Assuming the repository you are interested in serving is in the file "<tt>repo.fossil</tt>", you can use either of these commands to start Fossil as a server: <ul> <li><tt>fossil server repo.fossil</tt> <li><tt>fossil ui repo.fossil</tt> </ul> <p> Both of these commands start a Fossil server on port 8080 on the local machine, which can be accessed with the URL: <tt>http://localhost:8080/</tt> using any handy web browser. The difference between the two commands is that "ui", in addition to starting the Fossil server, also starts a web browser and points it to the URL mentioned above. On the other hand, the "ui" command binds to the loopback IP address only (127.0.0.1) so that the "ui" command cannot be used to serve content to a different machine. </p> <p> NOTES: <ol> <li>The option "--port NNN" will start the server on port "NNN" instead of 8080. <li>If port 8080 is already being used (perhaps by another Fossil server), then Fossil will use the next available port number. |
︙ | ︙ | |||
68 69 70 71 72 73 74 | <p> Once the script is set up correctly, and assuming your server is also set correctly, you should be able to access your repository with a URL like: <tt>http://mydomain.org/cgi-bin/repo</tt> (assuming the "repo" script is accessible under "cgi-bin", which would be a typical deployment on Apache for instance). </p> </blockquote> <h3>Serving multiple repositories with one script</h3><blockquote> <p> | | > > > | | 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | <p> Once the script is set up correctly, and assuming your server is also set correctly, you should be able to access your repository with a URL like: <tt>http://mydomain.org/cgi-bin/repo</tt> (assuming the "repo" script is accessible under "cgi-bin", which would be a typical deployment on Apache for instance). </p> </blockquote> <h3>Serving multiple repositories with one script</h3><blockquote> <p> This scenario is almost identical to the previous one. However, here we will assume you have multiple repositories in one directory. (Call the directory 'fossils'.) All repositories served, in this case, must use the ".fossil" filename suffix. As before, create a script (again, 'repo'): <blockquote><tt> #!/path-to/fossil<br> directory: /path-to-repo/fossils<br> notfound: http://url-to-go-to-if-repo-not-found/ </tt></blockquote> </p> <p> Once deployed, a URL like: <tt>http://mydomain.org/cgi-bin/repo/XYZ</tt> will serve up the repository "fossils/XYZ.fossil" (if it exists). This makes serving multiple projects on one server pretty painless. </p> </blockquote> <h2>Securing a repository with SSL</h2><blockquote> <p> Using either of the CGI script approaches, it is trivial to use SSL to secure the server. Simply set up the Fossil CGI scripts etc. as above, but modify the Apache (or IIS, etc.) server to require SSL (that is, a URL with "https://") in order to access the CGI script directory. This may also be accomplished (on Apache, at least) using appropriate ".htaccess" rules. </p>
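As a sketch only (this assumes Apache with mod_ssl enabled; other web servers need their own equivalent), a ".htaccess" file that refuses plain-HTTP access to the CGI directory could be as simple as:

<blockquote><pre>
# deny any request to this directory that did not arrive over SSL
SSLRequireSSL
</pre></blockquote>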
︙ | ︙ |
Added www/style.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | <title>Coding Style</title> Fossil source code should following the style guidelines below. <b>General points:</b>: 10. No line of code exceeds 80 characters in length. (Occasional exceptions are made for HTML text on @-lines.) 11. There are no tab characters. 12. Line terminators are \n only. Do not use a \r\n line terminator. 13. 2-space indentation is used. Remember: No tabs. 14. Comments contain no spelling or grammatical errors. (Abbreviations and sentence fragments are acceptable when trying to fit a comment on a single line as long as the meaning is clear.) 15. The tone of comments is professional and courteous. Comments contain no profanity, obscenity, or innuendo. 16. All C-code conforms to ANSI C-89. 17. All comments and identifiers are in English. 18. The program is single-threaded. Do not use threads. (One except to this is the HTTP server implementation for windows, which we do not know how to implement without the use of threads.) <b>C preprocessor macros</b>: 20. The purpose of every preprocessor macros is clearly explained in a comment associated with its definition. 21. Every preprocessor macro is used at least once. 22. The names of preprocessor macros clearly reflect their use. 23. Assumptions about the relative values of related macros are verified by asserts. Example: <tt>assert(READ_LOCK+1==WRITE_LOCK);</tt> <b>Function header comments</b>: 30. Every function has a header comment describing the purpose and use of the function. 31. Function header comment defines the behavior of the function in sufficient detail to allow the function to be reimplemented from scratch without reference to the original code. 32. Functions that perform dynamic memory allocation (either directly or indirectly via subfunctions) say so in their header comments. <b>Function bodies</b>: <ol> <li value=40> The name of a function clearly reflects its purpose. <li> Automatic variables are small, not large objects or arrays. Avoid excessive stack usage. <li> The check-list items for functions also apply to major subsections within a function. <li> All code subblocks are enclosed in {...}. <li> <b>assert() macros are used as follows </b>: <ol type="a"> <li> Function preconditions are clearly stated and verified by asserts. <li> Invariants are identified by asserts. </ol> </ol> <b>Class (struct) declarations</b>: 50. The purpose and use of every class (a.k.a. structure) is clearly defined in the header comment of its declaration. 51. The purpose and use of every class member is clearly defined either in the header comment of the class declaration or when the member is declared or both. 52. The names of class members clearly reflect their use. 53. Invariants for classes are clearly defined. <b>Variables and class instances</b>: 60. The purpose and use of every variable is defined by a comment at the variable definition. 61. The names of variables clearly reflect their use. 62. 
Related variables have related names. (ex: aSavepoint and nSavepoint.) 63. Variables have minimum practical scope. 64. Automatic variables are small, not large objects or arrays. 65. Constants are "const". 66. Invariants on variables or groups of variables are defined and tested by asserts. 67. When a variable that refers to the same value is used within multiple scopes, the same name is used in all cases. 68. When variables refer to different values, different names are used even when the names are in different scopes. 69. Variable names with wide scope are sufficiently distinctive to allow searching for them using grep. |
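To make the guidelines concrete, here is a small hypothetical routine (it is not part of the Fossil sources, and it assumes the usual assert.h include) written to follow them: a header comment that fully specifies the behavior, 2-space indentation with no tabs, lines under 80 characters, an assert for the precondition, and a commented local variable:

<blockquote><pre>
/*
** Return the number of bytes in the zero-terminated string zIn,
** not counting the terminator.  The caller must never pass NULL.
*/
static int string_length(const char *zIn){
  int n = 0;            /* Number of bytes counted so far */
  assert( zIn!=0 );     /* Precondition: zIn is never NULL */
  while( zIn[n] ) n++;
  return n;
}
</pre></blockquote>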
Changes to www/sync.wiki.
|
| | | 1 2 3 4 5 6 7 8 | <title>The Fossil Sync Protocol</title> <p>Fossil supports commands <b>push</b>, <b>pull</b>, and <b>sync</b> for transferring information from one repository to another. The command is run on the client repository. A URL for the server repository is specified as part of the command. This document describes what happens behind the scenes in order to synchronize the information on the two repositories.</p> |
︙ | ︙ | |||
221 222 223 224 225 226 227 | during a clone. This is how the client determines what project code to put in the new repository it is constructing.</p> <h3>3.5 Clone Cards</h3> <p>A clone card works like a pull card in that it is sent from client to server in order to tell the server that the client | > | | > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > | 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 | during a clone. This is how the client determines what project code to put in the new repository it is constructing.</p> <h3>3.5 Clone Cards</h3> <p>A clone card works like a pull card in that it is sent from client to server in order to tell the server that the client wants to pull content. The clone card comes in two formats. Older clients use the no-argument format and newer clients use the two-argument format.</p> <blockquote> <b>clone</b><br> <b>clone</b> <i>protocol-version sequence-number</i> </blockquote> <h4>3.5.1 Protocol 2</h4> <p>The latest clients send a two-argument clone message with a protocol version of "2". (Future versions of Fossil might use larger protocol version numbers.) The sequence-number sent is the number of artifacts received so far. For the first clone message, the sequence number is 0. The server will respond by sending file cards for some number of artifacts up to the maximum message size. <p>The server will also send a single "clone_seqno" card to the client so that the client can know where the server left off. <blockquote> <b>clone_seqno</b> <i>sequence-number</i> </blockquote> <p>The clone message in subsequent HTTP requests for the same clone operation will use the sequence-number from the clone_seqno of the previous reply.</p> <p>In response to an initial clone message, the server also sends the client a push message so that the client can discover the projectcode for this project.</p> <h4>3.5.2 Legacy Protocol</h4> <p>Older clients send a clone card with no argument. The server responds to a blank clone card by sending an "igot" card for every artifact in the repository. The client will then issue "gimme" cards to pull down all the content it needs. <p>The legacy protocol works well for smaller repositories (50MB with 50,000 artifacts) but is too slow and unwieldy for larger repositories. The version 2 protocol is an effort to improve performance. Further performance improvements with higher-numbered clone protocols are possible in future versions of Fossil. <h3>3.6 Igot Cards</h3> <p>An igot card can be sent from either client to server or from server to client in order to indicate that the sender holds a copy of a particular artifact. The format is:</p> <blockquote>
︙ | ︙ | |||
285 286 287 288 289 290 291 | cookie and the server must structure the cookie payload in such a way that it can tell if the cookie it sees is its own cookie or a cookie from another server. (Typically the server will embed its servercode as part of the cookie.)</p> <h3>3.9 Request-Configuration Cards</h3> | > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > | 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | cookie and the server must structure the cookie payload in such a way that it can tell if the cookie it sees is its own cookie or a cookie from another server. (Typically the server will embed its servercode as part of the cookie.)</p> <h3>3.9 Request-Configuration Cards</h3> <p>A request-configuration or "reqconfig" card is sent from client to server in order to request that the server send back "configuration" data. "Configuration" data is information about users or website appearance or other administrative details which are not part of the persistent and versioned state of the project. For example, the "name" of the project, the default Cascading Style Sheet (CSS) for the web-interface, and the project logo displayed on the web-interface are all configuration data elements. <p>The reqconfig card is normally sent in response to the "fossil configuration pull" command. The format is as follows: <blockquote> <b>reqconfig</b> <i>configuration-name</i> </blockquote> <p>As of this writing ([2010-11-12]), the configuration-name must be one of the following values: <center><table border=0> <tr><td valign="top"> <ul> <li> css <li> header <li> footer <li> logo-mimetype <li> logo-image <li> project-name <li> project-description <li> manifest <li> index-page <ul></td><td valign="top"><ul> <li> timeline-block-markup <li> timeline-max-comment <li> ticket-table <li> ticket-common <li> ticket-newpage <li> ticket-viewpage <li> ticket-editpage <li> ticket-reportlist <li> ticket-report-template <ul></td><td valign="top"><ul> <li> ticket-key-template <li> ticket-title-expr <li> ticket-closed-expr <li> @reportfmt <li> @user <li> @concealed <li> @shun </ul></td></tr> </table></center> <p>New configuration-names are likely to be added in future releases of Fossil. If the server receives a configuration-name that it does not understand, the entire reqconfig card is silently ignored. The reqconfig card might also be ignored if the user lacks sufficient privilege to access the requested information. <p>The configuration-names that begin with an alphabetic character refer to values in the "config" table of the server database. For example, the "logo-image" configuration item refers to the project logo image that is configured on the Admin page of the [./webui.wiki | web-interface]. The value of the configuration item is returned to the client using a "config" card. <p>If the configuration-name begins with "@", that refers to a class of values instead of a single value. The content of these configuration items is returned in a "config" card that contains pure SQL text that is intended to be evaluated by the client. 
<p>The @user and @concealed configuration items contain sensitive information and are ignored for clients without sufficient privilege. <h3>3.10 Configuration Cards</h3> <p>A "config" card is used to send configuration information from client to server (in response to a "fossil configuration push" command) or from server to client (in response to a "fossil configuration pull" or "fossil clone" command). The format is as follows: <blockquote> <b>config</b> <i>configuration-name size</i> <b>\n</b> <i>content</i> </blockquote> <p>The server will only accept a config card if the user has "Admin" privilege. A client will only accept a config card if it had sent a corresponding reqconfig card in its request. <p>The content of the configuration item is used to overwrite the corresponding configuration data in the receiver. <h3>3.11 Error Cards</h3> <p>If the server discovers anything wrong with a request, it generates an error card in its reply. When the client sees the error card, it displays an error message to the user and aborts the sync operation. An error card looks like this:</p> |
︙ | ︙ |
Added www/tech_overview.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 | <title>Technical Overview</title> <h2 align="center"> A Technical Overview<br>Of The Design And Implementation<br>Of Fossil </h2> <h2>1.0 Introduction</h2> At its lowest level, a Fossil repository consists of an unordered set of immutable "artifacts". You might think of these artifacts as "files", since in many cases the artifacts exactly correspond to source code files that are stored in the Fossil repository. But other "control artifacts" are also included in the mix. These control artifacts define the relationships between artifacts - which files go together to form a particular version of the project, who checked in that version and when, what was the check-in comment, what wiki pages are included with the project, what are the edit histories of each wiki page, what bug reports or tickets are included, who contributed to the evolution of each ticket, and so forth, and so on. This low-level file format is called the "global state" of the repository, since this is the information that is synced to peer repositories using push and pull operations. The low-level file format is also called "enduring" since it is intended to last for many years. The details of the low-level, enduring, global file format are [./fileformat.wiki | described separately]. This article is about how Fossil is currently implemented. Instead of dealing with vague abstractions of "enduring file formats" as the [./fileformat.wiki | that other document] does, this article provides some detail on how Fossil actually stores information on disk. 
<h2>2.0 Three Databases</h2> Fossil stores state information in [http://www.sqlite.org/ | SQLite] database files. SQLite keeps an entire relational database, including multiple tables and indices, in a single disk file. The SQLite library allows the database files to be efficiently queried and updated using the industry-standard SQL language. And SQLite makes updates to these database files atomic, even if a system crashes or power failure occurs in the middle of the update, meaning that repository content is protected even during severe malfunctions. Fossil uses three separate classes of SQLite databases: <ol> <li>The configuration database <li>Repository databases <li>Checkout databases </ol> The configuration database is a one-per-user database that holds global configuration information used by Fossil. There is one repository database per project. The repository database is the file that people are normally referring to when they say "a Fossil repository". The checkout database is found in the working checkout for a project and contains state information that is unique to that working checkout. Fossil does not always use all three database files. The web interface, for example, typically only uses the repository database. And the [/help/all | fossil setting] command only opens the configuration database when the --global option is used. But other commands use all three databases at once. For example, the [/help/status | fossil status] command will first locate the checkout database, then use the checkout database to find the repository database, then open the configuration database. Whenever multiple databases are used at the same time, they are all opened on the same SQLite database connection using SQLite's [http://www.sqlite.org/lang_attach.html | ATTACH] command. The chart below provides a quick summary of how each of these database files are used by Fossil, with detailed discussion following. <center><table border="1" width="80%" cellpadding="0"> <tr> <td width="33%" valign="top"> <h3 align="center">Configuration Database<br>"~/.fossil"</h3> <ul> <li>Global [/help/setting |settings] <li>List of active repositories used by the [/help/all | all] command </ul> </td> <td width="34%" valign="top"> <h3 align="center">Repository Database<br>"<i>project</i>.fossil"</h3> <ul> <li>[./fileformat.wiki | Global state of the project] encoded using delta-compression <li>Local [/help/setting|settings] <li>Web interface display preferences <li>User credentials and permissions <li>Metadata about the global state to facilitate rapid queries </ul> </td> <td width="33%" valign="top"> <h3 align="center">Checkout Database<br>"_FOSSIL_"</h3> <ul> <li>The repository database used by this checkout <li>The version currently checked out <li>Other versions [/help/merge | merged] in but not yet [/help/commit | committed] <li>Changes from the [/help/add | add], [/help/delete | delete], and [/help/rename | rename] commands that have not yet been committed <li>"mtime" values and other information used to efficiently detect local edits <li>The "[/help/stash | stash]" <li>Information needed to "[/help/undo|undo]" or "[/help/redo|redo]" </ul> </td> </tr> </table> </center> <h3>2.1 The Configuration Database</h3> The configuration database holds cross-repository preferences and a list of all repositories for a single user. The [/help/setting | fossil setting] command can be used to specify various operating parameters and preferences for Fossil repositories. 
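For example (a sketch only; "autosync" is just one of the many available settings, and the commands are run from within a checkout of the repository in question), the same setting can be changed for one repository or, with the --global option, for all of a user's repositories:

<blockquote><pre>
fossil setting autosync off             # this repository only
fossil setting autosync off --global    # all repositories for this user
</pre></blockquote>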
Settings can apply to a single repository, or they can apply globally to all repositories for a user. If both a global and a repository value exists for a setting, then the repository-specific value takes precedence. All of the settings have reasonable defaults, and so many users will never need to change them. But if changes to settings are desired, the configuration database provides a way to change settings for all repositories with a single command, rather than having to change the setting individually on each repository. The configuration database also maintains a list of repositories. This list is used by the [/help/all | fossil all] command in order to run various operations such as "sync" or "rebuild" on all repositories managed by a user. On unix systems, the configuration database is named ".fossil" and is located in the user's home directory. On windows, the configuration database is named "_fossil" (using an underscore as the first character instead of a dot) and is located in the directory specified by the LOCALAPPDATA, APPDATA, or HOMEPATH environment variables, in that order. <h3>2.2 Repository Databases</h3> The repository database is the file that is commonly referred to as "the repository". This is because the repository database contains, among other things, the complete revision, ticket, and wiki history for a project. It is customary to name the repository database after the name of the project, with a ".fossil" suffix. For example, the repository database for the self-hosting Fossil repository is called "fossil.fossil" and the repository database for SQLite is called "sqlite.fossil". <h4>2.2.1 Global Project State</h4> The bulk of the repository database (typically 75 to 85%) consists of the artifacts that comprise the [./fileformat.wiki | enduring, global, shared state] of the project. The artifacts are stored as BLOBs, compressed using [http://www.zlib.net/ | zlib compression] and, where applicable, using [./delta_encoder_algorithm.wiki | delta compression]. The combination of zlib and delta compression results in a considerable space savings. For the SQLite project, at the time of this writing, the total size of all artifacts is over 1.7 GB but thanks to the combined zlib and delta compression, that content only takes up 51.4 MB of space in the repository database, for a compression ratio of about 33:1. Note that the zlib and delta compression is not an inherent part of the Fossil file format; it is just an optimization. The enduring file format for Fossil is the unordered set of artifacts. The compression techniques are just a detail of how the current implementation of Fossil happens to store these artifacts efficiently on disk. All of the original uncompressed and undeltaed artifacts can be extracted from a Fossil repository database using the [/help/deconstruct | fossil deconstruct] command. Individual artifacts can be extracted using the [/help/artifact | fossil artifact] command. When accessing the repository database using raw SQL and the [/help/sqlite3 | fossil sql] command, the extension function "<tt>content()</tt>", given the SHA1 hash of an artifact as its single argument, will return the complete undeltaed and uncompressed content of that artifact. Going the other way, the [/help/reconstruct | fossil reconstruct] command will scan a directory hierarchy and add all files found to a new repository database.
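For instance (a sketch; the 40-character hash shown is only a placeholder for a real artifact ID), the "<tt>content()</tt>" function mentioned above can be exercised from the shell that [/help/sqlite3 | fossil sql] opens on the repository:

<blockquote><pre>
-- typed at the "fossil sql" prompt; substitute a real artifact hash
SELECT length(content('0000000000000000000000000000000000000000'));
</pre></blockquote>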
The [/help/import | fossil import] command works by reading the input git-fast-export stream and using it to construct corresponding artifacts which are then written into the repository database. <h4>2.2.2 Project Metadata</h4> The global project state information in the repository database is supplemented by computed metadata that makes querying the project state more efficient. Metadata includes information such as the following: * The names for all files found in any checkin. * All check-ins that modify a given file. * Parents and children of each checkin. * Potential timeline rows. * The names of all symbolic tags and the checkins they apply to. * The names of all wiki pages and the artifacts that comprise each wiki page. * Attachments and the wiki pages or tickets they apply to. * Current content of each ticket. * Cross-references between tickets, checkins, and wiki pages. The metadata is held in various SQL tables in the repository database. The metadata is designed to facilitate queries for the various timelines and reports that Fossil generates. As the functionality of Fossil evolves, the schema for the metadata can and does change from time to time. But schema changes do not invalidate the repository. Remember that the metadata contains no new information - only information that has been extracted from the canonical artifacts and saved in a more useful form. Hence, when the metadata schema changes, the prior metadata can be discarded and the entire metadata corpus can be recomputed from the canonical artifacts. That is what the [/help/rebuild | fossil rebuild] command does. <h4>2.2.3 Display And Processing Preferences</h4> The repository database also holds information used to help format the display of web pages and configuration settings that override the global configuration settings for the specific repository. All of this information (and the user credentials and privileges too) is local to each repository database; it is not shared between repositories by [/help/sync | fossil sync]. That is because it is entirely reasonable that two different websites for the same project might have completely different display preferences and user communities. One instance of the project might be a fork of the other, for example, which pulls from the other but never pushes and extends the project in ways that the keepers of the other website disapprove of. Display and processing information includes the following: * The name and description of the project * The CSS file, header, and footer used by all web pages * The project logo image * Fields of tickets that are considered "significant" and which are therefore collected from artifacts and made available for display * Templates for screens to view, edit, and create tickets * Ticket report formats and display preferences * Local values for [/help/setting | settings] that override the global values defined in the per-user configuration database. Though the display and processing preferences do not move between repository instances using [/help/sync | fossil sync], this information can be shared between repositories using the [/help/config | fossil config push] and [/help/config | fossil config pull] commands. The display and processing information is also copied into new repositories when they are created using [/help/clone | fossil clone].
<h4>2.2.4 User Credentials And Privileges</h4>

Just because two development teams are collaborating on a project and allow push and/or pull between their repositories does not mean that they trust each other enough to share passwords and access privileges. Hence the names, emails, passwords, and privileges of users are considered private information that is kept locally in each repository.

Each repository database has a table holding the username, privileges, and login credentials for users authorized to interact with that particular database. In addition, there is a table named "concealed" that maps the SHA1 hash of each user's email address back into the true email address. The concealed table allows just the SHA1 hash of email addresses to be stored in tickets, and thus prevents actual email addresses from falling into the hands of spammers who happen to clone the repository.

The content of the user and concealed tables can be pushed and pulled using the [/help/config | fossil config push] and [/help/config | fossil config pull] commands with "user" or "email" as the AREA argument, but only if you have administrative privileges on the remote repository.

<h4>2.2.5 Shunned Artifact List</h4>

The set of canonical artifacts for a project - the global state for the project - is intended to be an append-only database. In other words, new artifacts can be added but artifacts can never be removed. But it sometimes happens that inappropriate content is mistakenly or maliciously added to a repository. When that happens, the only way to get rid of the content is to [./shunning.wiki | "shun"] it. The "shun" table in the repository database records the SHA1 hashes of all shunned artifacts.

The shun table can be pushed or pulled using the [/help/config | fossil config] command with the "shun" AREA argument. The shun table is also copied during a [/help/clone | clone].

<h3>2.3 Checkout Databases</h3>

Unlike several other popular DVCSes, Fossil allows a single repository to have multiple working checkouts. Each working checkout has a single database in its root directory that records the state of that checkout. The checkout database is named "_FOSSIL_" by default, but can be renamed to ".fos" if desired. (Future versions of Fossil might make ".fos" the default name.) The checkout database records information such as the following:

  *  The full pathname of the repository database file.
  *  The version that is currently checked out.
  *  Files that have been [/help/add | added], [/help/rm | removed], or [/help/mv | renamed] but not yet committed.
  *  The mtime and size of files as they were originally checked out, in order to expedite checking which files have been edited.
  *  Other checkins that have been [/help/merge | merged] into the working checkout but not yet committed.
  *  Copies of files prior to the most recent undoable operation - needed to implement the [/help/undo | undo] and [/help/redo | redo] commands.
  *  The [/help/stash | stash].
  *  State information for the [/help/bisect | bisect] command.

For Fossil commands that run from within a working checkout, the first thing that happens is that Fossil locates the checkout database. Fossil first looks in the current directory. If not found there, it looks in the parent directory. If not found there, the parent of the parent. And so forth until either the checkout database is found or the search reaches the root of the filesystem. (In the latter case, Fossil returns an error, of course.)
Once the checkout database is located, it is used to locate the repository database. Notice that the checkout database contains a pointer to the repository database, but the repository database has no record of the checkout databases. That means a working checkout directory tree can be freely renamed, copied, or deleted without consequence. The repository database file, on the other hand, has to stay in the same place with the same name, or else the open checkout databases will not be able to find it.

A checkout database is created by the [/help/open | fossil open] command. A checkout database is deleted by [/help/close | fossil close]. The fossil close command really isn't needed; one can accomplish the same thing simply by deleting the checkout database.

Note that the stash, the undo stack, and the state of the bisect command are all contained within the checkout database. That means that the fossil close command will delete all stash content, the undo stack, and the bisect state. The close command is not undoable. Use it with care.
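A rough sketch of what working with multiple checkouts looks like in practice; the directory, repository, and branch names here are all hypothetical:

<verbatim>
# Two independent checkouts of the same repository
mkdir -p ~/work/trunk ~/work/experiment
cd ~/work/trunk      && fossil open ~/repos/sample-project.fossil
cd ~/work/experiment && fossil open ~/repos/sample-project.fossil
fossil update experiment-branch    # this checkout now tracks a different version

# Discard a checkout when it is no longer needed
cd ~/work/experiment
fossil close    # also discards the stash, undo stack, and bisect state kept here
</verbatim>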
Changes to www/webui.wiki.
<title>The Fossil Web Interface</title>

One of the innovative features of Fossil is its built-in web interface. This web interface provides everything you need to run a software development project:

  *  [./bugtheory.wiki | Ticketing and bug tracking]
  *  [./wikitheory.wiki | Wiki]
  *  [./embeddeddoc.wiki | On-line documentation]
  *  Status information
  *  Timelines
  *  Graphs of revision and branching history
  *  [./event.wiki | Blogs, News, and Announcements]
  *  File and version lists and differences
  *  Download historical versions as ZIP archives
  *  Historical change data
  *  Add and remove tags on checkins
  *  Move checkins between branches
  *  Revise checkin comments
  *  Manage user credentials and access permissions
  *  And so forth...

You get all of this, and more, for free when you use Fossil. There are no extra programs to install or set up. Everything you need is already pre-configured and built into the self-contained, stand-alone Fossil executable.

As an example of how useful this web interface can be, the entire [./index.wiki | Fossil website] (except for the [http://www.fossil-scm.org/download.html | download page]), including the document you are now reading, is rendered using the Fossil web interface, with no enhancements and little customization.

<blockquote>
<b>Key point:</b> <i>The Fossil website is just a running instance of Fossil!</i>
</blockquote>

Note also that because Fossil is a distributed system, you can run the web interface on your local machine while off network (for example, while on an airplane), including making changes to wiki pages and/or trouble tickets, then synchronize with your co-workers after you reconnect. When you clone a Fossil repository, you don't just get the project source code, you get the entire project management website.

<h2>Drop-Dead Simple Startup</h2>

To start using the built-in Fossil web interface on an existing Fossil repository, simply type this:

<b>fossil ui existing-repository.fossil</b>

Substitute the name of your repository, of course. The "ui" command will start a webserver running (it figures out an available TCP port to use on its own) and then automatically launches your web browser to point at that server. If you run the "ui" command from within an open check-out, you can omit the repository name:

<b>fossil ui</b>

The latter case is a very useful short-cut when you are working on a Fossil project and you want to quickly do some work with the web interface. Notice that Fossil automatically finds an unused TCP port to run the server on and automatically points your web browser to the correct URL. So there is never any fumbling around trying to find an open port or to type arcane strings into your browser URL entry box. The interface just pops right up, ready to run.

The Fossil web interface is also very easy to set up and run on a network server, as either a CGI program or from inetd. Details on how to do that are described further below.

<h2>Things To Do Using The Web Interface</h2>

You can view <b>timelines</b> of changes to the project. The default "Timeline" link on the menu bar takes you to a page that shows the 20 most recent check-ins, wiki page edits, ticket/bug-report changes, and/or blog entries.
This gives a very useful snapshot of what has been happening lately on the project. You can click to go further back in time, if needed, or follow hyperlinks to see details, including diffs and annotated diffs, of individual check-ins, wiki page edits, ticket changes, and blog edits.

You can view and edit <b>tickets and bug reports</b> by following the "Tickets" link on the menu bar. Fossil is backed by an SQL database, so users with appropriate permissions can write new ticket report formats based on SQL query statements. Fossil is careful to prevent ticket report formats from doing any mischief on the database (it only allows SELECT statements to run) and it restricts
︙
You can view and edit <b>wiki</b> by following the "Wiki" link on the menu bar. Fossil uses simple and easy-to-remember [/wiki_rules | wiki formatting rules] so you won't have to spend a lot of time learning a new markup language. And, as with tickets, all of your edits will automatically merge with those of your co-workers when your repository synchronizes.

You can view summary reports of <b>branches</b> in the check-in graph by visiting the "Branches" link on the menu bar. From those pages you can follow hyperlinks to get additional details. These screens allow you to easily keep track of what is going on with separate subteams within your project team.

The "Files" link on the menu allows you to browse through the <b>file hierarchy</b> of the project and to view complete change histories on individual files, with hyperlinks to the check-ins that made those
︙
for the entire page. You can even change around the main menu. Timeline display preferences can be edited. The page that is brought up as the "Home" page can be changed. It is often useful to set the "Home" page to be a wiki page or an embedded document.

<h2>Installing On A Network Server</h2>

Once you have created a new Fossil project and configured it the way you want using the web interface, you can make the project available to a distributed team by simply copying the single repository file up to a web server that supports CGI. Just put the <b>sample-project.fossil</b> file in a directory where CGI scripts have both read and write permission on the file and on the directory that contains the file, then add a CGI script that looks something like this:

<verbatim>
#!/usr/local/bin/fossil
repository: /home/www/sample-project.fossil
</verbatim>

Adjust the script above so that the paths are correct for your system, of course, and also make sure the Fossil binary is installed on the server. But that is <u>all</u> you have to do. You now have everything you need to host a distributed software development project in less than five minutes using a two-line CGI script.

You don't have a CGI-capable web server running on your server machine? Not a problem. The Fossil interface can also be launched via inetd or xinetd. An inetd configuration line sufficient to launch the Fossil web interface looks like this:

<verbatim>
80 stream tcp nowait.1000 root /usr/local/bin/fossil \
/usr/local/bin/fossil http /home/www/sample-project.fossil
</verbatim>

As always, you'll want to adjust the pathnames to whatever is appropriate for your system. The xinetd setup uses a different syntax but follows the same idea.
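For xinetd, the configuration goes in a file under /etc/xinetd.d/ rather than on a single line. The stanza below is only an illustrative sketch using the same hypothetical paths as above; consult the xinetd documentation on your system for the exact syntax it expects:

<verbatim>
service fossil-http
{
    type        = UNLISTED
    port        = 80
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = root
    server      = /usr/local/bin/fossil
    server_args = http /home/www/sample-project.fossil
}
</verbatim>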
Changes to www/wikitheory.wiki.
<title>Wiki In Fossil</title>

<h2>Introduction</h2>

Fossil uses [/wiki_rules | wiki markup] for many things:

  *  Stand-alone wiki pages.
  *  Description and comments in [./bugtheory.wiki | bug reports].
  *  Check-in comments.
  *  [./embeddeddoc.wiki | Embedded documentation] files whose
︙
Some projects prefer to store their documentation in wiki. There is nothing wrong with that. But other projects prefer to keep documentation as part of the source tree, so that it is versioned along with the source tree and so that only developers with check-in privileges can change it. Embedded documentation serves this latter purpose. Both forms of documentation use the exact same wiki markup language. Some projects may choose to use both forms of documentation at the same time. Because the same format is used, it is trivial to move a file from wiki to embedded documentation or back again as the project evolves.

<h2>Bug-reports and check-in comments</h2>

The comments on check-ins and the text in the descriptions of bug reports both use wiki formatting. Exactly the same set of formatting rules applies. There is never a need to learn one formatting language for documentation and a different markup for bugs or for check-in comments.
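As an illustration of that last point about moving documentation between forms, a wiki page can be turned into an embedded document with a few commands; the page name and file path here are made up for this example:

<verbatim>
# Export the wiki page into the source tree...
fossil wiki export "Release Planning" www/release-planning.wiki

# ...then check it in as a versioned, embedded document
fossil add www/release-planning.wiki
fossil commit -m "Move the Release Planning wiki page into embedded docs"
</verbatim>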